id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2304.08015 | UHF RFID and NFC Point-of-Care -- Architecture, Security, and
Implementation | Points-of-care (PoCs) augment healthcare systems by performing care whenever
needed and are becoming increasingly crucial for the well-being of the
worldwide population. Personalized medicine, chronic illness management, and
cost reduction can be achieved thanks to the widespread adoption of PoCs.
Significant incentives for PoCs deployment are nowadays given by wearable
devices and, in particular, by RFID (RadioFrequency IDentification) and NFC
(Near Field Communications), which are rising among the technological
cornerstones of the healthcare internet of things (H-IoT). To fully exploit
recent technological advancements, this paper proposes a system architecture
for RFID- and NFC-based PoCs. The architecture comprises in a unitary framework
both interfaces to benefit from their complementary features, and gathered data
are shared with medical experts through secure and user-friendly interfaces
that implement the Fast Health Interoperability Resource (FHIR) emerging
healthcare standard. The selection of the optimal UHF and NFC components is
discussed concerning the employable sensing techniques. The secure transmission
of sensitive medical data is addressed by developing a user-friendly "PoC App"
that is the first web app exploiting attribute-based encryption (ABE). An
application example of the system for monitoring the pH and cortisol levels in
sweat is implemented and preliminarily tested by a healthy volunteer. | Giulio Maria Bianco, Emanuele Raso, Luca Fiore, Vincenzo Mazzaracchio, Lorenzo Bracciale, Fabiana Arduini, Pierpaolo Loreti, Gaetano Marrocco, Cecilia Occhiuzzi | 2023-04-17T06:43:19Z | http://arxiv.org/abs/2304.08015v2 | # UHF RFID and NFC Point-of-Care - Architecture, Security, and Implementation
###### Abstract
Points-of-care (PoCs) augment healthcare systems by performing care whenever needed and are becoming increasingly crucial for the well-being of the worldwide population. Personalized medicine, chronic illness management, and cost reduction can be achieved thanks to the widespread adoption of PoCs. Significant incentives for PoCs deployment are nowadays given by wearable devices and, in particular, by RFID (RadioFrequency IDentification) and NFC (Near Field Communications), which are rising among the technological cornerstones of the healthcare internet of things (H-IoT). To fully exploit recent technological advancements, this paper proposes a system architecture for RFID- and NFC-based PoCs. The architecture comprises in a unitary framework both interfaces to benefit from their complementary features, and gathered data are shared with medical experts through secure and user-friendly interfaces that implement the Fast Health Interoperability Resource (FHIR) emerging healthcare standard. The selection of the optimal UHF and NFC components is discussed concerning the employable sensing techniques. The secure transmission of sensitive medical data is addressed by developing a user-friendly "PoC App" that is the first web app exploiting attribute-based encryption (ABE). An application example of the system for monitoring the pH and cortisol levels in sweat is implemented and preliminarily tested by a healthy volunteer.
Cybersecurity, electrochemical sensors, Fast Health Interoperability Resources, healthcare internet of things systems, Near Field Communication, radiofrequency identification.
## I Introduction
Since \(1990\), steady-paced medical advancements have lengthened life expectancy worldwide; however, ageing, disability, and chronic illnesses have yielded a heavier disability burden on the population [1] and, consequently, more expenses for healthcare systems [2]. To manage the increasingly common chronic medical conditions, cost-effective and continuous treatments are needed. In this context, points-of-care (PoCs) allow for decreasing costs while raising the quality of medical care. PoCs are defined as sites of patient care wherever the care is performed and encompass both testing and monitoring [3]. Since PoCs operate outside the main laboratories [4], they augment healthcare infrastructures by providing more frequent feedback loops closer to the patients, thus enabling precise, timely diagnoses and personalized treatments while lowering the costs [5, 6] (Fig. 1). In addition to more conventional devices, the myriad of up-to-date data gathered through wireless sensors [7, 8] can even support the integration of PoCs with the latest paradigm in medicine of _homespitals_[9] and _expert patients_[10].
Among the available technologies, PoCs can also exploit ultra-high frequency (UHF) RFID (RadioFrequency IDentification) and NFC (Near Field Communication). Indeed, in the last decade, such devices have quickly arisen as versatile enablers of healthcare internet of things (H-IoT) systems by making processes more efficient [19] and implementing pervasive monitoring of health status [20]. A straightforward use of RFID and NFC is augmenting PoCs with wireless identification to improve asset management [21], but more advanced tags can perform even complex sensing. In this latter case, the sensors included in the tags are low-power and low-cost to enable near-patient monitoring and testing. Last-generation sensor tags can sense many on- and off
Fig. 1: Concept of a _PoC cycle_ augmenting an existing healthcare system and timely treating the patient outside principal laboratories.
This is the author's version of an article that has been published in _IEEE Journal of Radio Frequency Identification_. Changes were made to this version by the publisher prior to publication. The final version of record is available at
[http://dx.doi.org/10.1109/JRFID.2023.3268422](http://dx.doi.org/10.1109/JRFID.2023.3268422)
The listed examples suggest that radiofrequency identification devices could get medical data on the health status of a patient; however, to implement a real RFID/NFC-based PoC, many architectural and operative elements are still missing. Particularly, NFC and UHF RFID have different strengths and weaknesses, and since they require different interrogating devices, they are usually not combined. UHF RFID is used if a long reading distance, simultaneous reading of multiple tags, and/or ad-hoc antenna design due to size constraints are required. Using UHF hardware is also optimal if already available reading infrastructures, typically implemented for inventory purposes, can be exploited. On the contrary, NFC devices are deployed if near-contact distances, higher power and/or the highest security and data rates are required. Possible interoperability issues (like missed readings or hindered communications) between the NFC protocols are another weakness of the technology [36]. In PoC scenarios, and especially in domestic environments, the simultaneous use of UHF RFID and NFC devices can allow, thanks to their partial complementarity, for maximizing the benefits of the superior read range of the former and the close-range interaction of the latter, in order to meet more of the patient's needs through the same platform. Indeed, a hybrid platform can provide better information redundancy in case of failure of one of the two reading architectures and can even be scaled up more effectively since it can be integrated with existing radiofrequency identification systems.
Besides the generation of data through sensors, PoCs also require an efficient transmission of data to the physician. Secure information sharing between patients and healthcare facilities/staff has not received sufficient attention from security experts yet, and patients often still send their sensitive data using e-mail or instant messaging services [37]. RFID and NFC sensors can generate huge amounts of data worsening the challenge. General-purpose Cloud services have then to
Fig. 2: Proposed architecture of the UHF-RFID/NFC-PoC platform and its logical blocks.
be adopted so that two issues related to the privacy of the patient arise: _i_) compliance of the monitoring platform with the regulatory framework on sensitive data, e.g., the European General Data Protection Regulation [38]; _ii_) adoption of proper technical solutions that ensure secure access to the data with no participation of the Cloud service, that, in general, could be an _honest-but-curious_, or even _malicious_, actor. For this reason, many solutions have been proposed in literature trying to deal with the major issue of ensuring privacy and data security in Cloud services [39, 40, 41]. A promising technology in this area is attribute-based encryption (ABE), which offers cryptographic protection of information in combination with data access control directly provided by the technology and managed by data owners [42, 43, 44, 45].
Building on the previous considerations, this paper expands the preliminary work presented in [46] and proposes a complete system architecture for PoCs based on radiofrequency identification for the first time. The architecture includes all the components, from the sensors to the data sharing, and seamlessly integrates UHF RFID and NFC in the same platform thanks to a novel web app. The web app encrypts data through a JavaScript library that wraps an ABE library written in Rust [47] and then transmits files according to the Fast Health Interoperability Resources (FHIR) emerging standard for healthcare files [48]. An implementation example simulating a PoC application exploits recent epidermal boards (from [16, 17, 49, 50]) and shows how to deploy the system architecture.
The paper is organized as follows. The PoC system architecture and how to integrate all the necessary components are discussed in Section II. We describe how the security issues can be addressed and implement an innovative web app which, to the best of our knowledge, is the first web app exploiting ABE (Section III). Finally, Section IV shows an implementation of the architecture concerning the monitoring of sports activity by sensing temperature, pH, and cortisol.
## II Architecture and Deployment
### _System Architecture Overview_
Fig. 2 depicts the system architecture of the UHF-RFID/NFC PoC. The _patient's side users_ are the patient himself/herself and any eventual medical staff assigned to the PoC; these users generate the medical data, transmit them to the doctor, and receive the doctor's feedback. The doctor following the patient's medical history, in turn, uses the PoC platform to receive the gathered data and, hence, provide the patient with personalized treatment or even change the care plan if needed.
To perform the required treatment, the RFID/NFC-PoC exploits a suitable combination of UHF RFID and NFC hardware. The hardware gathers data from medical care thanks to the tags and the readers that together compose the _radiofrequency identification layer_. The healthcare information that can be processed by the RFID layer can be categorized into three main kinds [20]: _i_) drugs and assets management, _ii_) access control, and _iii_) sensing, including environmental and behavioural sensing.
The operating system piloting the reader also runs an ad hoc _PoC App_. Since the PoC exploits a hybrid UHF-NFC radiofrequency identification layer, the web app will run on both the typologies of readers to gather all the data. After the collection, the PoC App provides one first prompt _PoC's feedback_ to the patient's side users, informing them if the care is completed correctly and if any urgent action is needed to compensate for dangerous physiological alterations, for instance, adverse effects to drugs. The reader device formats the medical data to be seamlessly integrated with the information flux about the medical history of the patient in the computer healthcare system. Then, the PoC App encrypts data with a personal key of the patient and completes _Data Storage_ on a Cloud. Even commercial or personal Clouds like Google Drive or Dropbox can be exploited, given that files are already encrypted before upload. After data storage, the PoC App notifies the doctor that new data are available. Via the PoC App itself, the doctor can access the encrypted data on the Cloud, store a copy on a personal device, decrypt the copy using a personal key and, finally, access the medical information.
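To make the flow above concrete, the following minimal JavaScript sketch outlines the patient-side steps of the PoC App (collection, immediate feedback, FHIR formatting, ABE encryption, Cloud upload, and doctor notification). All function and field names are hypothetical placeholders introduced only for illustration; they are not the actual implementation of the PoC App.

```javascript
// Hypothetical sketch of the patient-side PoC App pipeline.
// Every dependency is injected, so the names below are placeholders.
async function runPocCycle(deps, patient) {
  const { readTags, checkAlerts, toFhirObservations, abeEncrypt, uploadToCloud, notifyDoctor } = deps;

  const samples = await readTags();                           // UHF RFID + NFC readings
  const feedback = checkAlerts(samples);                      // immediate "PoC's feedback"
  const observations = toFhirObservations(samples, patient);  // FHIR Observation resources
  const ciphertext = await abeEncrypt(observations, patient.abePolicy); // encrypt before any upload
  const fileRef = await uploadToCloud(ciphertext);            // commercial Cloud storage
  await notifyDoctor(patient.doctorId, fileRef);              // real-time sharing notification
  return { feedback, fileRef };
}
```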
At this point, the doctor can send the feedback back to the PoC after having reviewed the information carefully. The doctor uses the received data to update the medical history of the patient and can decide if particular actions must be performed from the PoC's side. The classic healthcare system supports the doctor when needed for consulting other experts or completing finer analyses so that the RFID/NFC-PoC is fully integrated with the existing healthcare infrastructure as for the PoC paradigm (see again Fig. 1). Finally, the doctor's feedback is received by the PoC App and delivered to the patient's side users, too, providing the patient with timely and personalized care. The immediate feedback returned by the RFID/NFC-PoC allows for monitoring of the care even if the doctor's feedback is delayed for any reason.
Fig. 3: Example of FHIR Observation based on [51].
### _Data Storage, Security, and Representation_
Concerning data security, the tag-reader electromagnetic link has different nature than the reader-doctor link, which exploits the internet. Furthermore, not all data are equally sensitive: asset management information could exploit weaker precautions than vital knowledge like biosignals' measurements.
The radiofrequency wireless link is the first link that could be attacked. The security of radiofrequency identification devices has been a research topic since the early 2000s [52], and several reviews have investigated it, including scientometric analyses of research trends [52] and existing challenges [53]. Several attacks can indeed be performed at this level, for instance, _skimming_ through the establishment of a hidden communication link by a malicious reader concealed from the PoC users, or _eavesdropping_ on the tag-reader communications, possibly through side-channel attacks [54]. The longer the reader-tag distance, the easier it is to perform each attack; thus, the reading range must be as short as possible while the PoC operates. Typical distances of wearable RFID/NFC devices (up to some tens of centimetres) can hence be considered secure if the wearer is vigilant, since the attacker should be physically near to the tag and, therefore, easily detectable. The tags must be removed, shielded, or deactivated when they are not expected to be read to avoid skimming. For higher security, tags providing encryption features should be deployed [54].
The reader-doctor link is the second one where an attacker can attempt to obtain or manipulate the data, and it poses a cybersecurity vulnerability. Since the amount of data generated by UHF-RFID/NFC sensors is expected to be large, general-purpose Cloud services have to be adopted, but the Cloud service could be an honest-but-curious or malicious actor. Among many possible solutions, ABE can address this issue without requiring complicated key management, as is shown in the next Section. Another point to be addressed in patient-doctor communications is data representation. By exploiting the sophisticated options offered, the FHIR standard plays an extremely important role in this context by standardizing clinical data into files named _Observations_[48]. Fig. 3 shows an example of one FHIR Observation resource. Thanks to FHIR adoption, healthcare information can be seamlessly shared between systems and devices, even between different Nations and languages.
## III Secure Data Sharing by PoC App
In this Section, a PoC web app for securing the reader-doctor link using ABE is described and implemented.
### _Attribute-based Sharing_
With ABE technology, the patient's side user is able to choose which attributes the person decoding the information must have. For example, he/she can decide that a certain file is readable by all medical and para-medical personnel. In this way, he/she will not be forced to enter the specific credentials of each doctor (like in the case of the classic public key infrastructure solution), which are often not known when the document is shared.
Thus, the following three roles are clearly defined:
1. the _data owner_, who decides the types of users who can access his data;
2. the _key manager_, typically the hospital or public health services, that provides their employees with credentials in which the attributes characterising them are embedded;
3. a _transport and notification system_ that, thanks to the adoption of the ABE encryption technique, operates by ignoring the information being carried or shared.
### _Data Representation_
In order to ease the data exchange and interoperability between UHF RFID and NFC, once data are collected from the tags, the device piloting the RFID/NFC reader converts them into FHIR data. In particular, data collected from the sensors are used to build Observation resources, commonly used to handle, among others, vital signs (e.g., body weight, blood pressure, and temperature), laboratory data (e.g., blood glucose) and device measurements (e.g., EKG data or pulse oximetry data).
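As an illustration of such a resource, a single skin-temperature reading could be encoded roughly as follows; the LOINC/UCUM codes, identifiers, and values are illustrative assumptions and are not taken from the paper or from Fig. 3.

```javascript
// Illustrative FHIR Observation for one skin-temperature sample (codes and values are examples).
const temperatureObservation = {
  resourceType: "Observation",
  status: "final",
  category: [{ coding: [{ system: "http://terminology.hl7.org/CodeSystem/observation-category", code: "vital-signs" }] }],
  code: { coding: [{ system: "http://loinc.org", code: "8310-5", display: "Body temperature" }] },
  subject: { reference: "Patient/example" },
  effectiveDateTime: "2023-04-17T10:15:00Z",
  valueQuantity: { value: 34.6, unit: "Cel", system: "http://unitsofmeasure.org", code: "Cel" },
};
```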
### _PoC App Implementation_
#### Iii-C1 App Overview
We implemented the secure data sharing architecture proposed in [54] that adopts the ABE cryptographic scheme to enforce _user-controlled access_ (Fig. 4). In the remote part of the architecture, which is related to secure Cloud-based data sharing, we consider four actors:
1. the _Cloud Provider_, one of the existing commercial providers which offer file storage, sharing and synchronisation service (e.g., Dropbox, Google Drive, etc.);
2. the _Patient_, who uses her smartphone or laptop as the reader to collect data from the tags and has to be able to share them with the Medical Personnel;
3. the _Medical Personnel_ (or _Staff_), whose members have to be able to access data shared by the Patients;
4. the _Medical System_, the entity responsible for the management of the authorisations of the Medical Personnel to access the Patient's data.
Patients and Medical Personnel interact with the Medical System to obtain the cryptographic keys and use them to share protected data using the service offered by the Cloud Provider.
Fig. 4: Secure data sharing architecture. Image adapted from [54].
#### Iii-C2 Technology Details
We developed a JavaScript web application, named eCrome, which is used by both the doctor and the patient. The main screens of this application are shown in Fig. 5. The application communicates with the Cloud (Google Drive, in this case) to store the patient's documents and share them with the doctors. The protection of the shared documents is achieved by means of Rust RABE [47], an ABE library written in Rust which was suitably modified and compiled to WebAssembly so that it can be used within browsers. Indeed, to implement the web app, we developed a JavaScript library that wraps RABE to make ABE encryption and decryption possible in a web context. All the complexity is hidden from users, and the cryptographic mechanisms are transparent.
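A minimal sketch of how such a wrapper could be used from the browser is given below. The module name, function names, signatures, and the example policy string are assumptions made for illustration, since the JavaScript interface around RABE is not detailed here.

```javascript
// Hypothetical usage of a WebAssembly-compiled ABE wrapper (all names are assumptions).
import { cpAbeEncrypt, cpAbeDecrypt } from "./rabe-wasm-wrapper.js"; // hypothetical module

// Patient side: encrypt a FHIR bundle under an attribute policy before uploading it.
function protectRecord(publicKey, fhirBundle) {
  const policy = '"doctor" or ("nurse" and "cardiology")'; // example access policy
  return cpAbeEncrypt(publicKey, policy, JSON.stringify(fhirBundle));
}

// Doctor side: decrypt with a private key whose attributes satisfy the policy.
function openRecord(privateKey, ciphertext) {
  return JSON.parse(cpAbeDecrypt(privateKey, ciphertext));
}
```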
We also implemented two network services: _i_) a key management system that provides the system's standard attributes, the public key ABE and, via login, the private keys of the doctors or health personnel; _ii_) a document sharing notification system based on the no-backend Firebase system [55] that allows all the web apps to communicate in real-time and thus create document sharing notifications between patients and doctors or between the doctors themselves.
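As an example of the second service, a sharing notification could be pushed with the standard Firebase Web SDK roughly as follows. The database layout, configuration values, and payload fields are assumptions for illustration; the specific Firebase product and schema used by the system are not specified above.

```javascript
// Sketch of a document-sharing notification via the Firebase Realtime Database (layout is assumed).
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push } from "firebase/database";

const app = initializeApp({ apiKey: "<api-key>", databaseURL: "<database-url>" }); // placeholder config
const db = getDatabase(app);

// Notify a doctor that a new encrypted document is available on the Cloud.
function notifyShare(doctorId, patientId, cloudFileId) {
  return push(ref(db, `notifications/${doctorId}`), {
    from: patientId,
    file: cloudFileId,
    sharedAt: Date.now(),
  });
}
```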
## IV Example of System Implementation
An implementation example of the PoC platform is detailed in this Section. The architecture is implemented according to the following steps: _i_) the target application is defined to derive the measurands to be monitored, _ii_) suitable sensors are chosen, _iii_) the radiofrequency identification layer is selected, and _iv_) data security is addressed by utilizing the PoC App presented in the previous Section. Finally, the implemented system is preliminarily tested involving a healthy volunteer.
### _Target PoC Treatment and Measurands Individuation_
The first step necessary to deploy a radiofrequency PoC is defining the quantities of interest based on the medical condition of the patient. Among several possible phenomena, skin temperature and sweat's pH and cortisol levels are helpful for monitoring many illnesses like post-traumatic stress disorder [56], dementia [57], and anorexia [58]. Without any loss of generality, the implementation example in the remainder of this Section assumes the supervision of the fitness routine of an acute myocardial infarction (AMI) survivor.
AMI affects more than \(7\) million individuals worldwide, yielding an economic impact of \(450\$\) in the United States because of the direct costs solely [59]. Patients admitted with AMI can also develop subsequent major adverse cardiovascular events (MACE), especially when the cortisol levels in the blood are high [60]. Since cortisol increases under stress [61, 62, 63] in blood and in sweat [64], keeping stress levels low is vital to prevent further MACE events. Physical activity is particularly effective for this aim [65, 66]. Nonetheless, physical exercise is not always advisable for survivors of myocardial infarction as the body-fluid balance could be more difficult to maintain than it is for healthy people, possibly increasing the patient's fatigue and stress [67]. Furthermore, post-traumatic stress disorder following AMI is worryingly common after an invasive intervention [68], so the health status can be complicated by hyperventilation [69] and the consequent respiratory alkalosis [70]. Overall, the stress level and physical activity should be closely monitored to ensure the well-being of myocardial infarction survivors.
### _Sensors Selection_
The domestic PoC platform has to aid an AMI survivor in complying with an exercise routine. Such a platform exploits all the three types of sensing introduced in Section I, namely, physical (specifically, temperature), behavioural, and chemical sensing. A temperature sensor monitors the skin temperature
Fig. 5: Web app screenshots. (a) Patient/Doctor login. (b) Sharing menu. (c) Attribute selection. (d) Shared document list.
Fig. 6: Electrical connections of the ICs deployed for the system. Pin numbers and connections are reported, as well as the connections with internal components like an analog front-end (AFE) and a multiplexer (MUX). (a) Schematic of the UHF board with the SL900A IC and symbol of the chemical sensor detailing the electrodes. (b) Schematic of the epidermal NFC sensor hosting the SIC4341 IC.
possible to interrogate the tag in any time and place through the smartphone. Instead, physical activity needs longer reading distances to give the wearer some freedom of movement in the proximity of the reader during real-time monitoring and, accordingly, a UHF RFID board is the optimal choice. Hence, the electrochemical sensors will be connected to the proper electromagnetic interfaces as needed, and a hybrid UHF-NFC system will be implemented.
The NFC responder from [17] and the epidermal UHF board from [50] can be used as body-worn epidermal tags. Fig. 6 reports the schematics of the two tags, whereas the numerical models and prototypes are depicted in Fig. 7. The tags can be worn for up to a few hours, as required for monitoring physical activity. The NFC coil is manufactured from \(40\)-\(\mu\)m wires manually placed on a breathable and transparent plaster (Tegaderm by 3M\({}^{\text{TM}}\)). The spiral antenna is soldered to the SIC \(4341\) (from Silicon Craft Technology) IC, which can be read by smartphones through the ISO/IEC \(14443-3\)A protocol while performing biosensing. An FR-\(4\) pad hosts the SIC \(4341\) IC and the plug&play connector for the sake of robustness. The chosen NFC reader embedded in a smartphone (Oppo Reno Z; operating system: ColorOs V11.1) did not show any interoperability issue with the NFC sensor, reading it smoothly.
The RFID tag has the SL\(900\)A (by AMS OSRAM) IC, which is used in battery-assisted-passive (i.e., the battery is utilized for power but not for communications) mode to lower the chip sensitivity down to \(-15\) dBm. The antenna is an open-loop (maximum gain: \(-15\) dBi). The board can be read by the portable UHF RFID "USB Plus+" (by ThingMagic) reader, having an embedded antenna and a maximum equivalent isotropic radiated power of \(24\) dBm, resulting in a maximum reading distance of about \(15\) cm with the selected tag [50]. This distance allows the board's wearer to ride the stationary bicycle comfortably while using the tag. Then, the USB Plus+ reader can be connected to a piloting laptop and fixed to the stationary bicycle [Fig. 8(a)]. In this way, the tag-reader link is as short and as secure as possible, and the European regulation on the specific absorption rate is respected as the arrangement is similar to the one described in [73].
### _Use of the PoC Platform_
In this example, the doctor is assumed to prescribe to the AMI survivor \(15\) minutes of exercise on a stationary bike as the daily fitness routine, while monitoring the skin's temperature and the pH of the sweat to avoid respiratory alkalosis. Afterwards, the survivor should walk for an additional \(15\) minutes and, at the end of the routine, check the cortisol level in sweat to verify that it is in the desired range. The doctor can hence check compliance with the fitness routine based on the timestamps, record the medical data, and monitor the overall psycho-physical well-being of the patient through the PoC App.
The radiofrequency boards were attached to the right and left ventral mid forearms of a healthy volunteer simulating the AMI survivor [Fig. 8(b)]. Ventral mid forearms were chosen as the application points of the sensors since they are among the optimal positions for the targeted sweat sensing [74]. Afterwards, the volunteer walked in a park at his usual walking speed [Fig. 8(c)]. All the raw data collected by the laptop and the smartphone were given as input to the PoC App, which post-processed them by discarding incorrect readings and applying a moving average to smooth fluctuations due to displacements of the sensors. Accordingly, pH values of the sweat between \(4.5\) and \(7.0\) were considered correct based on [74], and the averaging window was \(5\) seconds.
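A minimal sketch of this post-processing step is shown below, assuming timestamped samples (in seconds) and a trailing averaging window; the validity range (4.5 to 7.0) and the 5-second window come from the text, while the function and field names are assumptions.

```javascript
// Discard out-of-range pH readings, then smooth with a trailing 5-second moving average.
function postProcessPh(samples, windowSeconds = 5) {
  const valid = samples.filter((s) => s.ph >= 4.5 && s.ph <= 7.0);
  return valid.map((s) => {
    const inWindow = valid.filter((w) => w.time > s.time - windowSeconds && w.time <= s.time);
    const mean = inWindow.reduce((acc, w) => acc + w.ph, 0) / inWindow.length;
    return { time: s.time, ph: mean };
  });
}

// Example: an out-of-range reading such as { time: 1, ph: 9.9 } is dropped before averaging.
```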
The recorded tracks of pH and temperature are drawn in Fig. 9. During the exercise, the skin temperature increased monotonically by about \(0.5\,^{\circ}\)C, as expected, suggesting a continuous and physiological physical effort. pH monitoring started after about \(100\) s of cycling, when the volunteer started sweating and the sensing electrode got wet. The pH value stabilized at \(5.5\) after \(300\) s, confirming that the subject was healthy and that no adverse event occurred during the exercise. After the walk, the cortisol level was checked through chronoamperometry. The current value, which stabilizes after \(130\) s, was obtained by the Chemister app (by Silicon Craft Technology) and then given to the PoC App. After performing the conversion, the measured cortisol in sweat was \(64\) ng/mL (Fig. 10), fully comparable with the physiological cortisol levels of healthy subjects reported in the literature, namely, \(8.16-141.7\) ng/mL post-exercise [75]. Even in this case, the
Fig. 11: FHIR observation of the performed measurements of (a) temperature, (b) pH, and (c) cortisol.
## V Conclusion
In this paper, we proposed a point-of-care platform utilizing UHF RFID and NFC devices for the secure collection and transmission of medical data. The high-level architecture of the platform, the hardware selection, and data security and representation were analyzed. Data from the two technologies are made homogeneous according to the FHIR healthcare standard and are securely shared by a web app utilizing Cloud storage and ABE. An implementation example regarding sensing cortisol, sweat's pH, and skin temperature through the hybrid platform was deployed and tested.
The architecture and the implementation presented above confirm that, thanks to the combination of UHF RFID and NFC, the latest advancements in radiofrequency sensors and healthcare information management can effectively be integrated into fully functioning PoCs that can address the severe challenges faced by healthcare systems worldwide. The investigation proves the concept and the feasibility of this kind of point-of-care, which still needs to be tested in real case studies to precisely quantify the expected benefits. However, using the two radiofrequency identification technologies together is still difficult since it requires a complex system architecture. The development of new sensing dual-tags [76] and the latest generation of platforms embedding both NFC and UHF readers [77] can significantly simplify hardware deployment. Furthermore, the tag-reader links cannot be considered secure if long reading distances are exploited. Recent progress on reconfigurable wearable metasurfaces could help ensure security and privacy [78], for instance, by changing the radiation pattern of the body-worn antenna to hinder eavesdropping.
## Acknowledgments
The authors thank Dr Carolina Miozzi and Ms Adina Bianca Barba (from RADIO6ENSE srl), and Ms Alessia Riente (from the Pervasive Electromagnetics Lab of the Tor Vergata University of Rome) for their valuable help in completing the implementation example.
|
2301.05304 | A characterization of the $L^2$-range of the Poisson transforms on a
class of vector bundles over the quaternionic hyperbolic spaces | We study the $L^2$-boundedness of the Poisson transforms associated to the
homogeneous vector bundles $ Sp(n,1)\times_{Sp(n)\times Sp(1)} V_\tau$ over the
quaternionic hyperbolic spaces $ Sp(n,1)/Sp(n)\times Sp(1)$ associated with
irreducible representations $\tau$ of $ Sp(n)\times Sp(1)$ which are trivial on
$ Sp(n)$. As a consequence, we describe the image of the section space
$L^2(Sp(n,1)\times_{Sp(n)\times Sp(1)} V_\tau)$ under the generalized spectral
projections associated to a family of eigensections of the Casimir operator. | Abdelhamid Boussejra, Achraf Ouald Chaib | 2023-01-12T21:35:56Z | http://arxiv.org/abs/2301.05304v2 | A characterization of the \(L^{2}\)-range of the Poisson transforms on a class of vector bundles over the quaternionic hyperbolic spaces
###### Abstract
We study the \(L^{2}\)-boundedness of the Poisson transforms associated to the homogeneous vector bundles
\(Sp(n,1)\times_{Sp(n)\times Sp(1)}V_{\tau}\) over the quaternionic hyperbolic spaces \(Sp(n,1)/Sp(n)\times Sp(1)\) associated with irreducible representations \(\tau\) of \(Sp(n)\times Sp(1)\) which are trivial on \(Sp(n)\). As a consequence, we describe the image of the section space \(L^{2}(Sp(n,1)\times_{Sp(n)\times Sp(1)}V_{\tau})\) under the generalized spectral projections associated to a family of eigensections of the Casimir operator.
**Keywords**: Vector Poisson transform, Fourier restriction estimate, Strichartz conjecture.
## 1 Introduction
Let \(G\) be a connected real semisimple noncompact Lie group with finite center, and \(K\) a maximal compact subgroup. Then \(X=G/K\) is a Riemannian symmetric space of noncompact type. Let \(G=KAN\) be an Iwasawa decomposition of \(G\), and let \(M\) be the centralizer of \(A\) in \(K\). We write \(g=\kappa(g)\mathrm{e}^{H(g)}n(g)\), for each \(g\in G\) according to \(G=KAN\). A central result in harmonic analysis (see [17]) asserts that all joint eigenfunctions \(F\) of the algebra \(\mathbb{D}(X)\) of invariant differential operators, are Poisson integrals
\[F(g)=\mathcal{P}_{\lambda}f(g):=\int_{K}\mathrm{e}^{(i\lambda+\rho)H(g^{-1}k)} f(k)\,\mathrm{d}k,\]
of a hyperfunction \(f\) on \(K/M\), for a generic \(\lambda\in\mathfrak{a}_{c}^{*}\) (the complexification of \(\mathfrak{a}^{*}\) the real dual of \(\mathfrak{a}\)).
Since then a characterization of the \(L^{p}\)-range of the Poisson transform was developed in several articles such as [3], [5], [6], [7], [15], [20], [21], [22], [24], [25].
The problem of characterizing the image of the Poisson transform \(\mathcal{P}_{\lambda}\) of \(L^{2}(K/M)\) with real and regular spectral parameter \(\lambda\) is intimately related to Strichartz conjecture [[25], Conjecture 4.5] on the uniform \(L^{2}\)-boundedness of the generalized spectral projections associated with \(\mathbb{D}(X)\). To be more specific, consider the generalized spectral projections \(\mathcal{Q}_{\lambda}\) defined initially for \(F\in C_{c}^{\infty}(X)\) by
\[\mathcal{Q}_{\lambda}F(x)=\mid\mathbf{c}(\lambda)\mid^{-2}\mathcal{P}_{\lambda}(\mathcal{F}F(\lambda,.))(x),\quad\lambda\in\mathfrak{a}^{*}, \tag{1.1}\]
where \(\mathcal{F}F\) is the Helgason Fourier transform of \(F\) and \(\mathbf{c}(\lambda)\) is the Harish-Chandra \(c\)-function.
**Conjecture** (Strichartz [[25], Conjecture 4.5]). There exists a positive constant \(C\) such that for any \(F_{\lambda}=\mathcal{Q}_{\lambda}F\) with
\(F\in L^{2}(X)\) we have
\[C^{-1}\parallel F\parallel_{L^{2}(X)}^{2}\leq\sup_{R>0,y\in X}\,\int_{\mathfrak{a} _{+}^{*}}\,\frac{1}{R^{r}}\int_{B(y,R)}\mid F_{\lambda}(x)\mid^{2}\;\mathrm{d}x \,\mathrm{d}\lambda\leq C\parallel F\parallel_{L^{2}(X)}^{2}, \tag{1.2}\]
and
\[\parallel F\parallel_{L^{2}(X)}^{2}=\gamma_{r}\lim_{R\to\infty}\int_{ \mathfrak{a}_{+}^{*}}\frac{1}{R^{r}}\int_{B(y,R)}\mid F_{\lambda}(x)\mid^{2}\; \mathrm{d}x\,\mathrm{d}\lambda. \tag{1.3}\]
Conversely, if \(F_{\lambda}\) is any family of joint eigenfunctions for which the right hand side of (1.2) or (1.3) is finite, then there exists \(F\in L^{2}(X)\) such that \(F_{\lambda}=\mathcal{Q}_{\lambda}F\) for a.e. \(\lambda\in\mathfrak{a}_{+}^{*}\).
Here \(r=\mathit{rank}\,X\), and \(B(y,R)\) denotes the open ball in \(X\) of radius \(R\) about \(y\). The constant \(\gamma_{r}\) depends on the normalizations of the measures \(\mathrm{d}x\) and \(\mathrm{d}\lambda\).
The Strichartz conjecture has been recently settled by Kaizuka, see [16]. Most of the proof consists in proving a uniform estimate for the Poisson transform. More precisely, the following was proved by Kaizuka [[16], Theorem 3.3]:
Let \(F\) be a joint eigenfunction with eigenvalue corresponding to a real and regular spectral parameter \(\lambda\). Then \(F\) is the Poisson transform by \(\mathcal{P}_{\lambda}\) of some \(f\in L^{2}(K/M)\) if and only if
\[\sup_{R>1}\frac{1}{R^{r}}\int_{B(0,R)}\mid F(x)\mid^{2}\;\mathrm{d}x<\infty.\]
Moreover, there exists a positive constant \(C\), independent of such \(\lambda\), such that
\[C^{-1}\mid\mathbf{c}(\lambda)\mid^{2}\parallel f\parallel_{L^{2}(K/M)}^{2} \leq\sup_{R>1}\,\frac{1}{R^{r}}\int_{B(0,R)}\mid\mathcal{P}_{\lambda}f(x)\mid^ {2}\;\mathrm{d}x\leq C\mid\mathbf{c}(\lambda)\mid^{2}\parallel f\parallel_{L^ {2}(K/M)}^{2}.\]
The generalization of these results to the vector bundle setting has only just begun. In [8] we extended Kaizuka's result to homogeneous line bundles over non-compact complex Grassmann manifolds (see also [4]).
Our aim in this paper is to generalize these results to a class of homogeneous vector bundles over the quaternionic hyperbolic space \(G/K\), where \(G\) is the symplectic group \(Sp(n,1)\) with maximal compact subgroup \(K=Sp(n)\times Sp(1)\). To state our results in rough form, let us first introduce the class of homogeneous vector bundles that we consider in this paper. Let \(\tau_{\nu}\) be a unitary irreducible representation of \(Sp(1)\) realized on a \((\nu+1)\)-dimensional Hilbert space \((V_{\nu},(.,.)_{\nu})\). We extend \(\tau_{\nu}\) to a representation of \(K\) by setting \(\tau_{\nu}\equiv 1\) on \(Sp(n)\). As usual the space of sections of the homogeneous vector bundle \(G\times_{K}V_{\nu}\) associated with \(\tau_{\nu}\) will be identified with the space \(\Gamma(G,\tau_{\nu})\) of vector valued functions \(F:G\to V_{\nu}\) which are right \(K\)-covariant of type \(\tau_{\nu}\), i.e.,
\[F(gk)=\tau_{\nu}(k)^{-1}F(g),\quad\forall g\in G,\quad\forall k\in K. \tag{1.4}\]
We denote by \(C^{\infty}(G,\tau_{\nu})\) and \(C^{\infty}_{c}(G,\tau_{\nu})\) the elements of \(\Gamma(G,\tau_{\nu})\) that are respectively smooth, smooth with compact support in \(G\), and by \(L^{2}(G,\tau_{\nu})\) the elements of \(\Gamma(G,\tau_{\nu})\) such that
\[\parallel F\parallel_{L^{2}(G,\tau_{\nu})}=\left(\int_{G/K}\parallel F(g) \parallel_{\nu}^{2}\;\mathrm{d}g_{K}\right)^{\frac{1}{2}}<\infty.\]
In above \(\parallel.\parallel_{\nu}\) is the norm in \(V_{\nu}\) and \(\parallel F(gK)\parallel_{\nu}=\parallel F(g)\parallel_{\nu}\) is well defined for \(F\) satisfying (1.4).
Let \(\sigma_{\nu}\) denote the restriction of \(\tau_{\nu}\) to the group \(M\simeq Sp(n-1)\times Sp(1)\). Over \(K/M\) we have the associated homogeneous vector bundle \(K\times_{M}V_{\nu}\) with \(L^{2}\)-sections identified with \(L^{2}(K,\sigma_{\nu})\) the space of all functions \(f:K\to V_{\nu}\) which are \(M\)-covariant of type \(\sigma_{\nu}\) and satisfy
\[\parallel f\parallel_{L^{2}(K,\sigma_{\nu})}^{2}=\int_{K}\parallel f(k) \parallel_{\nu}^{2}\;\mathrm{d}k<\infty,\]
where \(\mathrm{d}k\) is the normalized Haar measure of \(K\).
For \(\lambda\in\mathbb{C}\) and \(f\in L^{2}(K,\sigma_{\nu})\), the Poisson transform \(\mathcal{P}^{\nu}_{\lambda}f\) is defined by
\[\mathcal{P}^{\nu}_{\lambda}f(g)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(g^{-1}k) }\tau_{\nu}(\kappa(g^{-1}k))f(k)\,\mathrm{d}k\]
Let \(\Omega\) denote the Casimir element of the Lie algebra \(\mathfrak{g}\) of \(G\), viewed as a differential operator acting on \(C^{\infty}(G,\tau)\). Then the image \(\mathcal{P}^{\nu}_{\lambda}(L^{2}(K,\sigma_{\nu}))\) is a proper closed subspace of \(\mathcal{E}_{\lambda}(G,\tau_{\nu})\) the space of all \(F\in C^{\infty}(G,\tau_{\nu})\) satisfying
\[\Omega\,F=-(\lambda^{2}+\rho^{2}-\nu(\nu+2))F.\]
For more details see section 2.
For \(\lambda\in\mathbb{R}\setminus\{0\}\), we define a weighted \(L^{2}\)-space \(\mathcal{E}^{2}_{\lambda}(G,\tau_{\nu})\) consisting of all \(F\) in \(\mathcal{E}_{\lambda}(G,\tau_{\nu})\) that satisfy
\[\parallel F\parallel_{*}=\sup_{R>1}\left(\frac{1}{R}\int_{B(R)}\|F(g)\|_{\nu} ^{2}\,\mathrm{d}g_{K}\right)^{\frac{1}{2}}<\infty.\]
Our first main result is an image characterization of the Poisson transform \(\mathcal{P}^{\nu}_{\lambda}\) of \(L^{2}(K,\sigma_{\nu})\) for \(\lambda\in\mathbb{R}\setminus\{0\}\).
**Theorem 1.1**.: Let \(\lambda\in\mathbb{R}\setminus\{0\}\) and \(\nu\) a nonnegative integer.
1. There exists a positive constant \(C_{\nu}\) independent of \(\lambda\) such that for \(f\in L^{2}(K,\sigma_{\nu})\) we have \[C_{\nu}^{-1}\mid\mathbf{c}_{\nu}(\lambda)\mid\,\|f\|_{L^{2}(K,\sigma_{\nu})}\leq\|\mathcal{P}^{\nu}_{\lambda}f\|_{*}\leq C_{\nu}\mid\mathbf{c}_{\nu}(\lambda)\mid\,\|f\|_{L^{2}(K,\sigma_{\nu})},\] (1.5) with \[\mathbf{c}_{\nu}(\lambda)=2^{\rho-i\lambda}\frac{\Gamma(\rho-1)\Gamma(i\lambda)}{\Gamma(\frac{i\lambda+\rho+\nu}{2})\Gamma(\frac{i\lambda+\rho-\nu-2}{2})}.\] Furthermore we have the following Plancherel type formula for the Poisson transform \[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\|\mathcal{P}^{\nu}_{\lambda}f(g)\|_{\nu}^{2}\,\mathrm{d}g_{K}=2\mid\mathbf{c}_{\nu}(\lambda)\mid^{2}\|f\|_{L^{2}(K,\sigma_{\nu})}^{2}\,.\] (1.6)
2. \(\mathcal{P}^{\nu}_{\lambda}\) is a topological isomorphism from \(L^{2}(K,\sigma_{\nu})\) onto \(\mathcal{E}^{2}_{\lambda}(G,\tau_{\nu})\).
This generalizes the result of Kaizuka [[16], (i) and (ii) in Theorem 3.3] which corresponds to \(\tau_{\nu}\) trivial.
**Consequence**
For \(\lambda\in\mathbb{R}\) we define the space
\[\mathcal{E}^{*}_{\lambda}(G,\tau_{\nu})=\{F\in\mathcal{E}_{\lambda}(G,\tau_{ \nu}):M(F)<\infty\},\]
where
\[M(F)=\lim\sup_{R\to\infty}\left(\frac{1}{R}\int_{B(R)}\parallel F(g)\parallel_{ \nu}^{2}\,\mathrm{d}g_{K}\right)^{\frac{1}{2}}.\]
Then as an immediate consequence of Theorem 1.1 we obtain the following result which generalizes a conjecture of W. Bray [10] which corresponds to \(\tau_{\nu}\) trivial.
**Corollary 1.1**.: If \(\lambda\in\mathbb{R}\setminus\{0\}\) then \((\mathcal{E}^{*}_{\lambda}(G,\tau_{\nu}),M)\) is a Banach space.
**Remark 1.1**.: In the case of the trivial bundle (the scalar case) the conjecture of Bray was proved by Ionescu [15] for all rank one symmetric spaces. It was generalized to Riemannian symmetric spaces of higher rank by Kaizuka, see [16].
Next, let us introduce our second main result on the \(L^{2}\)-range of the generalized spectral projections.
For \(F\in C^{\infty}_{c}(G,\tau_{\nu})\) the vector valued Helgason-Fourier transform \({\cal F}_{\nu}F\) is given by (see [11])
\[{\cal F}_{\nu}\,F(\lambda,k)=\int_{G}{\rm e}^{(i\lambda-\rho)H(g^{-1}k)}\tau_{ \nu}(\kappa(g^{-1}k)^{-1})F(g)\,{\rm d}g\quad\lambda\in\mathbb{C},\]
Then the following inversion formula holds (see section 4)
\[\begin{split} F(g)=\frac{1}{2\pi}&\int_{0}^{\infty }\int_{K}{\rm e}^{-(i\lambda+\rho)H(g^{-1}k)}\tau_{\nu}(\kappa(g^{-1}k)){\cal F }_{\nu}F(\lambda,k)\,\mid{\bf c}_{\nu}(\lambda)\mid^{-2}\,{\rm d}\lambda\,{ \rm d}k\\ &\quad+\sum_{\lambda_{j}\in D_{\nu}}d_{\nu}(\lambda_{j})\int_{K} {\rm e}^{-(i\lambda_{j}+\rho)H(g^{-1}k)}\tau_{\nu}(\kappa(g^{-1}k)){\cal F}_{ \nu}F(\lambda_{j},k)\,{\rm d}k.\end{split} \tag{1.7}\]
In above \(d_{\nu}(\lambda)=-iRes_{\mu=\lambda}({\bf c}_{\nu}(\mu){\bf c}_{\nu}(-\mu))^{- 1},\lambda\in D_{\nu}\) and \(D_{\nu}\) is a finite set in \(\{\lambda\in\mathbb{C};\Im(\lambda)>0\}\) which parametrizes the \(\tau_{\nu}\)-spherical functions arising from the discrete series of \(G\). It is empty if \(\nu\leq\rho-2\).
The formula (1.7) gives rise to the decomposition of \(L^{2}(G,\tau_{\nu})\) into a continuous part and a discrete part:
\[L^{2}(G,\tau_{\nu})=L^{2}_{cont}(G,\tau_{\nu})\oplus L^{2}_{disc}(G,\tau_{\nu})\]
Our aim here is to study the operator \({\cal Q}^{\nu}_{\lambda}\), \(\lambda\in\mathbb{R}\), defined for \(F\in L^{2}_{cont}(G,\tau_{\nu})\cap C^{\infty}_{c}(G,\tau_{\nu})\) by
\[{\cal Q}^{\nu}_{\lambda}F(g)=\mid{\bf c}_{\nu}(\lambda)\mid^{-2}{\cal P}^{\nu}_{\lambda}[{\cal F}_{\nu}\,F(\lambda,.)](g). \tag{1.8}\]
More precisely, following Strichartz idea, we are interested in the following question:
Characterize those \(F_{\lambda}\in{\cal E}_{\lambda}(G,\tau_{\nu})\) (\(\lambda\in(0,\infty)\)) for which there exists \(F\in L^{2}_{cont}(G,\tau_{\nu})\) such that \(F_{\lambda}={\cal Q}^{\nu}_{\lambda}F\).
To do so, we introduce the space \({\cal E}^{2}_{+}(G,\tau_{\nu})\) consisting of all \(V_{\tau_{\nu}}\)-valued measurable functions \(\psi\) on \((0,\infty)\times G\) such that
* \(\Omega\,\psi(\lambda,.)=-(\lambda^{2}+\rho^{2}-\nu(\nu+2))\,\psi(\lambda,.)\) a.e. \(\lambda\in(0,\infty)\)
* \(\parallel\psi\parallel_{+}<\infty\).
where
\[\parallel\psi\parallel_{+}^{2}=\sup_{R>1}\int_{0}^{\infty}\frac{1}{R}\int_{B(R )}\parallel\psi(\lambda,g)\parallel_{\nu}^{2}\,{\rm d}g_{K}\,{\rm d}\lambda.\]
The second main result we prove in this paper can be stated as follows
**Theorem 1.2**.:
* There exists a positive constant \(C\) such that for \(F\in L^{2}(G,\tau_{\nu})\) we have \[C^{-1}\parallel F\parallel_{L^{2}(G,\tau_{\nu})}\leq\parallel{\cal Q}^{\nu}_{ \lambda}F\parallel_{+}\leq C\parallel F\parallel_{L^{2}(G,\tau_{\nu})}\] (1.9) Furthermore we have \[\lim_{R\to\infty}\int_{0}^{\infty}\frac{1}{R}\int_{B(R)}\parallel{\cal Q}^{\nu }_{\lambda}F(g)\parallel_{\nu}^{2}\,{\rm d}g_{K}\,{\rm d}\lambda=2\parallel F \parallel_{L^{2}(G,\tau_{\nu})}^{2}\] (1.10)
* The linear map \({\cal Q}^{\nu}_{\lambda}\) is a topological isomorphism from \(L^{2}_{cont}(G,\tau_{\nu})\) onto \({\cal E}^{2}_{+}(G,\tau_{\nu})\).
This extends Kaizuka's result [[16], (i) and (ii) in Theorem 3.6] on the Strichartz conjecture (see [[25], Conjecture 4.5]) to the class of vector bundles considered here.
Before giving the outline of the paper, let us mention that a number of authors have obtained an image characterization for the Poisson transform \({\cal P}_{\lambda}\) (\(\lambda\in{\mathfrak{a}}^{*}\setminus\{0\}\)) of \(L^{2}\)-functions on \(K/M\) in the rank one case, see [[3], [5], [7], [15]]. Nevertheless, the obtained characterization is weaker than the one conjectured by Strichartz. The approach taken in
the quoted papers is based on the theory of Calderon-Zygmund singular integrals (see also [21]). Using a different approach based on the techniques used in the scattering theory, Kaizuka [16] settled the Strichartz conjecture on Riemannian symmetric spaces of noncompact type, of arbitrary rank.
We now describe the contents of this paper. The proofs of our results are a generalisation of Kaizuka's method [16]. In section 2 we recall some basic facts on the quaternionic hyperbolic spaces and introduce the vector Poisson transforms. In section 3, we define the Helgason-Fourier transform on the vector bundles \(G\times_{K}V_{\nu}\) and give the inversion and Plancherel theorems. The proof of Theorem 1.2 follows from the Plancherel formula and Theorem 1.1. The main ingredients in proving Theorem 1.1 are a Fourier restriction estimate for the vector valued Helgason-Fourier transform (Proposition 4.1 in section 4) and an asymptotic formula for the vector Poisson transform in the framework of Agmon-Hormander spaces [2] (Theorem 5.1). The proof of Theorem 5.1 will be derived from the Key Lemma of this paper, giving the asymptotic behaviour of the translates of the \(\tau_{\nu}\)-spherical functions. Section 6 is devoted to the proof of our main results. In section 7 we prove the Key Lemma.
## 2 Preliminaries
### The quaternionic hyperbolic space
Let \(G=Sp(n,1)\) be the group of all linear transformations of the right \(\mathbb{H}\)-vector space \(\mathbb{H}^{n+1}\) which preserve the quadratic form \(\sum_{j=1}^{n}\mid u_{j}\mid^{2}-\mid u_{n+1}\mid^{2}\). Let \(K=Sp(n)\times Sp(1)\) be the subgroup of \(G\) consisting of pairs \((a,d)\) of unitaries. Then \(K\) is a maximal compact subgroup of \(G\). The quaternionic hyperbolic space is the rank one symmetric space \(G/K\) of the noncompact type. It can be realized as the unit ball \(\mathbb{B}(\mathbb{H}^{n})=\{x\in\mathbb{H}^{n};\mid x\mid<1\}\).
The group \(G\) acts on \(\mathbb{B}(\mathbb{H}^{n})\) by the fractional linear mappings \(x\mapsto g.x=(ax+b)(cx+d)^{-1}\), if \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\), with \(a\in\mathbb{H}^{n\times n},b\in\mathbb{H}^{n\times 1},c\in\mathbb{H}^{1 \times n}\) and \(d\in\mathbb{H}\).
Denote by \(\mathfrak{g}\) the Lie algebra of \(G\); \(\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\) the Cartan decomposition of \(\mathfrak{g}\), where \(\mathfrak{p}\) is a vector space of matrices of the form \(\left\{\begin{pmatrix}0&x\\ x^{*}&0\end{pmatrix},x\in\mathbb{H}^{n}\right\}\), and \(\mathfrak{k}=\left\{\begin{pmatrix}X&0\\ 0&q\end{pmatrix},X^{*}+X=0,q+\overline{q}=0\right\}\), where \(X^{*}\) is the conjugate transpose of the matrix \(X\) and \(q\in\mathbb{H}\).
Let \(H=\begin{pmatrix}0_{n}&e_{1}\\ {}^{t}e_{1}&0\end{pmatrix}\in\mathfrak{p}\) with \({}^{t}e_{1}=(1,0,\cdots,0)\). Then \(\mathfrak{a}=\mathbb{R}\,H\) is a Cartan subspace in \(\mathfrak{p}\), and the corresponding analytic subgroup is \(A=\{a_{t}=\exp t\,H;t\in\mathbb{R}\}\), where \(a_{t}=\begin{pmatrix}\cosh t&0&\sinh t\\ 0&I_{n-1}&0\\ \sinh t&0&\cosh t\end{pmatrix}.\) With \(A\) determined we then have that
\[M=\left\{g=\begin{pmatrix}q&0&0\\ 0&m&0\\ 0&0&q\end{pmatrix},m\in Sp(n-1),\mid q\mid=1\right\}\simeq Sp(n-1)\times Sp(1).\]
Let \(\alpha\in\mathfrak{a}^{*}\) be defined by \(\alpha(H)=1\). Then a system \(\Sigma\) of restricted roots of the pair \((\mathfrak{g},\mathfrak{a})\) is \(\Sigma=\{\pm\alpha,\pm 2\alpha\}\) if \(n\geq 2\) and \(\Sigma=\{\pm 2\alpha\}\) if \(n=1\), with Weyl group \(W\simeq\{\pm Id\}\). A positive subsystem of roots corresponding to the positive Weyl chamber \(\mathfrak{a}^{+}\simeq(0,\infty)\) in \(\mathfrak{a}\) is \(\Sigma^{+}=\{\alpha,2\alpha\}\) if \(n\geq 2\) and \(\Sigma^{+}=\{2\alpha\}\) if \(n=1\).
Let \(\mathfrak{n}=\mathfrak{g}_{\alpha}+\mathfrak{g}_{2\alpha}\) be the direct sum of the positive root subspaces, with \(\dim\mathfrak{g}_{\alpha}=4(n-1)\) and \(\dim\mathfrak{g}_{2\alpha}=3\), and \(N\) the corresponding analytic subgroup of \(G\). Then the half sum of the positive restricted roots counted with multiplicities, \(\rho\), equals \((2n+1)\alpha\), and shall be viewed as a real number \(\rho=2n+1\) by the identification \(\mathfrak{a}_{c}^{*}\simeq\mathbb{C}\) via \(\lambda\alpha\leftrightarrow\lambda\).
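As a quick check, the stated value of \(\rho\) follows directly from the multiplicities given above:
\[\rho=\frac{1}{2}\left(\dim\mathfrak{g}_{\alpha}+2\dim\mathfrak{g}_{2\alpha}\right)\alpha=\frac{1}{2}\left(4(n-1)+2\cdot 3\right)\alpha=(2n+1)\alpha.\]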
Let \(\overline{A^{+}}=\{a_{t}\in A;\quad t\geq 0\}\). Then we have the Cartan decomposition \(G=K\overline{A^{+}}K\), that is any \(g\in G\) can be written \(g=k_{1}(g)\,\mathrm{e}^{A^{+}(g)}\,k_{2}(g),\quad k_{1}(g),k_{2}(g)\in K\) and \(A^{+}(g)\in\overline{\mathfrak{a}^{+}}\).
If we write \(g\in G\) in \((n+1)\times(n+1)\) block notation as \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\). Then a straightforward computation gives
\[\cosh A^{+}(g)=\mid d\mid\quad\text{\it and}\quad H(g)=\log\mid ce_{1}+d\mid. \tag{2.1}\]
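For instance, taking \(g=a_{t}\) with \(t\geq 0\) in (2.1), the block entries give \(d=\cosh t\) and \(ce_{1}=\sinh t\), so that
\[\cosh A^{+}(a_{t})=\cosh t\quad\text{and}\quad H(a_{t})=\log(\sinh t+\cosh t)=t,\]
as expected.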
We normalize the invariant measure \(\mathrm{d}g_{K}\) on \(G/K\) so that the following integral formula holds: for all \(h\in L^{1}(G/K)\),
\[\int_{G/K}h(gK)\mathrm{d}g_{K}=\int_{G}h(g.0)\mathrm{d}g=\int_{K}\int_{0}^{ \infty}h(k\,a_{t})\Delta(t)\,\mathrm{d}k\,\mathrm{d}t, \tag{2.2}\]
where \(\mathrm{d}t\) is the Lebesgue measure, \(\Delta(t)=(2\sinh t)^{4n-1}(2\cosh t)^{3}\), and \(\mathrm{d}k\) is the Haar measure of \(K\) with \(\int_{K}\mathrm{d}k=1\).
### The vector Poisson transform
In this subsection we define the Poisson transform associated to the vector bundles \(G\times_{K}V_{\nu}\) over \(Sp(n,1)/Sp(n)\times Sp(1)\) and derive some results referring to [23], [27], and [28] for more informations on the subject.
Let \(\sigma_{\nu}\) denote the restriction of \(\tau_{\nu}\) to \(M\). For \(\lambda\in\mathbb{C}\) we consider the representation \(\sigma_{\nu,\lambda}\) of \(P=MAN\) on \(V_{\nu}\) defined by \(\sigma_{\nu,\lambda}(man)=a^{\rho-i\lambda}\sigma_{\nu}(m)\). Then \(\sigma_{\nu,\lambda}\) defines a principal series representations of \(G\) on the Hilbert space
\[H^{\nu,\lambda}:=\{f:G\to V_{\nu}\mid f(gman)=\sigma_{\nu,\lambda}^{-1}(man)f(g)\,\forall man\in MAN,f_{\mid K}\in L^{2}\},\]
where \(G\) acts by the left regular representation. We shall denote by \(C^{-\omega}(G,\sigma_{\nu,\lambda})\) the space of its hyperfunctions vectors. By the Iwasawa decomposition, the restriction map from \(G\) to \(K\) gives an isomorphism from \(H^{\nu,\lambda}\) onto the space \(L^{2}(K,\sigma_{\nu})\). This yields, the so-called compact picture of \(H^{\nu,\lambda}\), with the group action given by
\[\pi_{\sigma_{\nu},\lambda}(g)f(k)=\mathrm{e}^{(i\lambda-\rho)H(g^{-1}k)}f( \kappa(g^{-1}k)).\]
By \(C^{-\omega}(K,\sigma_{\nu})\) we denote the space of its hyperfunctions vectors.
A Poisson transform is the continuous, linear, \(G\)-equivariant map \(\mathcal{P}_{\lambda}^{\nu}\) from \(C^{-\omega}(G,\sigma_{\nu,\lambda})\) to \(C^{\infty}(G,\tau_{\nu})\) defined by
\[\mathcal{P}_{\lambda}^{\nu}\,f(g)=\int_{K}\tau_{\nu}(k)f(gk)\,\mathrm{d}k.\]
In the compact picture the Poisson transform is given by
\[\mathcal{P}_{\lambda}^{\nu}\,f(g)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(g^{-1 }k)}\tau_{\nu}(\kappa(g^{-1}k))\,f(k)\,\mathrm{d}k.\]
Let \(\mathbb{D}(G,\tau_{\nu})\) denote the algebra of left invariant differential operators on \(C^{\infty}(G,\tau_{\nu})\). Let \(\mathcal{E}_{\nu,\lambda}(G)\) be the space of all \(F\in C^{\infty}(G,\tau_{\nu})\) such that \(\Omega\,F=-(\lambda^{2}+\rho^{2}-\nu(\nu+2))\,F\).
**Proposition 2.1**.: (i) _\(\mathbb{D}(G,\tau_{\nu})\) is the algebra generated by the Casimir operator \(\Omega\) of \(\mathfrak{g}\)._
(ii) For \(\lambda\in\mathbb{C},\nu\in\mathbb{N}\), the Poisson transform \(\mathcal{P}_{\lambda}^{\nu}\) maps \(C^{-\omega}(G,\sigma_{\nu,\lambda})\) to \(\mathcal{E}_{\nu,\lambda}(G)\).
Proof.: (i) Let \(U(\mathfrak{a})\) be the universal enveloping algebra of the complexification of \(\mathfrak{a}\). Since the restriction of \(\tau_{\nu}\) to \(M\) is irreducible, then \(\mathbb{D}(G,\tau_{\nu})\simeq U(\mathfrak{a})^{W}\). As \(\mathfrak{a}\) is one dimensional, then \(\mathbb{D}(G,\tau_{\nu})\simeq\mathbb{C}[s^{2}]\), symmetric functions of one variable. Thus \(\mathbb{D}(G,\tau_{\nu})\) is generated by the Casimir element \(\Omega\) of the Lie algebra \(\mathfrak{g}\) of \(G\), viewed as a differential operator acting on \(C^{\infty}(G,\tau_{\nu})\).
(ii) Since \(\sigma_{\nu}\) is irreducible, the image of \(\mathcal{P}_{\lambda}^{\nu}\) consists of joint eigenfunctions with respect to the action of \(\Omega\). Moreover \(\Omega\) acts by the infinitesimal character of the principal series representations \(\pi_{\sigma_{\nu},\lambda}\). It follows from Proposition 8.22 and Lemma 12.28 in [18] that
\[\pi_{\sigma_{\nu},\lambda}(\Omega)=-(\lambda^{2}+\rho^{2}-c(\sigma_{\nu}))Id \quad\text{\it on}\quad C^{-\omega}(G,\sigma_{\nu,\lambda}), \tag{2.3}\]
where \(c(\sigma_{\nu})\) is the Casimir value of \(\sigma_{\nu}\) given by \(c(\sigma_{\nu})=\nu(\nu+2)\).
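This value is consistent with the usual Casimir eigenvalue of \(Sp(1)\simeq SU(2)\): the \((\nu+1)\)-dimensional representation \(\tau_{\nu}\) has highest weight \(\nu\) (spin \(\nu/2\)), and with the normalization used here
\[c(\sigma_{\nu})=4\cdot\frac{\nu}{2}\left(\frac{\nu}{2}+1\right)=\nu(\nu+2).\]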
Let \(\Phi_{\nu,\lambda}\) be the \(\tau_{\nu}\)-spherical function associated to \(\sigma_{\nu}\). Then \(\Phi_{\nu,\lambda}\) admits the following Eisenstein integral representation (see [[11], Lemma 3.2]):
\[\Phi_{\nu,\lambda}(g)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(g^{-1}k)}\tau_{\nu }(\kappa(g^{-1}k)k^{-1})\,\mathrm{d}k.\]
Note that \(\Phi_{\nu,\lambda}\) lies in \(C^{\infty}(G,\tau_{\nu},\tau_{\nu})\) the space of smooth functions \(F:G\to End(V_{\tau_{\nu}})\) satisfying
\[F(k_{1}gk_{2})=\tau_{\nu}(k_{2}^{-1})F(g)\tau_{\nu}(k_{1}^{-1}),\]
the so called \(\tau_{\nu}\)-radial functions. Being \(\tau_{\nu}\)-radial, \(\Phi_{\nu,\lambda}\) is completely determined by its restriction to \(A\), by the Cartan decomposition \(G=KAK\). Moreover, since \(\sigma_{\nu}\) is irreducible, it follows that \(\Phi_{\nu,\lambda}(a_{t})\in End_{M}(V_{\nu})\simeq\mathbb{C}Id_{V_{\nu}},\,\forall a_{t}\in A\). Therefore there exists \(\varphi_{\nu,\lambda}:\mathbb{R}\to\mathbb{C}\) such that \(\Phi_{\nu,\lambda}(a_{t})=\varphi_{\nu,\lambda}(t).Id_{V_{\nu}}\). We have
\[\varphi_{\nu,\lambda}(t)=\frac{1}{\nu+1}\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(a_{t}^{-1}k)}\chi_{\nu}(\kappa(a_{t}^{-1}k)k^{-1})\,\mathrm{d}k, \tag{2.4}\]
where \(\chi_{\nu}\) is the character of \(\tau_{\nu}\).
This so-called trace \(\tau_{\nu}\)-spherical function has been computed explicitly in [12] using the radial part of the Casimir operator \(\Omega\) (see also [26] ). We have \(\varphi_{\nu,\lambda}(t)=(\cosh t)^{\nu}\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\), where \(\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\) is the Jacobi function (cf. [19])
\[\phi_{\lambda}^{(\rho-2,\nu+1)}(t)=\,_{2}F_{1}(\frac{i\lambda+ \rho+\nu}{2},\frac{-i\lambda+\rho+\nu}{2};\rho-1;-\sinh^{2}t).\]
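In particular, evaluating at \(t=0\), both the hypergeometric series and the factor \((\cosh t)^{\nu}\) equal \(1\), so that
\[\varphi_{\nu,\lambda}(0)={}_{2}F_{1}\left(\frac{i\lambda+\rho+\nu}{2},\frac{-i\lambda+\rho+\nu}{2};\rho-1;0\right)=1,\]
in agreement with \(\Phi_{\nu,\lambda}(e)=Id_{V_{\nu}}\).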
We deduce from (A4) the asymptotic behaviour of \(\varphi_{\nu,\lambda}\)
\[\varphi_{\nu,\lambda}(t)=\mathrm{e}^{(i\lambda-\rho)t}[\mathbf{c}_{\nu}(\lambda)+o(1)],\quad\text{as}\quad t\to\infty,\quad\text{if}\quad\Im(\lambda)<0, \tag{2.5}\]
where
\[\mathbf{c}_{\nu}(\lambda)=\frac{2^{\rho-i\lambda}\Gamma(\rho-1) \Gamma(i\lambda)}{\Gamma(\frac{i\lambda+\rho+\nu}{2})\Gamma(\frac{i\lambda+ \rho-\nu-2}{2})}. \tag{2.6}\]
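As a quick consistency check (a remark added here, using only the relation \(\varphi_{\nu,\lambda}(t)=(\cosh t)^{\nu}\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\) above and the Jacobi asymptotics (A4) recalled in the Appendix), one may compare (2.6) with the Jacobi \(\mathbf{c}\)-function: for \(\Im(\lambda)<0\),
\[\mathrm{e}^{(\rho-i\lambda)t}\varphi_{\nu,\lambda}(t)=(\mathrm{e}^{-t}\cosh t)^{\nu}\,\mathrm{e}^{(\rho+\nu-i\lambda)t}\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\longrightarrow 2^{-\nu}\,\mathbf{c}_{\rho-2,\nu+1}(\lambda)\quad\text{as}\quad t\to\infty,\]
so that \(\mathbf{c}_{\nu}(\lambda)=2^{-\nu}\mathbf{c}_{\rho-2,\nu+1}(\lambda)\); substituting the explicit formula for \(\mathbf{c}_{\alpha,\beta}\) from the Appendix with \(\alpha=\rho-2\), \(\beta=\nu+1\) (so \(\rho_{\alpha,\beta}=\rho+\nu\)) recovers exactly (2.6).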
For \(\lambda\in\mathbb{C}\) the \(\mathbf{c}\)-function of Harish-Chandra associated to \(\tau_{\nu}\) is defined by
\[\mathbf{c}(\tau_{\nu},\lambda)=\int_{\overline{N}}\mathrm{e}^{-(i \lambda+\rho)H(\overline{n})}\tau_{\nu}(\kappa(\overline{n}))\,\mathrm{d} \overline{n}.\]
The integral converges for \(\lambda\) such that \(\Re(i\lambda)>0\) and it has a meromorphic continuation to \(\mathbb{C}\).
In the above, \(\mathrm{d}\overline{n}\) is the Haar measure of \(\overline{N}=\theta(N)\), \(\theta\) being the Cartan involution.
We may use formula (2.6) to give \(\mathbf{c}(\tau_{\nu},\lambda)\) explicitly. Indeed, one easily checks that \(\mathbf{c}(\tau_{\nu},\lambda)\in End_{M}(V_{\nu})=\mathbb{C}Id_{V_{\nu}}\). Then using the following result on the behaviour of \(\Phi_{\nu,\lambda}(a_{t})\) ([28], Proposition 2.4)
\[\Phi_{\nu,\lambda}(a_{t})=\mathrm{e}^{(i\lambda-\rho)t}(\mathbf{c}(\tau_{\nu},\lambda)+o(1))\quad\text{as}\quad t\to\infty,\]
together with \(\Phi_{\nu,\lambda}(a_{t})=\varphi_{\nu,\lambda}(t).Id\), we find then from (2.5) that \(\mathbf{c}(\tau_{\nu},\lambda)=\mathbf{c}_{\nu}(\lambda)Id_{V_{\nu}}\).
We end this section by recalling a result of Olbrich [23] on the range of the Poisson transform on vector bundles which reads in our case as follows
**Theorem 2.1**.: [23] Let \(\nu\in\mathbb{N}\) and \(\lambda\in\mathbb{C}\) such that
* \(-2i\lambda\notin\mathbb{N}\)
* \(i\lambda+\rho\notin(-2\mathbb{N}-\nu)\cup(-2\mathbb{N}+\nu+2)\).
Then the Poisson transform \(\mathcal{P}_{\lambda}^{\nu}\) is a \(K\)-isomorphism from \(C^{-\omega}(K,\sigma_{\nu})\) onto \(\mathcal{E}_{\nu,\lambda}(G)\).
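For later use (a simple observation, spelled out here, which is what allows Theorem 2.1 to be invoked in the proof of Theorem 1.1 in Section 6), note that every \(\lambda\in\mathbb{R}\setminus\{0\}\) satisfies both hypotheses:
\[-2i\lambda\in i\mathbb{R}\setminus\{0\}\ \Longrightarrow\ -2i\lambda\notin\mathbb{N},\qquad\Im(i\lambda+\rho)=\lambda\neq 0\ \Longrightarrow\ i\lambda+\rho\notin(-2\mathbb{N}-\nu)\cup(-2\mathbb{N}+\nu+2).\]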
## 3 The vector-valued Helgason-Fourier transform
In this section we give the inversion and the Plancherel formulas for the Helgason-Fourier transform on the vector bundle \(G\times_{K}V_{\nu}\).
According to [11] the vector-valued Helgason-Fourier transform of \(f\in C_{c}^{\infty}(G,\tau_{\nu})\) is the \(V_{\nu}\)-valued function on \(\mathbb{C}\times K\) defined by:
\[\mathcal{F}_{\nu}f(\lambda,k)=\int_{G}e_{\lambda,\nu}(k^{-1}g)\,f(g)\mathrm{d}g,\]
where \(e_{\lambda,\nu}\) is the vector valued function \(e_{\lambda,\nu}:G\to End(V_{\nu})\) given by
\[e_{\lambda,\nu}(g)=\mathrm{e}^{(i\lambda-\rho)H(g^{-1})}\tau_{\nu}^{-1}(\kappa (g^{-1})).\]
Notice that our sign convention for \(\lambda\) is the opposite of the one in [11].
In order to state the next theorem, we introduce the finite set in \(\{\lambda,\Im(\lambda)\geq 0\}\)
\[D_{\nu}=\{\lambda_{j}=i(\nu-\rho+2-2j),j=0,1,\cdots,\nu-\rho+2-2j>0\}.\]
Note that \(D_{\nu}\) is empty if \(\nu\leq\rho-2\). It parametrizes the discrete series representations of \(G\) containing \(\tau_{\nu}\), see [12].
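For instance, assuming (as is the case for \(G=Sp(n,1)\) considered in this paper) that \(\rho\) is a positive integer, take \(\nu=\rho+2\); then \(\nu-\rho+2-2j=4-2j\) and
\[D_{\rho+2}=\{\lambda_{0}=4i,\ \lambda_{1}=2i\},\]
while \(D_{\nu}=\emptyset\) whenever \(\nu\leq\rho-2\), in accordance with the remark above.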
Let
\[d_{\nu}(\lambda_{j})=\frac{2^{-2(\rho-\nu-1)}(\nu-\rho-2j+2)(\rho-2+j)!(\nu-j )!}{\Gamma^{2}(\rho-1)j!(\nu-\rho-j+2)!},\quad\lambda_{j}\in D_{\nu}\]
For \(\lambda_{j}\in D_{\nu}\), we define the operators \(\mathcal{Q}_{j}^{\nu}\)
\[L^{2}(G,\tau_{\nu})\to\mathcal{E}_{\nu,\lambda_{j}}(G,\tau_{\nu})\] \[F\mapsto d_{\nu}(\lambda_{j})\,\Phi_{\nu,\lambda_{j}}*F\]
We denote the image by \(A_{j}^{2}\). We set
\[L^{2}_{disc}(G,\tau_{\nu})=\bigoplus_{j;\,\nu-\rho+2-2j>0}A_{j}^{2},\]
and denote by \(L^{2}_{cont}(G,\tau_{\nu})\) its orthocomplement. Let \(L^{2}_{\sigma_{\nu}}(\mathbb{R}^{+}\times K,|\;\mathbf{c}_{\nu}(\lambda)\;|^{- 2}\;\mathrm{d}\lambda\,\mathrm{d}k)\) be the space of vector functions \(\phi:\mathbb{R}^{+}\times K\to V_{\nu}\) satisfying
* For each fixed \(\lambda,\phi(\lambda,km)=\sigma_{\nu}(m)^{-1}\phi(\lambda,k),\forall m\in M\)
* \(\int_{\mathbb{R}^{+}\times K}\parallel\phi(\lambda,k)\parallel_{\nu}^{2}|\; \mathbf{c}_{\nu}(\lambda)\;|^{-2}\;\mathrm{d}\lambda\,\mathrm{d}k<\infty\).
**Theorem 3.1**.: (i) For \(F\in C_{c}^{\infty}(G,\tau_{\nu})\) we have the following inversion and Plancherel formulas
\[F(g)=\frac{1}{2\pi}\int_{0}^{\infty}\int_{K}e_{\lambda,\nu}^{*}(k^{-1}g) \mathcal{F}_{\nu}F(\lambda,k)\;\mid\mathbf{c}_{\nu}(\lambda)\;|^{-2}\;\mathrm{ d}\lambda\,\mathrm{d}k+\sum_{\lambda_{j}\in D_{\nu}}d_{\nu}(\lambda_{j})\int_{K}e_{ \lambda_{j},\nu}^{*}(k^{-1}g)\mathcal{F}_{\nu}F(\lambda_{j},k)\,\mathrm{d}k, \tag{3.1}\]
\[\int_{G}\parallel F(g)\parallel_{\nu}^{2}\;\mathrm{d}g_{K}=\frac{1}{2\pi}\int _{0}^{\infty}\int_{K}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2} \mid\mathbf{c}_{\nu}(\lambda)\;|^{-2}\;\mathrm{d}\lambda\,\mathrm{d}k+\sum_{ \lambda_{j}\in D_{\nu}}d_{\nu}(\lambda_{j})\int_{K}<\mathcal{F}_{\nu}F(\lambda _{j},k),\mathcal{F}_{\nu}F(-\lambda_{j},k)>_{\nu}\;\mathrm{d}k \tag{3.2}\]
(ii) The Fourier transform \(\mathcal{F}_{\nu}\) extends to an isometry from \(L^{2}_{cont}(G,\tau_{\nu})\) onto the space \(L^{2}_{\sigma_{\nu}}(\mathbb{R}^{+}\times K,|\;\mathbf{c}_{\nu}(\lambda)\;|^{- 2}\;\mathrm{d}\lambda\,\mathrm{d}k)\).
The first part of Theorem 3.1 can be easily deduced from the inversion and Plancherel formulas for the spherical transform.
Let \(C^{\infty}_{c}(G,\tau_{\nu},\tau_{\nu})\) denote the space of smooth compactly supported \(\tau_{\nu}\)-radial functions. The spherical transform of \(F\in C^{\infty}_{c}(G,\tau_{\nu},\tau_{\nu})\) is the \(\mathbb{C}\)-valued function \(\mathcal{H}_{\nu}F\) defined by:
\[\mathcal{H}_{\nu}F(\lambda)=\frac{1}{\nu+1}\int_{G}Tr[\Phi_{\nu,\lambda}(g^{-1})F(g)]\,\mathrm{d}g,\quad\lambda\in\mathbb{C}.\]
The inversion and the Plancherel formulas for the \(\tau\)-spherical transform have been given explicitly in [12]. For the convenience of the reader we give an elementary proof by using the Jacobi transform.
**Theorem 3.2**.: For \(F\in C^{\infty}_{c}(G,\tau_{\nu},\tau_{\nu})\) we have the following inversion and Plancherel formulas
\[F(g)=\frac{1}{2\pi}\int_{0}^{+\infty}\Phi_{\nu,\lambda}(g)\mathcal{H}_{\nu}F( \lambda)\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}\lambda+\sum_{ \lambda_{j}\in D_{\nu}}\Phi_{\nu,\lambda_{j}}(g)\mathcal{H}_{\nu}f(\lambda_{j })\,d_{\nu}(\lambda_{j}), \tag{3.3}\]
\[\int_{G}\parallel F(g)\parallel_{HS}^{2}\mathrm{d}g=\frac{\nu+1}{2\pi}\int_{0}^{+\infty}\mid\mathcal{H}_{\nu}F(\lambda)\mid^{2}\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}\lambda+(\nu+1)\sum_{\lambda_{j}\in D_{\nu}}d_{\nu}(\lambda_{j})\mid\mathcal{H}_{\nu}F(\lambda_{j})\mid^{2}. \tag{3.4}\]
In the above, \(\parallel\cdot\parallel_{HS}\) stands for the Hilbert-Schmidt norm.
Proof.: Let \(F\in C^{\infty}_{c}(G,\tau_{\nu},\tau_{\nu})\) and let \(f_{\nu}\) be its scalar component. Using the integral formula (2.2), the identity \(\Phi_{\nu,\lambda}(a_{t})=\Phi_{\nu,\lambda}(a_{-t})=(\cosh t)^{\nu}\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\) and the fact that \(\Delta(t)=(2\cosh t)^{-2\nu}\Delta_{\rho-2,\nu+1}(t)\), we have
\[\begin{split}\mathcal{H}_{\nu}F(\lambda)&=\int_{0 }^{\infty}f_{\nu}(t)(\cosh t)^{\nu}\phi_{\lambda}^{(\rho-2,\nu+1)}(t)\,\Delta( t)\,\mathrm{d}t\\ &=\int_{0}^{\infty}f_{\nu}(t)(2^{2}\cosh t)^{-\nu}\phi_{ \lambda}^{(\rho-2,\nu+1)}(t)\,\Delta_{\rho-2,\nu+1}(t)\,\mathrm{d}t.\end{split} \tag{3.5}\]
Thus the \(\tau_{\nu}\)-spherical transform \(\mathcal{H}_{\nu}F\) may be written in terms of the Jacobi transform \(\mathcal{J}^{\alpha,\beta}\), with \(\alpha=\rho-2\) and \(\beta=\nu+1\). Namely, we have
\[\mathcal{H}_{\nu}F(\lambda)=\mathcal{J}^{\rho-2,\nu+1}[(2^{2}\cosh t)^{-\nu}f _{\nu}](\lambda).\]
We refer to (A5) in the Appendix for the definition of the Jacobi transform.
Now the theorem follows from the inversion and the Plancherel formulas for the Jacobi transform (A6), (A6') and (A7) in the Appendix.
For the proof of the surjectivity statement in Theorem 3.1 we shall need the following result
**Proposition 3.1**.: Let \(F\in C^{\infty}_{c}(G,\tau_{\nu})\) and \(\Phi\in C^{\infty}(G,\tau_{\nu},\tau_{\nu})\). Then we have
\[\mathcal{F}_{\nu}(F*\Phi)(\lambda,k)=\mathcal{H}_{\nu}\Phi(\lambda)\mathcal{F }_{\nu}F(\lambda,k),\quad\lambda\in\mathbb{C},k\in K,\]
where the convolution is defined by
\[(\Phi*F)(g)=\int_{G}\Phi(x^{-1}g)F(x)\,\mathrm{d}x.\]
Proof.: Let \(\Phi\in C^{\infty}(G,\tau_{\nu},\tau_{\nu})\), \(v\in V_{\nu}\), and set \(F_{v}=\Phi(.)v\). Then we have the following relation between the Fourier transform and the spherical transform
\[\mathcal{F}_{\nu}F_{v}(\lambda,k)=\mathcal{H}_{\nu}\Phi(\lambda)\tau_{\nu}(k^{-1})v. \tag{3.6}\]
By definition
\[\mathcal{F}_{\nu}(F*\Phi)(\lambda,k) =\int_{G}\int_{G}e^{\nu}_{\lambda}(k^{-1}g)\Phi(x^{-1}g)F(x) \mathrm{d}x\mathrm{d}g\] \[=\int_{G}\mathrm{d}x\int_{G}e^{\nu}_{\lambda}(k^{-1}xy)\Phi(y)F(x) \mathrm{d}y\]
Using the following cocycle relations for the Iwasawa function \(H(x)\)
\[H(xy)=H(x\kappa(y))+H(y),\]
and
\[\kappa(xy)=\kappa(x\kappa(y)),\]
for all \(x,y\in G\), we get the following identity
\[e_{\lambda}^{\nu}(k^{-1}xy)=\mathrm{e}^{(i\lambda-\rho)H(x^{-1}k)}e_{\lambda} ^{\nu}(\kappa^{-1}(x^{-1}k)y),\]
from which we obtain
\[\mathcal{F}_{\nu}(\Phi*F)(\lambda,k)=\int_{G}\mathrm{e}^{(i\lambda-\rho)H(x^{ -1}k)}\left(\int_{G}e_{\lambda,\nu}(\kappa^{-1}(x^{-1}k)y)\Phi(y)F(x)\,\mathrm{ d}y\right)\mathrm{d}x.\]
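The identity used above is a direct consequence of the two cocycle relations together with the definition of \(e_{\lambda,\nu}\) (we spell out the short verification): since \((k^{-1}xy)^{-1}=y^{-1}x^{-1}k\),
\[H(y^{-1}x^{-1}k)=H\big{(}y^{-1}\kappa(x^{-1}k)\big{)}+H(x^{-1}k),\qquad\kappa(y^{-1}x^{-1}k)=\kappa\big{(}y^{-1}\kappa(x^{-1}k)\big{)},\]
whence
\[e_{\lambda,\nu}(k^{-1}xy)=\mathrm{e}^{(i\lambda-\rho)H(y^{-1}x^{-1}k)}\tau_{\nu}^{-1}(\kappa(y^{-1}x^{-1}k))=\mathrm{e}^{(i\lambda-\rho)H(x^{-1}k)}\,e_{\lambda,\nu}\big{(}\kappa^{-1}(x^{-1}k)\,y\big{)}.\]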
Next, put \(h_{v}(y)=\Phi(y)v,v\in V_{\tau_{\nu}}\). Then (3.6) implies
\[\int_{G}e_{\lambda,\nu}(\kappa^{-1}(x^{-1}k)y)\Phi(y)F(x)\, \mathrm{d}y =\mathcal{F}_{\nu}(h_{F(x)})(\lambda,\kappa^{-1}(x^{-1}k))\] \[=\mathcal{H}(\Phi)(\lambda)\tau_{\nu}(\kappa^{-1}(x^{-1}k))F(x),\]
from which we deduce
\[\mathcal{F}_{\nu}(\Phi*F)(\lambda,k)=\mathcal{H}(\Phi)(\lambda)\int_{G} \mathrm{e}^{(i\lambda-\rho)H(x^{-1}k)}\tau_{\nu}(\kappa^{-1}(x^{-1}k))F(x) \mathrm{d}x,\]
and the proposition follows.
We now come to the proof of Theorem 3.1.
Proof.: (i) We may follow the same method as in [11] to prove the inversion formula (3.1) and the Plancherel formula (3.2) from Theorem 3.2. We give an outline of the proof.
Let \(F\in C_{c}^{\infty}(G,\tau_{\nu})\) and consider the \(\tau_{\nu}\)-radial function defined for any \(g\in G\) by
\[F_{g,v}(x).w=\int_{K}<\tau_{\nu}(k)w,v>_{\nu}F(gkx)\,\mathrm{d}k,\]
\(v\) being a fixed vector in \(V_{\nu}\). Then a straightforward calculation shows that
\[\mathcal{H}_{\nu}F_{g,v}(\lambda)=\frac{1}{\nu+1}<(\Phi_{\nu, \lambda}*F)(g),v>_{\nu}.\]
The inversion formula for the spherical transform together with \(TrF_{g,v}(e)=<F(g),v>_{\nu}\) imply
\[F(g)=\frac{1}{2\pi}\int_{0}^{\infty}(\Phi_{\nu,\lambda}*F)(g)\mid\mathbf{c}_{ \nu}(\lambda)\mid^{-2}\,\mathrm{d}\lambda+\sum_{\lambda_{j}\in D_{\nu}}(\Phi_ {\nu,\lambda_{j}}*F)(g)d_{\nu}(\lambda_{j}).\]
To conclude, use the following result for the translated spherical function (see [11], Proposition 3.3)
\[\Phi_{\nu,\lambda}(x^{-1}y)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(y^{-1}k)}\mathrm{e}^{(i\lambda-\rho)H(x^{-1}k)}\tau_{\nu}(\kappa(y^{-1}k))\tau_{\nu}(\kappa^{-1}(x^{-1}k))\,\mathrm{d}k, \tag{3.7}\]
to get
\[(\Phi_{\nu,\lambda}*F)(g)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(g^{-1}k)}\tau _{\nu}(\kappa(g^{-1}k))\mathcal{F}_{\nu}F(\lambda,k)\,\mathrm{d}k,\]
and the inversion formula (3.1) follows.
The proof of the Plancherel formula (3.2) is essentially the same as in the scalar case, so we omit it.
Note that, as a consequence of the Plancherel formula (the discrete series terms do not contribute in this case), we have
\[\int_{G}\parallel F(g)\parallel_{\nu}^{2}\;\mathrm{d}g_{K}=\frac{1}{\pi}\int_{0 }^{\infty}\int_{K}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2} \!\!\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}\lambda\,\mathrm{d}k,\]
for every \(F\in L^{2}_{\text{\it cont}}(G,\tau_{\nu})\).
(ii) We prove the surjectivity statement. Suppose that there exists a function \(f\) in \(L^{2}_{\sigma_{\nu}}(\mathbb{R}^{+}\times K,\mid\mathbf{c}_{\nu}(\lambda)\mid^ {-2}\mathrm{d}\lambda\,\mathrm{d}k)\) such that
\[\int_{0}^{\infty}\int_{K}<f(\lambda,k),\mathcal{F}_{\nu}F(\lambda,k)>\mid \mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}\lambda\,\mathrm{d}k=0\]
for all \(F\in C^{\infty}_{c}(G,\tau_{\nu})\). Changing \(F\) into \(F\ast\Phi\) where \(\Phi\in C^{\infty}(G,\tau_{\nu},\tau_{\nu})\) and using Proposition 3.1, we have
\[\int_{0}^{\infty}\int_{K}<f(\lambda,k),\mathcal{F}_{\nu}F(\lambda,k)>\,\mathcal{H}_{\nu}\Phi(\lambda)\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}\lambda\,\mathrm{d}k=0\]
By the Stone-Weierstrass theorem, the algebra \(\{\mathcal{H}_{\nu}\Phi,\Phi\in C^{\infty}(G,\tau_{\nu},\tau_{\nu})\}\) is dense in the space of even continuous functions on \(\mathbb{R}\) vanishing at infinity. Therefore for every \(F\in C^{\infty}_{c}(G,\tau_{\nu})\) there is a set \(E_{F}\) of measure zero in \(\mathbb{R}\) such that
\[\int_{K}<f(\lambda,k),\mathcal{F}_{\nu}F(\lambda,k)>\mathrm{d}k=0\]
for all \(\lambda\) not in \(E_{F}\). The rest of the proof is based on an adaptation of the arguments given in [14] Theorem 1.5, for the scalar case, and the proof of Theorem 3.1 is completed.
## 4 Fourier restriction estimate
The main result of this section is the following uniform continuity estimate for the Fourier-Helgason restriction operator.
**Proposition 4.1**.: Let \(\nu\in\mathbb{N}\). There exists a positive constant \(C_{\nu}\) such that for \(\lambda\in\mathbb{R}\backslash\{0\}\) and \(R>1\), we have
\[\bigg{(}\int_{K}\parallel\mathcal{F}_{\nu}F(\lambda,k)\|_{\nu}^{2}dk\bigg{)}^{ 1/2}\leq C_{\nu}|c_{\nu}(\lambda)|R^{1/2}\bigg{(}\int_{G/K}\|F(g)\|_{\nu}^{2} \mathrm{d}g_{K}\bigg{)}^{1/2}, \tag{4.1}\]
for every \(F\in L^{2}(G,\tau_{\nu})\) with \(\text{supp}F\subset B(R)\).
To prove this result we shall need estimates of the Harish-Chandra \(c\)-function. To this end we introduce the function \(\mathbf{b}_{\nu}(\lambda)\) defined on \(\mathbb{R}\) by
\[\mathbf{b}_{\nu}(\lambda)=\begin{cases}\mathbf{c}_{\nu}(\lambda)&\text{if} \quad\frac{\nu-\rho+2}{2}\in\mathbb{Z}^{+}\\ \lambda\,\mathbf{c}_{\nu}(\lambda)&\text{if}\quad\frac{\nu-\rho+2}{2}\notin \mathbb{Z}^{+}\end{cases}\]
**Lemma 4.1**.: Assume \(\nu>\rho-2\).
1. The function \(\mathbf{b}_{\nu}(\lambda)\) has no zero in \(\mathbb{R}\).
2. There exists a positive constant \(C\) such that for \(\lambda\in\mathbb{R}\), we have \[C^{-1}(1+\lambda^{2})^{\frac{2\rho-4-\varepsilon(\nu)}{4}}\leq\mid\mathbf{b}_ {\nu}(\lambda)\mid^{-1}\leq C(1+\lambda^{2})^{\frac{2\rho-4-\varepsilon(\nu)}{4 }},\] (4.2)
with \(\varepsilon(\nu)=\pm 1\) according to \(\frac{\nu-\rho+2}{2}\notin\mathbb{Z}^{+}\) or \(\frac{\nu-\rho+2}{2}\in\mathbb{Z}^{+}\)
_Proof._ (i) If \(\frac{\nu-\rho+2}{2}\notin\mathbb{Z}^{+}\), then \(\mathbf{b}_{\nu}(\lambda)=\lambda\,\mathbf{c}_{\nu}(\lambda)=\frac{-i\,2^{\rho-i\lambda}\Gamma(\rho-1)\Gamma(i\lambda+1)}{\Gamma(\frac{i\lambda+\rho+\nu}{2})\Gamma(\frac{i\lambda+\rho-\nu-2}{2})}\), and clearly \(\mathbf{b}_{\nu}(\lambda)\) has no zero on \(\mathbb{R}\). If \(\frac{\nu-\rho+2}{2}\in\mathbb{Z}^{+}\) then \(\mathbf{b}_{\nu}(\lambda)\) a priori can have a zero and a pole at \(\lambda=0\). This is not the case, since
\[\lim_{\lambda\to 0}\mathbf{b}_{\nu}(\lambda)=(-1)^{\frac{\nu-\rho+2}{2}} \frac{2^{\rho+\nu}\Gamma(\rho-1)(\frac{\nu-\rho+2}{2})!}{\Gamma(\frac{\rho+ \nu}{2})}.\]
(ii) To prove the estimate (4.2) we shall use the following property of the \(\Gamma\)-function
\[\lim_{|z|\to\infty}\frac{\Gamma(z+a)}{\Gamma(z)}z^{-a}=1,\qquad|\arg(z)|<\pi-\delta, \tag{4.3}\]
where \(a\) is any complex number, \(\delta>0\), and \(z^{-a}\) is defined using the principal value of the logarithm.
Assume first that \(\frac{\nu-\rho+2}{2}\notin\mathbb{Z}^{+}\). Using the duplication formula for the Gamma function
\[\Gamma(2z)=\frac{2^{2z-1}}{\sqrt{\pi}}\Gamma(z)\Gamma(z+\frac{1}{2}),\]
we rewrite \(\mathbf{b}_{\nu}(\lambda)\) as
\[\mathbf{b}_{\nu}(\lambda)=\frac{-i\,2^{\rho}\Gamma(\rho-1)}{\sqrt{\pi}}\,\frac{\Gamma(\frac{i\lambda+1}{2})\Gamma(\frac{i\lambda+2}{2})}{\Gamma(\frac{i\lambda+\rho+\nu}{2})\Gamma(\frac{i\lambda+\rho-\nu-2}{2})}.\]
It follows from (4.3) that for every \(\lambda\in\mathbb{R}\), we have
\[|\ \mathbf{b}_{\nu}(\lambda)\ |\leq C(1+\lambda^{2})^{-\frac{2\rho-5}{4}}\]
and
\[|\ \mathbf{b}_{\nu}(\lambda)\ |^{-1}\leq C(1+\lambda^{2})^{\frac{2\rho-5}{4}}.\]
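In more detail (a routine application of (4.3); the precise constant plays no role in the estimate), as \(|\lambda|\to\infty\),
\[\Big{|}\frac{\Gamma(\frac{i\lambda+1}{2})\Gamma(\frac{i\lambda+2}{2})}{\Gamma(\frac{i\lambda+\rho+\nu}{2})\Gamma(\frac{i\lambda+\rho-\nu-2}{2})}\Big{|}\asymp\Big{|}\frac{i\lambda}{2}\Big{|}^{\frac{(1+2)-(\rho+\nu)-(\rho-\nu-2)}{2}}=\Big{|}\frac{\lambda}{2}\Big{|}^{\frac{5-2\rho}{2}},\]
and since \(\mathbf{b}_{\nu}\) is continuous and nonvanishing on \(\mathbb{R}\) by part (i), the two-sided bound \(\mid\mathbf{b}_{\nu}(\lambda)\mid\asymp(1+\lambda^{2})^{-\frac{2\rho-5}{4}}\) follows.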
The proof for the case \(\frac{\nu-\rho+2}{2}\in\mathbb{Z}^{+}\) follows the same line as in the case \(\frac{\nu-\rho+2}{2}\notin\mathbb{Z}^{+}\), so we omit it.
This finishes the proof of the Lemma.
Let us recall from [1] an auxiliary lemma which will be useful for the proof of Proposition 4.1.
Let \(\eta\) be a positive Schwartz function on \(\mathbb{R}\) whose Fourier transform has a compact support. For \(m\in\mathbb{R}\), set
\[\eta_{m}(x)=\int_{\mathbb{R}}\eta(t)(1+|t-x|)^{m/2}\,\mathrm{d}t.\]
**Lemma 4.2**.:
1. \(\eta_{m}\) is a positive \(C^{\infty}\)-function with \[C^{-1}(1+t^{2})^{\frac{m}{2}}\leq\eta_{m}(t)\leq C(1+t^{2})^{\frac{m}{2}},\] (4.4) for some positive constant \(C\).
2. The Fourier transform of \(\eta_{m}\) has a compact support.
In order to prove the Fourier restriction theorem, we need to introduce the bundle valued Radon transform, see [9] for more information.
The Radon transform for \(F\in C_{c}^{\infty}(G,\tau_{\nu})\) is defined by
\[\mathcal{R}F(g)=e^{\rho H(g)}\int_{N}F(gn)dn.\]
We set \(\mathcal{R}F(t,k)=\mathcal{R}F(ka_{t})\). Then, using the Iwasawa decomposition \(G=NAK\), we may rewrite the Helgason-Fourier transform as
\[\mathcal{F}_{\nu}F(\lambda,k)=\mathcal{F}_{\mathbb{R}}(\mathcal{R}F(\cdot,k))( \lambda),\]
where
\[\mathcal{F}_{\mathbb{R}}\phi(\lambda)=\int_{\mathbb{R}}e^{-i\lambda t}\phi(t) \,\mathrm{d}t,\]
is the Euclidean Fourier transform of \(\phi\) a \(V_{\nu}\)-valued smooth function with compact support in \(\mathbb{R}\).
We define on \(\mathfrak{p}\) the scalar product \(<X,Y>=\frac{1}{2}Tr(XY)\) and denote by \(|\cdot|\) the corresponding norm. It induces a distance function \(d\) on \(G/K\). By the Cartan decomposition \(G=K\exp\mathfrak{p}\), any \(g\in G\) may be written uniquely as \(g=k\exp X\), so that \(d(0,gK)=\mid X\mid\). Define the open ball centred at \(0\) and of radius \(R\) by \(B(R)=\{gK\in G/K;\quad d(0,gK)<R\}\).
**Lemma 4.3**.: Let \(F\in C_{0}^{\infty}(G,\tau_{\nu})\). If \(\mathit{supp}\,F\subset\overline{B(R)}\), then \(\mathit{supp}\,\mathcal{R}F\subset[-R,R]\times K\).
Proof.: As (see [13, page 476])
\[d(0,k\mathrm{e}^{tH}nK)\geq\mid t\mid,\quad k\in K,n\in N,t\in\mathbb{R}\]
it follows that \(\mathit{supp}\,\mathcal{R}F\subset[-R,R]\times K\) if \(\mathit{supp}\,F\subset\overline{B(R)}\)
**Proof of Proposition 4.1.** It suffices to prove the estimate (4.1) for functions \(F\in C_{c}^{\infty}(G,\tau_{\nu})\) supported in \(B(R)\). It follows from the Plancherel formula (3.2) that
\[\int_{B(R)}\parallel F(g)\parallel_{\nu}^{2}\,\,\mathrm{d}g_{K}\geq\int_{K} \int_{\mathbb{R}}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2} \mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\,\mathrm{d}\lambda\,\mathrm{d}k\]
Therefore it is sufficient to show
\[\int_{K}\int_{\mathbb{R}}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2}\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\,\mathrm{d}\lambda\,\mathrm{d}k\geq C\,\frac{\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}}{R}\int_{K}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2}\,\,\mathrm{d}k, \tag{4.5}\]
for some positive constant \(C\).
By (4.2) we have \(\mid\mathbf{c}_{\nu}(\lambda)\mid^{-1}\asymp\eta_{\frac{2\mu-3}{2}}(\lambda).\) Therefore (4.5) is equivalent to
\[\frac{\eta_{\frac{2\mu-3}{2}}(\lambda)}{R}\int_{K}\parallel\mathcal{F}_{\nu}F (\lambda,k)\parallel_{\nu}^{2}\,\,\mathrm{d}k\leq\int_{K}\int_{\mathbb{R}} \parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2}\,\,\eta_{\frac{2\mu -3}{2}}(\lambda)\mathrm{d}\lambda\,\mathrm{d}k \tag{4.6}\]
Let \(T\) be the tempered distribution on \(\mathbb{R}\) defined by \(T:=\mathcal{F}_{\mathbb{R}}^{-1}\eta_{\frac{2\mu-3}{2}}\). By Lemma 4.2, \(T\) is compactly supported. Let \(R_{0}>1\) such that \(\mathit{supp}\,T\subset[-R_{0},R_{0}]\). Then (4.6) is equivalent to
\[\int_{K}\parallel\mathcal{F}_{\mathbb{R}}(T*\mathcal{R}F(.\,,k))(\lambda)\parallel_{\nu}^{2}\,\,\mathrm{d}k\leq CR\int_{K}\int_{\mathbb{R}}\parallel\mathcal{F}_{\mathbb{R}}(T*\mathcal{R}F(.\,,k))(\lambda)\parallel_{\nu}^{2}\,\,\mathrm{d}\lambda\,\mathrm{d}k, \tag{4.7}\]
where \(*\) denotes the convolution on \(\mathbb{R}\).
From \(\mathit{supp}T\subset[-R_{0},R_{0}]\) and Lemma 4.3, it follows that for any \(k\in K\), \(\mathit{supp}\,(T*\mathcal{R}F(.\,,k))\subset[-(R+R_{0}),R+R_{0}]\). Thus
\[\int_{K}\parallel\mathcal{F}_{\mathbb{R}}(T*\mathcal{R}F(.\,,k))(\lambda)\parallel_{\nu}^{2}\,\,\mathrm{d}k\leq 2(R+R_{0})\int_{K}\int_{\mathbb{R}}\parallel(T*\mathcal{R}F(.\,,k))(t)\parallel_{\nu}^{2}\,\,\mathrm{d}t\,\mathrm{d}k\]
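The last inequality can be seen as follows (we make this elementary step explicit): if \(\phi\) is a \(V_{\nu}\)-valued function supported in \([-L,L]\) with \(L=R+R_{0}\), then by the Cauchy-Schwarz inequality
\[\parallel\mathcal{F}_{\mathbb{R}}\phi(\lambda)\parallel_{\nu}^{2}=\Big{\|}\int_{-L}^{L}\mathrm{e}^{-i\lambda t}\phi(t)\,\mathrm{d}t\Big{\|}_{\nu}^{2}\leq 2L\int_{\mathbb{R}}\parallel\phi(t)\parallel_{\nu}^{2}\,\mathrm{d}t,\]
applied to \(\phi=T*\mathcal{R}F(.\,,k)\) for each \(k\in K\).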
Next use the Euclidean Plancherel formula to get (4.7), and the proof is finished.
As a consequence of Proposition 4.1, we obtain the uniform continuity estimate for the Poisson transform \(\mathcal{P}_{\lambda}^{\nu}\).
**Corollary 4.1**.: Let \(\nu\in\mathbb{N}\). There exists a positive constant \(C_{\nu}\) such that for \(\lambda\in\mathbb{R}\backslash\{0\}\), we have
\[\sup_{R>1}\left(\frac{1}{R}\int_{B(R)}\parallel\mathcal{P}_{\lambda}^{\nu}f(g) \parallel_{\nu}^{2}\,\mathrm{d}g_{K}\right)^{1/2}\leq C_{\nu}\left|c_{\nu}( \lambda)\right|\parallel f\parallel_{L^{2}(K,\sigma_{\nu})} \tag{4.8}\]
for every \(f\in L^{2}(K,\sigma_{\nu})\).
Proof.: Let \(F\in L^{2}(G,\tau_{\nu})\) with \(\mathit{supp}\,F\subset B(R)\), and let \(f\in L^{2}(K,\sigma_{\nu})\). Since \(\lambda\) is real and \(\tau_{\nu}\) is unitary, the Poisson transform and the restriction Fourier transform are related by the following formula
\[\int_{B(R)}<\mathcal{P}_{\lambda}^{\nu}f(g),F(g)>_{\nu}\mathrm{d}g_{K}=\int_{K }<f(k),\mathcal{F}_{\nu}F(\lambda,k)>_{\nu}\mathrm{d}k.\]
Thus
\[|\int_{B(R)}<\mathcal{P}_{\lambda}^{\nu}f(g),F(g)>_{\nu}\mathrm{ d}g_{K} | \leq\|f\|_{L^{2}(K,\sigma_{\nu})}(\int_{K}\parallel\mathcal{F}_{\nu }F(\lambda,k)\parallel_{\nu}^{2}\mathrm{d}k)^{\frac{1}{2}}\] \[\leq C_{\nu}|c_{\nu}(\lambda)|R^{1/2}\parallel f\parallel_{L^{2}( K,\sigma_{\nu})}\parallel F\parallel_{L^{2}(G,\tau_{\nu})},\]
by the restriction Fourier theorem. Taking the supremum over all \(F\) with \(\parallel F\parallel_{L^{2}(G,\tau_{\nu})}=1\), the corollary follows.
## 5 Asymptotic expansion for the Poisson transform
In this section we give an asymptotic expansion for the Poisson transform. We first start by establishing some intermediate results.
Let \(L^{2}_{\lambda}(K,\sigma_{\nu})\) denote the finite linear span of the functions
\[f^{g}_{\lambda,\nu}:k\longmapsto f^{g}_{\lambda,\nu}(k)=\mathrm{e}^{(i\lambda- \rho)H(g^{-1}k)}\tau_{\nu}^{-1}(\kappa(g^{-1}k))v,\quad g\in G,v\in V_{\nu}.\]
**Lemma 5.1**.: For \(\lambda\in\mathbb{R}\setminus\{0\},\nu\in\mathbb{N}\) the space \(L^{2}_{\lambda}(K,\sigma_{\nu})\) is a dense subspace of \(L^{2}(K,\sigma_{\nu})\).
Proof.: As \(\lambda\in\mathbb{R}\setminus\{0\}\), the density is just a reformulation of the injectivity of the Poisson transform \(\mathcal{P}_{\nu,\lambda}\).
**Lemma 5.2**.: Let \(\lambda\in\mathbb{R}\setminus\{0\},\nu\in\mathbb{N}\). Then there exists a unique unitary isomorphism \(U^{\nu}_{\lambda}\) on \(L^{2}(K,\sigma_{\nu})\) such that :
\[U^{\nu}_{\lambda}\,f^{g}_{\lambda,\nu}=f^{g}_{-\lambda,\nu},\quad g\in G.\]
Moreover, for \(f_{1},f_{2}\in L^{2}(K,\sigma_{\nu})\), we have \(\mathcal{P}_{\lambda}^{\nu}f_{1}=\mathcal{P}_{-\lambda}^{\nu}f_{2}\) if and only if \(U^{\nu}_{\lambda}f_{1}=f_{2}\) (i.e. \(U^{\nu}_{\lambda}=(\mathcal{P}_{-\lambda}^{\nu})^{-1}\circ\mathcal{P}_{\lambda}^{\nu}\)).
Proof.: The proof is the same as in [17] (see also Lemma 5.2 in[8]) so we omit it.
We now introduce the function space \(B^{*}(G,\tau_{\nu})\) on G, consisting of functions \(F\) in \(L^{2}_{loc}(G,\tau_{\nu})\) satisfying
\[\parallel F\parallel_{B^{*}(G,\tau_{\nu})}=\sup_{j\in\mathbb{N}}\Big[2^{-\frac{j}{2}}\Big(\int_{A_{j}}\parallel F(g)\parallel_{\nu}^{2}\,\mathrm{d}g_{K}\Big)^{1/2}\Big]<\infty,\]
where \(A_{0}=\{g\in G;d(0,g.0)<1\}\) and \(A_{j}=\{g\in G;2^{j-1}\leq d(0,g.0)<2^{j}\}\), for \(j\geq 1\).
One could easily show that \(\parallel F\parallel_{B^{*}(G,\tau_{\nu})}\leq\parallel F\parallel_{*}\leq 2 \parallel F\parallel_{B^{*}(G,\tau_{\nu})}\).
We define an equivalence relation on \(B^{*}(G,\tau_{\nu})\): for \(F_{1},F_{2}\in B^{*}(G,\tau_{\nu})\) we write \(F_{1}\simeq F_{2}\) if
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\parallel F_{1}(g)-F_{2}(g)\parallel_ {\nu}^{2}\,\mathrm{d}g_{K}=0.\]
Note that by using the polar decomposition we see that \(F_{1}\simeq F_{2}\) if
\[\lim_{R\to+\infty}\frac{1}{R}\int_{0}^{R}\int_{K}\parallel F_{1}(k\mathrm{e}^{ tH})-F_{2}(k\mathrm{e}^{tH}))\parallel_{\nu}^{2}\,\mathrm{d}k\,\Delta(t) \mathrm{d}t\ =0.\]
We now state the main result of this section
**Theorem 5.1**.: Let \(\nu\in\mathbb{N},\lambda\in\mathbb{R}\setminus\{0\}\). For \(f\in L^{2}(K,\sigma_{\nu})\) we have the following asymptotic expansions for the Poisson transform in \(B^{*}(G,\tau_{\nu})\)
\[\mathcal{P}^{\nu}_{\lambda}f(x)\simeq\tau_{\nu}^{-1}(k_{2}(x))[\mathbf{c}_{\nu}(\lambda)\mathrm{e}^{(i\lambda-\rho)A^{+}(x)}f(k_{1}(x))+\mathbf{c}_{\nu}(-\lambda)\mathrm{e}^{(-i\lambda-\rho)A^{+}(x)}U^{\nu}_{\lambda}f(k_{1}(x))], \tag{5.1}\]
where \(x=k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x)\).
Most of the proof of the above theorem consists in proving the following Key Lemma, giving the asymptotic expansion for the translates of the \(\tau_{\nu}\)-spherical function.
**KEY LEMMA.** For \(\lambda\in\mathbb{R}\setminus\{0\},g\in G\) and \(v\in V_{\nu}\), we have the following asymptotic expansion in \(B^{*}(G,\tau_{\nu})\)
\[\Phi_{\nu,\lambda}(g^{-1}x).\,v\simeq\tau_{\nu}^{-1}(k_{2}(x))\sum_{s\in\{ \pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(x)}f^{g}_{s \lambda,v}(k_{1}(x)),\]
\(x=k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x)\).
**Proof of Theorem 5.1.** We first note that both sides of (5.1) depend continuously on \(f\in L^{2}(K,\sigma_{\nu})\). This can be proved in the same manner as in [8]. Therefore we only have to prove that the asymptotic expansion (5.1) holds for \(f\in L^{2}_{\lambda}(K,\sigma_{\nu})\). Let \(f=f^{g}_{\lambda,v}\). Then according to [11, Proposition 3.3], we have
\[\mathcal{P}^{\nu}_{\lambda}f(x)=\Phi_{\nu,\lambda}(g^{-1}x)v.\]
The theorem follows from the Key lemma.
As a consequence of Theorem 5.1 we obtain the following result giving the behaviour of the Poisson integrals.
**Proposition 5.1**.:
1. For any \(f\in L^{2}(K,\sigma_{\nu})\) we have the Plancherel-Poisson formula \[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\parallel\mathcal{P}^{\nu}_{\lambda}f( g)\parallel^{2}_{\nu}\;\mathrm{d}g_{K}=2\mid\mathbf{c}_{\nu}(\lambda)\mid^{2} \parallel f\parallel^{2}_{L^{2}(K,\sigma_{\nu})}\] (5.2)
2. Let \(\nu\in\mathbb{N}\). There exists a positive constant \(C_{\nu}\) such that for any \(\lambda\in\mathbb{R}\setminus\{0\}\), we have \[C^{-1}_{\nu}\mid\mathbf{c}_{\nu}(\lambda)\mid\parallel f\parallel_{L^{2}(K,\sigma_{\nu})}\leq\parallel\mathcal{P}^{\nu}_{\lambda}f\parallel_{*}\leq C_{\nu}\mid\mathbf{c}_{\nu}(\lambda)\mid\parallel f\parallel_{L^{2}(K,\sigma_{\nu})},\] (5.3) for every \(f\in L^{2}(K,\sigma_{\nu})\).
_Proof._ 1. We define for \(f\in L^{2}(K,\sigma_{\nu})\)
\[S^{\nu}_{\lambda}f(x):=\tau_{\nu}^{-1}(k_{2}(x))[\mathbf{c}_{\nu}(\lambda)\mathrm{e}^{(i\lambda-\rho)A^{+}(x)}f(k_{1}(x))+\mathbf{c}_{\nu}(-\lambda)\mathrm{e}^{(-i\lambda-\rho)A^{+}(x)}U^{\nu}_{\lambda}f(k_{1}(x))],\]
\(x=k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x)\).
By the unitarity of \(U^{\nu}_{\lambda}\), we have
\[\frac{1}{R}\int_{B(R)}\|S^{\nu}_{\lambda}f(g)\|^{2}dg_{K}=2|\mathbf{c}_{\nu}(\lambda)|^{2}\|f\|^{2}_{L^{2}(K,\sigma_{\nu})}\left(\frac{1}{R}\int_{0}^{R}\mathrm{e}^{-2\rho t}\Delta(t)\mathrm{d}t\right)\] \[+2|\mathbf{c}_{\nu}(\lambda)|^{2}\Re\left(<f,U^{\nu}_{\lambda}f>_{L^{2}(K,\sigma_{\nu})}\frac{1}{R}\int_{0}^{R}e^{2(i\lambda-\rho)t}\Delta(t)dt\right).\]
From \(\lim_{R\to+\infty}\frac{1}{R}\int_{0}^{R}\mathrm{e}^{-2\rho t}\Delta(t) \mathrm{d}t=1\), and \(\lim_{R\to+\infty}\frac{1}{R}\int_{0}^{R}e^{2(i\lambda-\rho)t}\Delta(t) \mathrm{d}t=0\), we deduce that
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\parallel S^{\nu}_{\lambda}f(g) \parallel^{2}_{\nu}\mathrm{d}g_{K}=2\mid\mathbf{c}_{\nu}(\lambda)\mid^{2}\parallel f \parallel^{2}_{L^{2}(K,\sigma_{\nu})}. \tag{5.4}\]
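Both limits used above can be verified directly (a short check based only on formulas already recorded in this paper). Combining \(\Delta_{\alpha,\beta}(t)=(2\sinh t)^{2\alpha+1}(2\cosh t)^{2\beta+1}\) from the Appendix with the relation \(\Delta(t)=(2\cosh t)^{-2\nu}\Delta_{\rho-2,\nu+1}(t)\) used in Section 3 gives
\[\Delta(t)=(2\sinh t)^{2\rho-3}(2\cosh t)^{3}=\mathrm{e}^{2\rho t}(1-\mathrm{e}^{-2t})^{2\rho-3}(1+\mathrm{e}^{-2t})^{3},\]
so, since \(2\rho-3>0\) in our rank-one setting, \(\mathrm{e}^{-2\rho t}\Delta(t)\) is bounded (in particular \(\Delta(t)\leq 2^{3}\mathrm{e}^{2\rho t}\), as used in Section 7) and tends to \(1\); hence its Cesàro means \(\frac{1}{R}\int_{0}^{R}\mathrm{e}^{-2\rho t}\Delta(t)\,\mathrm{d}t\) tend to \(1\). Writing \(\mathrm{e}^{-2\rho t}\Delta(t)=1+\varepsilon(t)\) with \(\varepsilon\) bounded and \(\varepsilon(t)\to 0\), we also get, for \(\lambda\neq 0\),
\[\frac{1}{R}\int_{0}^{R}\mathrm{e}^{2(i\lambda-\rho)t}\Delta(t)\,\mathrm{d}t=\frac{\mathrm{e}^{2i\lambda R}-1}{2i\lambda R}+\frac{1}{R}\int_{0}^{R}\mathrm{e}^{2i\lambda t}\varepsilon(t)\,\mathrm{d}t\longrightarrow 0.\]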
Next write
\[\frac{1}{R}\int_{B(R)}\parallel\mathcal{P}_{\lambda}^{\nu}f(g) \parallel_{\nu}^{2}dg_{K} =\frac{1}{R}\int_{B(R)}(\parallel S_{\lambda}^{\nu}f(g)\parallel_{ \nu}^{2}+\parallel\mathcal{P}_{\lambda}^{\nu}f(g)-S_{\lambda}^{\nu}f(g) \parallel_{\nu}^{2}\] \[+2Re[<\mathcal{P}_{\lambda}^{\nu}f(g)-S_{\lambda}^{\nu}f(g),S_{ \lambda}^{\nu}f(g)>])dg_{K}.\]
The estimate (5.2) then follows from (5.4), Theorem 5.1 and the Cauchy-Schwarz inequality.
2. The right hand side of the estimate (5.3) has already been proved, see Corollary 4.1. The left hand side of the estimate (5.3) follows from the estimate (5.2), since the supremum over \(R>1\) dominates the limit as \(R\to+\infty\). This finishes the proof of the proposition.
**Remark 5.1**.: Let \(f_{1},f_{2}\in L^{2}(K,\sigma_{\nu})\). Then using the polarization identity as well as the estimate (5.2), we get
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}<\mathcal{P}_{\lambda}^{\nu}f_{1}(g), \mathcal{P}_{\lambda}^{\nu}f_{2}(g)>_{\nu}\mathrm{d}g_{K}=2\mid\mathbf{c}_{ \nu}(\lambda)\mid^{2}<f_{1},f_{2}>_{L^{2}(K,\sigma_{\nu})} \tag{5.5}\]
## 6 Proof of the main results
In this section we shall prove Theorem 1.1 on the \(L^{2}\)-range of the vector Poisson transform and Theorem 1.2 characterizing the image \(\mathcal{Q}_{\lambda}^{\nu}(L^{2}(G,\tau_{\nu}))\).
### The \(L^{2}\)-range of the Poisson transform
We first recall some results of harmonic analysis on the homogeneous vector bundle \(K\times_{M}V_{\nu}\) associated to the representation \(\sigma_{\nu}\) of \(M\).
Let \(\widehat{K}\) be the unitary dual of \(K\). For \(\delta\in\widehat{K}\) let \(V_{\delta}\) denote a representation space of \(\delta\) with \(d_{\delta}=\dim V_{\delta}\). We denote by \(\widehat{K}(\sigma_{\nu})\) the set of \(\delta\in\widehat{K}\) such that \(\sigma_{\nu}\) occurs in \(\delta\mid_{M}\) with multiplicity \(m_{\delta}>0\).
The decomposition of \(L^{2}(K,\sigma_{\nu})\) under \(K\) (the group \(K\) acts by left translations on this space) is given by the Frobenius reciprocity law
\[L^{2}(K,\sigma_{\nu})=\bigoplus_{\delta\in\widehat{K}(\sigma_{\nu})}V_{\delta }\otimes Hom_{M}(V_{\nu},V_{\delta}),\]
where \(v\otimes L\), for \(v\in V_{\delta},L\in Hom_{M}(V_{\nu},V_{\delta})\) is identified with the function \((v\otimes L)(k)=L^{*}(\delta(k^{-1})v)\), where \(L^{*}\) denotes the adjoint of \(L\).
For each \(\delta\in\widehat{K}(\sigma_{\nu})\) let \((L_{j})_{j=1}^{m_{\delta}}\) be an orthonormal basis of \(Hom_{M}(V_{\nu},V_{\delta})\) with respect to the inner product
\(<L_{1},L_{2}>=\frac{1}{\nu+1}Tr(L_{1}L_{2}^{*})\).
Let \(\{v_{1},\cdots,v_{d_{\delta}}\}\) be an orthonormal basis of \(V_{\delta}\). Then
\[f_{ij}^{\delta}:k\to\sqrt{\frac{d_{\delta}}{\nu+1}}L_{i}^{*}\delta(k^{-1})v_{ j},\quad 1\leq i\leq m_{\delta},\quad 1\leq j\leq d_{\delta},\quad\delta\in \widehat{K}(\sigma)\]
form an orthonormal basis of \(L^{2}(K,\sigma_{\nu})\).
For \(f\in L^{2}(K,\sigma_{\nu})\) we have the Fourier series expansion \(f(k)=\sum_{\delta\in\widehat{K}(\sigma)}\sum_{i=1}^{m_{\delta}} \sum_{j=1}^{d_{\delta}}a_{ij}^{\delta}f_{ij}^{\delta}(k)\) with
\[\parallel f\parallel_{L^{2}(K,\sigma)}^{2}=\sum_{\delta\in\widehat{K}(\sigma) }\sum_{i=1}^{m_{\delta}}\sum_{j=1}^{d_{\delta}}\mid a_{ij}^{\delta}\mid^{2}.\]
We define for \(\delta\in\widehat{K}(\sigma)\) and \(\lambda\in\mathbb{C}\), the generalized Eisenstein integral
\[\Phi^{L}_{\lambda,\delta}(g)=\int_{K}\mathrm{e}^{-(i\lambda+\rho)H(g^{-1}k)}\tau _{\nu}(\kappa(g^{-1}k))L^{*}\delta(k^{-1})\mathrm{d}k,\quad L\in Hom_{M}(V_{\nu },V_{\delta}).\]
It is easy to see that \(\Phi^{L}_{\lambda,\delta}\) satisfies the following identity
\[\Phi^{L}_{\lambda,\delta}(k_{1}gk_{2})=\tau_{\nu}(k_{2}^{-1})\Phi^{L}_{\lambda, \delta}(g)\delta(k_{1}^{-1}),\quad k_{1},k_{2}\in K,\,g\in G.\]
We now prove an asymptotic estimate for the generalized Eisenstein integrals.
**Proposition 6.1**.: Let \(\nu\in\mathbb{N},\lambda\in\mathbb{R}\setminus\{0\}\). Then for \(\delta\in\widehat{K}(\sigma_{\nu}),T,S\in Hom_{M}(V_{\nu},V_{\delta})\) we have
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\,\text{Tr}\big{(}\Phi^{T}_{\lambda, \delta}(g)^{*}\Phi^{S}_{\lambda,\delta}(g)\big{)}\,\mathrm{d}g_{K}=2\mid \mathbf{c}_{\nu}(\lambda)\mid^{2}\text{Tr}(TS^{*}). \tag{6.1}\]
Proof.: By definition we have
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\,\text{Tr}\big{(}\Phi^{T}_{\lambda, \delta}(g)^{*}\Phi^{S}_{\lambda,\delta}(g)\big{)}\,\mathrm{d}g_{K}=\sum_{j=1}^ {d_{\delta}}\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}<\Phi^{S}_{\lambda,\delta} (g)v_{j},\Phi^{T}_{\lambda,\delta}(g)v_{j}>_{\nu}\,\mathrm{d}g_{K}\]
Noting that \(\Phi^{T}_{\lambda,\delta}(g)v_{j}\) is the Poisson transform of the function \(k\mapsto T^{*}\delta(k^{-1})v_{j}\) and using (5.5), we get
\[\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\,\text{Tr}\big{(}\Phi^{T}_{\lambda, \delta}(g)^{*}\Phi^{S}_{\lambda,\delta}(g)\big{)}\,\mathrm{d}g_{K}=2\mid \mathbf{c}_{\nu}(\lambda)\mid^{2}\sum_{j=1}^{d_{\delta}}\int_{K}<S^{*}\delta(k ^{-1})v_{j},T^{*}\delta(k^{-1})v_{j}>_{\nu}\,\mathrm{d}k.\]
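Concretely, the inner \(K\)-integral can be evaluated at once (assuming, as usual, that \(\mathrm{d}k\) is the normalized Haar measure): for each fixed \(k\) the vectors \(\delta(k^{-1})v_{j}\), \(1\leq j\leq d_{\delta}\), form an orthonormal basis of \(V_{\delta}\), so that
\[\sum_{j=1}^{d_{\delta}}<S^{*}\delta(k^{-1})v_{j},T^{*}\delta(k^{-1})v_{j}>_{\nu}=\sum_{j=1}^{d_{\delta}}<TS^{*}\delta(k^{-1})v_{j},\delta(k^{-1})v_{j}>=\text{Tr}(TS^{*}).\]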
Hence Schur's lemma leads us to conclude that \(\lim_{R\to+\infty}\frac{1}{R}\int_{B(R)}\,\text{Tr}\big{(}\Phi^{T}_{\lambda,\delta}(g)^{*}\Phi^{S}_{\lambda,\delta}(g)\big{)}\,\mathrm{d}g_{K}=2\mid\mathbf{c}_{\nu}(\lambda)\mid^{2}\text{Tr}(TS^{*})\), and the proof is finished.
**Remark 6.1**.: Noting that
\[Tr\big{(}\Phi^{T}_{\lambda,\delta}(g)^{*}\Phi^{S}_{\lambda,\delta}(g)\big{)}=Tr\big{(}\Phi^{T}_{\lambda,\delta}(a_{t})^{*}\Phi^{S}_{\lambda,\delta}(a_{t})\big{)}\,,\quad g=k_{1}\,a_{t}\,k_{2},\]
it follows from (6.1) that
\[\lim_{R\to+\infty}\frac{1}{R}\int_{0}^{R}Tr\big{(}\Phi^{T}_{\lambda,\delta}(a_ {t})^{*}\Phi^{S}_{\lambda,\delta}(a_{t})\big{)}\,\Delta(t)\mathrm{d}t=2\mid \mathbf{c}_{\nu}(\lambda)\mid^{2}\text{Tr}(TS^{*}). \tag{6.2}\]
**Proof of Theorem 1.1.**
(i) The estimate (5.3) implies that the Poisson transform \(\mathcal{P}^{\nu}_{\lambda}\) maps \(L^{2}(K,\sigma_{\nu})\) into \(\mathcal{E}_{\lambda}(G,\tau_{\nu})\) and that the estimate (1.5) holds.
(ii) We now prove that the Poisson transform maps \(L^{2}(K,\sigma_{\nu})\) onto \(\mathcal{E}^{2}_{\lambda}(G,\tau_{\nu})\). Let \(F\in\mathcal{E}^{2}_{\lambda}(G,\tau_{\nu})\). Since \(\lambda\in\mathbb{R}\setminus\{0\}\), we know by Theorem 2.1 that there exists a hyperfunction \(f\in C^{-\omega}(K,\sigma_{\nu})\) such that \(F=\mathcal{P}^{\nu}_{\lambda}f\).
Let \(f=\sum_{\delta\in\widehat{K}(\sigma)}\sum_{j=1}^{d_{\delta}}\sum_{i=1}^{m_{\delta}}a^{\delta}_{ij}f^{\delta}_{ij}\) be the Fourier series expansion of \(f\). Then we have
\[F(g)=\sum_{\delta\in\widehat{K}(\sigma)}\sqrt{\frac{d_{\delta}}{\nu+1}}\sum_{j=1}^{d_{\delta}}\sum_{i=1}^{m_{\delta}}a^{\delta}_{ij}\Phi^{L_{i}}_{\lambda,\delta}(g)v_{j}\quad\text{in}\quad C^{\infty}(G,V_{\nu}).\]
By the Schur relations, we have
\[\int_{K}<\Phi^{L_{i}}_{\lambda,\delta}(ka_{t})v_{j},\Phi^{L_{m}}_{\lambda,\delta^{\prime}}(ka_{t})v_{n}>_{\nu}\,\mathrm{d}k=\left\{\begin{array}{ll}0&\text{if }\delta\nsim\delta^{\prime}\\ \frac{1}{d_{\delta}}Tr[(\Phi^{L_{m}}_{\lambda,\delta}(a_{t}))^{*}\Phi^{L_{i}}_{\lambda,\delta}(a_{t})]<v_{j},v_{n}>_{\nu}&\text{if }\quad\delta^{\prime}=\delta\end{array}\right.\]
Therefore
\[\int_{K}\parallel F(ka_{t})\parallel_{\nu}^{2}\,{\rm d}k=\frac{1}{\nu+1}\sum_{\delta\in\widehat{K}(\sigma)}\sum_{j=1}^{d_{\delta}}\sum_{1\leq i,m\leq m_{\delta}}a_{ij}^{\delta}\overline{a_{mj}^{\delta}}\,Tr[(\Phi_{\lambda,\delta}^{L_{m}}(a_{t}))^{*}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})]\] \[=\frac{1}{\nu+1}\sum_{\delta\in\widehat{K}(\sigma)}\sum_{j=1}^{d_{\delta}}Tr\left[\Big(\sum_{m=1}^{m_{\delta}}a_{mj}^{\delta}\Phi_{\lambda,\delta}^{L_{m}}(a_{t})\Big)^{*}\Big(\sum_{i=1}^{m_{\delta}}a_{ij}^{\delta}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})\Big)\right]\] \[=\frac{1}{\nu+1}\sum_{\delta\in\widehat{K}(\sigma)}\sum_{j=1}^{d_{\delta}}\parallel\sum_{i=1}^{m_{\delta}}a_{ij}^{\delta}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})\parallel_{HS}^{2},\]
Let \(\Lambda\) be a finite subset of \(\widehat{K}(\sigma)\). Since \(\parallel F\parallel_{*}<\infty\), it follows that, for any \(R>1\), we have
\[\infty>\parallel F\parallel_{*}^{2}\geq\frac{1}{\nu+1}\sum_{\delta\in\Lambda}\sum_{j=1}^{d_{\delta}}\frac{1}{R}\int_{0}^{R}\parallel\sum_{i=1}^{m_{\delta}}a_{ij}^{\delta}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})\parallel_{HS}^{2}\ \Delta(t)\,{\rm d}t\]
By (6.2) we have
\[\lim_{R\to\infty}\frac{1}{R}\int_{0}^{R}\parallel\sum_{i=1}^{m_{\delta}}a_{ij}^{\delta}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})\parallel_{HS}^{2}\ \Delta(t)\,{\rm d}t=\lim_{R\to\infty}\sum_{1\leq i,m\leq m_{\delta}}a_{ij}^{\delta}\overline{a_{mj}^{\delta}}\,\frac{1}{R}\int_{0}^{R}Tr[(\Phi_{\lambda,\delta}^{L_{m}}(a_{t}))^{*}\Phi_{\lambda,\delta}^{L_{i}}(a_{t})]\,\Delta(t){\rm d}t\] \[=2\mid{\bf c}_{\nu}(\lambda)\mid^{2}\sum_{1\leq i,m\leq m_{\delta}}a_{ij}^{\delta}\overline{a_{mj}^{\delta}}Tr(L_{i}L_{m}^{*})\] \[=2(\nu+1)\mid{\bf c}_{\nu}(\lambda)\mid^{2}\sum_{i=1}^{m_{\delta}}\mid a_{ij}^{\delta}\mid^{2}.\]
Thus \(\infty>\parallel F\parallel_{*}^{2}\geq\mid{\bf c}_{\nu}(\lambda)\mid^{2}\sum_{\delta\in\Lambda}\sum_{j=1}^{d_{\delta}}\sum_{i=1}^{m_{\delta}}\mid a_{ij}^{\delta}\mid^{2}\). Since \(\Lambda\) is arbitrary, it follows that
\[\mid{\bf c}_{\nu}(\lambda)\mid^{2}\sum_{\delta\in\widehat{K}(\sigma)}\sum_{j=1}^{d_{\delta}}\sum_{i=1}^{m_{\delta}}\mid a_{ij}^{\delta}\mid^{2}\leq\parallel F\parallel_{*}^{2}.\]
This shows that \(f\in L^{2}(K,\sigma_{\nu})\) with \(\mid{\bf c}_{\nu}(\lambda)\mid\parallel f\parallel_{L^{2}(K,\sigma_{\nu})} \leq\parallel{\cal P}_{\lambda}^{\nu}f\parallel_{*}\) and the proof of the theorem is completed.
### The \(L^{2}\)-range of the generalized spectral projections
We now proceed to the proof of the second main result of this paper.
**Proof of Theorem 1.2.**
Let \(F\in L_{c}^{2}(G,\tau_{\nu})\cap C^{\infty}(G,\tau_{\nu})\). It follows from the definition ( see (1.8)) that the operator \({\cal Q}_{\lambda}^{\nu}\) may be written as
\[{\cal Q}_{\lambda}^{\nu}F(g)=\mid{\bf c}_{\nu}(\lambda)\mid^{-2}{\cal P}_{ \lambda}^{\nu}({\cal F}_{\nu}F(\lambda,.))(g). \tag{6.3}\]
Using Theorem 1.1 we deduce that
\[\sup_{R>1}\frac{1}{R}\int_{B(R)}\parallel{\cal Q}_{\lambda}^{\nu}F(g)\parallel _{\nu}^{2}\ {\rm d}g_{K}\leq C_{\nu}\ \mid{\bf c}_{\nu}(\lambda)\mid^{-2}\int_{K}\parallel{\cal F}_{\nu}F( \lambda,k)\parallel_{\nu}^{2}\ {\rm d}k.\]
The above inequality and the Plancherel formula (3.4) imply
\[\int_{0}^{\infty}(\sup_{R>1}\frac{1}{R}\int_{B(R)}\parallel{\cal Q}_{\lambda}^{\nu}F(g)\parallel_{\nu}^{2}\ {\rm d}g_{K})\,{\rm d}\lambda\leq C_{\nu}\int_{0}^{\infty}\int_{K}\parallel{\cal F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2}\mid{\bf c}_{\nu}(\lambda)\mid^{-2}\ {\rm d}k\,{\rm d}\lambda\leq C_{\nu}\parallel F\parallel_{L^{2}(G,\tau_{\nu})}^{2}.\]
This proves the right hand side of the inequality (1.9).
From (6.3) and (1.6) we have
\[\lim_{R\to\infty}\frac{1}{R}\int_{B(R)}\parallel\mathcal{Q}_{\lambda}^{\nu}F(g) \parallel_{\nu}^{2}\;\mathrm{d}g_{K}=2\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2} \int_{K}\parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel_{\nu}^{2}\;\mathrm{d}k,\]
and since for all \(R>1\)
\[\frac{1}{R}\int_{B(R)}\parallel\mathcal{Q}_{\lambda}^{\nu}F(g)\parallel^{2}\; \mathrm{d}g_{K}\leq C_{\nu}\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\int_{K} \parallel\mathcal{F}_{\nu}F(\lambda,k)\parallel^{2}\mathrm{d}k,\quad\text{a.e. } \lambda\in(0,\infty),\]
we may apply Lebesgue's dominated convergence theorem to get
\[\lim_{R\to\infty}\int_{0}^{\infty}\left(\frac{1}{R}\int_{B(R)}\parallel \mathcal{Q}_{\lambda}^{\nu}F(g)\parallel_{\nu}^{2}\;\mathrm{d}g_{K}\right) \mathrm{d}\lambda=2\parallel F\parallel_{L^{2}(G,\tau_{\nu})}^{2}.\]
It follows from the above equality that
\[C\parallel F\parallel_{L^{2}(G,\tau_{\nu})}^{2}\leq\int_{0}^{\infty}\Big(\sup_{R>1}\frac{1}{R}\int_{B(R)}\parallel\mathcal{Q}_{\lambda}^{\nu}F(g)\parallel_{\nu}^{2}\;\mathrm{d}g_{K}\Big)\,\mathrm{d}\lambda.\]
This completes the proof of the inequality (1.9).
We now prove that \(\mathcal{Q}_{\lambda}^{\nu}\) maps \(L^{2}_{c}(G,\tau_{\nu})\) onto \(\mathcal{E}_{\lambda}^{2}(G,\tau_{\nu})\). Let \(F_{\lambda}\in\mathcal{E}_{\lambda}^{2}(G,\tau_{\nu})\). Then we have
\[\sup_{R>1}\frac{1}{R}\int_{B(R)}\parallel F_{\lambda}(g)\parallel_{\nu}^{2}\; \mathrm{d}g_{K}<\infty,\quad\text{for a.e.}\quad\lambda\in\,(0,\infty).\]
By Theorem 1.1, there exists \(f_{\lambda}\in L^{2}(K,\sigma_{\nu})\) such that \(F_{\lambda}(g)=\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathcal{P}_{\lambda}^{ \nu}f_{\lambda}(g)\) with
\[\sup_{R>1}\frac{1}{R}\int_{B(R)}\parallel F_{\lambda}(g)\parallel_{\nu}^{2}\; \mathrm{d}g_{K}\geq C_{\nu}^{-1}\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\int_{K }\parallel f_{\lambda}(k)\parallel^{2}\;\mathrm{d}k\]
Integrating both sides of the above inequality over \((0,\infty)\), we get
\[\infty>\parallel F_{\lambda}\parallel_{*}^{2}\geq C_{\nu}^{-1}\int_{0}^{\infty}\int_{K}\parallel f_{\lambda}(k)\parallel_{\nu}^{2}\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathrm{d}k\,\mathrm{d}\lambda.\]
It now follows from Theorem 3.1, that there exists \(F\in L^{2}_{c}(G,\tau_{\nu})\) such that \(\mathcal{F}_{\nu}F(\lambda,k)=f_{\lambda}(k)\).
Hence \(F_{\lambda}(g)=\mid\mathbf{c}_{\nu}(\lambda)\mid^{-2}\mathcal{P}_{\lambda}^{\nu}(\mathcal{F}_{\nu}F(\lambda,.))(g)\). This finishes the proof of Theorem 1.2.
## 7 Proof of the Key Lemma
In this section we prove the Key Lemma of this paper. To this end we need to establish some auxiliary results. We first prove an asymptotic formula for the \(\tau_{\nu}\)-spherical function.
**Proposition 7.1**.: Let \(\lambda\in\mathbb{R}\setminus\{0\}\). For any \(v\in V_{\nu}\) we have
\[\Phi_{\nu,\lambda}(g).\,v\simeq\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda) \mathrm{e}^{(is\lambda-\rho)A^{+}(g)}\tau_{\nu}^{-1}(\kappa_{1}(g)\kappa_{2}(g )).\,v, \tag{7.1}\]
\(g=\kappa_{1}(g)\mathrm{e}^{A^{+}(g)}\kappa_{2}(g)\)
Proof.: Since \(\Delta(t)\leq 2^{3}\mathrm{e}^{2\rho\,t}\), we get
\[\frac{1}{R}\int_{B(R)}\parallel\mathrm{e}^{(i\lambda-\rho)A^{+}( g)}\tau_{\nu}^{-1}(\kappa_{1}(g)\kappa_{2}(g)).\,v\parallel^{2}\;\mathrm{d}g_{K} =\frac{1}{R}\parallel v\parallel^{2}\int_{0}^{R}\mathrm{e}^{-2 \rho\,t}\Delta(t)\mathrm{d}t\] \[\leq 2^{3}\parallel v\parallel^{2}.\]
This shows that the right hand side of (7.1) belongs to \(B^{*}(G,\tau_{\nu})\).
Since \(\lambda\in\mathbb{R}\setminus\{0\}\), we may use the identity (A3) to write
\[\varphi_{\nu,\lambda}(t)-\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s \lambda)\mathrm{e}^{(is\lambda-\rho)t} =\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\left((2\cosh t)^{\nu} \Psi_{s\lambda}^{\rho-2,\nu+1}(t)-\mathrm{e}^{(is\lambda-\rho)t}\right)\] \[=\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is \lambda-\rho)t}\left((1+\mathrm{e}^{-2t})^{\nu}\mathrm{e}^{(\rho+\nu-is\lambda )t}\Psi_{s\lambda}^{\rho-2,\nu+1}(t)-1\right).\]
It follows from (A2') that
\[\varphi_{\nu,\lambda}(t)-\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda) \mathrm{e}^{(is\lambda-\rho)t}=\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s \lambda)\mathrm{e}^{(is\lambda-\rho)t}\left((1+\mathrm{e}^{-2t})^{\nu}-1)+ \mathrm{e}^{-2t}E_{s\lambda}(t)\right),\]
where \(\mid E_{s\lambda}(t)\mid\leq 2^{\nu}C\) if \(t\geq 1\). Therefore
\[\mid\varphi_{\nu,\lambda}(t)-\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda) \mathrm{e}^{(is\lambda-\rho)t}\mid\leq C_{\nu,\lambda}\mathrm{e}^{-\rho t} \mathrm{e}^{-2t},\]
if \(t\geq 1\). This together with
\[\mid\varphi_{\nu,\lambda}(t)-\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda) \mathrm{e}^{(is\lambda-\rho)t}\mid\leq C_{\nu,\lambda}\mathrm{e}^{-\rho t},\]
for \(t\in[0,1]\), imply that
\[\lim_{R\to\infty}\frac{1}{R}\int_{B(R)}\parallel\Phi_{\nu, \lambda}(g).\,v-\sum_{s\in\{\pm 1\}}c_{\nu}(s\lambda)e^{(is\lambda-\rho)A^{+}(g)}\tau^{-1}( \kappa_{1}(g)\kappa_{2}(g)).\,v\parallel_{\nu}^{2}\,\,\mathrm{d}g_{K}=\] \[= \parallel v\parallel^{2}\lim_{R\to\infty}\frac{1}{R}\int_{0}^{R} \mid\varphi_{\nu,\lambda}(t)-\sum_{s\in\{\pm 1\}}c_{\nu}(s\lambda)e^{(is\lambda-\rho)t} \mid^{2}\,\Delta(t)\,\mathrm{d}t=0,\]
and the proof is finished.
**Lemma 7.1**.: Let \(g\in G,k\in K\) and \(t\) a non negative real number. Then we have
\[0\leq A^{+}(g^{-1}k\exp(tH))-H(g^{-1}k\exp(tH))\leq\frac{1+\mid g.0\mid}{1- \mid g.0\mid}\mathrm{e}^{-2t}, \tag{7.2}\]
Proof.: Let \(g^{-1}=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\) and \(k=\begin{pmatrix}u&0\\ 0&v\end{pmatrix}\), where \(a,b,c\) and \(d\) are \(n\times n,n\times 1,1\times n\) and \(1\times 1\) matrices respectively. A direct computation yields
\[g^{-1}k\exp(tH)=\begin{pmatrix}*&**\\ c_{1}&d_{1}\end{pmatrix},\]
where \(c_{1}=c\,u\begin{pmatrix}\cosh\,t&0\\ 0&I_{n-1}\end{pmatrix}\) and \(d_{1}=\sinh t\,cue_{1}+\cosh t\,dv\).
By (2.1) we have
\[\mathrm{e}^{H(g^{-1}k\exp(tH))}=\mathrm{e}^{t}\mid cue_{1}+dv\mid,\]
and
\[\mathrm{e}^{A^{+}(g^{-1}k\exp(tH))}= \mid\sinh t\,cue_{1}+\cosh t\,dv\mid+(\mid\sinh t\,cue_{1}+\cosh t \,dv\mid^{2}-1)^{\frac{1}{2}}.\]
From
\[\mathrm{e}^{A^{+}(g^{-1}k\exp(tH))-H(g^{-1}k\exp(tH))}=\frac{\mathrm{e}^{-t}}{\mid cue_{1}+dv\mid}\big{[}\mid\sinh t\,cue_{1}+\cosh t\,dv\mid+(\mid\sinh t\,cue_{1}+\cosh t\,dv\mid^{2}-1)^{\frac{1}{2}}\big{]},\]
together with
\[|\sinh t\,cue_{1}+\cosh t\,dv\;|+(|\;\sinh t\,cue_{1}+\cosh t\, dv\;|^{2}\;-1)^{\frac{1}{2}} \leq 2\;|\;\sinh t\,cue_{1}v^{-1}+\cosh t\,d\;|\] \[\leq |\;cue_{1}v^{-1}+d\;|\;\mathrm{e}^{t}+|\;d-cue_{1}v^{-1}\;|\; \mathrm{e}^{-t}\]
we deduce that
\[\mathrm{e}^{A^{+}(g^{-1}k\exp(tH))-H(g^{-1}k\exp(tH))}\leq 1+\frac{\mid d-cue_{1}v^{-1}\mid}{\mid cue_{1}v^{-1}+d\mid}\mathrm{e}^{-2t}.\]
Noting that \((g.0)^{*}=-(d^{-1}c)\), and \(k.e_{1}=ue_{1}v^{-1}\), we get
\[\mathrm{e}^{A^{+}(g^{-1}k\exp(tH))-H(g^{-1}k\exp(tH))}\leq 1+\frac{\mid 1+<g.0,k.e_{1}>\mid}{\mid 1-<g.0,k.e_{1}>\mid}\mathrm{e}^{-2t}\] \[\leq 1+\frac{1+\mid g.0\mid}{1-\mid g.0\mid}\mathrm{e}^{-2t},\]
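Taking logarithms and using the elementary inequality \(\log(1+x)\leq x\) for \(x\geq 0\) (we spell this step out), this gives
\[A^{+}(g^{-1}k\exp(tH))-H(g^{-1}k\exp(tH))\leq\log\Big{(}1+\frac{1+\mid g.0\mid}{1-\mid g.0\mid}\mathrm{e}^{-2t}\Big{)}\leq\frac{1+\mid g.0\mid}{1-\mid g.0\mid}\mathrm{e}^{-2t},\]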
from which we deduce (7.2), and the proof of the lemma is finished.
**Proof of the Key Lemma.** Since \(B^{*}(G,\tau_{\nu})\) is \(G\)-invariant, we may apply Proposition 7.1 to get
\[\Phi_{\nu,\lambda}(g^{-1}x)v\simeq\tau_{\nu}^{-1}(\kappa_{1}(g^{-1}x)\kappa_{2}(g^{-1}x))\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(g^{-1}x)}v.\]
Thus it suffices to show that
\[\tau_{\nu}^{-1}(\kappa_{1}(g^{-1}x)\kappa_{2}(g^{-1}x))\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(g^{-1}x)}v\simeq\tau_{\nu}^{-1}(k_{2}(x))\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(x)}f_{s\lambda,v}^{g}(k_{1}(x)), \tag{7.3}\]
Note that
\[\tau_{\nu}^{-1}[k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x))k_{2}(g^{-1 }k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x))]=\tau_{\nu}^{-1}[k_{1}(g^{-1}k_{1}(x) \mathrm{e}^{A^{+}(x)})k_{2}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})k_{2}(x))],\]
\(x=k_{1}(x)\mathrm{e}^{A^{+}(x)}k_{2}(x)\).
Henceforth (7.3) is equivalent to
\[\begin{split}\tau_{\nu}^{-1}[k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{ +}(x)})k_{2}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})]&\sum_{s\in\{ \pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(g^{-1}k_{1}(x )\mathrm{e}^{A^{+}(x)})}\,v\\ &\simeq\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is \lambda-\rho)A^{+}(x)}f_{s\lambda,v}^{g}(k_{1}(x))\end{split} \tag{7.4}\]
We write the left hand side of (7.4) as
\[\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^ {+}(x)}f_{s\lambda,v}^{g}(k_{1}(x))+r_{g}(x)v,\]
where
\[\begin{split} r_{g}(x)=&\tau_{\nu}^{-1}[k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})k_{2}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})]\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)A^{+}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})}\\ &-\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)[A^{+}(x)+H(g^{-1}k_{1}(x))]}\tau_{\nu}^{-1}(\kappa(g^{-1}k_{1}(x))),\quad x\in G\end{split} \tag{7.5}\]
To finish the proof we show that for each \(g\in G\), \(r_{g}\simeq 0\).
Noting that
\[H(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})=H(g^{-1}k_{1}(x))+A^{+}(x),\]
we rewrite \(r_{g}\) as
\[r_{g}(x) =[\tau_{\nu}^{-1}(k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})k_{2} (g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)}))-\tau_{\nu}^{-1}(\kappa(g^{-1}k_{1}(x))] \sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)H(g^{-1}k _{1}(x)\mathrm{e}^{A^{+}(x)})}\] \[+\tau_{\nu}^{-1}(k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})k_{2} (g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)}))\left(\sum_{s\in\{\pm 1\}}\mathbf{c}_{ \nu}(s\lambda)[\mathrm{e}^{(is\lambda-\rho)A^{+}(g^{-1}k_{1}(x)\mathrm{e}^{A^ {+}(x)})}-\mathrm{e}^{(is\lambda-\rho)H(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})}]\right)\] \[=:I_{g}(x)+J_{g}(x).\]
Using the following result
**Lemma 7.2**.: Let \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in Sp(n,1)\). Then we have
\[\tau_{\nu}(\kappa_{1}(g)\kappa_{2}(g))=\tau_{\nu}\big{(}\frac{d}{\mid d\mid}\big{)} \tag{7.6}\]
\[\tau_{\nu}(\kappa(g))=\tau_{\nu}(\frac{ce_{1}+d}{\mid ce_{1}+d\mid}) \tag{7.7}\]
\[\lim_{R\to\infty}\tau_{\nu}(\kappa_{1}(g\exp(RH))\kappa_{2}(g\exp(RH)))=\tau_{ \nu}(\kappa(g)). \tag{7.8}\]
we easily see that \(I_{g}v\simeq 0\).
We have
\[J_{g}(x)=\tau_{\nu}^{-1}(k_{1}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})k_{2}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)}))\sum_{s\in\{\pm 1\}}\mathbf{c}_{\nu}(s\lambda)\mathrm{e}^{(is\lambda-\rho)H(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})}\left(\mathrm{e}^{(is\lambda-\rho)[A^{+}(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})-H(g^{-1}k_{1}(x)\mathrm{e}^{A^{+}(x)})]}-1\right).\]
As \(\tau_{\nu}\) is unitary and using polar coordinates we see that \(\frac{1}{R}\int_{B(R)}\parallel J_{g}(x)v\parallel_{\nu}^{2}\ \mathrm{d}x_{K}\) is less than
\[2\parallel v\parallel^{2}\frac{\mid\mathbf{c}_{\nu}(\lambda)\mid^{2}}{R}\int_ {0}^{R}\int_{K}\mathrm{e}^{-2\rho H(g^{-1}k\mathrm{e}^{tH})}\mid\mathrm{e}^{( is\lambda-\rho)[A^{+}(g^{-1}ke^{tH})-H(g^{-1}ke^{tH})]}-1\mid^{2}\ \mathrm{d}k\,\Delta(t)\mathrm{d}t.\]
From the estimate
\[\mid\mathrm{e}^{(is\lambda-\rho)[A^{+}(g^{-1}ke^{tH})-H(g^{-1}ke^{tH})]}-1 \mid\leq C(\mid\lambda\mid+\rho)\mid A^{+}(g^{-1}ke^{tH})-H(g^{-1}ke^{tH})\mid\]
together with (7.2) we get
\[\frac{1}{R}\int_{B(R)}\parallel J_{g}(x)v\parallel_{\nu}^{2}\ \mathrm{d}x_{K}\leq \left(C(\mid\lambda\mid+\rho)\frac{1+\mid g.0\mid}{1-\mid g.0\mid}\right)^{2} \frac{1}{R}\int_{0}^{R}\int_{K}\mathrm{e}^{-2\rho H(g^{-1}k)}\mathrm{e}^{-2( \rho+2)t}\Delta(t)\,\mathrm{d}k\,\mathrm{d}t.\]
As \(\int_{K}\mathrm{e}^{-2\rho H(g^{-1}k)}\,\mathrm{d}k=1\) and \(\Delta(t)\leq 2^{3}\mathrm{e}^{2\rho t}\) we obtain
\[\lim_{R\to\infty}\frac{1}{R}\int_{B(R)}\parallel J_{g}(x)v\parallel_{\nu}^{2} \ \mathrm{d}x_{K}=0.\]
This shows that \(J_{g}\simeq 0\). Therefore we have proved that for each \(g\in G,r_{g}\simeq 0\), as to be shown.
It remains to prove Lemma 7.2.
**Proof of Lemma 7.2.** Write \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\begin{pmatrix}u_{1}&0\\ 0&v_{1}\end{pmatrix}a_{t}\begin{pmatrix}u_{2}&0\\ 0&v_{2}\end{pmatrix}\) with respect to the Cartan decomposition \(G=KAK\). Then we easily see that \(d=\cosh t\,v_{1}v_{2}\) and (7.6) follows. Analogously, write \(g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\begin{pmatrix}u&0\\ 0&v\end{pmatrix}a_{t}\,n\) with respect to the Iwasawa decomposition. Then from \(g.e_{1}=\begin{pmatrix}ae_{1}+b\\ ce_{1}+d\end{pmatrix}=\mathrm{e}^{t}\begin{pmatrix}u\\ v\end{pmatrix}\) we get \(\mathrm{e}^{t}\,v=ce_{1}+d\) and (7.7) follows.
We have
\[g\exp(RH)=\begin{pmatrix}*&**\\ ***&\sinh R\,ce_{1}+\cosh R\,d\end{pmatrix}\]
Then (7.6) implies that \(\tau_{\nu}(\kappa_{1}(g\exp(RH))\kappa_{2}(g\exp(RH)))=\tau_{\nu}(\frac{\tanh R\,ce_{1}+d}{\mid\tanh R\,ce_{1}+d\mid})\). Thus \(\lim_{R\to\infty}\tau_{\nu}(\kappa_{1}(g\exp(RH))\kappa_{2}(g\exp(RH)))=\tau_{\nu}(\frac{ce_{1}+d}{\mid ce_{1}+d\mid})=\tau_{\nu}(\kappa(g))\) by (7.7). This finishes the proof of Lemma 7.2, and the proof of the Key Lemma is completed.
## 8 Appendix
In this section we collect some results on the Jacobi functions, referring to [19] for more details.
For \(\alpha,\beta,\lambda\in\mathbb{C};\alpha\neq-1,-2,\cdots\) and \(t\in\mathbb{R}\), the Jacobi function is defined by
\[\phi^{(\alpha,\beta)}_{\lambda}(t)=\,_{2}F_{1}(\frac{i\lambda+\rho_{\alpha, \beta}}{2},\frac{-i\lambda+\rho_{\alpha,\beta}}{2};\alpha+1;-\sinh^{2}t),\]
where \({}_{2}F_{1}\) is the Gauss hypergeometric function and \(\rho_{\alpha,\beta}=\alpha+\beta+1\).
The Jacobi function \(\phi^{(\alpha,\beta)}_{\lambda}\) is the unique even smooth function on \(\mathbb{R}\) which satisfies \(\phi^{(\alpha,\beta)}_{\lambda}(0)=1\) and the differential equation
\[\{\frac{d^{2}}{dt^{2}}+[(2\alpha+1)\coth t+(2\beta+1)\tanh t]\frac{d}{dt}+\lambda^{2}+\rho_{\alpha,\beta}^{2}\}\phi^{(\alpha,\beta)}_{\lambda}(t)=0.\] (A1)
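For orientation (a standard special case, not needed in the sequel): when \(\alpha=\beta=-\frac{1}{2}\) one has \(\rho_{\alpha,\beta}=0\) and the equation (A1) reduces to \(\phi''+\lambda^{2}\phi=0\), so that
\[\phi^{(-\frac{1}{2},-\frac{1}{2})}_{\lambda}(t)=\cos(\lambda t);\]
the general Jacobi function may thus be viewed as a two-parameter deformation of the cosine.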
For \(\lambda\notin-i\mathbb{N}\) another solution \(\Psi^{\alpha,\beta}_{\lambda}\) of (A1) such that
\[\Psi^{\alpha,\beta}_{\lambda}(t)=\mathrm{e}^{(i\lambda-\rho_{\alpha,\beta})t} (1+\circ(1)),\quad\text{as}\quad t\to\infty\]
is given by
\[\Psi^{\alpha,\beta}_{\lambda}(t)=(2\sinh t)^{i\lambda-\rho_{\alpha,\beta}}\,_{ 2}F_{1}(\frac{\rho_{\alpha,\beta}-i\lambda}{2},\frac{\beta-\alpha+1-i\lambda} {2};1-i\lambda;-\frac{1}{\sinh^{2}t}).\]
Moreover there exists a constant \(C>0\) such that for all \(\lambda\in\mathbb{R}\) and all \(t\geq 1\) we have
\[\Psi^{\alpha,\beta}_{\lambda}(t)=\mathrm{e}^{(i\lambda-\rho_{\alpha,\beta})t} (1+\mathrm{e}^{-2t}\Theta_{\lambda}(t)),\quad\text{with}\quad\mid\Theta_{ \lambda}(t)\mid\leq C.\]
For \(\lambda\notin i\mathbb{Z}\), we have
\[\phi^{(\alpha,\beta)}_{\lambda}(t)=\sum_{s=\pm 1}\mathbf{c}_{\alpha,\beta}(s\lambda)\Psi^{\alpha,\beta}_{s\lambda}(t)\] (A3)
where
\[\mathbf{c}_{\alpha,\beta}(\lambda)=\frac{2^{\rho_{\alpha,\beta}-i\lambda}\, \Gamma(\alpha+1)\Gamma(i\lambda)}{\Gamma(\frac{i\lambda+\rho_{\alpha,\beta}}{ 2})\Gamma(\frac{i\lambda+\alpha-\beta+1}{2})}.\]
For \(\Re(i\lambda)>0\), the asymptotic behaviour of \(\phi^{(\alpha,\beta)}_{\lambda}\) as \(t\to\infty\) is then given by
\[\lim_{t\to\infty}\mathrm{e}^{(\rho_{\alpha,\beta}-i\lambda)t}\phi^{(\alpha,\beta )}_{\lambda}(t)=\mathbf{c}_{\alpha,\beta}(\lambda).\] (A4)
Let \(D_{e}(\mathbb{R})\) denote the space of even smooth function with compact support on \(\mathbb{R}\). For \(f\in D_{e}(\mathbb{R})\), the Fourier-Jacobi transform \(\mathcal{J}^{\alpha,\beta}f\) (\(\lambda\in\mathbb{C}\)) is defined by
\[\mathcal{J}^{\alpha,\beta}f(\lambda)=\int_{0}^{\infty}f(t)\phi^{(\alpha,\beta )}_{\lambda}(t)\Delta_{\alpha,\beta}(t)\,\mathrm{d}t,\] (A5)
where \(\Delta_{\alpha,\beta}(t)=(2\sinh t)^{2\alpha+1}(2\cosh t)^{2\beta+1}\).
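For example (continuing the special case \(\alpha=\beta=-\frac{1}{2}\) mentioned above, which is not used elsewhere in the paper), \(\Delta_{-\frac{1}{2},-\frac{1}{2}}(t)=1\) and
\[\mathcal{J}^{-\frac{1}{2},-\frac{1}{2}}f(\lambda)=\int_{0}^{\infty}f(t)\cos(\lambda t)\,\mathrm{d}t,\qquad\mathbf{c}_{-\frac{1}{2},-\frac{1}{2}}(\lambda)=\frac{1}{2},\]
so that the inversion formula below reduces to the classical Fourier-cosine inversion \(f(t)=\frac{2}{\pi}\int_{0}^{\infty}\mathcal{J}^{-\frac{1}{2},-\frac{1}{2}}f(\lambda)\cos(\lambda t)\,\mathrm{d}\lambda\).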
In the sequel, we assume that \(\alpha>-1,\beta\in\mathbb{R}\). Then the meromorphic function \(\mathbf{c}_{\alpha,\beta}(-\lambda)^{-1}\) has only simple poles for \(\Im\lambda\geq 0\) which occur in the set
\[D_{\alpha,\beta}=\{\lambda_{k}=i(|\ \beta\ |-\alpha-1-2k);k=0,1,\cdots,|\ \beta\ | -\alpha-1-2k>0\}.\]
(If \(|\ \beta\ |\leq\alpha+1\), then \(D_{\alpha,\beta}\) is empty).
The following inversion and Plancherel formulas for the Jacobi transform hold for every \(f\in D_{e}(\mathbb{R})\):
\[f(t)=\frac{1}{2\pi}\int_{0}^{\infty}(\mathcal{J}^{\alpha,\beta}f)(\lambda)\, \phi^{(\alpha,\beta)}_{\lambda}(t)\mid\mathbf{c}_{\alpha,\beta}(\lambda)\mid^ {-2}\,\mathrm{d}\lambda+\sum_{\lambda_{k}\in D_{\alpha,\beta}}d_{k}(\mathcal{ J}^{\alpha,\beta}f)(\lambda_{k})\,\phi^{(\alpha,\beta)}_{\lambda_{k}}(t),\] (A6)
\[\int_{0}^{\infty}\mid f(t)\mid^{2}\ \Delta(t)\,\mathrm{d}t=\frac{1}{2\pi} \int_{0}^{\infty}\mid(\mathcal{J}^{\alpha,\beta}f)(\lambda)\mid^{2}\mid \mathbf{c}_{\alpha,\beta}(\lambda)\mid^{-2}\,\mathrm{d}\lambda+\sum_{\lambda_{ k}\in D_{\alpha,\beta}}d_{k}\mid(\mathcal{J}^{\alpha,\beta}f)(\lambda_{k})\mid^{2}\] (A6')
where \(d_{k}=-i\,\mathit{Res}_{\lambda=\lambda_{k}}(\mathbf{c}_{\alpha,\beta}(\lambda )\mathbf{c}_{\alpha,\beta}(-\lambda))^{-1}\), is given explicitly by
\[d_{k}=(\beta-\alpha-2k-1)\frac{2^{-2(\alpha+\beta)}\Gamma(\alpha+k+1)\Gamma( \beta-k)}{\Gamma^{2}(\alpha+1)\Gamma(\beta-\alpha-k)k!}.\] (A7)
|
2308.06306 | Towards Packaging Unit Detection for Automated Palletizing Tasks | For various automated palletizing tasks, the detection of packaging units is
a crucial step preceding the actual handling of the packaging units by an
industrial robot. We propose an approach to this challenging problem that is
fully trained on synthetically generated data and can be robustly applied to
arbitrary real world packaging units without further training or setup effort.
The proposed approach is able to handle sparse and low quality sensor data, can
exploit prior knowledge if available and generalizes well to a wide range of
products and application scenarios. To demonstrate the practical use of our
approach, we conduct an extensive evaluation on real-world data with a wide
range of different retail products. Further, we integrated our approach in a
lab demonstrator and a commercial solution will be marketed through an
industrial partner. | Markus Völk, Kilian Kleeberger, Werner Kraus, Richard Bormann | 2023-08-11T15:37:38Z | http://arxiv.org/abs/2308.06306v1 | # Towards Packaging Unit Detection for Automated Palletizing Tasks
###### Abstract
For various automated palletizing tasks, the detection of packaging units is a crucial step preceding the actual handling of the packaging units by an industrial robot. We propose an approach to this challenging problem that is fully trained on synthetically generated data and can be robustly applied to arbitrary real world packaging units without further training or setup effort. The proposed approach is able to handle sparse and low quality sensor data, can exploit prior knowledge if available and generalizes well to a wide range of products and application scenarios. To demonstrate the practical use of our approach, we conduct an extensive evaluation on real-world data with a wide range of different retail products. Further, we integrated our approach in a lab demonstrator and a commercial solution will be marketed through an industrial partner.
## I Introduction
The task of handling packaging units appears in all kinds of logistics use cases. Most notable here are the distribution centers in the supply chain of retailers, supermarkets, online shops or of general postal services; practically everywhere where orders with different products have to be put together or packages need to be sorted so that they can reach their destination. Other familiar everyday applications include filling supermarket shelves or accepting empties (Fig. 1), to name a few. Due to the high potential for automation in the handling of packaging units, Fraunhofer IPA has been working on this topic for several years now. It has become apparent that with a large and constantly changing product range, there are two main challenges. The first one is directly the physical manipulation of packaging units. They are usually close together and some of them, such as beverage or fruit crates, are open at the top and therefore cannot be picked with a suction or clamping gripper. As a solution to this problem, a so called roll-on-gripper1[1] was developed and is now commercially available via Premium Robotics GmbH.
Footnote 1: [https://www.youtube.com/watch?v=x85pJr8k7xc](https://www.youtube.com/watch?v=x85pJr8k7xc)
The other major challenge and subject of this work is the detection (identification and 3D pose estimation) of the packing units. Normally, only RGBD sensor data and, in some use cases, the dimensions of the packaging units from stock data are available for this purpose. Classical methods of computer vision typically fail due to sensor quality, especially transparent or reflective products are an issue here, and they are not able to generalize sufficiently well to arbitrary packaging units. Even state-of-the-art CNN-based object detectors are often limited to 2D bounding box regression and classification. They require large amounts of annotated data and training a network for each new product is practically not feasible. In addition to the accuracy of the required annotations, the sparse depth information caused by the physical principle of most depth sensors poses another problem.
Generic CNN-based state-of-the-art 3D object detectors [2] are mostly extensions of 2D single shot detectors, while some approaches are specialized for industrial applications [3] in which RGBD sensors are available, but none of them can be directly used for general packaging unit detection. In this paper we address these challenges and present an industrially applicable framework for the most relevant scenarios of packaging unit detection. We further distinguish between _homogeneous pallet stacks_, with only one kind of product and _heterogeneous pallet stacks_ which contain several different packaging units or products. We refer to the dimension (long, short, height) of the packing units as box size and consider the following scenarios.
* Homogeneous pallet stacks with known box size: This is typically the case when orders of supermarkets etc. have to be fulfilled and heterogeneous pallet stacks have to be built in distribution centers. An additional issue in this case is intermediate layers (_interlayers_), which are layers of cardboard between the packaging units to stabilize the stack. They typically have to be detected and removed before the next layer can be handled.
* Heterogeneous pallet stacks and box size is not known: This is the case when packaging units have to be identified for delivery services, accepting empties etc.
* Heterogeneous pallet stacks with known box size: Searching for boxes on a pallet stack for restocking shelves in a supermarket or retail shop for instance.
Fig. 1: Beverage logistics with a mobile handling robot
The main contributions of this paper are as follows:
* An industrial grade framework/solution for the challenging task of packaging unit detection for palletizing tasks. The approach generalizes to arbitrary real world packaging units and objects of cuboid shape without any additional training.
* A valuable auxiliary prediction task for single-shot object detectors.
* A dynamically scaling loss function with fast convergence that can be applied to classification and regression tasks as well, while it can handle the influence of outliers and simultaneously large class imbalance.
* A general mechanism for estimating the prediction quality of neural networks in general and especially for single-shot object detectors.
* An extensive evaluation of our detection framework on a depalletizing task and its application to various beverage logistics scenarios (Fig. 1).
## II Related Work
### _Packaging Unit Detection_
Publications on the challenging problem of general packaging unit detection are scarce. Most of them focus on depalletizing and are limited to special cases like cardboard parcel boxes [4, 5] or are based on model-driven bin picking [6]. The typical approach to this problem is based on classical Computer Vision (CV) for feature detection [7, 8] and solving a combinatorial optimization problem [4, 7]. Others build on even stronger simplifications such as RFID-Tags [9] or focus on hardware and the overall architecture [10]. None of these approaches is able to generalize to arbitrary packing units in homogeneous and heterogeneous pallet stacks while being robust against sensor limitations and environmental conditions. Our proposed framework for packaging unit detection fulfills all these requirements. It generalizes to arbitrary packing units or objects of cuboid shape without the need for further training or manual setup and can perform the detection in less than 100 ms.
### _Relevant Learning Techniques_
Single-shot object detectors like SSD [11], YOLO [12, 13] or RetinaNet [14] are current state-of-the-art in terms of prediction quality and speed. They are typically fully convolutional neural networks (CNNs) that make local predictions for object classification and a bounding box regression, which are subsequently filtered in a post-processing step called Non Maximum Suppression (NMS). Their training with basic loss functions such as the Binary Cross Entropy (BCE) and L2 loss faces three main issues. The first issue is the imbalance of classes in the local classification ground truth. This can either be addressed by a sampling strategy known as Online Hard Example Mining (OHEM) [15] or more recently by loss functions like the Focal Loss (FL) [14], the Reduced Focal Loss (RFL) [16] or the Shrinkage Loss (SL) [17] which automatically weight down easy samples based on their absolute error.
The second issue is outliers resulting from low-quality bounding box annotations done by humans. One approach to reduce their influence is a combination between L1 and L2 loss called the Huber Loss [18] or, more popularly, its special case, the Smooth L1 Loss [19]. The third issue also concerns bounding box regression and arises from the large variance in object scale. Recent approaches like the Distance IoU Loss [20] solve this issue by utilizing scale-independent metrics like the Intersection over Union (IoU). Other recent approaches [21, 22] perform quality estimates which are later used in the post-processing step to suppress bounding boxes with low quality. Two of these quality estimates are the center-ness score [21] and the IoU score [22], whereby the predicted IoU is an estimate derived from another estimate, the bounding box prediction. In Section III we propose a unified loss function that addresses all these issues and a generalized quality estimate for arbitrary local predictions.
The last years have brought some improvements and specializations concerning the convolution operation and the architecture of CNNs, which we will use in Section IV. Separable Convolutions [23, 24] are used in Inverted Residual blocks or MBConv blocks [25] to heavily (90% or more) reduce parameter count without notable performance loss. Adding the spatial location on which the convolution kernel is applied on the feature map as additional features to the convolution input [26] improves the performance for practically all tasks which require more spatial or contextual information. Sparse [27] and Partial Convolution [28] were proposed to handle sparse input directly in the network by propagating the sparsity signal through the network and considering it in the convolution operation. Furthermore, it has been shown [29] that object detectors trained on synthetically generated data can outperform ones trained on real images under the right circumstances and that Physically Based Rendering (PBR) [30] outperforms other methods of synthetic data generation. Also, adding additional sparsity to the input data is not necessarily a harmful undertaking. It can be used as a valuable auxiliary task for learning representations for language [31] or vision [32] tasks and works as regularization like in the well-known Dropout [33].
## III General Contributions
This section introduces some novel concepts for generic single-shot detectors which we will later use in our proposed detection framework.
### _Bounded Distance Transform_
We propose an auxiliary prediction task that can be used in the post-processing step (NMS) to efficiently separate instances and suppress uncertain predictions near the instance boundary. In the following, we call the ground truth for this task and its computation the Bounded Distance Transform (BDT). The steps for obtaining the BDT \(d_{b}\) from id-maps or segmentation ground truth are visualized in Fig. 2. First we use Sobel filters for detecting the instance boundaries from the id-maps. Second we perform the Euclidean Distance Transform [34] on the edges representing the instance
boundaries and set the background areas to zero. As the third and last step, we use the \(\tanh\) to bring the values into the interval \([0,1]\) and keep them near \(1\) on the inside of the instances.
\[d_{b}=\tanh\frac{d}{s} \tag{1}\]
Here, \(d\) refers to the Euclidean Distance Transform and \(s\) to a scaling factor which we choose based on the down-sampling ratio of our CNN-architecture to get suitable ground truth.
Predictions of the BDT can later be used for thresholding in the post-processing, while the \(\tanh\) prevents small instances from being ignored. It should also be possible to directly use a prediction of the BDT for instance segmentation. For this it would make sense to perform a morphological operation with a suitable kernel size to compensate for the too-small instance masks resulting from thresholding. All the necessary operations for the BDT can be done using classical CV libraries like OpenCV [35], and to the best of our knowledge, this simple but efficient idea was not published before.
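A minimal OpenCV/NumPy sketch of the three BDT steps is given below; the Sobel-based boundary mask and the scale \(s\) are illustrative choices, not the exact parameterization of our pipeline.

```python
import cv2
import numpy as np

def bounded_distance_transform(id_map, s=8.0):
    """Bounded Distance Transform from an instance id-map (0 = background)."""
    ids = id_map.astype(np.float32)
    # 1) Instance boundaries via Sobel gradients on the id-map.
    gx = cv2.Sobel(ids, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(ids, cv2.CV_32F, 0, 1, ksize=3)
    boundary = (np.abs(gx) + np.abs(gy)) > 0
    # 2) Euclidean distance of every pixel to the nearest boundary pixel;
    #    distanceTransform measures the distance to the nearest zero pixel.
    src = np.where(boundary, 0, 255).astype(np.uint8)
    d = cv2.distanceTransform(src, cv2.DIST_L2, 5)
    d[id_map == 0] = 0.0  # background stays zero
    # 3) Bound the values to [0, 1) and keep them near 1 inside the instances, Eq. (1).
    return np.tanh(d / s)
```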
### _Dynamically Scaled Shrinkage Loss_
In this section we propose a loss function that dynamically scales the absolute error before passing it through its nonlinearity. Since we build on the idea of the well known Shrinkage Loss [17], we call our proposed loss function the Dynamically Scaled Shrinkage Loss (DSSL). The DSSL can be applied to classification and regression problems as well.
In the following, we consider the local predictions on a feature map with shape \(B\times H\times W\times K\), with \(b\), \(h\), \(w\) and \(k\) as the indices on this feature map. \(B\) represents the batch size, \(H\) and \(W\) the spatial dimensions height and width and \(K\) the number of considered channels, for instance, \(K\) confidences for the classification of \(K\) classes. The absolute error for one element can then be written as
\[l_{bhwk}=|y_{bhwk}-\hat{y}_{bhwk}| \tag{2}\]
where \(y\) denotes the local ground truth and \(\hat{y}\) the prediction of the network. Starting with the average absolute error within a batch \(\bar{l}\)
\[\bar{l}=\frac{1}{BHWK}\sum_{b=1}^{B}\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{k=1}^{K} l_{bhwk} \tag{3}\]
we can write a normalized form \(n_{bhwk}\) of the absolute error
\[n_{bhwk}=\frac{l_{bhwk}}{2\mathrm{gs}\left(\bar{l}\right)+\epsilon} \tag{4}\]
where \(\mathrm{gs}(\cdot)\) stops the gradient propagation during training and \(\epsilon\) is a small constant for numerical stability. Utilizing the modulation factor \(f(l)\) with its parameters \(a\) and \(c\) from the Shrinkage Loss
\[f(l)=\frac{1}{1+\exp a(c-l)} \tag{5}\]
we can formulate the following local loss function
\[L_{bhwk}=\frac{n_{bhwk}}{1+\exp a(c-n_{bhwk})} \tag{6}\]
which leads to our global loss function
\[L_{DSSL}=\frac{1}{BHW}\sum_{b=1}^{B}\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{k=1}^{K }L_{bhwk} \tag{7}\]
The proposed DSSL has some notable properties. The dynamic scaling of the absolute errors has the consequence that the loss function always operates around its inflection point at \(0.5\). Replacing \(n_{bhwk}\) by \(l_{bhwk}\) in Eq. (6) disables the dynamic scaling behavior. Instead of the squared absolute error in the SL, we use the absolute (respectively, the normalized absolute) error in the numerator of Eq. (6). This avoids a second nonlinearity, reduces the influence of outliers and makes things more interpretable. Fig. 3 shows the influence of the parameters \(c\) and \(a\) on the modulation factor and the loss function itself2. Fig. 4 compares DSSL with other loss functions. It has the property of down-weighting the easy samples, as done by the FL, while at the same time reducing the influence of outliers, as done by the Smooth L1 Loss. Smooth L1 and FL also have the drawback of nearly vanishing gradients for small regression errors. Due to the dynamic scaling we always get strong error signals and let the training dynamics do the rest while we reduce the learning rate during training. Another consequence of the normalization is that the loss value no longer decreases during training (Fig. 9). Therefore,
Fig. 3: Influence of the parameters \(a\) (left) and \(c\) (right) on the modulation factor Eq. (5) (top) and the loss function Eq. (6) (bottom) depending on the absolute or respectively the normalized absolute error
Fig. 2: Bounded Distance Transform (rendered image data with orthographic camera)
the absolute error may be a better choice for monitoring the training progress. One may also consider maintaining a moving average of the mean absolute error, but in our experiments, we found that the statistics over the batch and spatial dimensions are sufficient for fast and stable convergence. If we apply the DSSL with our default parameters (\(a=20\) and \(c=0.5\)) to a classification task like the one shown in Figs. 14 and 15 (background, short, long), it rapidly fits the binary labels. For smoother values that look more like a probability distribution as produced by the FL, it would make sense to choose the parameters \(a\) and \(c\) more suitably, but for most classification tasks simply the \(\operatorname*{argmax}\) function is used. Also for bounding box regression it may be desired to get an error signal that is independent of the actual object scale, as with IoU Loss (Section II-B). This can be achieved by dividing the absolute error after Eq. (2) by the object scale. We provide our implementation3 of DSSL in TensorFlow.
Footnote 3: [https://github.com/mvoelk/ssd_detectors/blob/master/utils/losses.py](https://github.com/mvoelk/ssd_detectors/blob/master/utils/losses.py)
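For illustration only, a compact PyTorch re-expression of Eqs. (2)-(7) is sketched below; it is not the referenced TensorFlow implementation, and the defaults \(a=20\), \(c=0.5\) follow the values mentioned above.

```python
import torch

def dssl(y_true, y_pred, a=20.0, c=0.5, eps=1e-7):
    """Dynamically Scaled Shrinkage Loss for local predictions of shape (B, H, W, K)."""
    l = torch.abs(y_true - y_pred)             # Eq. (2): absolute error
    l_mean = l.mean().detach()                 # Eq. (3): batch mean, gradient stopped (gs)
    n = l / (2.0 * l_mean + eps)               # Eq. (4): normalized absolute error
    loss = n / (1.0 + torch.exp(a * (c - n)))  # Eq. (6): shrinkage-modulated error
    return loss.sum(dim=-1).mean()             # Eq. (7): sum over K, mean over B, H, W
```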
### _Prediction of Certainty_
In this section we propose a generalization of the quality estimate mentioned in Section II-B. The basic idea here is simple, let the network estimate its own prediction error. A direct prediction of the regression error as a quantitative measure may be difficult to handle in a post-processing (NMS) step, at least if a certain threshold has to be determined. Therefore, based on Eq. (2) and Eq. (3), we propose a unified quality measure Eq. (8) which is defined in the interval \([0,1]\) and where higher is better.
\[\text{certainty}_{bhwk}=\exp\left(\frac{l_{bhwk}}{\operatorname{gs}\left(\bar{l}\right)+\epsilon}\log\frac{1}{2}\right) \tag{8}\]
The plot in Fig. 5 shows a graph of the proposed certainty function, which is \(0.5\) for the average error, \(1\) for an error of \(0\) and \(0\) for an infinitely large error. As our intuition suggests and [22] have shown, the backpropagation of the gradient resulting from the actual prediction is not beneficial for the overall performance. Therefore we also stop the gradient propagation at this place. The training can be done via the DSSL proposed in Section III-B. Our first experiments in Fig. 16 indeed show that the predicted certainty is low in areas with insufficient information or ambiguity due to partially hidden objects etc.
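A corresponding sketch of the certainty ground truth of Eq. (8) is given below; the gradient through the prediction error is stopped as described above, and the certainty head itself is then trained with the DSSL.

```python
import math
import torch

def certainty_target(y_true, y_pred, eps=1e-7):
    """Certainty of Eq. (8): 1 for zero error, 0.5 for the mean error, 0 for infinite error."""
    l = torch.abs(y_true - y_pred).detach()  # gradient propagation stopped
    l_mean = l.mean()
    return torch.exp(l / (l_mean + eps) * math.log(0.5))
```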
## IV Packaging Unit Detection
### _Data Generation_
Object detectors \(\hat{f}(x)\), in general, approximate a function \(y=f(x)\) based on a finite number of training samples \((X_{n},Y_{n})_{n=1}^{N}\), where \(y\) denotes the abstract representation of the objects (category, position, orientation,...) and \(x\) denotes the sensor data corresponding to the scene. Since human annotations on real 3D sensor data are laborious and often inaccurate, it is much easier and more flexible to model the inverse \(f^{-1}(y)\) of this function using computer graphics and sample training data from this simulated distribution.
We utilize Blender4 to render photorealistic scenes of pallet stacks and get the corresponding ground truth (Fig. 7) without manual annotations. During this process of procedural data generation we randomize certain scene properties like camera view, lighting, background as well as the packaging pattern and the size and type of the packaging units themselves. We model certain specific types of packaging units like boxes, bags, beverage crates and closed and open cardboard boxes (fruit and vegetables), as well as their content in the form of cans, bottles, packages (beverage cartons, cardboard packages, etc.) or random stuff. For the packaging units, their content, the background and additional disturbance geometry, we excessively randomize the geometry using tessellation, the texture and the material properties in the form of the shader parameterization. To achieve robustness against the noise and sparsity resulting from sensor limitations we add several artifacts to the synthesized sensor data. We add Gaussian noise with random standard deviation to mimic sensor noise and Gaussian blur with random kernel size and standard deviation to mimic blur resulting from various intrinsic filtering techniques. In addition we add artificial sparsity as shown in Fig. 6. These binary masks are a combination of binary noise drawn from a Bernoulli distribution, dilated binary noise and random polygons and elliptic shapes that mimic large sharp and round sparse areas resulting from the physical measuring principle of the sensors. In summary, we get the simulated sensor data in the form of RGB and depth images and the ground truth for packaging unit detection in the form of size, position, orientation, keypoints (top-left, top-right, bottom-right, bottom-left corner of the front and back side of the packaging unit) in image and 3D space, visibility values
Fig. 4: Comparison of various loss functions; we use ’shrinkage abs’ for the DSSL, where the mean absolute error is dynamically scaled to \(0.5\)
Fig. 5: Certainty function as defined in Eq. (8)
and id-maps for instance segmentation. The visibility is the percentage of a packaging unit instance that is not occluded by other instances. An estimate of the visibility may later be helpful to decide whether a packaging unit can be picked or not.
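As an illustration of the artificial sparsity masks described above, the following sketch combines Bernoulli pixel dropout, dilated binary noise and random elliptic holes into a binary keep-mask; all probabilities, kernel sizes and counts are arbitrary example values, and the random polygons are handled analogously to the ellipses.

```python
import cv2
import numpy as np

def random_sparsity_mask(h, w, rng=None):
    """Binary mask (True = keep) mimicking sensor-induced sparsity."""
    if rng is None:
        rng = np.random.default_rng()
    keep = rng.random((h, w)) > 0.05                        # Bernoulli pixel dropout
    seeds = (rng.random((h, w)) > 0.998).astype(np.uint8)   # sparse seeds ...
    blobs = cv2.dilate(seeds, np.ones((5, 5), np.uint8), iterations=2)  # ... grown to blobs
    keep &= blobs == 0
    for _ in range(int(rng.integers(1, 4))):                # a few large round holes
        hole = np.zeros((h, w), np.uint8)
        center = (int(rng.integers(0, w)), int(rng.integers(0, h)))
        axes = (int(rng.integers(5, max(6, w // 6))), int(rng.integers(5, max(6, h // 6))))
        cv2.ellipse(hole, center, axes, float(rng.uniform(0, 180)), 0, 360, 1, -1)
        keep &= hole == 0
    return keep
```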
### _Architecture_
For the special case of packaging unit detection, we propose a suitable fully convolutional network architecture, shown in Fig. 8, that directly results from the prior work mentioned in Section II-B. We use two separate MBConv-based encoder blocks combined with Partial Convolution to handle sparse input data and concatenate the features together. We then use 1x1 convolution and dilated convolution to change the number of features and aggregate more spatial information. After a second stage of MBConv blocks we concatenate the tiled box size to the feature map. We also add the spatial location of elements on the feature map as additional features [26] and end up with a third and fourth stage in two separate branches, one for the classification and one for the regression part. We use Rectified Linear Unit (ReLU) activation functions for the whole architecture except for the final output activation, which we choose appropriately for the prediction (softmax, sigmoid or linear). The local predictions of the network contain a classification (background, box and interlayer), a second classification for the box orientation (short, long), a confidence for the prior (whether the detected instance fits to the prior / provided box size or not), visibility, relative keypoints in image space, keypoints in 3D, position and orientation (elements of a rotation matrix) in 3D, distance of the front face, box dimension (short, long, height), box dimension (width, depth, height), the BDT proposed in Section III-A and the certainty (Section III-C) for classification, orientation classification and box size regression.
### _Training_
For all predictions, we use the DSSL proposed in Section III-B. To handle the symmetries of packaging units along the z-axis (bottom to top) for the prediction of the orientation, we calculate the loss values for all valid orientations and select the smallest value for backpropagation. Depending on the use case, we decide whether we use training data with homogeneous or heterogeneous pallet stacks and whether we can omit the rightmost branch in Fig. 8 providing the box size. It is clear that the model will learn to make the best predictions for the case of homogeneous pallet stacks and known box size. For the more challenging use case with heterogeneous pallet stacks and unknown box size, the model has to learn to predict the box dimensions. It may also be easier to learn the box dimensions in the form (width, depth, height) in camera view instead of (short, long, height), where rotation comes into play and the box prediction may totally fail due to a wrongly estimated depth dimension, for instance. In the case of heterogeneous pallet stacks and known box size, the model will learn to distinguish the packaging units from others via the prior confidence and will also do a better
Fig. 8: Architecture of the proposed network; notation for Conv: kernel_size/stride, features; notation for MBConv: repeats, kernel_size, expansion
Fig. 6: Random artificial sparsity for RGB image (left) and with additional sparsity on depth image (right)
Fig. 7: Synthetically generated data; RGB image with (left), depth image (middle) and segmentation ground truth / id-map (right), detection ground truth (box frame and keypoints) drawn in RGB image
job on estimating their position and orientation.
### _Prediction and Post-Processing_
In this section we consider the post-processing step for filtering the dense predictions from the CNN output. In addition to the direct pose regression for the instances, we also reconstruct their pose from the 3D keypoint regression. To select good candidates from the local predictions, we filter them based on classification, visibility, BDT and certainty. We group the remaining candidates based on their minimum box dimension and their Euclidean distance to each other before we select the candidate that is nearest to the spatial mean of each group. Finally, we sort the remaining predictions in the pallet coordinate system, which is either known directly or for which a rough estimate is available. We sort them suitably for the application, based on their height, from top to bottom and from near to far.
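One possible greedy interpretation of this grouping step is sketched below; the concrete thresholds for classification, visibility, BDT and certainty are omitted, and the score-based seed selection is an assumption rather than a detail taken from our pipeline.

```python
import numpy as np

def group_candidates(positions, scores, min_box_dims, keep_mask):
    """positions: (N, 3) predicted box centers; keep_mask: result of the local filtering.
    Returns one representative candidate index per spatial group (box dims assumed > 0)."""
    unassigned = set(np.flatnonzero(keep_mask).tolist())
    representatives = []
    while unassigned:
        cand = list(unassigned)
        seed = max(cand, key=lambda i: scores[i])
        dists = np.linalg.norm(positions[cand] - positions[seed], axis=1)
        # Group everything closer to the seed than its minimum box dimension.
        group = [i for i, d in zip(cand, dists) if d < min_box_dims[seed]]
        mean = positions[group].mean(axis=0)  # spatial mean of the group
        representatives.append(min(group, key=lambda i: np.linalg.norm(positions[i] - mean)))
        unassigned -= set(group)
    return representatives
```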
## V Application
#### V-D1 Implementation
During our work, we trained different sensor specific models and one on orthographic projections which is independent of the intrinsic sensor parameters. Namely, we trained models for the Intel(r) RealSense D435 which is rather low-cost and for the Zivid One+ Large which provides sensor data with higher quality. We trained each model on roughly 120k generated training samples and used 2k for validation. The training itself was performed on one NVIDIA V100 GPU, with batch size 12 and Adam as optimizer. We started our training with an initial learning rate of \(0.001\) and reduced it by half after 225k, 300k, 375k, 450k iterations respectively. Some plots from the training process are shown in Fig. 9.
To facilitate the integration and practical use of our detection system, we developed interfaces for ROS and the Universal Robot platform via URCap-XMLRPC and provide a general REST-API. We used the ROS interface to integrate it on a mobile robot platform and applied it to beverage logistics (Figs. 1 and 10). We also used the XMLRPC interface to integrate it in our "AI-Picking" lab demonstrator (as shown in the attached video), where we use much smaller pallets and packaging units. There we did not have to train a new model; we simply scaled the whole geometry and worked with an existing model that was trained for euro pallets5. With the intention to improve the detection performance and for comparison, we implemented a simple strategy for depth map completion specialized for the depalletizing task. Therefore we used morphological operations to set large and far-away areas to the depth value of the wall before using a classical inpainting method following [36] to fill the remaining small areas (Fig. 11).
Footnote 5: Since there was no small version of the roll-on gripper mentioned above available, we were forced to utilize a suction gripper instead.
#### V-D2 Evaluation on Real-World Data
Due to the poor quality of the Intel(r) RealSense D435, it made little sense for us to annotate larger amounts of data from this sensor that can be used for evaluation. For the case of the depalletizing task where the box size is known, we got annotations on sensor data from the Zivid One+ Large. With these real-world annotations, we were able to evaluate our system on a wide range of different products. We evaluated it with 50 samples per product on 45 different commercial products (2250 samples in total). Since only the detections in the topmost layer are relevant for the depalletizing task and we only have reliable annotations for this layer, we only detect and evaluate on the topmost layer (8937 instances in total). We used the _f-measure_ (harmonic mean of _precision_ and _recall_) as the primary metric for the evaluation shown in Fig. 13. A detection is counted as true positive if its positional distance in pallet frame does not deviate more than a certain
Fig. 11: Our depth completion strategy on raw sensor data (top) and the orthographic projection of the raw sensor data (bottom)
Fig. 10: Detection of beverage crates (Intel® RealSense D435)
Fig. 9: Training log (left) and history (right); iterations on the bottom, epochs on top; dashed lines represent validation
maximal value (\(d_{x,\text{max}},d_{y,\text{max}},d_{z,\text{max}}\)), the orientation (short, long) is classified correctly and it is not a duplicate detection. We evaluated for the direct regression of the box center, the box center constructed from the 3D keypoints and on the bottom-left and bottom-right front keypoints (red line in Figs. 14 and 15), since it is most relevant for picking. With a maximum permissible deviation of 25 mm in \(x,y,z\) at an average camera distance of \(1.8\) m, we get an average f-measure of 0.985 over all products. The direct regression of the box center location gives the best results for almost all products. We have also done this evaluation with our depth completion strategy from the previous section and found that depth completion roughly decreases the f-measure by 0.01. Closer investigations showed that it fails on large areas without depth information, caused by reflection or transparent materials, which can be better handled by the Partial Convolutions. The increase of the maximum permissible deviation in position from 25 mm to 50 mm shows that the actual detection of packaging units is not an issue; it is mostly a question of position error. We found several causes which require more detailed investigation in future work to further reduce the position error:
1. Inaccuracy of human annotations in the test data
2. Low quality of sensor data resulting from transparency and reflections
3. Small resolution in the model input
4. Synthetic-to-real gap
5. Tolerances of provided box size
6. Deformations of the packaging units
## VI Conclusion and Future Work
In this paper, we proposed a unified framework for packaging unit detection and applied it to various scenarios. We also contributed with several novel concepts for improving the training of single-shot object detectors and the post-processing stage in Section III. Resulting from the presented work, directions for further work arise. The proposed BDT, DSSL and certainty prediction should be applied with several generic state-of-the-art object detection frameworks and evaluated on well known benchmarks like MS COCO [37].
## Acknowledgment
This work was partially supported by the German Federal Ministry of Education and Research (Deep Picking - Grant No. 01IS20005C) and the Ministry of Economic Affairs, Labour and Tourism of the state of Baden-Württemberg (Luka-Beverage - Grant No. 36-3400.7/91, and AI Innovation Center "Learning Systems and Cognitive Robotics").
Fig. 14: Prediction on generated validation data; only instances with visibility \(>0.25\) are shown
Fig. 12: Failure case: bad pose estimations caused by low sensor quality (Intel® RealSense D435); direct regression (red) and based on keypoint regression (blue)
Fig. 13: Evaluation on 45 different products with 50 samples per product; maximum permissible deviation 25 mm (top) / 50 mm (bottom) |
2309.01740 | An Empirical Analysis for Zero-Shot Multi-Label Classification on
COVID-19 CT Scans and Uncurated Reports | The pandemic resulted in vast repositories of unstructured data, including
radiology reports, due to increased medical examinations. Previous research on
automated diagnosis of COVID-19 primarily focuses on X-ray images, despite
their lower precision compared to computed tomography (CT) scans. In this work,
we leverage unstructured data from a hospital and harness the fine-grained
details offered by CT scans to perform zero-shot multi-label classification
based on contrastive visual language learning. In collaboration with human
experts, we investigate the effectiveness of multiple zero-shot models that aid
radiologists in detecting pulmonary embolisms and identifying intricate lung
details like ground glass opacities and consolidations. Our empirical analysis
provides an overview of the possible solutions to target such fine-grained
tasks, so far overlooked in the medical multimodal pretraining literature. Our
investigation promises future advancements in the medical image analysis
community by addressing some challenges associated with unstructured data and
fine-grained multi-label classification. | Ethan Dack, Lorenzo Brigato, Matthew McMurray, Matthias Fontanellaz, Thomas Frauenfelder, Hanno Hoppe, Aristomenis Exadaktylos, Thomas Geiser, Manuela Funke-Chambour, Andreas Christe, Lukas Ebner, Stavroula Mougiakakou | 2023-09-04T17:58:01Z | http://arxiv.org/abs/2309.01740v2 | An Empirical Analysis for Zero-Shot Multi-Label Classification on COVID-19 CT Scans and Uncurated Reports
###### Abstract
The pandemic resulted in vast repositories of unstructured data, including radiology reports, due to increased medical examinations. Previous research on automated diagnosis of COVID-19 primarily focuses on X-ray images, despite their lower precision compared to computed tomography (CT) scans. In this work, we leverage unstructured data from a hospital and harness the fine-grained details offered by CT scans to perform zero-shot multi-label classification based on contrastive visual language learning. In collaboration with human experts, we investigate the effectiveness of multiple zero-shot models that aid radiologists in detecting pulmonary embolisms and identifying intricate lung details like ground glass opacities and consolidations. Our empirical analysis provides an overview of the possible solutions to target such fine-grained tasks, so far overlooked in the medical multimodal pretraining literature. Our investigation promises future advancements in the medical image analysis community by addressing some challenges associated with unstructured data and fine-grained multi-label classification.
## 1 Introduction
Artificial intelligence (AI) research in the medical domain throughout the pandemic prioritized applying supervised learning methods to help hospitals diagnose or triage patients faster. Whilst models were developed at a fast pace, the majority of these were deemed not fit for clinical use [47]. Despite these setbacks, the pandemic led to the generation and collection of substantial medical imaging data, among others. AI is still considered a high-quality strategy to assist in diagnosis, severity assessment, and prognosis of long COVID-19 [21]. The large amounts of unlabelled data from the pandemic make it possible to develop and explore self-supervised learning models. One of the most popular methods is contrastive visual language pretraining (CLIP), which enables training on pairs of images and text with zero-shot capabilities [33]. CLIP eliminates the need for precisely annotated datasets and learns representations from noisy image-text pairs, potentially resulting in significant time and cost savings.
Applying self-supervised deep learning methods to open-source data has rapidly gained popularity in medical and non-medical domains [5, 19, 39, 50]. The current success of self-supervised learning, particularly contrastive,
can be attributed to preventing dimension collapse through aggressive data augmentation and negative pairs [22] and utilizing large datasets. For instance, CLIP training from scratch was enabled by access to 400 million pairs of images and texts crawled from the web. Contrastive methods have been highly valuable in downstream tasks like image classification, enabling the creation of competitive representations comparable to fully supervised networks [10, 6].
The medical domain does not have access to such large datasets since collecting data is costly and requires highly specialized human expertise [42, 41]. Despite this, there has recently been a successful application of multimodal contrastive pretraining techniques [50, 39, 49, 45]. Most of the aforementioned studies performed pretraining on X-rays [50, 39], which are easier to gather but less precise than other modalities, e.g., CT scans. Furthermore, given the smaller scale of biomedical datasets and the significant domain shift among different subdomains compared to natural images (e.g., X-rays to CT scans), it remains an open research question on how to adapt and fine-tune available pre-trained models properly [45].
This work focuses on fine-tuning pre-trained encoders via CLIP on CT images and uncurated radiology reports obtained during the pandemic. We investigate several issues deriving from fine-tuning on a different domain, such as the data preprocessing of large volumetric CT scans and long unstructured reports. Furthermore, in collaboration with expert radiologists, we establish a fine-grained multi-label classification task that evaluates disease severity and identifies the presence of five distinct characteristics commonly associated with COVID-19: pulmonary embolism, pneumonia, consolidation, infiltrates and ground glass opacities. The zero-shot classification task is particularly challenging due to the uncurated structure of the reports and the fine-grained nature of the task. To improve the correct matching across visual predictions and text targets, rather than keeping class-independent templates like standard practice [33], we design per-class templates.
We specifically focused on patients diagnosed with COVID-19, as we aim to develop a valuable tool to aid radiologists in identifying individuals at the highest risk. More broadly, we hope that our empirical analysis could be helpful for researchers involved in deploying pre-trained models on different medical imaging domains by only exploiting uncurated data such as CT scans and corresponding radiology reports.
## 2 Related Work
Contrastive visual language learning.Contrastive learning [31, 5] has evolved to be used in multi-modal data pipelines as popularised in CLIP [33]. When applying CLIP, we are considering image and text modalities. Recent work has extended the modalities to six [13]. In essence, we are mapping multi-modal data to a unified latent space, where we calculate the similarities between different elements and learn models to map data across different modalities effectively. When looking at how to calculate how similar different modalities are, we first look at the contrastive loss presented in Radford _et al_. [33], which Sohn originally influenced [37] and then was further used in contrastive representation learning in Oord _et al_. [30]. An early attempt in CLIP radiology presented ConVIRT [50], which maps chest X-rays to reports. CLIP's mainstream success has resulted in further applications to the medical domain [11]. CheXzero builds upon Radford _et al_. [33] by finetuning the model to the radiology domain, successfully achieving human-like results without explicit labels during training [39]. Similarly, MedCLIP explores CLIP by applying alternative models to Open AI and CheXzero [45]. In particular, they employ Swin transformer [25] and BioClinicalBERT [2] as their respective vision and text encoders. They also perform ablation studies with a ResNet-50 [15] as the vision encoder, which provides the best results in zero-shot classification on COVID-19 and RSNA pneumonia X-ray datasets. More recently, Zhang _et al_. built one of the most extensive medical image-pair datasets and obtained successful image-text/text-image retrieval results [49]. One significant key difference is the increase in context length from 77 to 256. Contrastive learning has also been successfully applied to lung CT images [38, 24]. However, there is currently a lack of extensive literature exploring the application of this technique specifically to CT images and their corresponding reports. Multi-modal learning provides opportunities to provide interesting insights into tasks such as zero-shot learning.
Covid-19.Mohit _et al_. [29] use convolutional neural networks (CNNs) as encoders to build a computer-aided diagnosis system for COVID-19 to assist radiologists in early diagnosis. Building upon this, transformers, as seen in [12], can also be used for supervised diagnosis, as shown in Dong _et al_. [1]. Moreover, this work demonstrates strong results on public datasets by extracting the relevant features from 3D volumes. We see contrastive learning used to build a severity system based on electronic health records (EHRs) [46]. While such systems provide good results, structured data like this is often difficult to obtain, so we primarily focus on unstructured data such as text.
Transformers.Transformers have emerged as dominant models for natural language processing (NLP) and computer vision, largely due to their effective utilization of attention mechanisms [3, 28]. In NLP settings, self-attention compares word embeddings to capture the relevance and importance of each word in the context [40]. A popular choice, BERT [7], was recently fine-tuned on general and
COVID-19-related radiology reports [48, 4]. Influenced by text transformers, vision transformers have been argued to be more efficient in training than traditional CNNs [8]. They divide images into fixed-size patches and utilize self-attention to capture the relationships between them. This study considers three different vision transformers: ViT-B/16, ViT-B/32 [8] and Swin-Transformer [25]. The architectures of these two transformer families differ. ViT operates on the entire image as a sequence of flattened patches, whereas Swin Transformer introduces a hierarchical structure of windows to process images. There have now been several implementations of vision transformers applied to medical images [16, 32].
## 3 Method
### Data pre-processing
CT images are naturally large, given their multi-channel nature. To decrease the memory overhead and filter out unwanted noise [17], we employ a data pre-processing technique influenced by ILD diagnosis research [35]. In particular, the pre-processing steps for each CT scan are as follows:
* We first reduce the size along the axial dimension by 10% on either end. CT scans contain redundant or low information at the beginning and end of the image.
* We then spatially crop the images to ensure there is no additional space and we are focusing on the lung tissue.
* The remaining CT is split into 4 blocks of similar dimensions.
* A single slice is randomly selected from each block and concatenated to form a 4-image slice montage.
* Each 4-slice montage is resized to \(224\times 224\).
To speed up our data loading, we perform the aforementioned data pre-processing offline. The pre-processing was performed 10 times for each CT scan. Random slice selection can be considered a form of data augmentation. In Figure 1, we can see an example of the image-text pair. For our text pre-processing, we translate the whole report from German to English using the Google Translate API in Python1. Then we slice the radiology report to focus only on the lung parenchyma. Additionally, we filter the resulting text with filters taken from Clinical XLNet [18].
Footnote 1: The reports are originally in German, given that the hospitals are in the German speaking part of Switzerland.
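A minimal sketch of the montage construction is shown below; the lung cropping step is application-specific and omitted, and the 2x2 arrangement of the four slices is an assumption for illustration.

```python
import cv2
import numpy as np

def make_montage(ct_volume, rng=None):
    """ct_volume: axial CT volume of shape (Z, H, W); returns one 224x224 montage."""
    if rng is None:
        rng = np.random.default_rng()
    z = ct_volume.shape[0]
    trim = int(0.1 * z)
    vol = ct_volume[trim:z - trim]                            # drop 10% of slices at both ends
    blocks = np.array_split(vol, 4, axis=0)                   # 4 axial blocks of similar size
    slices = [b[int(rng.integers(len(b)))] for b in blocks]   # one random slice per block
    montage = np.vstack([np.hstack(slices[:2]), np.hstack(slices[2:])])
    return cv2.resize(montage.astype(np.float32), (224, 224))
```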
### Encoders selection
Our dataset is considered small compared to previous radiology datasets like CheXpert [20], and MIMIC-CXR [23]. We compensate for the lack of data by building upon previous models specific to our task. We consider established medical-based CLIP methods whilst also experimenting with extracting their vision encoder and finetuning with the RadBERT model. The RadBERT model is pre-trained on 466 million tokens or 4.42M radiology reports [48]. RadBERT was further fine-tuned in a COVID-19 investigation on 19,384 radiology reports this year in Chambon _et al_. [4], making the model weights publicly available. The prior representations of the text data learned in this model suit this study.
The choice of possible vision encoders is very wide, but we are interested in those exposed to relevant data and tasks. We observe that previous studies opt to use either vision transformers or standard CNNs such as ResNet-50 [15]. Precisely, we consider the following vision encoders:
* **ResNet-50 [15].** Initialization with pre-trained weights on ImageNet [34] is standard practice in computer vision. We set it up to compare to medical domain-trained models.
* **CheXzero [39].** We initialize the CLIP model [33] with publicly available weights trained on the CheXpert and MIMIC-CXR datasets. The results in this paper are very promising in a multi-label task. We are only changing the image modality input. The text data is related to lung conditions specific to our radiology reports.
* **MedCLIP [45].** It considers both a ViT [8] and ResNet-50 [15] trained on their image-text dataset. The datasets picked are relevant to our task. Using
Figure 1: An example of the CT montage, four random slices taken from the CT image. The text is a subsection of the radiology report.
alternative encoders, we are interested in the results compared to CheXzero.
* **BiomedCLIP [49].** The BioMedCLIP set-up is attractive to us due to the size of the specific dataset it has been trained on. They have also increased their context length when training text encoders.
* **COVID-ViT [12].** A custom ViT transformer used for COVID-19 diagnosis in CT slices. The model has learned to extract relevant features based on the pathological tissue in each CT slice. If correctly aligned, the vision encoder should successfully map the extracted image patterns to the relevant text features.
When considering BiomedCLIP, MedCLIP, and CheXzero, we explore the transfer without any adaptation, i.e., frozen encoders, the training of their vision and text encoders, and the extraction of only their vision encoder and training with the RadBERT text encoder. We also train the ResNet-50 and COVID-ViT with the RadBERT text encoder. When training with the RadBERT, we adapt each vision encoder to map the text output shape.
### Embeddings alignment
Contrastive visual language training requires image and text embeddings to be mapped to the same latent space. Closely following the forward pass in CLIP [33], we align the output embeddings of the text and vision encoder by calculating the logits of each modality and passing these into separate cross-entropy loss functions. During training, given a batch of \(B\) input pairs \((\mathbf{x}_{v},\mathbf{x}_{u})\), we calculate their respective representation pairs \((\mathbf{v},\mathbf{u})\) by feeding them into each respective encoder. We use \((\mathbf{v}_{i},\mathbf{u}_{i})\) to denote the \(i\)-th pair. The first loss function is an image-to-text contrastive loss for the \(i\)-th pair:
\[\ell_{i}^{(v\to u)}=-\log\frac{\exp\left(\left\langle\mathbf{v}_{i}, \mathbf{u}_{i}\right\rangle/\tau\right)}{\sum_{k=1}^{B}\exp\left(\left\langle \mathbf{v}_{i},\mathbf{u}_{k}\right\rangle/\tau\right)},\]
while similarly, the text-to-image loss:
\[\ell_{i}^{(u\to v)}=-\log\frac{\exp\left(\left\langle\mathbf{u}_{i}, \mathbf{v}_{i}\right\rangle/\tau\right)}{\sum_{k=1}^{B}\exp\left(\left\langle \mathbf{u}_{i},\mathbf{v}_{k}\right\rangle/\tau\right)}\]
where \(\left\langle\mathbf{v}_{i},\mathbf{u}_{i}\right\rangle\) represents the cosine similarity and \(\tau\in\mathbb{R}^{+}\) represents a temperature parameter. The temperature parameter controls the range of the logits in the stated losses and the strength of penalties on hard negative samples [43]. To calculate the overall loss, we calculate the average of the losses. Precisely, we add them together and divide them by the number of modalities, i.e., two.
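The resulting symmetric training objective can be sketched as follows (PyTorch); the temperature value shown is a common default and not necessarily the one used in our experiments.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(v, u, tau=0.07):
    """v: (B, D) image embeddings, u: (B, D) text embeddings of the paired reports."""
    v, u = F.normalize(v, dim=-1), F.normalize(u, dim=-1)
    logits = v @ u.t() / tau                          # (B, B) cosine similarities
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2u = F.cross_entropy(logits, targets)       # image-to-text loss
    loss_u2v = F.cross_entropy(logits.t(), targets)   # text-to-image loss
    return (loss_v2u + loss_u2v) / 2                  # average over the two modalities
```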
Figure 2: **Zero-shot pipeline.** In the left part, we see the five boxes which represent each class’ list of template pairs. For a given class we iterate through its template pairs to generate the postive-negative zero-shot weights. Multiplying the respective CT montage embedding with both positive-negative zero-shot weights gives us a similarity. The last stage is calculating the softmax of these two predictions to result in our final prediction vector.
### Zero-shot multi-label classification
Zero-shot methods have gained traction recently, popularised by seminal papers such as Socher [36] and Radford [33]. The difficulty in evaluating our models lies within the embedding space of our text data. Previous works have utilized shorter, curated text data. This allows the design of simple prompts that match the training distribution, e.g., "A picture of a CAT". However, our report data comes directly from radiologists and reflects a realistic medical setting. To overcome this challenge, we employ prompt-based engineering, as seen in the latest approaches [27, 39, 33]. In collaboration with human experts, we analyze the radiology reports to create sensible classes for the dataset. We first create a word cloud (Figure 3, left) on the text data to visualize the most common words. We settle on the following classes: pulmonary embolism, pneumonia, consolidation, infiltrates and ground glass opacities. Confirming these are sensible and useful classes for radiologists, the human experts label the testing samples for us. Pulmonary embolism is the least common class, diagnosed in approximately one in ten patients. The prevalence of the other classes ranges from 65% to 80%. These classes lie unstructured in the text data we are training on and remain hidden from the vision encoders as in traditional zero-shot learning [44]. For a given class, our prompt is a positive-negative template paired with the word CLASSNAME, which is replaced by the class. The choice of templates required manual analysis of the reports to estimate which prompts would work well. To justify our template choice, we removed the class names from the text data and generated a word cloud visible in Figure 3 (right). Using this, together with manual reading of the reports, we identify prompts which occur with our class names; for example, the phrase "bilateral infiltrates" creates our prompt "bilateral CLASSNAME" for the class infiltrates.
Following Tu [39], to map the models' prediction to probabilities, we use a softmax layer for each template pair. Instead of using the same template pairs for each class, we propose each class has a list of its template pairs. This is visualized in Figure 2. More in detail:
* We iterate over each class. In each class, we iterate over the template pairs. For a given template, we substitute the class into the template (e.g., no pulmonary embolism) and pass it into the text encoder. We normalize the embeddings and concatenate the results to get our zero-shot weights. We are left with a 5D vector which is our zero-shot weights for each class.
* We pass the respective CT montage into the vision encoder and normalize the embeddings.
* We calculate the cosine similarity of the resulting embeddings by multiplying the zero-shot weights and the vision embeddings.
* We estimate the class by calculating the softmax of the positive-negative predictions.
We do not scale by the temperature parameter \(\tau\) in the zero-shot evaluation. We do not include this as we are relying on the representations learned by our encoders. The parameter is no longer needed to control the range of the logits. As a CT montage can have more than one label, we solve a multi-label zero-shot problem by performing binary classification for each class.
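A sketch of this per-class positive-negative evaluation is given below; the template dictionary is illustrative (the full hand-crafted prompt lists are not reproduced), `encode_text` stands for the text encoder, and averaging the per-pair probabilities is one plausible way to aggregate multiple template pairs per class.

```python
import torch
import torch.nn.functional as F

# Illustrative template pairs (positive, negative); one list per class.
TEMPLATES = {
    "pulmonary embolism": [("CLASSNAME", "no CLASSNAME")],
    "infiltrates": [("bilateral CLASSNAME", "no CLASSNAME")],
}

@torch.no_grad()
def zero_shot_predict(image_emb, encode_text, templates=TEMPLATES):
    """image_emb: (D,) montage embedding; encode_text: str -> (D,) text embedding."""
    image_emb = F.normalize(image_emb, dim=-1)
    preds = {}
    for cls, pairs in templates.items():
        probs = []
        for pos, neg in pairs:
            w = torch.stack([F.normalize(encode_text(t.replace("CLASSNAME", cls)), dim=-1)
                             for t in (pos, neg)])      # positive-negative zero-shot weights
            sim = w @ image_emb                         # cosine similarities (no temperature)
            probs.append(torch.softmax(sim, dim=0)[0])  # P(class present) for this pair
        preds[cls] = bool(torch.stack(probs).mean() > 0.5)
    return preds
```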
## 4 Experimental Settings
### Dataset
After the ethics approval, the study collected data from University Hospital Zurich in Switzerland, resulting in over
\begin{table}
\begin{tabular}{c c} \hline \hline Variable & All patients \\ \hline Age (years \(\pm\) SD) & 61.4 \(\pm\) 14.2 \\ Male / Female & 276 / 84 \\ BMI (\(\frac{kg}{m^{2}}\) \(\pm\) SD) & 27.9 \(\pm\) 8.5 \\ Data split (Training / test) & 368 / 92 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Descriptive statistics of the patient cohort.
Figure 3: **Word Clouds.** Word cloud generated to assist in class decisions (left). Word cloud with class names removed to assist in template selection (right).
460 image-text pairs from individual patients taken during the COVID-19 pandemic. A small summary of the patient cohort can be seen in Table 1. CT scans were taken on either Siemens SOMATOM Definition AS, SOMATOM Definition Flash, or SOMATOM Force machines. We only consider thorax CT images taken in a single session. After resampling the CT images and preparing the montages (Section 3.1), we are left with 13,240 montages and 460 reports. We split the dataset 80:20 based on patient ID, so montages from the same CT are not seen in both training and evaluation. We trained on 11,010 montages along with their respective reports. When evaluating our test set, we chose a single montage-report pair for each patient and performed the zero-shot evaluation.
### Training details
In all training scenarios, we set a maximum of 100 epochs, a batch size of 100, and use the AdamW optimizer [26] with betas of (0.9, 0.98). When fine-tuning existing methods (CheXzero, MedCLIP, BioMedCLIP), we adapt the hyperparameters to achieve the best results. Specifically, we set the learning rate to [5e-6, 5e-5, 5e-5] and the weight decay to [1e-4, 1e-4, 1e-3], respectively. For training RadBERT and alternative vision encoders, we fix the learning rate at 5e-5 and the weight decay at 1e-3.
We implemented offline image data preparations to accelerate data loading. At the end of the image processing pipeline described in Section 3.1, we convert montages to PIL images and tensors. During training, we use a single NVIDIA RTX A6000 GPU to train the encoders.
### Zero-shot evaluation
To assess the zero-shot capabilities, we consider calculating the macro average F1 score, Hamming loss, and the subset accuracy. The F1 score is defined as
\[F1=\frac{2*Precision*Recall}{Precision+Recall}\]
where precision and recall are respectively defined as \(\frac{TP}{TP+FP}\) and \(\frac{TP}{TP+FN}\). The macro average F1 gives us the average of the F1 over all classes.
Hamming loss is often considered in multi-label evaluation. The Hamming loss measures the fraction of instances where a model's predictions do not equal the true labels. It is obtained by dividing the number of incorrect predictions by the total number of instances and classes.
\[HL=\frac{1}{NL}\sum_{l=1}^{L}\sum_{i=1}^{N}Y_{i,l}\oplus X_{i,l},\]
\(N\) is the total number of data samples and \(L\) is the total number of classes. \(\oplus\) is the Exclusive-OR, and \(X_{i,l}\) (\(Y_{i,l}\)) is the boolean target (prediction) for the \(l\)-th label of the \(i\)-th sample.
Lastly, the subset accuracy measures how accurately every sample is predicted. For instance, if our target array is [0, 1, 1, 0, 0], it is considered correct only if all elements of the vector are predicted accurately. The prediction [0, 1, 1, 1, 0] is hence considered wrong. This makes the subset accuracy the most challenging metric to fulfill.
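All three metrics are available in scikit-learn; a short sketch for our five-class multi-label setting:

```python
from sklearn.metrics import accuracy_score, f1_score, hamming_loss

def multilabel_metrics(y_true, y_pred):
    """y_true, y_pred: binary indicator arrays of shape (N, 5), one column per class."""
    return {
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
        "hamming_loss": hamming_loss(y_true, y_pred),
        "subset_accuracy": accuracy_score(y_true, y_pred),  # exact-match accuracy
    }
```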
## 5 Results
In this section, we empirically investigate the best training solution for reliably mapping a relatively small set of
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Model & Encoders & Fine-tuned & Context length & Macro Avg. F1 (\(\uparrow\)) & HL (\(\downarrow\)) & Sub. Acc. (\(\uparrow\)) \\ \hline CheXzero [39] & ViT-B/32 \(|\) GPT2 & ✗ & 77 & 0.43 & 0.54 & 0.00 \\ CheXzero [39] & ViT-B/32 \(|\) GPT2 & ✓ & 77 & 0.71 & 0.29 & 0.23 \\ \hline MedCLIP [45] & Swin-T \(|\) BCB & ✗ & 77 & 0.62 & 0.45 & 0.04 \\ MedCLIP [45] & Swin-T \(|\) BCB & ✓ & 77 & 0.51 & 0.48 & 0.12 \\ \hline BioMedCLIP [49] & ViT-B/16 \(|\) PMB & ✗ & 256 & 0.33 & 0.49 & 0.06 \\ BioMedCLIP [49] & ViT-B/16 \(|\) PMB & ✓ & 256 & 0.61 & 0.44 & 0.09 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Frozen vs fine-tuned encoders**. Comparison between the frozen and fine-tuned models. HL stands for the Hamming loss, and Sub. Acc. for subset accuracy. PMB, BCB respectively, denote PubMedBERT and BioClinicalBERT.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Templates & Macro Avg. F1 & HL & Sub. Acc. \\ \hline \multirow{2}{*}{CheXzero} & CI & 0.55 & 0.47 & 0.05 \\ & CD & 0.71 & 0.29 & 0.23 \\ \hline \multirow{2}{*}{RN50} & CI & 0.46 & 0.51 & 0.12 \\ & CD & 0.50 & 0.45 & 0.21 \\ \hline \multirow{2}{*}{MedCLIP} & CI & 0.57 & 0.44 & 0.13 \\ & CD & 0.58 & 0.42 & 0.15 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Class-independent (CI) vs. class-dependent (CD) templates**. CheXzero is the ViT-B/32 plus GPT2. The ResNet-50 (RN50) is pre-trained with ImageNet. MedCLIP is the Swin transformer. The RN50 and MedCLIP are coupled with the RadBERT text encoder with a context length of 100 and 200, respectively. All models are fine-tuned on our dataset.
CT images to corresponding radiology reports and consequently performing zero-shot predictions. As anticipated, we consider three approaches: 1) applying frozen baseline methods, 2) finetuning baseline methods, and 3) training an alternative text and vision encoder. We also study the impact of class-dependent and class-independent templates. We perform additional ablations considering the truncation side and context length.
### Baselines and finetuning
For a comprehensive comparison, we consider three baselines that provide publically available weights applying contrastive visual language models in the medical domain. CheXzero [39], MedCLIP [45], and BioMedCLIP [49] are all available and are relevant to our task.
**Frozen encoders.** As baseline methods, we load the pre-trained models and directly apply the zero-shot evaluation on our test set without any modifications. All evaluated baseline models feature transformers. The results for the frozen encoders are shown in Table 2. The subset accuracies are extremely low, ranging from a minimum of 0% to a maximum of 6%. Considering that none of the models has been exposed to CT montages before, pre-training on large datasets evidently does not compensate for the shift to our domain, and the models need specific finetuning.
**Fine-tuned encoders.** Next, we finetune each baseline on our small dataset and again perform the evaluation. As expected, we observe an increase in the exact match performance (Table 2). MedCLIP and BioMedCLIP gain 8 and 3 percentage points, respectively. Notably, CheXzero adapts extremely well to our dataset, increasing the subset accuracy from 0% to 23%. Its other metrics are also the best we observe in this study, i.e., a Hamming loss of 0.29 and a macro average F1 score of 0.71. This finding suggests that previous pre-training on large chest X-ray datasets and their corresponding reports helps achieve better results in this fine-grained task. Unexpectedly, the macro average F1 for MedCLIP decreases and its Hamming loss increases. We suspect that the concurrent deterioration of these two metrics and the increase in subset accuracy imply a recognition improvement for a subset of the classes and a deterioration in others. Finally, BioMedCLIP shows a large increase in the macro average F1 score but only modest changes in the other metrics.
**Class-independent vs. class-dependent templates.** In contrast to previous studies, each zero-shot class has its own list of template pairs. This enables us to better target vision features with prompts specific to a class. For example, the positive prompt for pneumonia was often "consistent with", whereas this prompt would not make sense for a different class such as pulmonary embolism. As shown in Table 3, CheXzero improves drastically in all three metrics when applying class-dependent prompts, in particular with a subset accuracy increase of 18%. All previous results, shown in Table 2, are obtained with class-dependent templates.
### Combining pre-trained models
Identifying the best setups requires investigating combinations of models pre-trained specifically on COVID radiology reports and on chest image datasets. All text encoders use the COVID-finetuned RadBERT, which has prior knowledge of radiological terms such as "pulmonary embolism" and "ground glass opacities". We also select the existing pre-trained CLIP models in the medical domain and extract their vision encoders. These vision encoders have all been exposed to chest X-rays, and some have even been exposed to single CT slices. Furthermore, we analyze a vision transformer used in COVID diagnosis and a ResNet-50 pre-trained on ImageNet. The results reflect the difficulty of the task, with the best subset accuracy reaching 21% (Table 4). Interestingly, this best result is obtained with RN50, which has not been pre-trained on medical datasets.
We run further ablations considering text and multiple vision encoders. We experiment with class-dependent vs. class-independent templates, changing the context length and truncating from different sides.
**Class-independent vs. class-dependent templates.** Table 3 also shows positive trends for the top-two combinations in Table 4, those trained with RadBERT and an alternative vision encoder. With regards to subset accuracy, the ResNet-50 increases by 9% and MedCLIP marginally improves by 2%. All three metrics improve when applying class-dependent prompts.
**Context length and truncation.** Because our uncurated radiology reports are longer than standard image-text pairs, we test context lengths of 100 and 200 tokens. Previous work in contrastive visual-language learning deals with shorter text, so we explore increasing this limit to capture more information. Furthermore, we test whether truncating the tokens from the left or the right affects performance: the beginning of a report typically contains detailed descriptions of the lungs, whereas the end contains shorter statements that match our classes. We therefore test whether more valuable representations can be learned from the beginning or the end.
Our results show the best subset accuracy with the shorter context length and truncation from the right. The best Hamming loss is recorded with the longer context length truncated from the left, which still yields a good subset accuracy. The average subset accuracy for context length 100 is 9.3%, and for 200 it is 9.41%; likewise, for truncation on the left it is 7.8%, and for the right it is 9.6%. We conclude that the context length does not alter the results massively, but truncating from the right works better than truncating from the left.
## 6 Conclusions
In our paper, we investigated the development of a zero-shot tool that assists radiologists in detecting pulmonary embolisms and identifying intricate lung details, including ground glass opacities and consolidations, automatically.
To meet this goal, we first collected uncurated COVID-19 CT scans and corresponding reports from a university hospital to ensure our study faces real-world scenarios. Secondly, we ran experiments based on image-text pre-training and fine-tuning for multi-label zero-shot classification. A clear challenge was the variability of the text data and the consequent framing of the zero-shot targets. We partially addressed this issue using a class-dependent zero-shot template scheme and pre-trained vision-text medical models. In parallel, we are in the process of curating data from three additional hospitals. This data will enable us to further develop and apply the described methods to longitudinal data, specifically for the prognosis of long COVID-19. In future work, we would like to see such techniques applied to 3D volumes to solve tasks such as disease prognostication and progression, as discussed recently in review articles [9, 14].
We hope our work inspires future research to meaningfully use data collected throughout the pandemic and improve the automatic identification of fine-grained lung pathological patterns.
## 7 Acknowledgements
This work was supported in part by the Emergency Department and the Department of Diagnostic, Interventional, and Pediatric Radiology of Inselspital Bern and in part by Campus Stiftung Lindenhof Bern (SLB).
|
2310.14548 | Test Smell: A Parasitic Energy Consumer in Software Testing | Traditionally, energy efficiency research has focused on reducing energy
consumption at the hardware level and, more recently, in the design and coding
phases of the software development life cycle. However, software testing's
impact on energy consumption did not receive attention from the research
community. Specifically, how test code design quality and test smell (e.g.,
sub-optimal design and bad practices in test code) impact energy consumption
has not been investigated yet. This study examined 12 Apache projects to
analyze the association between test smell and its effects on energy
consumption in software testing. We conducted a mixed-method empirical analysis
from two dimensions; software (data mining in Apache projects) and developers'
views (a survey of 62 software practitioners). Our findings show that: 1) test
smell is associated with energy consumption in software testing. Specifically
smelly part of a test case consumes 10.92\% more energy compared to the
non-smelly part. 2) certain test smells are more energy-hungry than others, 3)
refactored test cases tend to consume less energy than their smelly
counterparts, and 4) most developers lack knowledge about test smells' impact
on energy consumption. We conclude the paper with several observations that can
direct future research and developments. | Md Rakib Hossain Misu, Jiawei Li, Adithya Bhattiprolu, Yang Liu, Eduardo Almeida, Iftekhar Ahmed | 2023-10-23T04:03:56Z | http://arxiv.org/abs/2310.14548v1 | # Test Smell: A Parasitic Energy Consumer in Software Testing
###### Abstract
Traditionally, energy efficiency research has focused on reducing energy consumption at the hardware level and, more recently, in the design and coding phases of the software development life cycle. However, software testing's impact on energy consumption has not received attention from the research community. Specifically, how test code design quality and test smells (e.g., sub-optimal design and bad practices in test code) impact energy consumption has not been investigated yet. This study examined 12 Apache projects to analyze the association between test smell and its effects on energy consumption in software testing. We conducted a mixed-method empirical analysis from two dimensions: software (data mining in Apache projects) and developers' views (a survey of 62 software practitioners). Our findings show that: 1) test smell is associated with energy consumption in software testing; specifically, the smelly part of a test case consumes 10.92% more energy than the non-smelly part, 2) certain test smells are more energy-hungry than others, 3) refactored test cases tend to consume less energy than their smelly counterparts, and 4) most developers lack knowledge about test smells' impact on energy consumption. We conclude the paper with several observations that can direct future research and developments.
Test Smell, Energy, Green Software Engineering
## I Introduction
Today, millions--if not billions--of software applications govern countless aspects of our lives. This software consumes energy both while being developed and while being used. Energy consumption of the software products and services sector reached 15% of the world's total energy consumption in 2020 [1], and it is predicted to be responsible for 20% of global energy usage by 2025 [2, 3]. Therefore, the software sector has already become one of the major contributors (2.3%) to global Green House Gases [4] and is growing at a much faster rate than initially predicted [5, 6]. It is thus critical to understand and control the energy consumption of software, while it is being developed and used, and ultimately reduce the resulting Green House Gas production to ensure the future of human life on Earth [7].
The primary focus of research pertaining to energy-efficient solutions has been on optimizing hardware to make it more energy-efficient for executing software [8, 9, 10, 11]. More recently, the energy consumption incurred by software execution and software runtime performance [12, 13], especially in mobile [14, 15, 16] and embedded software [17], has received more attention from the research community, as energy efficiency is crucial for mobile applications and embedded systems.
Another thread of research investigated various aspects of the Software Development Life Cycle (SDLC) and their association with energy efficiency [6, 18]. For instance, in the design and coding phases, following a software architecture, applying a design pattern [19, 20, 21, 22, 23], and adopting a specific programming language and framework all have an impact on energy consumption [24, 25, 26].
Prior work, however, is limited to only some phases of the SDLC and provides a fragmented view of the possible energy-efficient techniques associated with it. For example, none of the existing work has looked into the effect of testing on energy consumption, even though millions of lines of test code that test various aspects of software [27] are executed daily in developers' IDEs or in the Continuous Integration (CI) pipeline [28], consuming a considerable amount of energy every day [29].
Our goal is to investigate the effect of test code on software energy consumption during software testing. We posit that a bad test case design (a.k.a. a test smell [30, 31, 32]) contributes to higher energy consumption than required, due to the unnecessary energy overhead induced by the sub-optimal design.
Figure 1 demonstrates a motivating example that shows how the presence of test smells can significantly increase the energy consumption of running the test suite. The example code snippet contains a General Fixtures (GF) test smell [33], which occurs when a test class contains a setUp() method that is not directly relevant to the executed test case; here, the test case testRangeOfChats() never uses any of the field variables initialized in the setUp() method, yet setUp() is invoked before every execution of this test case. Consequently, the presence of this test smell can lead to additional computations and memory usage because of unnecessary setup and teardown operations. While the direct impact may not be significant for individual test runs, the cumulative effect can substantially affect energy consumption in large-scale software projects with extensive test suites and frequent test executions.
A plethora of studies have investigated test smells' impacts on software maintainability, comprehension, and defect proneness [34, 35, 36]. Researchers also investigated automated test smell detection [37, 38, 39], and refactoring [40, 41, 42, 43]. However, to the best of our knowledge, no study has investigated the association between test smell and software energy consumption. Given the widespread prevalence of test smells [35], and developers' unawareness regarding the relation between test
smell and energy consumption as indicated by some of the respondents in our survey _"I have no idea how these things are related to each other"[S-6] 1 or "They seem like two unrelated concepts."[S-17]_, it is of utmost importance that developers and researchers are aware of the relationship between test smell and energy consumption. Our study aims to take the first step toward achieving that goal.
Footnote 1: Here [S-6] refers to our survey respondent’s anonymous Id.
In this paper, we aim to contribute to the literature by providing a comprehensive study on the impact of test smells on software energy consumption. We design the study from two perspectives, _software_ and _developer_. We first investigate whether the smelly test code in a test case consumes more energy than its clean part. If so, does a test case with more test smell instances consume more energy than the ones with less smell instances (_software_) (RQ1). We conducted a case study where we manually removed the test smells to create test cases without smells. We analyzed the energy consumption difference between clean and smelly test cases that test the same functionality to explore the impact of test smells on energy consumption. To further understand different test smell types' impacts on energy consumption, we conduct a correlation analysis between the number of instances of each test smell type and an estimate of energy consumption per smell instance. The goal is to see which test smell types are the most energy-hungry (_software_) (RQ2).
Finally, we collected developers' perceptions of test smells' impacts on energy consumption and the underlying reasons for them to introduce such test smells (_developers' views_) (RQ3, RQ5). In addition, we also perform empirical analysis to explore who is the most responsible for introducing the energy-hungry test smells (_developers' views_) (RQ4).
Overall this paper makes the following contributions:
1. We conduct the first study to investigate the association between test smell and energy consumption in software testing.
2. We present the findings of a survey with 62 software practitioners that reflect the developers' perception of test smell and its impact on software energy consumption.
3. We perform data analysis to identify the most responsible developer group (core, non-core, and bot) for introducing energy-hungry test smells.
The paper is structured as follows: we describe the prior research on test smell's impact and software energy efficiency in Section II, followed by our approach of detecting test smells in software repositories, profiling energy consumption, and surveying developers for understanding developers' views in Section III. In Section IV, we present our analysis and finding, and Section V provides the results' implications for researchers and software developers.
## II Related Works
### _The Impact of Test Smells_
Test smells refer to symptoms of sub-optimal design choices or bad programming practices in software test code [30]. Researchers have defined various types of test smells that occur in large software projects, such as Assertion Roulette (AR), Lazy Test (LT), and Mystery Guest (MG) etc [30, 31, 32]. These smells have been proven to have a negative effect on test quality [44]. The impacts of test smell on software readability, understandability, maintainability, and performance have also been widely studied in literature [36, 45]. For example, Bavota et al. [33] conducted experiments to investigate the impact of test smells on program comprehension. Their results showed that test smells have a negative impact on both the comprehensibility and maintainability of the test code. Spadini et al. [46] found that smelly test cases are more change- and defect-prone, and they could cause the tested production code to be more defect-prone. However, no existing research has investigated the impacts of test smells on energy consumption in software testing.
### _Software Energy Efficiency_
Building software that is more energy-efficient has become an integral concern in improving sustainability. In the past, researchers have been trying to identify the factors that might lead to energy inefficiencies in software [47, 48, 49, 50, 51]. Liu et al. [52] found wakelock deactivation and missing sensors to be two of the main causes of energy inefficiencies in Android applications. Banerjee et al. [53] proposed several guidelines to refactor Android apps affected by energy-oblivious design practices, such as balancing quality of service and functionality and restricting resource leaks. Bruce et al. [54] employed search-based software engineering techniques to automatically identify more energy-efficient versions of the MiniSAT Boolean satisfiability solver. Manotas et al. [55] built a framework that improves the energy efficiency of Java software by automatically using the most energy-efficient library implementations. Such energy-aware implementations made in practice have also been explored and analyzed by Moura et al. [56], who suggest that developers mostly utilize low-level energy management approaches such as idleness.
More recently, Song et al. [48] defined four anti-patterns of service usage inefficiency in Android applications, including
Figure 1: An Example of Smelly Test in Apache Commons-Lang Project.
premature create, late destroy, premature destroy, and service leak, which could lead to high energy consumption. Song et al. [50] found that the high average energy consumption is due to some methods that are frequently invoked by test cases (i.e., energy hotspots). In addition, Li et al. [51] explored various root causes of energy issues in mobile apps, such as unnecessary workload and wasted background processing. However, none of the existing works have investigated the anti-patterns/bad practices in software testing that could contribute to energy overhead, and we aim to fill that gap.
## III Methodology
This study aims to understand how the test code quality measured by test smells affects energy consumption. Figure 2 demonstrates the overview of our study design. Specifically, we seek to answer the following research questions.
* **RQ1 [Test Smells vs. Energy]:**_How do smelly tests, in general, impact energy consumption?_
* **RQ2 [Test Smell Types vs. Energy]:**_How does each test smell type impact energy consumption?_
* **RQ3 [Developers' Awareness]:**_Are developers aware of the impact of test smells on energy consumption?_
* **RQ4 [Developers' Group]:**_Who is the most responsible for introducing energy-hungry test smells?_
* **RQ5 [Provenance of Test Smell]:**_What are the underlying reasons for developers to introduce test smells that could cause unnecessary energy consumption?_
In the following sections, we describe the processes we carried out to answer these research questions, such as selecting subject systems, mining test smell instances, profiling energy consumption in both smelly and refactored test case executions as well as conducting a developer survey.
### _Subject Systems_
In this phase, we selected the subject systems that satisfy our selection criteria and experimental attributes.
#### Iii-A1 **Project Selection**
We started our experiment with a sample of 44 open-source Java software projects from the Apache Software Foundation [57]. We decided to select these projects for three reasons. First, we chose the Apache projects as these projects are well-maintained and supported by a large developer community. Besides, numerous prior research works have been conducted on these Apache projects [58]. Second, we selected projects written in Java since it is considered one of the most widely-used programming languages [59]. Third, in the literature, most of the test smell detection tools are available for Java compared to other languages [60]. For our analysis, we needed to build the projects and execute the test cases. To avoid complications related to building a project, we selected projects using Maven [61] as its build system. Next, we selected projects that use JUnit [62] as the unit testing framework, since it is one of the most widely-used unit testing frameworks for Java. We identified 25 projects that met all these criteria.
#### Iii-A2 **Unit of Measure Selection**
We decided to perform our analysis at the test case granularity to analyze test code quality and measure energy consumption. According to the JUnit documentation, a method in the test code with the @Test annotation represents a test case, and it can be executed as part of the complete test suite or explicitly invoked with its Fully Qualified Name (FQN) from the command line. To locate a test case and its Lines Of Code (LOC), we employed static code analysis utilizing the Eclipse JDT Core Abstract Syntax Tree (AST) traversal tool [63]. These projects allowed us to run each test case individually with the help of Maven and JUnit. During the execution, we discarded a project if a test case failed to execute independently.
Following this process, we recognized a subset of 12 projects where all test cases are executable independently and from the command line. Finally, we ended up with 12 Apache Java projects with 13,103 test cases in total. Table I exhibits the summary of the selected projects' statistics.
### _Test Smell Detection_
#### Iii-B1 **Tool Selection**
In the literature, researchers have proposed various test smell detectors where we found 18 tools that can detect test smells in JUnit-based Java projects [60]. We observed that among these tools, tsDetect [64] can detect the presence of 19 types of test smells, achieving the highest precision and recall [60, 64]. However, in the test smell detection report, tsDetect does not provide the locations of the smells in the test code.
In our study, we are interested in analyzing both the smelly and non-smelly parts of the test code. Therefore, we searched for an extension of tsDetect that can also identify test smell locations in the test code. We encountered JNose [37], which has reused the same test smell detection rules employed in tsDetect and can detect 21 types of test smells, including the 19 types of tsDetect detected smells. In addition, JNose provides the locations (i.e., line numbers) and test case names where a test smell is identified in the test code and the number of each type of test smell identified in a test class. To assess
Fig. 2: Overview of The Study Design with Experimental Phases
the correctness of JNose in terms of precision and recall, we utilized a benchmark of 65 JUnit test files containing instances from various smell types. This benchmark has been created and employed in an earlier qualitative study to evaluate tsDetect [64]. We executed JNose on that benchmark dataset and compared its test smell detection results with tsDetect. We observed that both JNose and tsDetect got the same overall precision score ranging from 85% to 100% and a recall score from 90% to 100% with an average F-score of 95%.
Besides, we also found that JNose has successfully been adopted by researchers in many recent studies [65, 66, 67]. This inspired us to utilize JNose in our experiment. A summary of JNose-detected test smells is demonstrated in Table II.
#### Iii-B2 **Mining Test Smells**
To detect and locate test smells in a given repository, JNose first parses the source code into Abstract Syntax Tree (AST) and then traverses the AST applying detection rules for identifying test smells [37]. Once the detection is completed, JNose generates a report containing information, for instance, test class, number of different types of smells, smelly test cases, and the source code line numbers where the test smell appears. Our experiment requires detecting the presence of different test smells in each test case. To do so, we executed JNose on the subject systems. Utilizing these reports, in a test case, we extracted the instance of various test smells and counted the total number of smell instances, smell count (\(SC\)). In total, we detected 56,908 test smell instances in 12 projects, and the frequency of each type of smell is shown in Table II. We also parsed the location where these smells were identified and calculated the smelly line of code (\(LOC_{(smell)}\)) and clean line of code (\(LOC_{(clean)}\)) of a test case. At the end of this phase, we created a tuple of the test case, \(LOC\), \(LOC_{(smell)}\), \(LOC_{(clean)}\), \(SC\), and the count of each type of test smell, such as: \(testcase\rightarrow\{LOC,LOC_{(smell)},LOC_{(clean)},SC,\)\(s_{1},s_{2},s_{3},...s_{21}\}\).
### _Energy Measurement_
In this stage, we executed each test case with a profiler that monitors the energy consumption rate during the test case execution.
#### Iii-C1 **Experimental Environment Setup**
To create an experimental environment and conduct our analysis, we used five MacBook Airs (13-inch, Mid-2012) laptops. All these laptops have the same hardware configuration containing a 1.8GHz dual-core Intel(r) Core(tm) i5 processor, 8GB RAM, and 256 GB SSD running macOS Catalina 10.15.7. To support the build configuration and test case execution of projects, we used Maven 3.8.6 build system.
#### Iii-C2 **Energy Profiler**
We used Intel PowerLog to measure the energy consumption [68]. It is a command-line tool provided by the Intel Power Gadget toolkit, a power usage monitoring tool. Intel PowerLog precisely estimates power usage at the software level without any hardware instrumentation [68]. It is supported on macOS to monitor and assess real-time processor package power information using the energy counters in the Intel(r) Core(tm) processors [68]. The primary motivation for adopting Intel PowerLog is that it provides a convenient technique to measure processor power usage while executing a specific command in the command line and storing the energy profile in a log file. The log file contains energy, power, and duration values over a sequence of time intervals, including the total energy consumption \(E(Joules)\), required power \(P(Watt)\), and total time duration \(T(sec)\), respectively. Since it measures these metrics while executing a specific user command at the process level, it minimizes the effects of other processes. Moreover, Intel PowerLog has been used in prior research, increasing the confidence in using this for our analysis [69]. We installed the latest Intel PowerLog version, 3.7.0, on all these laptops.
#### Iii-C3 **Test Case Execution**
We ran each test case with the mvn test -Dtest="<test case FQN>" command while monitoring energy usage using Intel PowerLog. To minimize external interference, we configured the laptops to Zen Mode [69] before executing the test cases, which prevents these laptops from interacting with external networks and devices. We maintained the following configurations on each laptop to keep it in Zen Mode during the execution of the test cases.
We fully charged the laptop to 100%. However, to provide equal battery capacity for test case runs, we kept the laptop plugged in throughout the experiment.
All active applications were quit and unnecessary background services were killed except the terminal. The auto-dim of an inactive screen was turned off. The screen saver was set to appear after one hour, and the sleep time was set to _never_ to prevent the laptop from falling into sleep mode. The microphone and speaker were also turned off.
Automatic brightness adjustment was turned off. Brightness was lowered to 50%, and the keyboard lighting, automatic logout, notifications, AirDrop, Bluetooth, and WiFi were turned off.
In order to obtain accurate energy measurements, it was necessary to execute a test case multiple times with the Intel PowerLog while the assigned laptop was in Zen Mode. We conducted a preliminary experiment to determine the ideal number of test case runs for reliable energy measurements. Initially, we employed a stratified random sampling, aiming for a 90% confidence interval and a 10% margin of error that led us to select 68 test cases out of 13,103 test cases. Subsequently, we designed a script that automates the execution of a test case with Intel PowerLog for a predetermined number of iterations, incorporating a 30-second cool-down interval after each run. The cool-down period prevents both tail energy consumption from the previous measurement and the collateral tasks of the last execution from affecting the subsequent measurement.
Next, we executed that script to run each sampled test case 5, 15, and 25 times. From the resulting log files, we individually extracted the median values of energy \(E(Joules)\), power \(P(Watts)\), and time \(T(seconds)\). We generated a plot illustrating the relationship between the number of runs and the median energy values. Our analysis revealed that the median values exhibited insignificant variances when the
test cases were executed 5, 15, and 25 times, indicating overall consistency. Based on these experimental findings, we determined that the number of runs had no significant impact on the median energy consumption. Consequently, we proceeded with five runs of each test case.
In total, we executed 13,103 test cases five times, each with 30 second cool-down period. The whole experiment took approximately 874 hours (37 days) of execution time on each laptop. We then extracted five generated energy log files for each test case and calculated the median value of energy, power, and time. Thus, for a project, we created a list of test cases with their corresponding energy \(E(Joules)\), power \(P(Watt)\), and time \(T(sec)\) values in the form of tuples as follows: \(testcase\rightarrow\{E,P,T\}\).
### _Impact Analysis: Test Smell vs. Energy Consumption_
During test smell detection and energy measurement, we generated two types of tuples for the test cases. We then joined them and created a list of test cases with their smell and energy-associated values. This list of tuples helps us analyze test smell's impact on energy consumption.
\(testcase\rightarrow\{LOC,LOC_{(smell)},LOC_{(clean)},SC,E,P,T,\)
\(s_{1},s_{2},...s_{21}\}\).
#### Iv-D1 **Group Analysis**
We first investigated smelly tests' impact on energy consumption. Our goal in this step is to see if the test smells are associated with increased energy consumption. To do so, for each test case, we calculated the energy \(E_{LOC_{(smell)}}(Joules)\) required to execute the smelly lines of code (\(LOC_{(smell)}\)) and the energy \(E_{LOC_{(clean)}}(Joules)\) for the clean lines of code (\(LOC_{(clean)}\)). We computed these energy values according to Equations 1 and 2. We believe these two values can serve as estimates for the energy consumption of the smelly and clean parts of the test code in a test case. Next, we categorized all the test cases into multiple groups based on their total smell count (\(SC\)). The goal is to analyze whether more test smell instances increase the energy overhead incurred by test smells. We created these groups with different smell-count intervals, such as 5, 10, and 25, to mitigate the bias introduced by the selected group size. Then, for each group, we calculated the mean and median energy consumption \(E_{LOC_{(smell)}}\) to analyze the trend of energy consumption. We also conducted Welch's t-test [70, 71] and Cohen's D [72] to measure the statistical significance and effect size of the differences in energy consumption between different groups.
\[E_{LOC_{(smell)}}=E*\frac{LOC_{(smell)}}{LOC_{(test)}}(Joules) \tag{1}\]
\[E_{LOC_{(clean)}}=E*\frac{LOC_{(clean)}}{LOC_{(test)}}(Joules) \tag{2}\]
\[E_{(N)}=\frac{E}{SC}(Joules) \tag{3}\]
#### Iv-D2 **Correlation Analysis**
To further understand how each individual test smell type impacts energy consumption, we investigate the relationship between energy consumption and each test smell type. However, we do not have a one-to-one mapping between smell type and energy consumption to establish this relationship. We illustrate this with an example: suppose a test case contains 12 test smells (i.e., \(SC=12\)) of 3 different types, for example, \(S_{1}=7\), \(S_{4}=3\), and \(S_{15}=2\). The energy consumption (\(E\)) reflects the complete execution of that test case with all 12 test smells present. Hence, it is not possible to differentiate which test smell accounts for what portion of the energy consumption, and establishing a relation between (\(E\) vs. \(S_{1}\)), (\(E\) vs. \(S_{4}\)), and (\(E\) vs. \(S_{15}\)) would not be accurate. To normalize the energy consumption for each smell, we measured the energy consumption per test smell using \(E_{(N)}\) according to Equation 3, where \(SC\) is the total smell count. For a test case, the normalized value \(E_{(N)}\) represents the energy consumption per test smell instance, which relates to its effect on energy consumption. With this one-to-one mapping, we conducted a Kendall's Tau (\(\tau_{b}\)) correlation analysis [71] between the energy consumption per test smell and the number of test smell instances for each test smell type. We used Kendall's Tau since it has smaller gross error sensitivity and smaller asymptotic variance compared to the Spearman correlation [73].
### _Impact Analysis: Case Study of Test Smell Refactoring_
To gain a comprehensive understanding of the impact of test smells on energy consumption, we conducted a case study. We began by selecting the largest project from our pool of subject systems. Next, we sampled test cases and manually refactored them to eliminate any existing test smells, creating test case pairs (i.e., with and without test smells) that test the same production code functionality. We executed these refactored test cases using an energy profiler to measure the energy consumption rate. The following subsections provide details of each step undertaken in the case study.
#### Iii-E1 **Subject Selection**
In order to refactor the test cases of a project, it is essential to possess a reasonable comprehension of the project's codebase and its test suite. Nevertheless, comprehending the codebase of all 12 projects from our subject systems and refactoring 21 distinct types of smells in thousands of test cases present a challenging and demanding task. Therefore, we have opted to refactor a random sample of smelly test cases in a single project, which makes the codebase understanding and manual test smell refactoring manageable. For this purpose, we have selected the Apache Commons Lang project due to having the largest number of test cases (4,067), lines of code (89K LOC), and smelly test cases (2,032) among all our subject systems.
#### Iii-E2 **Sampling Test Cases**
Existing literature has demonstrated that certain test smells tend to co-occur together. Refactoring test smells like Lazy Test (LT), Eager Test (ET), and Conditional Test Logic (CTL) often involves modifying multiple test cases or introducing new ones. Consequently, developers may perform partial or complete refactoring. In partial refactoring, developers address some test smells in a test case, reducing the overall number of smells. On the other hand, complete refactoring involves removing all types of smells, resulting in a clean test case. To analyze the impact of energy consumption in both of these situations, we created two types of sampled test cases. These test cases represent the scenarios of partial refactoring, where only some test smells are addressed, and complete refactoring, where all test smells are removed, allowing us to study the energy consumption variations comprehensively.
To analyze the energy impact of partial refactoring, we created a stratified sample of test cases that contain different types of smells. We first determined the size of the random sample by utilizing a 90% confidence interval and a 10% margin of error, giving us a sample size of 66. In the Apache Commons Lang project, there were a total of 2,032 smelly test cases. We proceeded to select a stratified random sample of 66 test cases from this pool of 2,032 smelly test cases. These 66 test cases contain various types and numbers of test smells. We tagged them as "Smelly-66".
To observe the energy consumption impact of complete refactoring, we filtered out test cases that contained test smells highly associated with energy consumption. Our analysis in Section IV-A2 revealed that certain test smells, namely Assertion Roulette (AR), Lazy Test (LT), and Eager Test (ET), have high correlations with energy consumption, with AR having the highest correlation. However, refactoring ET and LT may require changes in multiple test cases or the introduction of new ones which goes beyond the scope of our study. Therefore, we focused on refactoring test cases that solely contained Assertion Roulette (AR) smells. After filtering, we identified 79 test cases out of the initial 2,032 that only contained AR smells. We tagged these test cases as "Smelly-AR-79" for further analysis.
#### Iii-E3 **Manual Refactoring of Test Smells**
Automated test smell refactoring tools are not yet widely available. We conducted an exhaustive search of the existing literature and found only a few works proposing automated test smell refactoring approaches [74, 75, 76, 77, 78, 79]. However, none of these works reported any executable tool that we could readily utilize for automated test smell refactoring. We contacted the authors of these existing works and received several replies, but none of the responses provided the proposed tool. Therefore, we adopted the refactoring strategies recommended by Soares et al. [78] to refactor some of the test smell categories, namely Assertion Roulette (AR), Conditional Test Logic (CTL), Duplicate Assert (DA), Mystery Guest (MG), and Exception Handling (EH).
Since no automated test smell refactoring tool was available, we manually refactored the collected test cases by following the refactoring strategies proposed by Soares et al. [78]. We performed partial refactoring for the "Smelly-66" test cases, focusing on addressing test smells that could be handled without modifying multiple test cases or introducing new ones. However, we performed complete refactoring for the "Smelly-AR-79" test cases and removed all instances of the Assertion Roulette (AR) smells. Two of the authors collaborated to perform the test smell refactoring together. To ensure that the refactoring changes did not affect other test cases or existing features, we repeatedly executed the regression test suite provided by the project after refactoring. This step was taken to verify that the refactoring process did not introduce regressions or negatively impact the existing functionality. We also ensured that the code coverage did not change after refactoring. We posit that the combination of not introducing regression and unchanged code coverage ensures that the refactored test case is semantically similar to the original test case.
#### Iii-E4 **Energy Measurement**
Finally, we have the smelly and refactored version of "Smelly-66" and "Smelly-AR-79" sampled test cases. We executed the refactored test case for five iterations of each test case with a 30-second cool-down period and measured the energy consumption. We describe
the complete process of energy measurement in detail in Section III-C. After the execution, we collected the energy consumption (\(E\)) for both smelly and refactored versions of the "Smelly-66" and "Smelly-AR-79" sampled test cases. Finally, we conducted Welch's t-test [70, 71] and Cohen's D [72] to measure the statistical significance and effect size of the differences in energy consumption between smelly and refactored test cases.
### _Developers Survey_
To validate our findings and understand developers' perceptions about test smell and its relationship to software energy consumption, we conducted an online survey with software developers. The following subsections describe our survey design, participation selection, pilot survey, data collection, and analysis. This survey was conducted following the guidelines and protocols approved by the Institutional Review Board (IRB).
**Survey Design:** Our survey consists of 12 questions, including multiple-choice, ranking, and open-ended questions. We began by collecting demographic information from respondents (Q2-Q3) to understand their background and experience in software development and writing unit test cases. We then asked about their familiarity with test smells and their practice writing unit test cases (Q4-Q5). Additionally, we inquired whether they pay attention to test smells (Q6). Next, we investigated the impact of test smells on energy consumption during software testing (Q7-Q8). We identified specific test smells that had a higher energy consumption impact, and participants familiar with these were asked to rate the test smells based on their perceived impact severity on energy consumption (Q8-Q9). Additionally, we inquired whether participants' organizations provide guidelines for monitoring energy consumption during software testing. Participants were asked to mention any tools or services they use to monitor energy consumption during testing (Q10-Q12). A text box option was also provided for respondents if they wanted to share the reason behind their choice. A complete list of questions for this survey is provided on the companion website [80].
**Participants Selection:** For our survey, we targeted the software developers from the 12 subject Apache projects in our study. To develop a list of survey participants, we mined a list of unique email addresses of contributors from the version control systems. In total, we collected 490 individual email addresses and recognized them as our potential participants. We utilized this email list to send the survey invitation to these developers.
**Pilot Survey:** To review the survey's validity, we asked Software Engineering professors and graduate students (two professors and two Ph.D. students) with experience in software development, writing unit test cases, and survey design. To enhance the clarity of the questions, we performed several iterations of the survey and rephrased and reorganized some questions according to their feedback. Considering software developers' hectic schedules, we emphasized the time required to complete the survey. We ensure that participants can complete the survey in 8 to 10 minutes. The pilot survey aimed only to improve the questions, and the responses are not included in the reported results.
**Data Collection:** To distribute our survey, we used Qualtrics [81] as a design and distribution platform. We emailed 490 developers from 12 Apache projects, following our organizational guidelines (with the approved University IRB protocol). To maximize survey participation, we followed the guidelines and best practices described by Smith et al. [82], such as allowing respondents to remain anonymous and sending personalized invitations. After publishing, the survey was kept open for two weeks in total, and we sent a reminder email at the end of the first week. Within these two weeks, we received completed responses from 62 participants. Overall, we got a response rate of 12.7%, consistent with prior studies conducted in the software engineering area [82, 83]. The software development experience of our respondents varies from 1 year to more than 20 years, and 80.6% of them have over two years of software testing experience.
**Data Analysis:** During the data analysis process, we consider the majority vote from developers as the final overall rating for a specific item. For example, when ranking test smell types in terms of the severity of their impacts on energy consumption, we determine the final ranking based on the majority consensus among the developers' responses. This approach ensures that the final results represent the collective perspective of the surveyed developers.
### _Developer Experience Analysis_
One of our research questions was related to investigating whether the developer's experience has any association with creating energy-hungry test smells. We analyzed developers in groups. Following existing software engineering studies [84], we categorized developers into three groups, core developers, non-core developers, and bots. It's widely accepted that a relatively small number of core developers are responsible for more than 80% of the contributions in any open-source project [84]. We used this principle to classify a developer as a core developer if they are among the top 20% of the developers in terms of the number of commits authored and a non-core developer otherwise. In this study, we used emails as developers' identifiers. We collected their emails and contributions by using git commands (i.e., git log) in the downloaded git repositories. In addition, some development activities are performed by automated tools (i.e., bots) that run at specific events. We detected these bots by using the same approach as in [85]. That is, we analyzed the variability of all contributors' commit message writing patterns, and we identified bots if the variability of the messages generated is lower than a threshold proposed by [85].
## IV Results
In this section, we present the results of our study from two complementary perspectives: _Software_ and _Developers'_.
### _Software Perspective_
We start this section by reporting our findings on how software energy consumption is associated with the presence of test smells.
#### Iv-A1 **Rq1 [Test Smells vs. Energy]** How do smelly tests, in general, impact energy consumption?
To answer this research question, we first investigate if the smelly part of a test case consumes more energy than its clean part. We calculate the energy consumption for smelly lines of code (\(E_{LOC_{(smell)}}\)) and clean lines of code (\(E_{LOC_{(clean)}}\)) following Equations 1 and 2 mentioned in Section III-D1 for all smelly test cases in our selected projects. The values \(E_{LOC_{(smell)}}\) and \(E_{LOC_{(clean)}}\) represent an estimate of the energy consumption for the smelly and clean test code in a test case. We found that the mean value for the smelly test code is 109.93 Joule, which is greater than that of the clean test code, which is 88.27 Joule. The median value also follows the same trend with a higher energy consumption value for the smelly test code (102.61 Joule) compared to the clean test code (91.66 Joule). Further, to check whether the difference between the energy consumption of the smelly test code and the clean test code across all 12 projects is statistically significant, we performed Welch's t-test [70] after checking for the normality assumption of the data using Shapiro-Wilk's test [86]. We used Cohen's D [72] to measure the effect size. Our results show that the difference is statistically significant (Welch's t-test, _p-value_\(<\)6.27e-120, Cohen's D (0.37, small)), indicating that in a test case, the smelly lines consume more energy than the clean lines of code.
In addition, we also examined whether more test smell instances in a smelly test case cause it to consume more energy. To do so, we grouped all test cases based on the total number of smell instances they contain (\(SC\)). To prevent outliers from skewing our results, we removed 79 test cases having more than 50 test smell instances from our analysis (3 standard deviations away from the mean [87]). This process left us with 7,748 test cases containing at least one but no more than 50 test smell instances. Next, we grouped test cases based on the number of test smell instances each test case has. Figure 3 shows the mean and median smelly-line energy consumption (\(E_{LOC_{(smell)}}\)) values for a group size of 5. As we can see, the energy consumption of the smelly lines of code (\(E_{LOC_{(smell)}}\)) grows with the increasing number of test smells. For group sizes 10 and 25, we found the same pattern. In general, this shows that the presence of more test smell instances increases the energy consumption of a test case, which implies that test smells, in general, impact energy consumption in software testing. To validate the statistical significance of the differences between groups, we also conducted Welch's t-test and Cohen's D between contiguous groups (i.e., group 6-10 with group 11-15, group 11-15 with group 16-20, etc.). Due to space limitations, we put the complete group analysis results in our replication package [80].
#### Iv-A2 **Rq2 [Test Smell Types vs. Energy]** How does each test smell type impact energy consumption?
We next investigate which test smell types have a more severe impact on energy consumption than others. As discussed in Section III-D2, it is not possible to obtain an actual one-to-one mapping between energy consumption and each type of smell. Therefore, we computed a normalized energy consumption value \(E_{(N)}=(E/SC)\) as an indicator of a test smell's impact on energy consumption (shown in Equation 3). For a test case, the normalized value \(E_{(N)}\) lets us create a one-to-one mapping between \(E_{(N)}\) and each type of test smell, such as \(testcase\rightarrow\{E_{(N)},s_{1},s_{2},s_{3}...s_{21}\}\).
With this mapping, we then performed Kendall's Tau (\(\tau_{b}\)) correlation analysis between \(E_{(N)}\) and each type of smell (e.g., \(s_{1},s_{2},s_{3}...s_{21}\)). Table III demonstrates our correlation results with the top 10 types of smells. Since we perform multiple statistical tests, we applied the Bonferroni correction to adjust \(P\) values [88], which gives an adjusted \(\alpha=0.002\). As we can see, the Kendall (\(\tau_{b}\)) correlation values are statistically significant (i.e., _p-value_\(<\)_0.002) for all these test smells. Assertion Roulette (AR), Lazy Test (LT), Eager Test (ET), and Magic Number Test (MNT) are the smells with a strong (\(\tau_{b}>0.30\)) correlation with energy consumption. On the other hand, Dependent Test (DT), Unknown Test (UT), and Verbose Test (VT) are moderately (\(\tau_{b}>0.20\)) correlated with energy consumption. Sensitive Equality (SE) and Conditional Test Logic (CTL) show a weak correlation. For the remaining test smell types, we found a very weak correlation (\(\tau_{b}<0.10\)).
Figure 3: Mean (\(E_{LOC_{smell}}\)) and Median (\(E_{LOC_{smell}}\)) energy consumption with Test Smells Count (\(SC\)) in Group size 5. Energy Unit Measured in Joule.
**Observation 2: Assertion Roulette (AR), Lazy Test (LT), and Eager Test (ET) test smells are strongly associated with energy consumption.**
#### Iv-A3 **Case Study [Smelly/refactored Test vs. Energy]**
To complement our results from RQ1 and RQ2, we conducted a case study on test cases with and without test smells (See Section III-E). For "Smelly-66", a significant difference (Welch's t-test, _p-value_\(<\)6.727e-46, Cohen's D (3.910, large)) was found between the energy consumption (\(E\)) of smelly test cases (204.922 Joule) and partially refactored test cases (184.974 Joule). Similarly, for "Smelly-AR-79", we also found a significant difference (Welch's t-test, _p-value_\(<\)4.508e-05, Cohen's D (0.672, medium)) between the energy consumption (\(E\)) of smelly test cases (204.048 Joule) and clean test cases (185.017 Joule). Our results show that the total energy consumption decreased significantly after removing test smells for both partial and complete refactoring. This indicates that test smells incur more energy consumption in software testing.
### _Developers' Perspective_
Here, we explain our results regarding the developer's practices and perception of test smells.
#### Iv-B1 **RQ3 [Developers' Awareness]**
_Are developers aware of the impact of test smells on energy consumption?_
Only 29.4% of our survey respondents who know about test smells expressed that they are confident that the presence of test smells in test cases has an impact on energy consumption, while 70.6% answered "Maybe" or "No." This indicates that most developers are not fully aware of the test smell's impact on energy consumption, which may contribute to the introduction of test smells during software testing.
We asked the survey participants to rank the test smell types we found in RQ2 based on their perceived severity of impacts on energy consumption. We show the rankings provided by survey respondents in Figure 4. The rankings are widely spread among developers, which indicates that developers have conflicting opinions regarding the severity of different test smell types' impacts on energy consumption. We took the ranking voted by most respondents as the overall ranking from developers for a specific test smell type. We list our results in Table IV. We found multiple ranking mismatches between our empirical analysis and our survey respondents, such as Duplicate Assert (DA), Unknown Test (UT), and Assertion Roulette (AR). This, again, shows that developers have a limited understanding of the test smells' impact on energy consumption.
#### Iv-B2 **RQ4 [Developers' Group]**
_Who is the most responsible for introducing energy-hungry test smells?_
In this study, we identified the developer who last modified at least one of the test code lines that are part of a test smell instance as the one responsible for that test smell instance. We used the "git blame" command for this. For each test smell type, we found that all three groups (core, non-core, and bot) introduced a similar number of test smells. This indicates that core, non-core, and bot developers all play a role in introducing energy-hungry test smells almost equally. While non-core developers contribute less to open-source software projects than core developers, they are still responsible for introducing a comparable amount of test smells. In addition, bot committers also introduced almost the same amount of test smells as human developers (for some test smell types, such as Assertion Roulette (AR), they introduced even more test smells than human developers), which suggests that the automation tools used in open-source software development contain issues in generating or updating test cases. This observation is similar to what Virginio et al. [89] found, where human-written tests showed higher quality than the studied automated tools regarding the presence of test smells. Also, all three groups tend to introduce more Assertion Roulette (AR) than other test smell types, while AR has the highest correlation with the average energy cost per smell instance (\(E_{(N)}\)). In addition, they also tend to introduce a relatively large amount of Lazy Test (LT), Eager Test (ET), and Magic Number Test (MNT), which we showed to be highly correlated with energy consumption (see Subsection IV-A). The complete numbers are reported on the companion website [80]. These findings indicate that both developer groups should raise their awareness of energy-hungry test smells, and better test case
Fig. 4: Rankings of the Severity of Test Smell Types’ Impact on Energy Consumption from Survey Respondents
generation tools with quality checks in terms of test smells should be developed.
**Observation 5: Core, non-core, and bot developers are similarly responsible for introducing energy-hungry test smells.**
#### Iv-B3 **Rq5 [Provenance of Test Smell]** What are the underlying reasons developers introduce test smells that could cause additional energy consumption?
First, 45.2% of our survey participants do not know about test smells, so they may unknowingly introduce test smells when writing and updating test cases. Then, 5.9% of the survey respondents who know about test smells do not pay attention to test smells when writing test cases. One of them explicitly mentioned that _"My organization doesn't have any policy/requirement regarding test smells"_ and _"I don't care about test smells in test cases"_ [S-18]. In addition, 61.8% of the developers replied that their organizations do not follow any guidelines for monitoring energy consumption during software testing, while 76.5% do not know or use any tools for monitoring energy consumption in the testing phase. The complete results of our survey are provided in our replication package [80]. From these survey results, we can see that even a well-established organization like Apache lacks proper guidelines for test smells and energy consumption monitoring in regular software testing, which might be one of the main reasons developers introduce energy-hungry test smells.
**Observation 6: Lack of guidelines, tools, and incentives are probable reasons developers introduce energy-hungry test smells.**
## V Discussion
The results of our study reveal that test smells, in general, have a negative impact on energy consumption during software testing. This finding complements previous research on the impact of test smells on various other aspects of software quality. Our study provides valuable evidence for developers, highlighting that the presence of test smells can lead to a significant amount of energy overhead, especially in large organizations where millions of test cases are executed daily.
To address this issue and promote energy-efficient software development practices, tool builders should consider providing just-in-time automated tools and IDE plugins. One surprising finding was the lack of automated test smell refactoring tools. As explained in Section III-E, even though there is prior work on test smell refactoring, no off-the-shelf tool is available to refactor test smells. By providing tools that offer real-time feedback and refactoring suggestions, tool builders can help developers make more informed decisions to create cleaner and more energy-efficient test cases, ultimately leading to improved software quality and reduced energy consumption in the testing process.
Our analysis also revealed that the top three test smells with the highest association with energy consumption are Assertion Roulette (AR), Lazy Test (LT), and Eager Test (ET). To understand why AR is particularly energy-intensive, we manually inspected samples of AR test smell instances. Our investigation showed that most test cases containing AR have multiple assertion statements without any explanation, and all of these statements are considered smelly. In contrast, clean test cases usually contain only one assertion statement, which is more focused and purposeful. From an execution perspective, multiple assertion statements in Assertion Roulette (AR) test cases require more energy during testing than clean test cases with only one assertion statement. The repeated execution of multiple statements adds to the overall energy consumption, making test cases with Assertion Roulette (AR) more energy-intensive. We believe more research is needed to systematically investigate why these test smell types are the most energy-hungry ones.
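To make the pattern concrete, the contrast can be sketched with a toy example. The studied projects use JUnit, so this Python `unittest` analogue is only an illustration of the shape of the smell, not code from the analysed test suites.

```python
# Toy contrast between an Assertion Roulette-like test and a focused, clean test.
import unittest

def parse_port(url: str) -> int:
    """Return the port number at the end of a host:port string."""
    return int(url.rsplit(":", 1)[1])

class SmellyTest(unittest.TestCase):
    def test_parse(self):
        # Several unexplained assertions in one test method: when one fails, the
        # report does not say which behaviour broke, and every statement is executed
        # (and re-executed across runs), adding to the overall energy cost.
        self.assertEqual(parse_port("db:5432"), 5432)
        self.assertEqual(parse_port("cache:6379"), 6379)
        self.assertEqual(parse_port("web:80"), 80)

class CleanTest(unittest.TestCase):
    def test_parse_db_port(self):
        # A single, purposeful assertion with an explanatory message.
        self.assertEqual(parse_port("db:5432"), 5432,
                         "port of a host:port URL should be parsed")

if __name__ == "__main__":
    unittest.main()
```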
Our analysis found a mismatch between developers' perceptions and our empirical analysis results regarding the severity of test smell types' impacts on energy consumption. For example, the most energy-hungry test smell type recognized by developers is Duplicate Assert (DA), while it ranks only fifth in our empirical analysis. One possible explanation could be that developers assumed multiple unnecessary assertions in a test case would contribute to more execution time and energy consumption. Such mismatches between perception and reality have been observed in numerous other instances where long-held beliefs proved incorrect or outdated when actual evidence was collected through empirical analysis [90, 91]. Since developers do not have tools for monitoring energy consumption during software testing, tool builders should create such tools that can be seamlessly integrated into the existing development workflow.
In terms of introducing energy-hungry test smells, bot committers (i.e., automated tools) are responsible for a number of test smell instances comparable to core and non-core developers. Combined with observations from [89, 92, 93], we believe that researchers should investigate ways to consider energy consumption as a factor while generating test cases automatically. Another interesting observation is that non-core developers are responsible for a similar number of test smell instances as core developers, even though they contribute much less than core developers. Further investigation is required in the future to understand the underlying reason for this.
## VI Threats To Validity
In our study, we have tried to eliminate bias and the effects of random noise. However, a few biases are unavoidable, and our mitigation strategies may not have been effective for them.
**Bias due to confounding factors:** The potential confounding effect of the production code's complexity or LOC is a concern when studying the correlation between test smells and energy consumption. However, the case study conducted in Section III-E effectively mitigated this bias by ensuring that the production code remained the same across test cases with and without test smells. By keeping the production code consistent, we isolate the impact of test smells on energy consumption and ensure that any observed changes in energy
consumption are attributed only to the presence or absence of test smells in the test cases.
**Bias due to sampling:** Our study includes 12 Apache Java projects, which we picked from the Apache Software Foundation. In addition, we surveyed the developers who contributed to these 12 Apache projects. The responses of these developers may not represent all developers in other open-source projects and, therefore, our findings may not generalize to all open-source projects.
We mined 13,103 test cases in total. We detected test cases based on the presence of the @Test annotation. However, developers may write test cases without the @Test annotation by extending the JUnit Test class. We followed the developers' best practices and guidelines mentioned in the JUnit documentation to identify test cases. It is also possible that other libraries are used to write unit test cases; in our study, we did not consider test cases written using other libraries.
It is possible that our sampled test cases for manual refactoring are not representative of all test cases. However, to mitigate this threat, we utilized a 90% confidence interval and a 10% margin of error to calculate the sample size and used stratified random sampling. This statistical approach should mitigate the mentioned bias.
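For reference, the quoted confidence level and margin of error correspond to a standard Cochran-style sample-size calculation; the exact formula and the population the authors sampled from are not spelled out here, so the numbers below are an assumed reconstruction for illustration.

```python
# Sketch of the sample-size calculation implied by a 90% confidence interval and a
# 10% margin of error (Cochran's formula with a finite-population correction).
# Using all 13,103 mined test cases as the population is an assumption; the paper may
# have sampled from a different population (e.g. only the smelly test cases).
import math

def sample_size(population, z=1.645, margin=0.10, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2                # infinite-population sample size
    return math.ceil(n0 / (1 + (n0 - 1) / population))       # finite-population correction

print(sample_size(13_103))  # roughly 68 sampled test cases
```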
We categorized the developers into core and non-core groups based on the number of commits they contributed. According to this criterion, some developers could have been categorized as non-core even though they were core developers who worked on significant contributions, such as architectural refactoring or high-level design changes, instead of frequently contributing code changes. Since this is one of the most frequently used approaches in the literature, we relied on it. Also, we identified the bot committers based on the commit messages they wrote, which could have mistakenly classified some human developers as bots.
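A minimal sketch of this grouping is given below. The commit-count criterion and the commit-message-based bot check follow the description above, but the specific 80%-of-commits cutoff and the bot patterns are illustrative assumptions rather than the exact rules used in the study.

```python
# Sketch: split committers of a git repository into core / non-core / bot groups.
# The 80% cumulative-commit cutoff and the bot regular expression are assumptions.
import re
import subprocess
from collections import Counter

BOT_PATTERN = re.compile(r"\[bot\]|dependabot|jenkins|github-actions", re.IGNORECASE)

def classify_committers(repo, core_share=0.80):
    # One "<author>\t<subject>" record per commit.
    log = subprocess.run(["git", "-C", repo, "log", "--pretty=%an%x09%s"],
                         capture_output=True, text=True, check=True).stdout.splitlines()
    counts, bot_hits = Counter(), Counter()
    for record in log:
        author, _, subject = record.partition("\t")
        counts[author] += 1
        if BOT_PATTERN.search(author) or BOT_PATTERN.search(subject):
            bot_hits[author] += 1
    groups, covered, total = {}, 0, sum(counts.values())
    for author, n in counts.most_common():
        if bot_hits[author] > 0.5 * n:                      # mostly bot-like commits
            groups[author] = "bot"
            continue
        groups[author] = "core" if covered < core_share * total else "non-core"
        covered += n
    return groups

if __name__ == "__main__":
    print(classify_committers("."))
```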
**Bias due to tools used:** In our study, we utilized a test smell detection tool and an energy profiler. Our analysis depended on the outputs generated by these tools. Therefore, any errors in these tools may affect our findings. To minimize the risk, we used tools that were validated by prior research. We used JNose to detect test smells. JNose can detect 21 types of test smells; however, there could be other test smell types that JNose cannot detect.
We employed Intel PowerLog as an energy profiler. Although it reports energy consumption at the software level, some environmental factors, such as room temperature and electricity outages, can affect its reported energy consumption results. We followed the procedure used in other work: we kept our experimental laptops at normal room temperature and placed them at a reasonable distance from each other to avoid heat transmission. We also ensured a cool-down period after every test case execution to avoid any impact of the heating of the laptop itself on the results.
**Bias due to the survey:** It is possible that the survey participants misunderstood some of the survey questions. To alleviate this threat, we conducted a pilot study with experts having experience in software development, writing unit test cases, and survey design. We updated the survey questions according to their feedback.
## VII Conclusion
Our ultimate goal is to help catalyze advances in energy-efficient software testing, and this paper takes the first step towards that by shedding light on the current state of affairs. We presented the results of our empirical study aimed at understanding the state of energy consumption caused by test smells. Overall, our analysis reveals that the smelly part of a test case generally consumes more energy than its clean part. Also, not all test smell types are equally energy-hungry. Our analysis revealed that Assertion Roulette (AR), Lazy Test (LT), and Eager Test (ET) tend to consume more energy compared to other smell types. Moreover, our test smell refactoring results indicate that smelly tests consume more energy than their clean counterparts. Our findings highlight the need for increased developer awareness regarding the impact of test smells on energy consumption and opportunities for researchers and tool builders to address the lack of tools and guidelines. The research artifacts for this study are publicly available at the companion website [80].
|
2307.06583 | Contextuality, Coherences, and Quantum Cheshire Cats | We analyse the quantum Cheshire cat using contextuality theory, to see if
this can tell us anything about how best to interpret this paradox. We show
that this scenario can be analysed using the relation between three different
measurements, which seem to result in a logical contradiction. We discuss how
this contextual behaviour links to weak values, and coherences between
prohibited states. Rather than showing a property of the particle is
disembodied, the quantum Cheshire cat instead demonstrates the effects of these
coherences, which are typically found in pre- and postselected systems. | Jonte R. Hance, Ming Ji, Holger F. Hofmann | 2023-07-13T06:53:27Z | http://arxiv.org/abs/2307.06583v2 | # Contextuality, Coherences, and Quantum Cheshire Cats
###### Abstract
We analyse the quantum Cheshire cat using contextuality theory, to see if this can tell us anything about how best to interpret this paradox. We show that this scenario can be analysed using the relation between three different measurements, which seem to result in a logical contradiction. We discuss how this contextual behaviour links to weak values, and coherences between prohibited states. Rather than showing a property of the particle is disembodied, the quantum Cheshire cat instead demonstrates the effects of these coherences, which are typically found in pre- and postselected systems.
## I Introduction
The quantum Cheshire cat protocol has raised many eyebrows. The protocol preselects and postselects states where the weak value for the spatial projection operator of a quantum particle (e.g. a photon) is zero along a given path. This is despite this pre- and postselection giving a non-zero weak value for an operator supposedly representing one of the particle's constituent properties along that path (e.g. its polarisation). This is often interpreted as the property becoming disembodied, travelling along a path the particle itself cannot traverse [1]. The protocol was initially given as a thought experiment [2], but later demonstrated experimentally [3; 4; 5; 6; 7]. Recent work claims to have extended the protocol to dynamically changing the disembodied property [8], swapping this disembodied property between two particles [7; 9; 10], delayed choice of which path carries the particle and which carries the disembodied property [11; 12], disembodying multiple properties simultaneously [13; 14], and even "separating the wave-particle duality" of a particle [15].
However, this interpretation of the protocol is controversial. Many have questioned how paradoxical the protocol actually is [16], with some saying the effect simply constitutes standard quantum interference [17], and others claiming the same result can be obtained using classical physics [18].
In this paper, we analyse the protocol using the recently-developed tool of contextuality theory [19]. This involves considering the relationships between different possible (ideally local) measurements, and our classical intuitions about what those measurements would infer, and then seeing whether those inferences are compatible in the scenario in question. More formally, it involves assigning a measurement context to every mutually-orthogonal set of measurement outcomes, observing that some outcomes are shared by multiple contexts, and using these shared outcomes to infer how these contexts relate to one another.
While [20] and [21] mention in passing that the quantum Cheshire cat scenario is equivalent to a contextual scenario known as the 2-qubit Peres-Mermin square, this has not yet actually been shown, nor has the scenario been fully analysed using contextuality theory. We seek to correct this oversight.
To do so, we identify a set of observable properties, that allow us to derive statements about a quantum particle's path and polarisation from a preselected initial state and a postselected final state. We then show that these observable properties belong to different measurement contexts, where the postselected result should be impossible because it implies a contradiction between the polarisation, the path, and the correlation between the two. By analysing the Hilbert space algebra, we then proceed to show that the contextuality argument links naturally to the Cheshire cat argument presented in [2]. The original Cheshire cat argument emphasised the contradiction between the pairing of correlation and polarisation on the one hand, and path on the other, showing that the path determined by polarisation and correlation was opposite to the independently-determined path. We find that contextuality emphasises the symmetry of the three statements regarding path, polarisation, and correlation. However, other pairings are possible to highlight the contradictions. The impression that the quantum Cheshire cat describes the disembodiment of a physical property of the particle is therefore a consequence of a very specific interpretation of the contextuality relations that characterise the paradox.
This paper is organised as follows. In Section II, we go over the quantum Cheshire cat protocol, with a couple of simplifying adaptations. In Section III, we show we can define three claims about the properties of a particle in the quantum Cheshire cat protocol: individually, each claim can be shown experimentally to be true, but combining the three Claims leads to a contradiction. We then go on to adapt this logic to form an inequality, which our classical intuition would expect to be valid, but which a quantum mechanical description of the quantum Cheshire cat protocol violates. In Section IV we use
weak values and coherences to decompose the statistical operator describing the quantum Cheshire cat scenario into different bases. In each of these bases, we see coherences between modes which are not occupied. We show that these coherences between prohibited states cause the contextual behaviour demonstrated by the protocol. In Section V, we discuss the meaning of these coherences between prohibited states, and how they link to the compound operators in the original Quantum Cheshire Cats paper. We show that the meaning of these coherences becomes more obvious when we combine Claims differently. We then summarise our findings in Section VI.
## II The Cheshire cat protocol
In this Section, we describe a quantum Cheshire cat protocol, where, by choosing a suitable pre- and postselection, a quantum particle appears to become separated from one of its properties. This protocol is of the same form as the one introduced in [2], however, our form is both simplified, and emphasises certain key features. We implement this protocol optically--the quantum particle is a photon, and the "disembodied" property its polarisation.
In the quantum Cheshire cat protocol (as given in Fig. 1), a diagonally (\(D\)) polarised single-photon is emitted from a source, and passed through a 50:50 beamsplitter (BS1). This puts the photon onto a superposition of paths 1 and 2 through the interferometer--specifically, the superposition \(\left|+\right\rangle\), where we define superpositions \(\left|+\right\rangle\) and \(\left|-\right\rangle\) as
\[\left|\pm\right\rangle=\frac{1}{\sqrt{2}}\left(\left|1\right\rangle\pm\left|2 \right\rangle\right) \tag{1}\]
Path 2 then passes through a half wave plate (HWP), aligned to cause a phase shift of \(\pi\) between horizontal (\(H\)) and vertical (\(V\)) polarised components, such that it flips diagonal to anti-diagonal (\(D\) to \(A\)) polarisation, and vice-versa. This preselects the photon in the entangled state \(\left|E_{\text{CC}}\right\rangle\). In the \(H/V\) basis, this can be represented as
\[\left|E_{\text{CC}}\right\rangle=\frac{1}{2}\left(\left|H1\right\rangle+\left| H2\right\rangle+\left|V1\right\rangle-\left|V2\right\rangle\right) \tag{2}\]
where for convenience we write
\[\left|ab\right\rangle=\left|a\right\rangle\otimes\left|b\right\rangle \tag{3}\]
and \(H\) and \(V\) polarisation are related to \(D\) and \(A\) polarisation by
\[\left|H\right\rangle =\frac{1}{\sqrt{2}}\left(\left|D\right\rangle+\left|A\right\rangle \right), \tag{4}\] \[\left|V\right\rangle =\frac{1}{\sqrt{2}}\left(\left|D\right\rangle-\left|A\right\rangle\right)\]
Paths 1 and 2 then recombine at another 50:50 beamsplitter (BS2), before passing through a polarising beam splitter (PBS). This PBS transmits \(D\)-polarised light, and reflects \(A\)-polarised light. We then postselect on the photon being in state
\[\left|D+\right\rangle=\frac{1}{2}\left(\left|H1\right\rangle+\left|H2\right\rangle +\left|V1\right\rangle+\left|V2\right\rangle\right) \tag{5}\]
by only considering cases when the photon ends up transmitted through this PBS. For the preselected input state \(\left|E_{\text{CC}}\right\rangle\), the probability of finding a photon in this postselected output is \(1/4\).
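The pre- and postselection above can be checked with a few lines of linear algebra. The following NumPy sketch uses the basis ordering \((H1,H2,V1,V2)\), which is a choice made here purely for illustration:

```python
# Numerical check of the pre- and postselection (basis order: H1, H2, V1, V2).
import numpy as np

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])        # polarisation basis
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # path basis
D, plus = (H + V) / np.sqrt(2), (p1 + p2) / np.sqrt(2)

E_cc = 0.5 * (np.kron(H, p1) + np.kron(H, p2) + np.kron(V, p1) - np.kron(V, p2))  # Eq. (2)
D_plus = np.kron(D, plus)                                                          # Eq. (5)

print(D_plus @ E_cc)              # overlap <D+|E_CC> = 0.5
print((D_plus @ E_cc) ** 2)       # postselection probability = 0.25
```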
We now want to ask what the properties of this pre- and postselected photon are, in the path-and-\(H/V\) basis. The standard way to observe the properties of a particle in quantum mechanics is through projective, or Von Neumann, measurements. However, we have already postselected the photon in an output that is an equal superposition of all basis states. Projective measurements on these basis states would change the state of the photon, meaning the results obtained in a sequential measurement wouldn't actually tell us about the undisturbed pre- and postselected system. Therefore, we need a way of observing the properties of the system without disturbing the system.
Weak values were originally claimed to be a way to infer information about an observable, between a fixed initial and final state without disturbing the evolution of the system [22; 23]. Despite originally being considered in the context of weak measurement, weak values have since been observed in settings other than those using weak measurement [24; 25], and their meaning appears more subtle than initially thought [26; 27; 28].
Figure 1: Optical implementation of the quantum Cheshire cat protocol. BS1 and BS2 are balanced beamsplitters, PBS (\(D/A\)) is a polarising beamsplitter which reflects anti-diagonally (\(A\)) polarised light and transmits diagonally (\(D\)) polarised light. HWP is a half-wave plate, aligned to transform \(D\)-polarised light into \(A\)-polarised light (and vice-versa). This implementation preselects the photon in entangled state \(\left|E_{\text{CC}}\right\rangle\), and postselects it in state \(\left|D+\right\rangle\). This creates a quantum Cheshire cat—the photon passes along arm 1, but its polarisation appears to travel along arm 2. See Section II for details.
The weak value \(\langle\hat{O}\rangle_{w}\) of an operator \(\hat{O}\) between preselection \(|\hat{i}\rangle\) and postselection \(|f\rangle\) is given by
\[\langle\hat{O}\rangle_{w}=\frac{\langle f|\,\hat{O}\,|i\rangle}{\langle f|i\rangle} \tag{6}\]
Using the pre- and postselection given above, we can obtain weak values for properties of the particle between BS1 and the HWP.
The spatial projection operators, representing projection on path 1 and path 2 respectively, are
\[\hat{\Pi}(1) =\mathds{1}\otimes|1\rangle\langle 1|\,, \tag{7}\] \[\hat{\Pi}(2) =\mathds{1}\otimes|2\rangle\langle 2|\]
Their weak values are
\[\langle\hat{\Pi}(1)\rangle_{w} =1, \tag{8}\] \[\langle\hat{\Pi}(2)\rangle_{w} =0\]
The original Cheshire cat paper [2] interprets these values as showing that a photon which passes the pre- and postselection must travel on path 1.
We can define the polarisation difference operator
\[\hat{\sigma}_{HV} =|H\rangle\langle H|-|V\rangle\langle V| \tag{9}\] \[=|D\rangle\langle A|+|A\rangle\langle D|\]
From these projection and difference operators, we can define compound operators
\[\hat{\sigma}_{HV}(1) =\hat{\sigma}_{HV}\otimes|1\rangle\langle 1|\,, \tag{10}\] \[\hat{\sigma}_{HV}(2) =\hat{\sigma}_{HV}\otimes|2\rangle\langle 2|\]
The weak values of these compound operators are
\[\langle\hat{\sigma}_{HV}(1)\rangle_{w} =0, \tag{11}\] \[\langle\hat{\sigma}_{HV}(2)\rangle_{w} =1\]
This is taken to mean that, in the postselected scenario, the polarisation travels on path 2 despite the photon travelling on path 1.
Just from this initial description, we can see some issues with both the scenario and this interpretation: the polarisation discrimination operator considers polarisation in a different basis to the pre- and postselection; weak values are treated as directly providing information about system properties analogously to eigenvalues; and it is not obvious how we should best interpret the compound operators. We discuss these issues in Section V.
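These weak values are easy to reproduce numerically. A minimal sketch follows (basis order \((H1,H2,V1,V2)\); all states here are real, so no complex conjugation is needed):

```python
# Weak values of the path projectors and compound operators (Eqs. 6-11).
# Basis order (H1, H2, V1, V2); all states are real, so plain dot products suffice.
import numpy as np

kron, outer = np.kron, np.outer
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E_cc = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) - kron(V, p2))      # preselection
D_plus = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) + kron(V, p2))    # postselection

def weak_value(op):
    return (D_plus @ op @ E_cc) / (D_plus @ E_cc)                         # Eq. (6)

sigma_HV = outer(H, H) - outer(V, V)                                      # Eq. (9)
print(weak_value(kron(np.eye(2), outer(p1, p1))))        # <Pi(1)>_w = 1
print(weak_value(kron(np.eye(2), outer(p2, p2))))        # <Pi(2)>_w = 0
print(weak_value(kron(sigma_HV, outer(p1, p1))))         # <sigma_HV(1)>_w = 0
print(weak_value(kron(sigma_HV, outer(p2, p2))))         # <sigma_HV(2)>_w = 1
```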
As mentioned earlier in this section, this implementation is different to the implementation of the original Cheshire cat paper [2] in two ways. First, we removed a phase plate, which originally added a phase of \(\pi/2\) on (the equivalent of) path 2 at the same time as the HWP--however, this phase is not necessary to observe the effect. Second, we flipped the direction of the protocol, so the preselected state is the entangled state (\(|E_{\text{CC}}\rangle\)), and the postselected state is the Bell-local state (\(|D+\rangle\)). Given the time-symmetry of quantum mechanics, this has no effect on the protocol. Finally, as mentioned in the introduction, local measurement is important for contextual analysis--our goal here, by using contextuality theory, is to consider the relation between measurements, most of which should be local. Presenting the postselection as a simple unentangled measurement helps with this.
## III Contextual Analysis
The quantum Cheshire cat scenario involves considering polarisation in a different basis to that used in the pre- and postselection. The peculiar weak values obtained could be manifestations of this measurement incompatibility. This motivates us to consider the protocol from the perspective of contextuality, which provides a framework for linking paradoxical quantum effects with changes in measurement basis. We therefore consider how to represent the quantum Cheshire cat protocol as a contextuality argument, where there are a set of statements which are true individually about a photon in the protocol, but which together lead to a contradiction.
To do this, we start by working out how best to represent the system properties we infer from the weak values above.
### Individual Claims
The quantum Cheshire cat paradox can be represented by three key claims, each of which is the negation of the pre- and postselected particle having one of three properties:
**Claim 1** (Not-{2}).: _"No particle on Path 2."_
This claim comes from observing that the postselection requires the photon to be \(D\)-polarised. The preselection forces path 1 to be \(D\)-polarised and path 2 \(A\)-polarised (and \(D\) and \(A\) are orthogonal), so we would expect that only light which has been on path 1 can go to detector \(D+\). Therefore, a particle which passes both the pre- and postselection should not have been on path 2:
\[\langle D2|E_{\text{CC}}\rangle =0, \tag{12}\] \[\langle D+|A2\rangle =0\]
We denote the property of the pre- and postselected particle having been on path 2 as {2}, and so call this claim NOT-{2}.
The experimentally-verifiable condition for this claim is the absence of any \(D\)-polarised photons on path 2. A slight modification of the Cheshire cat protocol allows us to test this condition directly. As shown in Fig. 2a, we
can put a \(D/A\) polarising beamsplitter on path 2, positioned so it removes any \(D\)-polarised light from path 2 and sends it to a detector. As \(A\)-polarised light wouldn't meet the postselection criterion, only light which goes to this detector would have gone along path 2 and still passed the postselection. However, the detector is expected to record no counts, confirming \(\langle D2|E_{\text{CC}}\rangle=0\). This procedure will not change the postselection probability \(P(D+)\) since the observation of no photons in \(D2\) does not change the state \(|E_{\text{CC}}\rangle\).
The probability of a photon which meets the preselection being in state \(D2\) (and so arriving at this new detector) is
\[P(D2)=0 \tag{13}\]
Claim 1 can be identified with the weak value of the projector on path 2 in the original Cheshire cat paper,
\[\langle\hat{\Pi}(2)\rangle_{w}=0 \tag{14}\]
Since we postselect on the polarisation \(D\), we can replace the identity operator in the polarisation space with a projection operator on polarisation \(D\),
\[\langle\hat{\Pi}(2)\rangle_{w}=\langle|D\rangle\langle D|\otimes|2\rangle \langle 2|\rangle_{w} \tag{15}\]
Therefore, the probability of 0 in Eq. (13) ensures that the weak value in Eq. (14) is also zero.
The experiment in Fig. 2a has some similarity to the one done by Denkmayr et al. [3]. In that experiment, they put a blocker onto each of the two paths in turn, to work out which path neutrons took in their Cheshire cat interferometer. They inferred that the neutrons must travel along (their equivalent of) path 1, as only on that path did the presence of the blocker affect the detection intensity. Our set-up shows that this was possible due to the polarisation-path correlation--blocking path 2 merely blocks \(A\)-polarised light, whereas all of the \(D\)-polarised light is blocked when path 1 is blocked. The above analysis thus shows more details of the effect demonstrated in [3].
**Claim 2** (Not-\(\{V\}\)).: _"No \(V\)-polarised particle."_
This claim comes from observing that
\[|E_{\text{CC}}\rangle=\frac{1}{\sqrt{2}}\left(|H+\rangle+|V-\rangle\right) \tag{16}\]
This means the photon is only \(V\)-polarised if it is in path-superposition "\(-\)". As the postselected state \(|D+\rangle\) is in the orthogonal path-superposition "\(+\)", a photon which passes both pre- and postselection cannot be \(V\)-polarised:
\[\begin{split}\langle V+|E_{\text{CC}}\rangle&=0,\\ \langle D+|V-\rangle&=0\end{split} \tag{17}\]
We denote the property of the pre-and postselected particle having been \(V\)-polarised as \(\{V\}\), and so call this claim NOT-\(\{V\}\).
The experimentally-verifiable condition for this claim is the absence of any \(V\)-polarised photons in the "\(+\)" output of the interferometer. A slight modification of the Cheshire cat protocol allows us to test this condition directly. We can test this condition by putting a \(H/V\) polarising beamsplitter just before the \(D/A\) PBS, set up so any \(V+\) light from the interferometer goes to a detector (see Fig. 2b). Given any light in state \(V-\) wouldn't meet the postselection condition, this ensures only light which goes to this detector could have been \(V\)-polarised and still passed the postselection. This procedure will not change the postselection probability \(P(D+)\) since the observation of no photons in \(V+\) does not change the state \(|E_{\mathrm{CC}}\rangle\).
Figure 2: Altered forms of the quantum Cheshire cat protocol (as given in Fig. 1), each designed to allow the validation of one of the three claims made about the situation: Fig. 2a allows us to test Claim 1, Fig. 2b allows us to test Claim 2, and Fig. 2c allows us to test Claim 3. Given we are looking to show that preselected light never arrives at the relevant detectors, we can use either single photons or coherent states (e.g. laser light) to validate these claims. The black half-ovals are optical detectors, which based on their position will only detect light in the state given on their label (e.g. we know any light arriving at detector \(A-\) in Fig. 2c must be both \(A\)-polarised and in the “\(-\)” position superposition).
The probability of a photon which meets the preselection being in state \(V+\) (and so arriving at this new detector) is
\[P(V+)=0 \tag{18}\]
It may be worth noting that the original quantum Cheshire cat paper [2] does not make any statements about the polarisation itself. However, in close analogy to Claim 1, we can now identify the weak value of the projector on \(V\)-polarisation,
\[\langle\hat{\Pi}(V)\rangle_{w}=0 \tag{19}\]
where \(\hat{\Pi}(V)=|V\rangle\langle V|\otimes\mathds{1}\). As before, the postselection of \(+\) ensures this weak value is the same as the weak value of \(V+\), which is zero because there is no \(V+\) component in \(|E_{\mathrm{CC}}\rangle\).
**Claim 3** (Not-\(\{\Phi\}\)).: _"No \(\Phi\)-correlation of path and polarisation."_
This Claim needs to establish the relation between polarisation and path. To classify the correlations in a way that allows us to connect the \(H/V\) and path basis to the postselected Bell states, we consider the Bell states. The two Bell states for which path 1 is always \(H\)-polarised and path 2 is always \(V\) polarised are states \(\Phi^{+}\) and \(\Phi^{-}\), where
\[\left|\Phi^{\pm}\right\rangle=\frac{1}{\sqrt{2}}\left(|H1\rangle\pm|V2\rangle\right) \tag{20}\]
We therefore refer to this correlation between path and polarisation as \(\Phi\)-correlation.
It is now possible to show there cannot be any \(\Phi\)-correlation, as the preselection of \(|E_{\mathrm{CC}}\rangle\) does not include \(|\Phi^{+}\rangle\), and the postselection of \(|D+\rangle\) does not include any \(|\Phi^{-}\rangle\):
\[\begin{split}\langle\Phi^{+}|E_{\mathrm{CC}}\rangle& =0,\\ \langle D+|\Phi^{-}\rangle&=0\end{split} \tag{21}\]
This relation shows that the postselected outcome \(|D+\rangle\) does not contain any \(|\Phi^{-}\rangle\) component. This means that the absence of the \(|\Phi^{+}\rangle\) component in the preselected initial state \(|E_{\mathrm{CC}}\rangle\) is sufficient to experimentally confirm Claim 3.
We denote the property of the pre- and postselected particle having the \(\Phi\)-correlation between polarisation and path as \(\{\Phi\}\), and so call this claim NOT-\(\{\Phi\}\).
The experimentally-verifiable condition for this claim is the absence of \(|\Phi^{+}\rangle\). Since this is an entangled state, it is best to verify it in a two-step process. As shown in Fig. 2c, we can first remove all components that are not \(\Phi\)-correlated, by using the appropriate PBSs to remove \(V1\) and \(H2\). This corresponds to applying a projection operator \(\hat{\Pi}(\Phi)\) to the input state, where
\[\hat{\Pi}(\Phi)=|H1\rangle\langle H1|+|V2\rangle\langle V2| \tag{22}\]
Since the state \(|\Phi^{-}\rangle\) does not include any \(D+\) component, a detection of \(D+\) in the output now corresponds to a detection of \(\Phi^{+}\) in the initial state:
\[\langle D+|\,\hat{\Pi}(\Phi)=\frac{1}{\sqrt{2}}\left\langle\Phi^{+}\right| \tag{23}\]
The probability of the photon which meets the preselection being in state \(\Phi^{+}\) is
\[P(\Phi^{+})=0 \tag{24}\]
This shows that the part of the input state \(|E_{\mathrm{CC}}\rangle\) that has a \(\Phi\)-correlation does not connect to the postselected outcome \(D+\).
It is also possible to identify this claim with the weak value of the projector \(\hat{\Pi}(\Phi)\),
\[\langle\hat{\Pi}(\Phi)\rangle_{w}=0 \tag{25}\]
where the postselection of \(D+\) ensures that this is the same as the weak value of the projector on \(\Phi^{+}\), which is zero because there is no \(\Phi^{+}\) component in \(|E_{\mathrm{CC}}\rangle\).
### Combining Claims
The typical structure of a contextuality argument involves identifying claims which apply individually to a situation, even though their combination would result in logical contradictions. We do this here for the quantum Cheshire cat scenario by showing that the three claims discussed above cannot be satisfied by any combination of \(H/V\)-polarisation in path 1 or 2.
By combining the claims we define above in certain ways, we see that each pair of claims uniquely infers a different state:
**Inference 1** (\(H1\)).: _Claim 1 + Claim 2 \(\to\)\(H1\)_
This inference comes from Claim 1 saying the particle wasn't on path 2 (so must be on path 1), and Claim 2 saying the particle wasn't \(V\)-polarised, so must be \(H\)-polarised. \(H1\) is the only state which agrees with these two claims.
**Inference 2** (\(H2\)).: _Claim 2 + Claim 3 \(\to\)\(H2\)_
This inference comes in multiple steps. Claim 3 tells us the particle wasn't in a \(\Phi\)-correlation, so wasn't in either state \(H1\) or \(V2\). Therefore, the particle must have been in either state \(H2\) or \(V1\). Claim 2 then adds that the particle wasn't \(V\)-polarised, so of those two states, can't have been in state \(V1\). Therefore, the particle must have been in state \(H2\).
**Inference 3** (\(V1\)).: _Claim 3 + Claim 1 \(\to V1\)_
Again, this inference comes in multiple steps. Claim 3 says the particle wasn't in a \(\Phi\)-correlation, so wasn't in either state \(H1\) or \(V2\). Therefore, the particle must have been in either state \(H2\) or \(V1\). Claim 1 then adds that the particle wasn't on path 2, so of those two states, can't have been in state \(H2\). Therefore, the particle must have been in state \(V1\).
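The combinatorics behind these inferences can be made explicit with a short enumeration over the four basis states; this is a purely illustrative sketch of the logic summarised in Fig. 3.

```python
# Enumerate which of the states H1, H2, V1, V2 survive each pair of Claims.
# Each pair singles out a different state, and no state satisfies all three Claims.
from itertools import combinations

states = ["H1", "H2", "V1", "V2"]
claims = {
    "NOT-{2}":   lambda s: not s.endswith("2"),        # Claim 1: no particle on path 2
    "NOT-{V}":   lambda s: not s.startswith("V"),      # Claim 2: no V-polarised particle
    "NOT-{Phi}": lambda s: s not in ("H1", "V2"),      # Claim 3: no Phi-correlation
}

for pair in combinations(claims, 2):
    survivors = [s for s in states if all(claims[c](s) for c in pair)]
    print(" + ".join(pair), "->", survivors)           # exactly one state per pair

print("all three ->", [s for s in states if all(f(s) for f in claims.values())])  # []
```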
These inferences all contradict one another--Inference 1 infers the state is \(H1\), Inference 2 infers the state is \(H2\), and Inference 3 infers the state is \(V1\). We can represent this as a contextuality diagram, as shown in Fig. 3. This diagram shows that the quantum Cheshire cat can be represented as a typical example of contextuality. It is interesting to compare this to the original formulation, which lacks the symmetry of the contextuality diagram. This broken symmetry results from the combination of Claims in Inference 2. As we argued in the previous subsection, Claim 1 corresponds to the weak values of the spatial projection operator in the original quantum Cheshire cat paper [2]--specifically, since the weak values of projections of path 1 and path 2 must add up to 1, the weak value of 0 for projection on path 2 necessarily corresponds to a weak value of 1 for projection on path 1. To combine the weak values given in Claims 2 and 3, note that each of the projectors can be separated into two parts. According to Eq. (19) in Claim 2,
\[\langle|V1\rangle\langle V1|\rangle_{w}=-\langle|V2\rangle\langle V2|\rangle_ {w} \tag{26}\]
Similarly, Eq. (25) in Claim 3 gives
\[\langle|H1\rangle\langle H1|\rangle_{w}=-\langle|V2\rangle\langle V2|\rangle_ {w} \tag{27}\]
The weak value which is supposed to indicate the absence of polarisation in path 1 is given by
\[\langle\hat{\sigma}_{HV}\otimes|1\rangle\langle 1|\rangle_{w}=\langle|H1 \rangle\langle H1|\rangle_{w}-\langle|V1\rangle\langle V1|\rangle_{w} \tag{28}\]
Eqs. (26) and (27) show the two terms on the right-hand side of Eq. (28) are equal, so that they cancel, and the weak value in Eq. (28) is indeed zero. It follows from Eq. (19) in Claim 2 that the weak value of the projector on \(V\)-polarisation is 0, and so the weak value of the projector on \(H\)-polarisation is 1. The weak value of \(\hat{\sigma}_{HV}\) is therefore 1, and so
\[\langle\hat{\sigma}_{HV}\otimes|1\rangle\langle 1|\rangle_{w}+\langle\hat{ \sigma}_{HV}\otimes|2\rangle\langle 2|\rangle_{w}=1 \tag{29}\]
This means
\[\langle\hat{\sigma}_{HV}\otimes|2\rangle\langle 2|\rangle_{w}=1 \tag{30}\]
as in the original quantum Cheshire cat paper [2].
### Experimental Verification of Contextuality
Fig. 2 suggests that all of the Claims can be verified experimentally. However, in a realistic experimental situation, the probabilities of the prohibited outcomes will not be exactly zero. It is therefore useful to formulate quantum contextuality as an inequality violation [29; 30; 31; 32; 33; 34]. The postselected outcome \(D+\) appears to be impossible when all three claims are satisfied. Statistically, the probability of finding \(D+\) should therefore be upper-bounded by the sum of the probabilities that each of the Claims is violated,
\[P(D+)\leq P(V+)+P(D2)+P(\Phi^{+}). \tag{31}\]
Fig. 2 shows the experimental methods for determining the individual probabilities. In the ideal case, the probability of finding the outcome \(D+\) in the input state \(|E_{\text{CC}}\rangle\) should be \(1/4\), where all of the probabilities on the right side of the inequality will be close to zero.
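The ideal-case numbers entering the inequality can also be verified directly; a short numerical sketch (basis order \((H1,H2,V1,V2)\), chosen for convenience):

```python
# Ideal-case check of the noncontextual inequality, Eq. (31):
# P(D+) = 1/4 while P(V+) = P(D2) = P(Phi+) = 0, so the bound is violated.
import numpy as np

kron = np.kron
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
D, plus = (H + V) / np.sqrt(2), (p1 + p2) / np.sqrt(2)
E_cc = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) - kron(V, p2))
phi_plus = (kron(H, p1) + kron(V, p2)) / np.sqrt(2)          # Eq. (20)

P = lambda state: (state @ E_cc) ** 2      # detection probability for the preselected state

lhs = P(kron(D, plus))                                        # P(D+) = 0.25
rhs = P(kron(V, plus)) + P(kron(D, p2)) + P(phi_plus)         # sum   = 0.0
print(lhs, rhs, lhs <= rhs)                                   # 0.25 0.0 False
```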
## IV Coherences between prohibited states
The original interpretation of the quantum Cheshire cat paradox is that the polarisation becomes disembodied from the particle. However, the contextuality analysis above indicates that it may be problematic to consider path 2 to be empty, just as it would be problematic to consider the particle to be entirely \(H\)-polarised. Let us consider whether the weak values formalism indicates why and how path 2 and \(V\)-polarisation are involved in this paradoxical behaviour. To do so, we note that all of the claims eliminate a possibility by combining probabilities of zero in the preselected state with probabilities of zero in the postselected state. This results in weak values of zero, because for any statement represented by a projector \(\hat{\Pi}\), the weak value will be zero when either the initial or the final state is orthogonal to a set of eigenstates of the projector.
Figure 3: Hexagram diagram of the contextuality relations for the quantum Cheshire cat scenario. Contexts are represented by minimal closed cycles (with circles within these for clarity)—all states in the same context are all mutually orthogonal. Each of the three outer contexts is comprised of four states: two which share a side of the inner triangle, and two which share a side of the hexagon. The three outer contexts each correspond to one of the three Claims, where the prohibited properties are indicated by \(\{2\}\), \(\{V\}\), and \(\{\Phi\}\), placed between the two states in the context which share this property. All of the states that satisfy one of the claims are part of the inner triangle—specifically, each state is uniquely defined by two of the Claims. There is no state for which all three Claims are true, since each state of the inner triangle is orthogonal to the two states that satisfy the claim opposite to it.
The weak value in Claim 1 can be separated into the weak values of projectors on \(A2\) and \(D2\). The weak value of the projector on \(A2\) is zero because it is orthogonal to the final state; the weak value of \(D2\) is zero because it is orthogonal to the initial state. However, we cannot use this observation to determine the weak values of projectors on \(H2\) and \(V2\), despite the two also summing to zero. This is because both \(V2\) and \(H2\) have non-zero components in the initial and the final state. Weak values are determined by combining these non-zero components in a product. The weak values of zero for the projectors of \(A2\) and \(D2\) correspond to a contribution from ordered coherences between the two.
Making use of the orthogonality of \(A2\) and the postselected state \(D+\), and the orthogonality of \(D2\) and the initial state \(|E_{\text{CC}}\rangle\), the weak values of the projectors on \(H2\) and \(V2\) can be represented as weak values of coherences between \(A2\) and \(D2\),
\[\begin{split}\langle|H2\rangle\langle H2|\rangle_{w}& =\frac{1}{2}\langle|D2\rangle\langle A2|\rangle_{w},\\ \langle|V2\rangle\langle V2|\rangle_{w}&=-\frac{1}{ 2}\langle|D2\rangle\langle A2|\rangle_{w}\end{split} \tag{32}\]
Here, the weak value of an operator (be it a projector or a coherence) can be described by
\[\langle\hat{O}\rangle_{w}=\text{Tr}\left(\hat{O}\frac{|i\rangle\langle f|}{ \langle f|i\rangle}\right) \tag{33}\]
This shows that weak values can be represented as expectation values of a statistical operator, representing the pre- and postselection [35]. This statistical operator can be decomposed using the weak values of the coherences between the states \(n\) and \(m\) of a given basis [36; 37]:
\[\begin{split}\forall n,m\text{ such that}&|\langle n |m\rangle|^{2}=\delta_{n,m},\\ \frac{|i\rangle\langle f|}{\langle f|i\rangle}&= \sum_{n,m}\langle|m\rangle\langle n|\rangle_{w}\,|n\rangle\langle m|\end{split} \tag{34}\]
For the quantum Cheshire cat scenario, we can decompose the statistical operator in different orthogonal bases, corresponding to the different Claims. If we decompose it in the \((D/A)\otimes(1/2)\) basis, we get
\[\begin{split}&\frac{|E_{\text{CC}}\rangle\langle D+|}{\langle D+|E_{ \text{CC}}\rangle}\\ &=|D1\rangle\langle D1|+|D1\rangle\langle A2|+|D2\rangle\langle D 1|+|D2\rangle\langle A2|\end{split} \tag{35}\]
Given this decomposition only has a projector onto a state on path 1, this appears to support Claim 1. However, the decomposition also has a coherence between two states which are both on path 2.
Similarly, if we decompose the statistical operator in the \((H/V)\otimes(+/-)\) basis, we get
\[\begin{split}&\frac{|E_{\text{CC}}\rangle\langle D+|}{\langle D+|E_{ \text{CC}}\rangle}\\ &=|H+\rangle\langle H+|+|H+\rangle\langle V-|+|V+\rangle\langle H +|\\ &\quad+|V+\rangle\langle V-|\end{split} \tag{36}\]
Given this decomposition only has a projector onto the \(H\)-polarisation, this appears to support Claim 2. However, the decomposition also has a coherence between two states which are both \(V\)-polarised.
Finally, if we decompose the statistical operator in the Bell basis, we get
\[\begin{split}&\frac{|E_{\text{CC}}\rangle\langle D+|}{\langle D+|E_{\text{CC}}\rangle}\\ &=|\Psi^{+}\rangle\langle\Psi^{+}|+|\Phi^{+}\rangle\langle\Psi^{+}|+|\Psi^{+}\rangle\langle\Phi^{-}|+|\Phi^{+}\rangle\langle\Phi^{-}|\end{split} \tag{37}\]
Given this decomposition only has a projector onto the Bell state \(|\Psi^{+}\rangle\), this appears to support Claim 3. However, the decomposition also has a coherence between two states which are both \(\Phi\)-correlated.
These three decompositions show that the statistical operator contains coherences between prohibited states (specifically between \(A2\) and \(D2\) in the \((D/A)\otimes(1/2)\) decomposition, between \(V-\) and \(V+\) in the \((H/V)\otimes(+/-)\) decomposition, and between \(\Phi^{-}\) and \(\Phi^{+}\) in the Bell-basis decomposition). These coherences are between states which are prohibited--specifically, between one state prohibited by the postselection, and another prohibited by the preselection. By having the state prohibited by preselection as its "ket", and the state prohibited by postselection as its "bra", the coherence is allowed by both the pre- and postselection, despite the two states it is formed of both individually being prohibited. As discussed in [38], these coherences between prohibited states form necessary components of the quantum descriptions of contextual systems, despite not being describable classically.
By changing basis, we see that all three of these coherences contain the projector \(|V2\rangle\langle V2|\), which has an anomalous weak value of -1/2 (as discussed in [39]). Given the weak values of \(|H2\rangle\langle H2|\), \(|V1\rangle\langle V1|\), and \(|H1\rangle\langle H1|\) are all +1/2, this negative weak value seems to cancel out the projection onto each of these states, and so prohibit the photon from having the property shared by \(V2\) and that state (\(\{2\}\) for \(H2\), \(\{V\}\) for \(V1\), and \(\{\Phi\}\) for \(H1\)).
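These weak values, and the coherence they hide in, can be checked numerically. A sketch for the \((D/A)\otimes(1/2)\) case (basis order \((H1,H2,V1,V2)\); `outer(a, b)` stands for \(|a\rangle\langle b|\) for the real vectors used here):

```python
# Projector weak values (+1/2, +1/2, +1/2, -1/2) and the coherence |D2><A2| between
# prohibited states, as discussed around Eq. (32).
import numpy as np

kron, outer = np.kron, np.outer
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
D, A = (H + V) / np.sqrt(2), (H - V) / np.sqrt(2)
plus = (p1 + p2) / np.sqrt(2)
E_cc = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) - kron(V, p2))
D_plus = kron(D, plus)
wv = lambda op: (D_plus @ op @ E_cc) / (D_plus @ E_cc)

for label, v in [("H1", kron(H, p1)), ("H2", kron(H, p2)),
                 ("V1", kron(V, p1)), ("V2", kron(V, p2))]:
    print(label, wv(outer(v, v)))                     # +0.5, +0.5, +0.5, -0.5

D2, A2 = kron(D, p2), kron(A, p2)
print(wv(outer(A2, A2)))          # 0: A2 is orthogonal to the postselected state
print(wv(outer(D2, D2)))          # 0: D2 is orthogonal to the preselected state
print(wv(outer(D2, A2)))          # 1: the coherence between the two prohibited states
```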
However, each of these coherences only exists in one measurement basis: if we measure in the \((D/A)\otimes(1/2)\) basis, \(|V2\rangle\langle V2|\) only cancels out \(|H2\rangle\langle H2|\), leaving the coherence \(|D2\rangle\langle A2|\); if we measure in the \((H/V)\otimes(+/-)\) basis, \(|V2\rangle\langle V2|\) only cancels out \(|V1\rangle\langle V1|\), leaving the coherence \(|V+\rangle\langle V-|\); and if we measure in the Bell-basis,
\(|V2\rangle\langle V2|\) only cancels out \(|H1\rangle\langle H1|\), leaving the coherence \(|\Phi^{+}\rangle\langle\Phi^{-}|\). There is only one -1/2 term, but three +1/2 terms--but, in each basis, one of those +1/2 terms is cancelled out, and so it appears that the shared property of \(|V2\rangle\langle V2|\) with that state is prohibited.
This is different from a noncontextual prohibition of a property, where any state with that property should have no projector for _any_ choice of basis, not just for one choice of basis. This shows why coherences are important--in a contextual scenario, the projector for a state with a given property can hide in the coherence; it is suppressed in that choice of basis (and so for that measurement, or question), but stops us from holding the negation of both that property and another contextual property true simultaneously.
A single negative-projector state can only cancel one other state at a time, when, for it to be non-contextually true that the scenario doesn't possess these properties, all three would need to be cancelled. These coherences are therefore responsible for the mysterious violation of the noncontextual inequality (Eq. (31)), by stopping us from being able to ensure \(P(D2)\), \(P(V+)\) and \(P(\Phi^{+})\) are all zero simultaneously: forcing one of these states to be probability zero frees the non-zero projectors of the other two states, and so allows the other two probabilities to be non-zero.
The negative weak value of state \(V2\) implies the photon being in this state is suppressed more heavily than is classically possible, to the extent that it suppresses the photon being \(V\)-polarised (by cancelling out the positive weak value of \(V1\)), or being on path 2 (by cancelling out the positive weak value of \(H2\)), or having \(H1/V2\) correlation (by cancelling out the positive weak value of \(H1\)), depending on how the photon is measured. The weak value of \(V2\) being negative therefore indicates the system's behaviour will be contextual. The suppression of each of the properties of the photon being in state \(V2\) (the photon being \(V\)-polarised, the photon being on path 2, and the photon having a \(H1/V2\) correlation), relates to one of three contexts. Each context is linked to both of the other contexts (they each share one state with each other contexts); however, no state is in all three contexts. Therefore, the photon is never in a state where all three of these properties are suppressed simultaneously. This means things we would classically infer from all three of these properties being suppressed (such as the probability of a photon that passes the preselection being in state \(D+\) being lower than the sum of the probabilities of it being in states \(D2\), \(V+\), and \(\Phi^{+}\)), can be shown not to be the case (as per the violation of Eq. (31)).
## V Compound operators and basis bias
A major result of this paper is showing that we can talk about the quantum Cheshire cat scenario as a contextual scenario. However, being able to do so relies on being able to characterise the polarisation/path correlation prohibited by the compound operators' weak values. The quantum Cheshire cat is hard to understand precisely because this condition--the prohibition of the \(\Phi\)-correlation--is hard to extract from the original presentation of the scenario.
The main evidence for a paradox given in the original quantum Cheshire cat paper [2] is from weak values--there is no way to directly measure the polarisation or path-presence without disturbing the system, so we can observe no direct evidence of this paradox; and as one of us has discussed previously, inferring from weak values to properties is non-trivial at best [40]. However, given the weak value-related issues with the quantum Cheshire cat have been discussed heavily (see Duprey et al's review [41]), and often rely on an understanding of weak values as requiring weak measurement to obtain (which we now know not to be the case [42; 24; 43; 25]), we will focus on the problem of the construction of the compound operator, which reduces the Cheshire cat problem to only two weak values.
The claim that the photon separates from its polarisation in the quantum Cheshire cat scenario relies on a lack of symmetry. There is no mathematical difference between the path-presence qubit and the polarisation qubit in the scenario--both are just qubits. However, we usually interpret a photon's position very differently from its polarisation. This is expressed by the compound operators given in Eq. (10). We can now relate these compound operators to the weak values of projectors,
\[\begin{split}\langle|H1\rangle\langle H1|\rangle_{w}& =+1/2,\\ \langle|H2\rangle\langle H2|\rangle_{w}&=+1/2,\\ \langle|V1\rangle\langle V1|\rangle_{w}&=+1/2,\\ \langle|V2\rangle\langle V2|\rangle_{w}&=-1/2\end{split} \tag{38}\]
The weak value of the compound operator intended to describe the presence of polarisation in path 2 is then given by
\[\langle\hat{\sigma}_{HV}\otimes|2\rangle\langle 2|\rangle_{w}=\langle|H2 \rangle\langle H2|\rangle_{w}-\langle|V2\rangle\langle V2|\rangle_{w} \tag{39}\]
We can now see that the presence of a polarisation in path 2 is a direct consequence of the negative weak value of the projector on \(V2\). The apparent contradiction with Claim 1 is expressed by the weak value of the projector on path 2, which can be written as
\[\langle\mathds{1}\otimes|2\rangle\langle 2|\rangle_{w}=\langle|H2\rangle \langle H2|\rangle_{w}+\langle|V2\rangle\langle V2|\rangle_{w} \tag{40}\]
The contradiction between these two equations is a direct result of the coherence between \(A2\) and \(D2\) found when describing the pre- and postselection in the \((D/A)\otimes(1/2)\) basis. This specific basis then expresses a combination of Claim 1 with Inference 2. Other combinations are possible and can be constructed by basing the paradox on a different initial Claim. If we base the paradox on Claim 2, the compound operator is
\[\langle|V\rangle\langle V|\otimes\hat{\sigma}_{\pm}\rangle_{w}=\langle|V1 \rangle\langle V1|\rangle_{w}-\langle|V2\rangle\langle V2|\rangle_{w} \tag{41}\]
where \(\hat{\sigma}_{\pm}\) is the path difference operator
\[\begin{split}\hat{\sigma}_{\pm}&=|+\rangle\langle-|+|-\rangle\langle+|\\ &=|1\rangle\langle 1|-|2\rangle\langle 2|\end{split} \tag{42}\]
The contradiction is that we find a path-bias in the polarisation \(V\), even though according to Claim 2 there is no \(V\)-polarisation, as given by the sum of the weak values
\[\langle|V\rangle\langle V|\otimes\mathds{1}\rangle_{w}=\langle|V1\rangle \langle V1|\rangle_{w}+\langle|V2\rangle\langle V2|\rangle_{w} \tag{43}\]
This version of the paradox can be traced back to the weak value of the coherence between \(V+\) and \(V-\). We therefore see that changing the Claim on which we base the paradox is equivalent to changing the basis.
If we base the paradox on Claim 3, the compound operator describes a bias between \(H1\) and \(V2\)_within_ the \(\Phi\)-correlation:
\[\langle\hat{B}_{\Phi}\rangle_{w}=\langle|H1\rangle\langle H1|\rangle_{w}- \langle|V2\rangle\langle V2|\rangle_{w} \tag{44}\]
The contradiction is that we find such a bias in the \(\Phi\) correlation, even though there seems to be no \(\Phi\)-correlation present, as given by the sum of the weak values,
\[\langle\hat{\Pi}(\Phi)\rangle_{w}=\langle|H1\rangle\langle H1|\rangle_{w}+ \langle|V2\rangle\langle V2|\rangle_{w} \tag{45}\]
This version of the paradox can be traced back to the weak value of the coherence between \(\Phi^{+}\) and \(\Phi^{-}\). It may be worth noting that the basis here is a basis of entangled states.
The analysis above shows that the original quantum Cheshire cat paradox is based on an arbitrary choice of a conditional bias, which combines two of the three Claims. The main reason why the initial choice was Claim 1 is that it corresponds to the basis choice of the \(1/2\) basis, which expresses our intuitive bias towards the idea that particles are naturally localised.
The formulation of all three versions of the paradox involves the contradiction between the weak value of a projector, and the weak value of a compound operator corresponding to a conditional bias. In all three cases, the conditional bias is away from the state \(V2\).
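All three pairings can be checked with the same machinery; a numerical sketch (basis order \((H1,H2,V1,V2)\)) that reproduces the projector/bias pairs of Eqs. (39)-(45):

```python
# The three formulations of the paradox: in each pairing the conditional-bias weak
# value is 1 while the corresponding projector weak value is 0 (Eqs. 39-45).
import numpy as np

kron, outer = np.kron, np.outer
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
E_cc = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) - kron(V, p2))
D_plus = 0.5 * (kron(H, p1) + kron(H, p2) + kron(V, p1) + kron(V, p2))
wv = lambda op: (D_plus @ op @ E_cc) / (D_plus @ E_cc)
pr = lambda v: outer(v, v)

sigma_HV = pr(H) - pr(V)
sigma_path = pr(p1) - pr(p2)                  # path bias between 1 and 2, cf. Eq. (41)
B_phi = pr(kron(H, p1)) - pr(kron(V, p2))     # bias within the Phi-correlation, Eq. (44)
Pi_phi = pr(kron(H, p1)) + pr(kron(V, p2))    # Eq. (22)

print(wv(kron(sigma_HV, pr(p2))), wv(kron(np.eye(2), pr(p2))))   # 1.0, 0.0 (based on Claim 1)
print(wv(kron(pr(V), sigma_path)), wv(kron(pr(V), np.eye(2))))   # 1.0, 0.0 (based on Claim 2)
print(wv(B_phi), wv(Pi_phi))                                     # 1.0, 0.0 (based on Claim 3)
```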
What does this mean? In the original quantum Cheshire cat paper [2], the weak value of the compound operators supposedly tell us that the "polarisation" is disembodied onto path 2. It seems harder to claim that when we base the paradox on Claim 2, the "localisation" of the photon is disembodied into \(V\)-polarisation.
Clearly, the problem is that we find it easier to imagine that empty space is somehow polarised than to imagine that an "empty" polarisation could be distributed in space. However, both concepts are actually equivalent. It is equally wrong to identify a particle with its position as it is to identify a particle with its polarisation. As a result, it is even possible to identify the correlation between position and polarisation as the fundamental property on which we can base the paradox. Ultimately, the paradox originates from a contradiction between these three features of the particle, as shown in full by the contextuality scenario involving all three claims.
## VI Conclusion
We have shown that the quantum Cheshire cat scenario is contextual. We did this by introducing three separate Claims regarding the path and polarisation of a photon within the quantum Cheshire cat protocol. Each of these Claims can be tested experimentally, and the quantum Cheshire cat paradox can be verified without any recourse to weak values. We also showed that the paradox can be expressed as a contradiction between only two statements, by combining any two claims using inference, where the disembodied polarisation corresponds to a combination of Claim 1 with Inference 2. The original quantum Cheshire cat paradox can thus be recovered from the general formulation we present.
We then went on to consider _why_ the scenario is contextual. By decomposing a statistical operator, formed by the pre- and postselection, in different bases corresponding to different measurement contexts, we observed coherences between states that were prohibited by either the pre- or postselection. We showed that these coherences between prohibited states cause the contextual behaviour--each of these coherences can be transformed into a projector on some state (depending on the context), minus a projector on state \(V2\). This minus, corresponding to the negative weak value of \(V2\), allows it to "cancel" out that state, but only in that context. Each of the three relevant contexts can be identified with one state that is cancelled out by a negative weak value contribution from \(V2\). The measurement performed on the system determines which cancellation is relevant. Therefore, the system not having the property shared by that state and state \(V2\) only occurs when the system is measured in that context.
In this paper, we have clarified how the quantum Cheshire cat paradox should be interpreted--specifically that the argument that the polarisation becomes "disembodied" results from only considering one specific pairing of the three mutually-incompatible properties in what is ultimately just a contextual system. This analysis allows us to properly relate the quantum Cheshire cat paradox to fundamental properties of quantum mechanics, including contextuality, weak values, and coherences between prohibited states. Investigating these relations further presents interesting new directions for research in quantum foundations.
_Acknowledgements -_ We thank Masataka Iinuma and Tomonori Matsushita for helpful comments on early versions of this paper. JRH thanks Prof James Ladyman and Prof John Rarity for useful earlier discussions of the quantum Cheshire cat paradox. JRH acknowledges support from Hiroshima University's Phoenix Postdoctoral Fellowship for Research, the University of York's EPSRC DTP grant EP/R513386/1, and the Quantum Communications Hub funded by EPSRC grants EP/M013472/1 and EP/T001011/1. MJ acknowledges support from JST SPRING, Grant Number JPMJSP2132. |
2308.16115 | System size dependence of the hadronic rescattering effect at energies
available at the CERN Large Hadron Collider | The first measurements of $\mathrm{K^{*}(892)^{0}}$ resonance production as a
function of charged-particle multiplicity in Xe$-$Xe collisions at
$\sqrt{s_{\mathrm{NN}}}=$ 5.44 TeV and pp collisions at $\sqrt{s}=$ 5.02 TeV
using the ALICE detector are presented. The resonance is reconstructed at
midrapidity ($|y|< 0.5$) using the hadronic decay channel $\mathrm{K^{*0}}
\rightarrow \mathrm{K^{\pm} \pi^{\mp}}$. Measurements of transverse-momentum
integrated yield, mean transverse-momentum, nuclear modification factor of
$\mathrm{K^{*0}}$, and yield ratios of resonance to stable hadron
($\mathrm{K^{*0}}$/K) are compared across different collision systems (pp,
p$-$Pb, Xe$-$Xe, and Pb$-$Pb) at similar collision energies to investigate how
the production of $\mathrm{K^{*0}}$ resonances depends on the size of the
system formed in these collisions. The hadronic rescattering effect is found to
be independent of the size of colliding systems and mainly driven by the
produced charged-particle multiplicity, which is a proxy of the volume of
produced matter at the chemical freeze-out. In addition, the production yields
of $\mathrm{K^{*0}}$ in Xe$-$Xe collisions are utilized to constrain the
dependence of the kinetic freeze-out temperature on the system size using the
hadron resonance gas in partial chemical equilibrium (HRG-PCE) model. | ALICE Collaboration | 2023-08-30T16:14:50Z | http://arxiv.org/abs/2308.16115v2 | # System size dependence of hadronic rescattering effect at LHC energies
###### Abstract
The first measurements of \(\mathrm{K^{*}}(892)^{0}\) resonance production as a function of charged-particle multiplicity in Xe-Xe collisions at \(\sqrt{s_{\mathrm{NN}}}=5.44\) TeV and pp collisions at \(\sqrt{s}=5.02\) TeV using the ALICE detector are presented. The resonance is reconstructed at midrapidity (\(|y|<0.5\)) using the hadronic decay channel \(\mathrm{K^{*0}}\to\mathrm{K^{\pm}}\pi^{\mp}\). Measurements of transverse-momentum integrated yield, mean transverse-momentum, nuclear modification factor of \(\mathrm{K^{*0}}\), and yield ratios of resonance to stable hadron (\(\mathrm{K^{*0}}\)/K) are compared across different collision systems (pp, p-Pb, Xe-Xe, and Pb-Pb) at similar collision energies to investigate how the production of \(\mathrm{K^{*0}}\) resonances depends on the size of the system formed in these collisions. The hadronic rescattering effect is found to be independent of the size of colliding systems and mainly driven by the produced charged-particle multiplicity, which is a proxy of the volume of produced matter at the chemical freeze-out. In addition, the production yields of \(\mathrm{K^{*0}}\) in Xe-Xe collisions are utilized to constrain the dependence of the kinetic freeze-out temperature on the system size using HRG-PCE model.
CERN-EP-2023-175
22 August 2023
## 1 Introduction
Production of hadrons consisting of light-flavoured quarks (u, d and s) has been extensively studied in heavy-ion collisions as well as in small collision systems like pp and p-Pb [1, 2, 3, 4, 5, 6, 7, 8] at LHC energies to investigate the bulk properties of strongly interacting quantum chromodynamics (QCD) matter of deconfined quarks and gluons, known as the quark-gluon plasma (QGP) [9, 10, 11, 12, 13, 14]. The produced QGP is modelled by hydrodynamical equations [15, 16]. The system cools down while evolving and, after a certain time, hadronization takes place [17, 18, 19, 20]. As the temperature of the system decreases further, it first reaches a space-time surface called the chemical freeze-out surface [21], where the hadronic abundances are fixed, and then a kinetic freeze-out (\(T_{\rm kin}\)) surface, where the hadron momenta are frozen [22, 23]. After the kinetic freeze-out surface, particles stream freely to the detectors. In these collisions, several kinds of light and heavy flavour hadrons and resonances with different flavours of valence quark content, mass, and lifetime are produced. Each of these hadrons and resonances possesses unique characteristic features that can be exploited to study the properties of the medium. Hadron yields are used as an experimental input in the thermal model [24, 25, 26, 27, 28] to extract the chemical freeze-out temperature, baryon chemical potential and volume of the produced matter. The transverse-momentum (\(p_{\rm T}\)) spectra of hadrons are fitted with a hydrodynamics-based model, such as the blast-wave model [29], to obtain the kinetic freeze-out temperature [23, 30] and collective radial expansion velocity [30] of the medium. The phase between the chemical and kinetic freeze-out surfaces is termed the hadronic phase [31]. Properties of the hadronic phase can be probed by studying short-lived resonance particles which decay via the strong interaction. Short-lived resonances have a lifetime comparable to that of the hadronic phase and, therefore, their decay products take part in regeneration [32, 33] and rescattering [34] processes. These processes depend on the hadronic cross section [35, 36, 37] of the decay products of the resonance inside the hadronic medium, the lifetime of the resonance particle, the density of the hadron gas, and the hadronic phase lifetime. The presence of these final-state hadronic interactions leads to the modification of the experimentally measured yields of resonance particles [38].
To probe the final-state hadronic interactions, the ALICE Collaboration has previously extensively studied the production of light flavour resonances with different lifetimes (\(\tau\)), e.g. K\({}^{*0}\) (\(\tau\)\(\approx\) 4 fm/\(c\)) and \(\phi\)(1020) (\(\tau\)\(\approx\) 40 fm/\(c\)) in pp, p-Pb and Pb-Pb collisions [6, 39, 40, 41, 42]. The \(p_{\rm T}\)-integrated yield of K\({}^{*0}\) relative to kaons is found to be suppressed in central Pb-Pb collisions compared to pp, peripheral Pb-Pb collisions, and to thermal model predictions, whereas no such suppression is observed for the \(\phi\) meson. The observed reduction in measurable yield suggests that the rescattering of K\({}^{*0}\) decay products in the hadronic phase dominates over regeneration, leading to the suppression of measurable yield. The suppression of K\({}^{*0}\) meson yields due to the rescattering is dominant at low \(p_{\rm T}\) (\(<\) 3 GeV/\(c\)) from the study of \(p_{\rm T}\)-differential K\({}^{*0}\)/K yield ratio [34]. Furthermore, at high \(p_{\rm T}\), the phenomenon of energy loss by energetic partons traversing the dense medium formed in high-energy heavy-ion collisions affects the production yield of K\({}^{*0}\) and \(\phi\) resonances [6, 43] compared to the pp collisions. The energy loss process depends on the lifetime of the dense matter, initial medium density, the path length traversed by the parton, and the parton flavour. The modification in the yield of high-\(p_{\rm T}\) particles is quantified using the nuclear modification factor (\(R_{\rm AA}\)) [44] defined as
\[R_{\rm AA}=\frac{1}{\langle T_{\rm AA}\rangle}\frac{{\rm d}^{2}N^{\rm AA}/({ \rm d}{\rm y}{\rm d}p_{\rm T})}{{\rm d}^{2}\sigma^{\rm pp}/({\rm d}{\rm y}{ \rm d}p_{\rm T})}, \tag{1}\]
where \({\rm d}^{2}N^{\rm AA}/({\rm d}{\rm y}{\rm d}p_{\rm T})\) is the yield of the particle in heavy-ion collisions and \(\sigma^{\rm pp}\) is its production cross section in pp collisions. The average nuclear overlap function is denoted by \(\langle T_{\rm AA}\rangle\) and can be estimated as \(\langle T_{\rm AA}\rangle=\langle N_{\rm coll}\rangle/\sigma_{\rm inel}\), where \(\langle N_{\rm coll}\rangle\) is the average number of binary nucleon-nucleon collisions obtained from Monte Carlo Glauber simulations [45] and \(\sigma_{\rm inel}\) is the inelastic pp cross section [46]. The \(R_{\rm AA}\) measurements for K\({}^{*0}\) and \(\phi\) in Pb-Pb collisions at \(\sqrt{s_{\rm NN}}\) = 2.76 and 5.02 TeV show that at high \(p_{\rm T}\) (\(>\) 6 GeV/\(c\)) energy loss for \(\pi\), K, p, K\({}^{*0}\) and \(\phi\) are consistent within uncertainties. This observation suggests that the partonic energy loss in the QGP medium is independent of the flavour of light quarks
(u, d, s) [6, 30, 43].
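As a concrete illustration of Eq. (1), the following minimal sketch (in Python; the arrays, the \(\langle T_{\rm AA}\rangle\) value and the binning are placeholders rather than ALICE data) evaluates \(R_{\rm AA}\) bin by bin from a heavy-ion yield, a pp reference cross section and the average nuclear overlap function.

```python
import numpy as np

# Placeholder per-pT-bin inputs (illustrative only, not ALICE measurements)
d2N_AA = np.array([1.2e-1, 4.5e-2, 9.0e-3])      # d^2N/(dy dpT) per event in A-A
d2sigma_pp = np.array([3.0e-2, 8.0e-3, 1.2e-3])  # d^2sigma/(dy dpT) in pp, in mb
T_AA = 6.3                                       # <T_AA> in 1/mb for the chosen centrality class

# Eq. (1): R_AA = (1/<T_AA>) * yield_AA / cross_section_pp, bin by bin
R_AA = d2N_AA / (T_AA * d2sigma_pp)
print(R_AA)   # values below unity at high pT indicate suppression
```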
Recent measurements of light flavour hadron production in high-multiplicity pp and p-Pb collisions show some characteristics [47, 48, 49, 50, 51] which have so far been solely attributed to the medium created in heavy-ion collisions. The systems created in pp, p-Pb, and heavy-ion collisions, can be classified based on the final state average charged-particle pseudorapidity density measured at midrapidity (\(\langle dN_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}\)). In small collision systems, multiplicities range from a few to a few tens of charged particles per unit of pseudorapidity. In contrast, in Pb-Pb collisions, multiplicities of a few thousand charged-particles per unit of rapidity can be produced. Recent studies by the ALICE Collaboration at LHC energies show a smooth evolution of the yield or abundance of different hadron species as a function of \(\langle dN_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}\) across different collision systems and energies [7, 8]. In contrast, the mean transverse-momentum (\(\langle p_{\rm T}\rangle\)), which depends on the radial flow, follows a different trend across various colliding systems, rising faster in small collision systems (pp, p-Pb) compared to heavy-ion (Pb-Pb) collisions [8]. One of the primary motivations for studying resonances like K\({}^{*0}\) and \(\phi\) in high-multiplicity pp and p-Pb collisions is to search for the presence of a hadronic phase with a non-zero lifetime in a small collision system. A hint of suppression of K\({}^{*0}\) meson production in high-multiplicity pp and p-Pb collisions was previously reported by the ALICE Collaboration [40]. In fact, the suppression of K\({}^{*0}\)/K yield ratio evolves smoothly as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}\) from low-multiplicity pp collisions to central Pb-Pb collisions across different collision energies.
The measurement of K\({}^{*0}\) production yield in the collisions of medium-sized nuclei such as Xe-Xe provides the ultimate test for validating the picture of the smooth evolution of hadronic rescattering across different colliding systems by bridging the gap between p-Pb and Pb-Pb multiplicities. Using the data sets of pp, p-Pb, Xe-Xe, and Pb-Pb collisions, collected by the ALICE Collaboration at center-of-mass energies per nucleon pair (\(\sqrt{s_{\rm NN}}\)) of about 5 TeV, a systematic study of system-size dependence of hadronic rescattering is possible. In this article, the first measurements of K\({}^{*0}\) meson production at midrapidity (\(|y|<0.5\)) as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}\) in pp collisions at \(\sqrt{s}=5.02\) TeV and in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV are presented. The measured K\({}^{*0}\) yield and K\({}^{*0}\)/K yield ratio in these collisions are compared with the results obtained from p-Pb and Pb-Pb collisions to understand the system-size dependency of K\({}^{*0}\) production and the hadronic rescattering effect. The yield ratio K\({}^{*0}\)/K is used to constrain the hadronic phase lifetime across different collision systems. Furthermore, the measured K\({}^{*0}\) yields in Xe-Xe and Pb-Pb collisions are used as an experimental input in a partial chemical equilibrium (PCE) based thermal model HRG-PCE [52] to constrain the kinetic freeze-out temperature. This is a novel procedure to extract \(T_{\rm kin}\) that is independent of assumptions about the flow velocity profile and the freeze-out hypersurface [52]. In addition, the mean values of transverse-momentum (\(\langle p_{\rm T}\rangle\)) of K\({}^{*0}\) in different collision systems are also compared to understand the evolution of radial flow from small collision systems to heavy-ion collisions. Moreover, the \(R_{\rm AA}\) of K\({}^{*0}\) at similar charged-particle multiplicity in Pb\(-\)Pb and Xe\(-\)Xe collisions are compared to shed light on the system-size dependence of parton energy loss.
The organization of the article is as follows: the ALICE experimental setup, data analysis technique, and sources of systematic uncertainties are described in Sec. 2, Sec. 3 and Sec. 4, respectively. Results are shown in Sec. 5, and the article is finally summarized in Sec. 6. Since the production of particles and antiparticles are in equal amounts at midrapidity at LHC energies [30], the results for K\({}^{*0}\) and \(\overline{\rm K^{*0}}\) are averaged and denoted as K\({}^{*0}\) throughout the article unless stated otherwise.
## 2 Experimental apparatus, event and track selection
The production yield of the K\({}^{*0}\) meson is measured in Xe\(-\)Xe and pp collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV and \(\sqrt{s}=5.02\) TeV, respectively, using the data collected by the ALICE detector at the Large Hadron Collider (LHC). The Xe-Xe collision events were collected in the year 2017 with a magnetic field strength of
0.2 T, whereas the pp collision events were collected with B = 0.5 T in year 2015. A full description of the ALICE detector and its performance can be found in [53, 54]. Analyzed events are selected using a minimum-bias trigger that requires at least one hit in both forward scintillator detectors V0A (\(2.8<\eta<5.1\)) and V0C (\(-3.7<\eta<-1.7\)) [55]. Pileup removal involves analyzing hits in the SPD detector, correlating cluster numbers in the ITS and TPC detectors, identifying multiple vertices with the SPD detector, and utilizing the correlation between the SPD and V0M detectors. Beam-induced background and pileup events are eliminated through an offline event selection process, as described in Refs. [8, 53] for pp and [30, 56] for Xe-Xe collisions. The results for pp collisions presented in this paper are based on the "INEL \(>0\)" event class, which is a subset of inelastic collisions where at least one charged particle is emitted in the pseudorapidity interval \(|\eta|<1\)[57]. In addition, selected events must have one primary collision vertex which is reconstructed using the two innermost layers of the Inner Tracking System (ITS) [58] and is located within \(\pm 10\) cm along the beam axis from the nominal center of the ALICE detector. Measurements for K\({}^{*0}\) production yields are carried out using 1.44 \(\times\) 10\({}^{6}\) and 100 \(\times\) 10\({}^{6}\) minimum-bias Xe-Xe and pp collision events. The selected events are categorized into distinct classes based on their centrality in heavy-ion collisions (Xe-Xe) or multiplicity in proton-proton (pp) collisions. These event classes are defined using percentiles of the hadronic cross section. The classification of event classes is accomplished by analyzing the signal deposited in both V0 detectors, referred to as the "V0M amplitude", which is proportional to the charged-particle multiplicity. Various measured observables, such as the transverse momentum (\(p_{\rm T}\)) spectrum, transverse-momentum-integrated yield (\({\rm d}N/{\rm d}y\)), mean transverse momentum (\(\langle p_{\rm T}\rangle\)), yield ratios of resonances to stable particles, kinetic freeze-out temperature (\(T_{\rm kin}\)), and nuclear modification factor (\(R_{\rm AA}\)), are presented for different multiplicity (or centrality for heavy-ion collisions) classes as a function of pseudorapidity density (\(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}\)) [59, 60].
In Xe-Xe collisions, the measurements are conducted in four different centrality classes: 0-30%, 30-50%, 50-70%, and 70-90%. The centrality classes of 0-30% and 70-90% represent central and peripheral collisions, respectively. On the other hand, in pp collisions, the measurements are performed in nine different multiplicity classes, as listed in Table 1, with class I having the highest multiplicity and class IX having the lowest multiplicity.
Charged tracks in a selected event are reconstructed using the ITS [58] and Time Projection Chamber (TPC) [61] detectors, which are located within a solenoid that provides a homogeneous magnetic field. In order to ensure good track quality, a set of track selection criteria are used, as done in previous works [40, 62]. Charged tracks coming from the primary collision vertex are selected with minimum \(p_{\rm T}\) of 0.15 GeV/\(c\) and \(|\eta|<0.8\). Selected tracks must have at least one hit in the two innermost layers of the ITS and must have crossed a minimum of 70 out of total 159 rows along the transverse readout plane of the TPC. The maximum \(\chi^{2}\) per space point in the TPC and ITS obtained from the track fit are required to be 4 and 36, respectively. To minimize the contribution of secondary charged particles, the distance of closest approach in the transverse plane of reconstructed tracks to the primary vertex (\({\rm DCA}_{\rm xy}\)) is required to be smaller than 7\(\sigma\), where \(\sigma\) is the \({\rm DCA}_{\rm xy}\) resolution. The \({\rm DCA}_{\rm xy}\) resolution is found to be \(p_{\rm T}\) dependent and is parameterized as \(\sigma=0.0105+0.0350/(p_{\rm T})^{1.1}\). The DCA in the longitudinal direction is required to be smaller than 2 cm. Selected charged particles are further identified via the TPC and the Time Of Flight (TOF) [63] detectors using their specific ionization energy loss \({\rm d}E/{\rm d}x\) in the TPC and flight time measured in the TOF. Pions (\(\pi\)) and kaons (K) are identified with the condition that their specific energy loss lies within 2 standard deviations (\(\sigma_{\rm TPC}\)) (for \(p>0.4\)), 4\(\sigma_{\rm TPC}\) (for \(0.3<p<0.4\)) and 6\(\sigma_{\rm TPC}\) (for \(p<0.3\)) from their expected \({\rm d}E/{\rm d}x\), where \(\sigma_{\rm TPC}\) corresponds to the \({\rm d}E/{\rm d}x\) resolution (typically \(\sim\)5% of the measured \({\rm d}E/{\rm d}x\) value) of the TPC. Furthermore, if the hit for a track is available in the TOF, the measured time of flight is required to be within 3\(\sigma\) from its expected value
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
**V0M (\%)** & 0–1 & 1–5 & 5–10 & 10–20 & 20–30 & 30–40 & 40–50 & 50–70 & 70–100 \\ \hline
**Multiplicity classes** & I & II & III & IV & V & VI & VII & VIII & IX \\ \hline \end{tabular}
\end{table}
Table 1: Analyzed multiplicity classes in pp collisions at \(\sqrt{s}=5.02\) TeV
for each particle species [64].
## 3 Analysis details
The K\({}^{*0}\) meson, being a short-lived resonance, is reconstructed at midrapidity (\(|y|<0.5\)) using the invariant mass method [5] via its hadronic decay channel K\({}^{*}(892)^{0}\rightarrow\) K\({}^{\pm}\pi^{\mp}\), which has a branching ratio (BR) of 66% [65].
Oppositely charged kaons and pions are paired in the same event to reconstruct the resonance signal. The resulting invariant-mass distribution of unlike-sign K\(\pi\) pairs consists of a signal on top of a significant combinatorial background, which is estimated using the mixed-event method [40] (for 0.4 \(<p_{\rm T}<\) 0.8 GeV/\(c\) in Xe-Xe collisions, like-sign pairs from the same event [40] are used to get a better description of the combinatorial background). The mixed-event invariant mass distribution is constructed by combining kaons from one event with oppositely charged pions from five other events. Only events with a similar topology, such as an absolute difference in the \(z\)-coordinate of their collision vertex of less than 1 cm and a centrality (for Xe\(-\)Xe) or multiplicity percentile (for pp) difference of less than 5%, are mixed. The mixed-event background is scaled to match the unlike-sign foreground distribution in the invariant mass range 1.1-1.15 GeV/\(c^{2}\). The left panel of Fig. 1 shows the invariant-mass distribution of unlike-sign \(\pi\)K pairs from the same event along with the rescaled mixed-event background. The unlike-sign \(\pi\)K invariant mass distribution after mixed-event background subtraction is shown in the right panel of Fig. 1. The combinatorial-background-subtracted invariant mass distribution consists of the K\({}^{*0}\) signal and a residual background of correlated pairs. The correlated background pairs originate from jets, decays of resonances with misidentified daughters, and decays with multiple daughters. The combinatorial-background-subtracted invariant mass distribution is fitted with a combination of a non-relativistic Breit-Wigner distribution and a polynomial of second order. The Breit-Wigner distribution describes the K\({}^{*0}\) signal, whereas the residual background is modelled using the polynomial function. The width of K\({}^{*0}\) is kept fixed to its vacuum value in the fit procedure to estimate the signal, whereas it is allowed to vary freely to estimate the systematic uncertainty. Finally, the raw yields of K\({}^{*0}\) in each \(p_{\rm T}\) interval and event class are obtained from the integral of the Breit-Wigner distribution as done in Refs. [34, 43].
Figure 1: The left panel shows the invariant mass distribution of unlike sign \(\pi\)K pairs from same and mixed events. The right panel shows the same but after the mixed-event background subtraction. The mixed event background subtracted invariant mass distribution is fitted with a combination of Breit–Wigner function [5] and second order polynomial distribution. The Breit–Wigner distribution represents the K\({}^{*0}\) signal and the second order polynomial describes the residual background.
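To make the signal-extraction step concrete, the following minimal sketch (Python with NumPy/SciPy, using toy numbers rather than ALICE data) fits a mixed-event-subtracted invariant-mass distribution with a non-relativistic Breit-Wigner, whose width is fixed to the vacuum value as in the default fit, plus a second-order polynomial for the residual background; the raw yield is the integral of the Breit-Wigner component.

```python
import numpy as np
from scipy.optimize import curve_fit

M_KSTAR, GAMMA_KSTAR = 0.8955, 0.0473   # vacuum mass and width in GeV/c^2

def breit_wigner(m, raw_yield, m0):
    # Non-relativistic Breit-Wigner with the width fixed to its vacuum value;
    # 'raw_yield' is the integral of this component
    return raw_yield / (2.0 * np.pi) * GAMMA_KSTAR / ((m - m0) ** 2 + GAMMA_KSTAR ** 2 / 4.0)

def signal_plus_bkg(m, raw_yield, m0, a0, a1, a2):
    # Signal plus second-order polynomial describing the residual correlated background
    return breit_wigner(m, raw_yield, m0) + a0 + a1 * m + a2 * m ** 2

# 'm_centres' and 'counts' stand for the mixed-event-subtracted invariant-mass
# distribution in one pT interval and event class (toy numbers, not ALICE data)
rng = np.random.default_rng(1)
m_centres = np.linspace(0.76, 1.04, 70)
counts = signal_plus_bkg(m_centres, 5.0e3, M_KSTAR, 150.0, 0.0, 0.0) + rng.normal(0.0, 20.0, 70)

popt, pcov = curve_fit(signal_plus_bkg, m_centres, counts,
                       p0=[1.0e3, M_KSTAR, 100.0, 0.0, 0.0])
raw_yield, raw_yield_err = popt[0], np.sqrt(pcov[0, 0])
print(raw_yield, raw_yield_err)
```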
The extracted raw yields (\(N^{\rm raw}\)) are further corrected for detector acceptance and reconstruction efficiency (\(A\times\epsilon_{\rm rec}\)) and the BR of the decay channel. The \(A\times\epsilon_{\rm rec}\) is estimated using dedicated Monte Carlo (MC) event generators, PYTHIA8 [66] for pp collisions and HIJING [67] for Xe-Xe collisions, with particles propagated through a simulation of the ALICE detector using GEANT3 [68]. A weighting procedure of the \(A\times\epsilon_{\rm rec}\) is further used to account for the variation of \(A\times\epsilon_{\rm rec}\) over the width of a \(p_{\rm T}\) interval in the measured spectrum and for the mismatch in the shape of the spectrum in data and MC simulation [6]. The input \(p_{\rm T}\) distribution in MC is adjusted to match the real distribution using \(p_{\rm T}\)-dependent weights. These are defined as the ratio between the measured \(p_{\rm T}\) distribution after all corrections are applied and the default distribution in MC. In the first iteration, an appropriate fit function with parameters taken from similar analyses is used to parameterize the \(p_{\rm T}\) shape. After all corrections, the \(p_{\rm T}\) spectrum is fitted with the fit function again and the updated parameters are used to modify the weights in the next iteration. Such an iterative procedure is repeated until convergence. Finally, the yields are normalized by the number of accepted events (\(N^{\rm acc}_{\rm event}\)) to obtain the corrected \(p_{\rm T}\) spectrum in different event classes. Measurements in pp collisions are further corrected for the event loss and the signal loss, evaluated from the MC simulation. The signal loss correction (\(\rm f_{SL}\)) for \(\rm K^{*0}\) is calculated for each multiplicity class by taking the ratio of the simulated \(\rm K^{*0}\)\(p_{\rm T}\) spectrum before trigger and event selection to the corresponding \(p_{\rm T}\) spectrum after applying all the selections. The \(\rm f_{SL}\) is dominant at low \(p_{\rm T}\) in the 70-100% multiplicity class, with a maximum value of 22%. The event loss correction (\(\rm f_{ev}\)) corresponds to the fraction of \(\rm INEL>0\) events that do not pass the event-selection criteria and is estimated in [8]. The \(\rm f_{ev}\) does not depend on the particle species or on \(p_{\rm T}\), and its value spans from 0.99 in the 0-1% multiplicity class to 0.71 in the 70-100% multiplicity class. The corrected \(p_{\rm T}\) spectrum can be expressed as
\[\frac{1}{N_{\rm event}}\frac{{\rm d}^{2}N}{{\rm d}y\,{\rm d}p_{\rm T}}=\frac{1}{N^{\rm acc}_{\rm event}}\frac{{\rm d}^{2}N^{\rm raw}}{{\rm d}y\,{\rm d}p_{\rm T}}\frac{{\rm f}_{\rm ev}\,{\rm f}_{\rm SL}}{(A\times\epsilon_{\rm rec})\,{\rm BR}}. \tag{2}\]
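For one \(p_{\rm T}\) bin, Eq. (2) amounts to a simple rescaling of the raw yield; a minimal sketch (Python; all numerical inputs are placeholders, not ALICE values) is:

```python
BR_KSTAR = 0.66   # K*(892)^0 -> K pi branching ratio

def corrected_yield(raw_yield, n_events_acc, acc_times_eff, f_ev=1.0, f_sl=1.0):
    """Apply Eq. (2) in one pT bin: acceptance-times-efficiency, branching-ratio,
    event-loss and signal-loss factors, normalised to the accepted events."""
    return raw_yield * f_ev * f_sl / (n_events_acc * acc_times_eff * BR_KSTAR)

# Placeholder numbers for one pT bin in one event class (illustrative only);
# f_ev and f_sl should be taken from the multiplicity-dependent values in the text
print(corrected_yield(raw_yield=5.0e3, n_events_acc=1.0e8, acc_times_eff=0.25))
```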
## 4 Systematic uncertainties
Systematic uncertainties on the measured \(\rm K^{*0}\) yields originate from various sources, including the signal extraction method, track selection and particle identification criteria, the method used to match track segments in the ITS with tracks in the TPC, as well as uncertainties in the material budget and interaction cross section. The resulting changes in the \(\rm K^{*0}\) yields for each \(p_{\rm T}\) and multiplicity (centrality) interval, obtained from repeating the full analysis chain with the variations and corrections described below, are incorporated as systematic uncertainties. Table 2 provides a summary of the systematic uncertainties on the measured \(\rm K^{*0}\) yields. The reported uncertainties in the table are averaged over all centrality/multiplicity classes and presented for a low- and high-\(p_{\rm T}\) interval.
To evaluate the signal extraction uncertainty, several factors are varied, such as fitting ranges, mixed
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
Systematic variation & \multicolumn{2}{|c|}{pp [\(p_{\rm T}\) (GeV/\(c\))]} & \multicolumn{2}{|c|}{Xe–Xe [\(p_{\rm T}\) (GeV/\(c\))]} \\ \hline
 & 0–0.4 & 10.0–14.0 & 0.4–0.8 & 8.0–12.0 \\ \hline
Signal extraction (\%) & 7.4 & 9.6 & 12.7 & 11.5 \\ \hline
Primary track selection (\%) & 1.9 & 5.0 & 7.2 & 7.1 \\ \hline
Particle identification (\%) & 1.4 & 5.5 & 7.1 & 7.8 \\ \hline
ITS–TPC matching (\%) & 2 & negl. & 6.4 & 8.6 \\ \hline
Material budget (\%) & 1.8 & negl. & 1.4 & negl. \\ \hline
Hadronic interaction (\%) & 2.6 & negl. & 2.3 & negl. \\ \hline
Total (\%) & 8.7 & 12.3 & 17.6 & 17.8 \\ \hline \end{tabular}
\end{table}
Table 2: Systematic uncertainties on measured \(\rm K^{*0}\) yield in pp and Xe–Xe collisions at \(\sqrt{s}=5.02\) TeV and \(\sqrt{s_{\rm NN}}=5.44\) TeV respectively. The systematic uncertainties are shown for different sources for a low- and a high-\(p_{\rm T}\) interval.
event background rescaling region, residual background fit functions, and yield extraction methods. The default case involved fixed-width fits to the invariant mass distributions, based on the background shape. To assess the systematic uncertainty, the boundaries of the fitting ranges are adjusted by 20 MeV/\(c^{2}\) on both sides. The rescaling of the mixed-event background distribution is shifted to different ranges to examine its impact. The residual background is modeled using a third-order polynomial to study systematic effects. For the primary track selection, the criteria are varied following the procedure described in Ref. [40]. Uncertainties associated with the identification of primary daughter tracks are estimated by varying the selection criteria in the TPC and TOF. Furthermore, uncertainties related to the material budget and hadronic cross section are obtained from Ref. [40]. The total uncertainty, obtained by summing the uncertainties from each source in quadrature, is averaged over all multiplicity classes. In pp collisions, the total uncertainty ranges from 6.5% to 12.3%, while in Xe-Xe collisions, it ranges from 15% to 18%.
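To make the combination explicit, a minimal sketch (Python; the per-source numbers simply restate the low-\(p_{\rm T}\) pp column of Table 2) of the quadrature sum used for the total uncertainty:

```python
import numpy as np

# Relative uncertainties (%) per source for one pT interval (pp, 0-0.4 GeV/c column of Table 2)
sources = {
    "signal extraction": 7.4,
    "primary track selection": 1.9,
    "particle identification": 1.4,
    "ITS-TPC matching": 2.0,
    "material budget": 1.8,
    "hadronic interaction": 2.6,
}

# Sum in quadrature; the result is close to the 8.7% total quoted in Table 2
# (small differences arise from rounding of the individual entries)
total = np.sqrt(sum(v ** 2 for v in sources.values()))
print(f"total systematic uncertainty: {total:.1f}%")
```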
## 5 Results
The K\({}^{*0}\)\(p_{\rm T}\) spectra in pp collisions at \(\sqrt{s}=5.02\) TeV for different multiplicity classes after all corrections mentioned in Sec. 3 are shown in the upper panel of Fig. 2. The lower panel of Fig. 2 shows the ratios of the K\({}^{*0}\)\(p_{\rm T}\) spectra in different multiplicity classes to the corresponding spectrum in multiplicity integrated (INEL\(>\)0) pp collisions. An increase in the inverse slopes of the \(p_{\rm T}\) spectra from low to high multiplicity is clearly visible for \(p_{\rm T}<4\) GeV/\(c\). However, at higher \(p_{\rm T}\), the spectra in different multiplicity classes have the same shape, indicating that the low \(p_{\rm T}\) processes are primarily responsible for the change in the shape of the \(p_{\rm T}\) spectra from low to high multiplicity classes. The corrected \(p_{\rm T}\) distributions for K\({}^{*0}\) in four different centrality classes of Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV are shown in the left panel of Fig. 3. The right panel of Fig. 3 shows the comparison of the K\({}^{*0}\)\(p_{\rm T}\) spectrum between Xe-Xe and Pb-Pb collisions with similar final-state charged-particle multiplicity. At similar multiplicity values, the K\({}^{*0}\)\(p_{\rm T}\) distributions in Xe-Xe and Pb-Pb collisions are consistent within uncertainties. The
Figure 2: Upper panel: The \(p_{\rm T}\) spectra of K\({}^{*0}\) in various multiplicity classes of pp collisions at \(\sqrt{s}=5.02\) TeV. Lower panel: The ratios of the multiplicity-dependent \(p_{\rm T}\) spectra to the multiplicity-integrated INEL\(>0\) spectra. The statistical and systematic uncertainties are shown as bars and boxes, respectively.
final-state charged-particle multiplicity is a proxy of the volume of the produced matter. It is similar in the central collision of medium (Xe) and mid-central collisions of large (Pb) size nuclei. This indicates that the physics processes such as hadronic rescattering and radial flow, which determine the shape of the \(p_{\rm T}\) distribution in heavy-ion collisions, have a similar effect on the K\({}^{*0}\)\(p_{\rm T}\) spectra irrespective of the size of the colliding nuclei.
Figure 4: The \({\rm d}N/{\rm d}y\) (left panel) and \(\langle p_{\rm T}\rangle\) (right panel) of K\({}^{*0}\) as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle^{1/3}_{|\eta|<0.5}\) in pp collision at \(\sqrt{s}=5.02\) TeV and in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. Measurements are compared with the results obtained in p–Pb [5] and Pb–Pb [6] collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. Bars and shaded boxes correspond to the statistical and systematic uncertainties, respectively.
Figure 3: The left panel shows the \(p_{\rm T}\) distributions of K\({}^{*0}\) meson in four different centrality classes of Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. The right panel shows the comparison between the K\({}^{*0}\)\(p_{\rm T}\) spectrum in 0–30% Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV and in 20–30% Pb–Pb [6] collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV, both having similar multiplicities. The statistical and systematic uncertainties are shown by bars and boxes, respectively.
The transverse momentum integrated \(\rm K^{*0}\) yield \(\rm dN/dy\) and average transverse momentum \(\langle p_{\rm T}\rangle\) are extracted from the measured \(p_{\rm T}\) spectrum and the extrapolation to the unmeasured regions using a blast-wave function [6]. In pp collisions, \(\rm K^{*0}\) is measured down to \(p_{\rm T}=0\) GeV/\(c\). Therefore, no low-\(p_{\rm T}\) extrapolation is used to extract the \(\rm dN/dy\) and \(\langle p_{\rm T}\rangle\) in pp collisions. The contribution of the extrapolation to the extracted \(\rm dN/dy\) is \(\sim\)9% (\(\sim\)13%) in central (peripheral) Xe-Xe collisions. The systematic uncertainties on the extracted \(\rm dN/dy\) and \(\langle p_{\rm T}\rangle\) are estimated by varying the data points randomly up and down within their systematic uncertainty to obtain the softest and hardest spectra. An additional systematic uncertainty due to the extrapolation of \(p_{\rm T}\) spectra to \(p_{\rm T}=0\) GeV/\(c\) is evaluated in Xe\(-\)Xe collisions by using different fit functions (Levy-Tsallis, Boltzmann) for the extrapolation [69, 70]. The systematic uncertainty from the extrapolation is \(\sim\)2% and \(\sim\)1.7% on \(\rm dN/dy\) and \(\langle p_{\rm T}\rangle\), respectively.
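For reference, a minimal sketch (Python, with toy bin values) of how \(\rm dN/dy\) and \(\langle p_{\rm T}\rangle\) follow from a binned, fully corrected spectrum, leaving aside the blast-wave extrapolation to the unmeasured region described above:

```python
import numpy as np

# Binned, fully corrected spectrum d^2N/(dy dpT) in one event class
# (toy values; the real analysis also adds the extrapolated contribution)
pt_edges = np.array([0.0, 0.4, 0.8, 1.2, 1.6, 2.4, 3.2, 4.0])          # GeV/c
spectrum = np.array([0.020, 0.050, 0.045, 0.030, 0.015, 0.005, 0.001])  # per (GeV/c)

widths = np.diff(pt_edges)
centres = 0.5 * (pt_edges[:-1] + pt_edges[1:])

dN_dy = np.sum(spectrum * widths)                       # pT-integrated yield
mean_pt = np.sum(centres * spectrum * widths) / dN_dy   # mean transverse momentum
print(dN_dy, mean_pt)
```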
Figure 4 shows the \(\rm dN/dy\) (left panel) and \(\langle p_{\rm T}\rangle\) (right panel) of \(\rm K^{*0}\) as a function of \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) in pp collisions at \(\sqrt{s}=5.02\) TeV and in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV, where \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) is proportional to the linear (radial) path through the produced matter. Measurements are compared with the results obtained in p-Pb [5] and Pb-Pb collisions [6] at \(\sqrt{s_{\rm NN}}=5.02\) TeV. A smooth evolution of the \(\rm dN/dy\) as a function of \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) is observed across all the collision systems. This suggests that \(\rm K^{*0}\) production is solely driven by final-state charged-particle multiplicity, which is used as a proxy for the system size [71]. The \(\langle p_{\rm T}\rangle\) of \(\rm K^{*0}\) increases with \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) for all collision systems, indicating the increase of radial flow velocity from the low-multiplicity event class to the high-multiplicity event class. In contrast to the \(\rm dN/dy\), intensive variable \(\langle p_{\rm T}\rangle\) shows a strong dependency on the colliding system and does not scale with charged-particle multiplicity across all collision systems. The \(\langle p_{\rm T}\rangle\) of \(\rm K^{*0}\) increases more steeply in small collision systems compared to heavy-ion collisions. For \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\)\(>2\) the following ordering of \(\langle p_{\rm T}\rangle\) is observed for a fixed multiplicity: \(\langle p_{\rm T}\rangle\) (pp) \(>\langle p_{\rm T}\rangle\) (p-Pb) \(>\langle p_{\rm T}\rangle\) (Xe-Xe) \(\sim\langle p_{\rm T}\rangle\) (Pb-Pb). In the hydrodynamical expansion modeled by the blast wave, it is observed that small collision systems exhibit a larger pressure gradient and faster expansion of produced matter compared to heavy-ion collisions with similar charged-particle multiplicity. Furthermore, the \(\langle p_{\rm T}\rangle\) of \(\rm K^{*0}\) in Xe\(-\)Xe and Pb\(-\)Pb collisions are comparable at similar \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\), suggesting similar dynamical evolution of the system produced in the collision of large and medium size nuclei at LHC energy.
The left panel of Fig. 5 shows the \(p_{\rm T}\)-integrated \(\rm K^{*0}\)/K yield ratio as a function of \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\). Measurements in Xe-Xe collisions are compared with the yield ratios obtained in pp, p-Pb [5] and Pb-Pb [6] collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The kaon yields in pp collisions at \(\sqrt{s}=5.02\) TeV are obtained through an extrapolation of kaon yields from pp collisions at \(\sqrt{s}=13\) TeV [72] and \(\sqrt{s}=7\) TeV [7]. To perform this extrapolation, the yields at both \(\sqrt{s}=13\) and \(\sqrt{s}=7\) TeV are fitted as a function of \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) with a first-order polynomial. The resulting fit function value is then used to estimate the kaon yields at the corresponding \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\) for \(\sqrt{s}=5.02\) TeV. To assess the uncertainty in the yield estimation, a Gaussian distribution is constructed for each data point. The mean of the distribution corresponds to the value of the data point, while the standard deviation (\(\sigma\)) represents the associated statistical or systematic uncertainty. For each data point, a random value is sampled from its corresponding Gaussian distribution. It is assumed that the data points are uncorrelated with multiplicity. A linear fit is then applied to these randomly sampled values. This process is repeated thousands of times, generating multiple linear fits. The standard deviation of the fitting values obtained from these repetitions is considered as the uncertainty of the yield for a given multiplicity. The \(\rm K^{*0}\)/K yield ratio in different collision systems shows a smooth evolution with \(\langle\rm dN_{ch}/d\eta\rangle_{|\eta|<0.5}^{1/3}\), and is independent of the collision system at similar final-state charged-particle multiplicity. This further confirms the smooth evolution of hadron chemistry, observed for other light flavour hadrons [62]. The \(\rm K^{*0}\)/K yield ratio decreases with increasing event multiplicity. This decrease in the \(\rm K^{*0}\)/K yield ratio can be understood as the rescattering of \(\rm K^{*0}\) meson's decay daughters inside the hadronic phase [34]. Since the lifetime of
K\({}^{*0}\) is comparable to that of the hadronic phase, its decay products scatter in their passage through the hadronic medium changing their momenta and hence affecting the reconstruction of the parent particle, thereby decreasing the measured yield. Measurements in heavy-ion collisions are further compared with the EPOS3 model calculations with and without the hadronic phase [73]. The EPOS3 model calculations are for Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=2.76\) TeV, as no significant quantitative differences are expected between the two energies. In the presence of the hadronic phase, which is modelled by the UrQMD model [74], the EPOS3 generator qualitatively reproduces the multiplicity dependence of the K\({}^{*0}\)/K yield ratio. The canonical ensemble-based thermal model \(\gamma_{\rm c}\) CSM [26], which successfully describes the production of other light flavour hadrons in small collision systems and heavy-ion collisions, does not explain the multiplicity dependence of K\({}^{*0}\)/K yield ratio. The yield ratio is suppressed compared to the \(\gamma_{\rm c}\) CSM, and the suppression is more prominent in central Xe-Xe and Pb-Pb collisions. In the recent development of the Hadron Resonance Gas (HRG) model, the hadronic phase effect is modelled by a concept of partial chemical equilibrium (PCE) [52]. In this model, decays and regenerations of the resonances obey the law of mass action, ensuring an equilibrium between the abundance of different resonances and their decay products. By applying the HRG-PCE calculation, the measured data points of particle ratios (K\({}^{*0}\)/K), in heavy-ion collisions can be accurately described. The K\({}^{*0}\)/K yield ratio can also be used to get an estimate of the lower bound of the hadronic phase lifetime \(\tau\), i.e. the time between chemical and kinetic freeze-out. The K\({}^{*0}\)/K yield ratio at kinetic freeze-out can be expressed as \([{\rm K}^{*0}/{\rm K}]_{\rm kinetic}=[{\rm K}^{*0}/{\rm K}]_{\rm chemical }\times e^{-\tau/\tau_{K^{*0}}}\), where \(\tau_{K^{*0}}\) is the vacuum lifetime of K\({}^{*0}\), taken to be 4.16 fm\(/c\). The \([{\rm K}^{*0}/{\rm K}]\) yield ratio in the 70-100% multiplicity class of pp collisions at \(\sqrt{s}=13\) TeV is used as a proxy for the \([{\rm K}^{*0}/{\rm K}]_{\rm chemical}\) and the measured K\({}^{*0}\)/K yield ratio in different multiplicity or centrality classes of pp, p-Pb, Xe-Xe, and Pb-Pb collisions are used as \([{\rm K}^{*0}/{\rm K}]_{\rm kinetic}\). The above procedure estimates the lower bound of the \(\tau\) with the assumption that there is no regeneration of K\({}^{*0}\) in the hadronic medium. The hadronic phase lifetime obtained with this simple model is further scaled by a Lorentz factor \(\sqrt{1+(\frac{(p_{\rm T})}{\rm mass\ of\ K^{*0}})^{2}}\) and the extracted \(\tau\) values are shown in the right panel of Fig. 5 as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}^{1/3}\). The hadronic phase lifetime evolves smoothly with multiplicity. The lifetime of the hadronic phases produced in Xe-Xe and Pb-Pb collisions are consistent with each other at similar charged-particle multiplicity.
Figure 5: The left panel shows the measured K\({}^{*0}\)/K yield ratio along with model calculation. The right panel shows the lower limit of hadronic phase lifetime as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}^{1/3}\) in different collision systems. Bars and shaded boxes represent the statistical and systematic uncertainties, respectively.
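The lower bound on the hadronic phase lifetime discussed above follows from a short calculation; a minimal sketch (Python; the yield ratios and \(\langle p_{\rm T}\rangle\) below are illustrative placeholders rather than the measured ALICE values) is:

```python
import numpy as np

TAU_KSTAR = 4.16    # K*0 vacuum lifetime in fm/c
M_KSTAR = 0.8955    # K*0 mass in GeV/c^2

def hadronic_phase_lifetime(ratio_kinetic, ratio_chemical, mean_pt):
    """Lower bound on the hadronic-phase lifetime (fm/c), assuming no K*0
    regeneration, scaled by the Lorentz factor of the K*0 as in the text."""
    tau = TAU_KSTAR * np.log(ratio_chemical / ratio_kinetic)
    gamma = np.sqrt(1.0 + (mean_pt / M_KSTAR) ** 2)
    return gamma * tau

# Placeholder inputs: [K*0/K] at kinetic and chemical freeze-out and <pT> in GeV/c
print(hadronic_phase_lifetime(ratio_kinetic=0.20, ratio_chemical=0.33, mean_pt=1.3))
```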
The time spanned by the hadronic phase is reflected in the temperature difference between the chemical and the kinetic freeze-out. The kinetic freeze-out temperature is extracted using the HRG-PCE [52] model fit to the experimentally measured yields of \(\pi^{\pm}\), K\({}^{\pm}\), p(\(\overline{\rm p}\)), \(\phi\)[6], and K\({}^{*0}\) in the 0-30%, 30-50% and 50-70% centrality classes for Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. The parameters of the fit are the baryon chemical potential, chemical freeze-out temperature, kinetic freeze-out temperature, and freeze-out volume of the system. The baryon chemical potential and chemical freeze-out temperature are fixed at 0 and 155 MeV, respectively, at LHC energies [76, 77, 78, 79]. Figure 6 shows the kinetic freeze-out temperature obtained from the HRG-PCE fit in Xe-Xe collisions, and the results are compared with the Pb-Pb measurements [75]. The freeze-out temperature is found to increase systematically while moving from central to peripheral centrality classes, both for Xe-Xe and Pb-Pb collisions, due to the longer duration of the hadronic phase in more central collisions, though the uncertainties are larger in Xe-Xe collisions. The freeze-out temperatures in both collision systems are consistent within uncertainties at similar charged-particle multiplicity. The difference between the chemical and kinetic freeze-out temperatures supports the presence of a hadronic phase with a finite lifetime in Xe-Xe collisions, a long-lived one in central collisions, and a short-lived one in peripheral collisions.
Furthermore, to understand the \(p_{\rm T}\) dependence of the hadronic rescattering effect, the K\({}^{*0}\)/K yield ratios in Xe-Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV are shown in Fig. 7 for two different \(p_{\rm T}\) intervals, \(0.4<p_{\rm T}<2.0\) GeV\(/c\) and \(2.0<p_{\rm T}<4.0\) GeV\(/c\). The results are also compared with the \(\phi\)/K [62] yield ratio. In the low \(p_{\rm T}\) range, the K\({}^{*0}\)/K yield ratio decreases from peripheral Xe-Xe collisions to central Xe-Xe collisions, whereas \(\phi\)/K remains more or less constant with system size. The observed low \(p_{\rm T}\) suppression of measured K\({}^{*0}\) yield can be attributed to the rescattering effect of the decay products of K\({}^{*0}\) in the hadronic phase. The lifetime of \(\phi\) mesons is one order of magnitude larger than that of K\({}^{*0}\); therefore, the \(\phi\) meson decay daughters are not expected to be affected by the rescattering in the hadronic phase. As a result, the \(\phi\)/K yield ratio remains constant within uncertainties across the whole range of multiplicities. In contrast to low \(p_{\rm T}\), at high \(p_{\rm T}\), both the K\({}^{*0}\)/K and \(\phi\)/K yield ratios remain flat as a
Figure 6: The kinetic freeze-out temperature estimated using the fit of HRG-PCE model to the measured yields of \(\pi^{\pm}\), K\({}^{\pm}\), p(\(\overline{\rm p}\)), \(\phi\), K\({}^{*0}\) in different centrality classes of Xe\(-\)Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. Results are compared with extracted kinetic freeze-out temperature in Pb\(-\)Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV [75].
function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle^{1/3}_{|\eta|<0.5}\). This suggests that the rescattering effect is a low transverse momentum phenomenon.
The left panel of Fig. 8 shows the comparison of the nuclear modification factor \(R_{\rm AA}\) of K\({}^{*0}\) in Xe-Xe and Pb-Pb systems at similar final-state charged-particle multiplicity. The \(R_{\rm AA}\) values are found to be less than unity at high \(p_{\rm T}\) in both systems. Similar \(R_{\rm AA}\) is observed at both low momentum (hydro-like expansion) and high momentum (partonic energy loss) in Xe\(-\)Xe and Pb\(-\)Pb collisions at similar
Figure 8: The left panel shows the nuclear modification factor as a function of \(p_{\rm T}\) for the K\({}^{*0}\) meson in 0\(-\)30% Xe\(-\)Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV and in 20\(-\)30% Pb\(-\)Pb collisions [6] at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The right panel shows the R\({}_{\rm AA}\) of K\({}^{*0}\) as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle^{1/3}_{|\eta|<0.5}\) for \(4.0<p_{\rm T}<12.0\) GeV/\(c\) in Xe–Xe collisions. The results are compared to the \(R_{\rm AA}\) of charged hadron [56]. Statistical and systematic uncertainties are represented by bars and shaded boxes.
Figure 7: The K\({}^{*0}\)/K and \(\phi\)/K yield ratios as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle^{1/3}_{|\eta|<0.5}\) in Xe–Xe collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV. The left and right panels show the measurements for a low-\(p_{\rm T}\) and a high-\(p_{\rm T}\) interval, respectively. Statistical and systematic uncertainties are represented by bars and shaded boxes.
charged-particle multiplicity. The centrality dependence of energy loss is studied by measuring the \(p_{\rm T}\)-integrated \(R_{\rm AA}\) in the range \(4.0<p_{\rm T}<12.0~{}{\rm GeV}/c\). The \(p_{\rm T}\)-integrated \(R_{\rm AA}\) of K\({}^{*0}\) as a function of \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}^{1/3}\) is shown in the right panel of Fig. 8. Measurements are compared with the results of charged hadrons and are found to be consistent within uncertainties. This suggests that jet quenching does not significantly affect the light-flavour particle species composition for the leading particles. The \(R_{\rm AA}\) is smaller in central Xe-Xe collisions compared to the peripheral collisions. This reflects more energy loss via multiple partonic interactions in central collisions, as expected from the longer path length traversed by the hard partons in central collisions.
## 6 Conclusion
The ALICE Collaboration has reported measurements of K\({}^{*0}\) meson at midrapidity (\(|y|<0.5\)) for different centrality and multiplicity classes in Xe-Xe and pp collisions at \(\sqrt{s_{\rm NN}}=5.44\) TeV and \(\sqrt{s}=5.02\) TeV, respectively. Both \(p_{\rm T}\)-integrated K\({}^{*0}\) yield and K\({}^{*0}\)/K yield ratio are found to smoothly evolve with \(\langle{\rm d}N_{\rm ch}/{\rm d}\eta\rangle_{|\eta|<0.5}^{1/3}\), independent of the size of the colliding nuclei, confirming a universal scaling of hadron chemistry or relative abundance of hadron species with final-state charged-particle multiplicity at LHC energies. In contrast, the \(\langle p_{\rm T}\rangle\), which depends on the radial expansion velocity of the produced matter, rises more steeply in smaller collision systems compared to the heavy-ion collisions. This indicates that the matter produced in small collision systems expands more rapidly compared to the system produced in heavy-ion collisions. The K\({}^{*0}\)/K ratio decreases with increasing final-state charged-particle multiplicity. This decrease in the K\({}^{*0}\)/K yield ratio can be attributed to the rescattering of decay daughters of K\({}^{*0}\) in the hadronic phase. In addition, the \(p_{\rm T}\)-differential yield ratio K\({}^{*0}\)/K confirms the dominance of rescattering effect at low \(p_{\rm T}\). Moreover, the nuclear modification factor for K\({}^{*0}\) is similar in Xe-Xe and Pb-Pb collisions at similar charged-particle multiplicity indicating a scaling of the parton energy loss with final-state charged-particle multiplicity, independent of the size of the collision system.
The decreasing K\({}^{*0}\)/K ratio is qualitatively described by the EPOS3 model in the presence of a hadronic afterburner. The best description of the measurement is provided by the PCE-based thermal model, which models the rescattering and the regeneration effect using the law of mass action. In contrast, the canonical-ensemble-based thermal model does not describe the measured K\({}^{*0}\)/K yield ratio. Furthermore, the lower limit of the hadronic phase lifetime is extracted using K\({}^{*0}\)/K yield ratios in different colliding systems. A smooth evolution of the lifetime is observed as a function of multiplicity. The kinetic freeze-out temperature is extracted using the HRG-PCE model. A higher temperature is obtained for more peripheral collisions, implying an earlier decoupling of the produced hadrons.
## Acknowledgements
The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the following funding agencies for their support in building and running the ALICE detector: A. I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation (ANSL), State Committee of Science and World Federation of Scientists (WFS), Armenia; Austrian Academy of Sciences, Austrian Science Fund (FWF): [M 2467-N36] and Nationalstiftung fur Forschung, Technologie und Entwicklung, Austria; Ministry of Communications and High Technologies, National Nuclear Research Center, Azerbaijan; Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Financiadora de Estudos e Projetos (Finep), Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) and Universidade Federal do Rio Grande do Sul (UFRGS),
Brazil; Bulgarian Ministry of Education and Science, within the National Roadmap for Research Infrastructures 2020-2027 (object CERN), Bulgaria; Ministry of Education of China (MOEC), Ministry of Science & Technology of China (MSTC) and National Natural Science Foundation of China (NSFC), China; Ministry of Science and Education and Croatian Science Foundation, Croatia; Centro de Aplicaciones Tecnologicas y Desarrollo Nuclear (CEADEN), Cubaenergia, Cuba; Ministry of Education, Youth and Sports of the Czech Republic, Czech Republic; The Danish Council for Independent Research | Natural Sciences, the VILLUM FONDEN and Danish National Research Foundation (DNRF), Denmark; Helsinki Institute of Physics (HIP), Finland; Commissariat a l'Energie Atomique (CEA) and Institut National de Physique Nucleaire et de Physique des Particules (IN2P3) and Centre National de la Recherche Scientifique (CNRS), France; Bundesministerium fur Bildung und Forschung (BMBF) and GSI Helmholtzzentrum fur Schwerionenforschung GmbH, Germany; General Secretariat for Research and Technology, Ministry of Education, Research and Religions, Greece; National Research, Development and Innovation Office, Hungary; Department of Atomic Energy Government of India (DAE), Department of Science and Technology, Government of India (DST), University Grants Commission, Government of India (UGC) and Council of Scientific and Industrial Research (CSIR), India; National Research and Innovation Agency - BRIN, Indonesia; Istituto Nazionale di Fisica Nucleare (INFN), Italy; Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS) KAKENHI, Japan; Consejo Nacional de Ciencia (CONACYT) y Tecnologia, through Fondo de Cooperacion Internacional en Ciencia y Tecnologia (FONCICYT) and Direccion General de Asuntos del Personal Academico (DGAPA), Mexico; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Netherlands; The Research Council of Norway, Norway; Commission on Science and Technology for Sustainable Development in the South (COMSATS), Pakistan; Pontificia Universidad Catolica del Peru, Peru; Ministry of Education and Science, National Science Centre and WUT ID-UB, Poland; Korea Institute of Science and Technology Information and National Research Foundation of Korea (NRF), Republic of Korea; Ministry of Education and Scientific Research, Institute of Atomic Physics, Ministry of Research and Innovation and Institute of Atomic Physics and University Politehnica of Bucharest, Romania; Ministry of Education, Science, Research and Sport of the Slovak Republic, Slovakia; National Research Foundation of South Africa, South Africa; Swedish Research Council (VR) and Knut & Alice Wallenberg Foundation (KAW), Sweden; European Organization for Nuclear Research, Switzerland; Suranaree University of Technology (SUT), National Science and Technology Development Agency (NSTDA), Thailand Science Research and Innovation (TSRI) and National Science, Research and Innovation Fund (NSRF), Thailand; Turkish Energy, Nuclear and Mineral Research Agency (TENMAK), Turkey; National Academy of Sciences of Ukraine, Ukraine; Science and Technology Facilities Council (STFC), United Kingdom; National Science Foundation of the United States of America (NSF) and United States Department of Energy, Office of Nuclear Physics (DOE NP), United States of America. In addition, individual groups or members have received support from: European Research Council, Strong 2020 - Horizon 2020 (grant nos. 
950692, 824093), European Union; Academy of Finland (Center of Excellence in Quark Matter) (grant nos. 346327, 346328), Finland.
|
2308.00777 | Observable Primordial Gravitational Waves from Cosmic Inflation | I will review briefly how inflation is expected to generate a stochastic
background of primordial gravitational waves (GWs). Then, I will discuss how
such GWs can be enhanced by a stiff period following inflation, enough to be
observable. I will present examples of this in the context of hybrid inflation
with $\alpha$-attractors, or a period of hyperkination in Palatini gravity. | Konstantinos Dimopoulos | 2023-08-01T18:33:24Z | http://arxiv.org/abs/2308.00777v1 | # Observable Primordial Gravitational Waves from Cosmic Inflation
###### Abstract
I will review briefly how inflation is expected to generate a stochastic background of primordial gravitational waves (GWs). Then, I will discuss how such GWs can be enhanced by a stiff period following inflation, enough to be observable. I will present examples of this in the context of hybrid inflation with \(\alpha\)-attractors, or a period of hyper-kination in Palatini gravity.
Keywords:primordial gravitational waves, cosmic inflation, \(\alpha\)-attractors, Palatini modified gravity
## 1 Introduction
The history of the Universe requires special initial conditions which are arranged by cosmic inflation. In a nutshell, cosmic inflation can be defined as a period of accelerated expansion in the Early Universe. Inflation results in the Universe being large, spatially flat and uniform, in accordance to observations [1]. Inflation also generates the primordial density perturbations (PDPs), which are necessary for galaxies and galaxy clusters to form [2]. The PDPs reflect themselves on the cosmic microwave background (CMB) radiation, through the Sachs-Wolfe effect. Agreement with CMB observations is spectacular, when inflation is quasi-de Sitter and space expands exponentially [3].
However, there is another generic prediction of inflation beyond the acoustic peaks observed in the CMB, namely the generation of a stochastic spectrum of primordial gravitational waves (GWs) [4]. This prediction will soon be tested, either indirectly, through CMB polarisation observations, or directly through interferometers (Advanced LIGO, LISA).
How does inflation produce these GWs? Below, I attempt a brief overview. I use natural units with \(c=\hbar=1\) and \(8\pi G=m_{P}^{-2}\), with \(m_{P}=2.43\times 10^{18}\,\)GeV being the reduced Planck mass. The signature of the metric is positive.
## 2 Particle production of gravitational waves during cosmic inflation
Following the recipe of linearised gravity, we consider a comoving perturbation of the metric \(h_{ij}\), such that the line element of spatially flat FRW spacetime is
\[{\rm d}s^{2}=a^{2}(\tau)[-{\rm d}\tau^{2}+(\delta_{ij}+h_{ij}){\rm d}x^{i}{\rm d }x^{j}]\,, \tag{1}\]
where \(a(\tau)\) is the scale factor of the Universe as a function of conformal time \(\tau\) and \(\delta_{ij}\) is equal to one when \(i=j\) and to zero otherwise, with \(i,j=1,2,3\). The metric perturbation is symmetric \(h_{ij}=h_{ji}\), traceless \(h_{i}^{i}=0\) and transverse \(\nabla_{i}h^{ij}=0\). This means that it corresponds to two degrees of freedom (6-1-3=2), which are the two polarisations, \(\oplus\) and \(\otimes\), of the GWs.
Thus, by Fourier transform, we can write
\[h_{ij}(\tau,{\bf x})=\sqrt{16\pi G}\int\frac{{\rm d}^{3}k}{(2\pi)^{3/2}}h_{ij} (\tau,{\bf k})\,e^{i{\bf k}\cdot{\bf x}}\,, \tag{2}\]
with
\[h_{ij}(\tau,{\bf k})=\sum_{s=\oplus,\otimes}h_{k}^{s}e_{ij}^{s}({\bf k})\,, \tag{3}\]
where \(e_{ij}^{s}\) is symmetric \(e_{ij}^{s}=e_{ji}^{s}\), traceless \(e_{i}^{s\ i}=0\) and transverse \(k^{i}e_{ij}^{s}=0\), with \(k^{i}\) being the 3-wavevector of the GWs.
The second-order GW action is
\[S_{\rm GW}=\frac{1}{64\pi G}\int{\rm d}^{4}x\sqrt{-g}\,g^{\mu\nu}\partial_{ \mu}h_{ij}\,\partial_{\nu}h^{ij}\,, \tag{4}\]
where \(g\) is the determinant of the metric \(g_{\mu\nu}\) and \(\mu,\nu=0,1,2,3\). From the above action one obtains the equation of motion (EoM), which reads
\[h^{\prime\prime}_{ij}+2\frac{a^{\prime}}{a}h^{\prime}_{ij}-\nabla^{2}h_{ij}=0 \;\Rightarrow\;h_{k}^{s\,\prime\prime}+2\frac{a^{\prime}}{a}h_{k}^{s\,\prime} +k^{2}h_{k}^{s}=0\,, \tag{5}\]
where the prime denotes derivative with respect to conformal time.
The above equations show that the metric perturbation polarisations behave as free massless scalar fields \(\psi_{k}^{s}(\tau)\) which can be written as \(h_{k}^{s}(\tau)=\sqrt{16\pi G}\psi_{k}^{s}(\tau)\). To study particle production, we introduce the Mukhanov-Sasaki variable \(v_{k}^{s}(\tau)\equiv a(\tau)\psi_{k}^{s}(\tau)\). In terms of this variable, the EoM becomes the well-known Mukhanov-Sasaki equation
\[v_{k}^{s\,\prime\prime}+\left(k^{2}-\frac{a^{\prime\prime}}{a}\right)v_{k}^{s} =0\,, \tag{6}\]
where \(v_{k}^{s}=ah_{k}^{s}m_{P}/\sqrt{2}\).
To proceed we quantize the metric perturbations (second quantization) by expanding them in terms of creation and annihilation operators as
\[v^{s}(\tau,{\bf x})=\int\frac{{\rm d}^{3}k}{(2\pi)^{3/2}}\left[v_{k}^{s}\hat{a }_{\bf k}^{s}\,e^{i{\bf k}\cdot{\bf x}}+(v_{k}^{s})^{*}\hat{a}_{\bf k}^{s\, \dagger}\,e^{-i{\bf k}\cdot{\bf x}}\right]\,, \tag{7}\]
where \(\hat{a}^{s}_{\bf k}\) and \(\hat{a}^{s\,\dagger}_{\bf k}\) are annihilation and creation operators respectively, which satisfy the algebra
\[[\hat{a}^{s}_{\bf k},\hat{a}^{r\,\dagger}_{\bf q}]=\delta^{sr}\delta^{(3)}({\bf q }-{\bf k})\quad\mbox{and}\quad[\hat{a}^{s}_{\bf k},\hat{a}^{r}_{\bf q}]=[\hat{a }^{s\,\dagger}_{\bf k},\hat{a}^{r\,\dagger}_{\bf q}]=0\,, \tag{8}\]
where the value of \(\delta^{sr}\) is unity when \(s=r\) or zero otherwise.
Inserting the above in Eq. (6), the solution for the mode functions is
\[v^{s}_{k}(\tau)=\frac{1}{\sqrt{2k}}\left(1-\frac{i}{k\tau}\right)e^{-ik\tau}\,. \tag{9}\]
The above solution, in the subhorizon limit \(-k\tau\to+\infty\) becomes \(v^{s}_{k}\to e^{-ik\tau}/\sqrt{2k}\), which is the well-known Bunch-Davies vacuum [5]. In the superhorizon limit \(-k\tau=\frac{k}{aH}\to 0\), the above solution becomes \(v^{s}_{k}\to\frac{i}{\sqrt{2k}}\frac{aH}{k}\), where \(H\) is the Hubble parameter. Thus, in the superhorizon limit we find
\[h^{s}_{k}=\frac{\sqrt{2}\,v^{s}_{k}}{am_{P}}=\frac{iH}{m_{P}k^{3/2}}\ \Rightarrow\ |h^{s}_{k}|^{2}=\frac{H^{2}}{m_{P}^{2}k^{3}}\approx\mbox{constant}\,, \tag{10}\]
where we considered that \(H\approx\) constant in quasi-de Sitter inflation.
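As a quick consistency check (a sketch added for illustration, not part of the derivation above), one can verify symbolically that the mode functions of Eq. (9) solve Eq. (6) in exact de Sitter space, where \(a=-1/(H\tau)\) and hence \(a^{\prime\prime}/a=2/\tau^{2}\):

```python
import sympy as sp

tau = sp.symbols('tau', negative=True)   # conformal time (negative during inflation)
k = sp.symbols('k', positive=True)

# Mode function of Eq. (9)
v = (1 / sp.sqrt(2 * k)) * (1 - sp.I / (k * tau)) * sp.exp(-sp.I * k * tau)

# In exact de Sitter, a(tau) = -1/(H*tau), so a''/a = 2/tau^2; Eq. (6) becomes
residual = sp.diff(v, tau, 2) + (k**2 - 2 / tau**2) * v
print(sp.simplify(residual))   # prints 0: v solves the Mukhanov-Sasaki equation
```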
Therefore, we see that on superhorizon scales inflation generates a spectrum of primordial GWs. The value of their spectrum is
\[{\cal P}_{h}(k)=\frac{k^{3}}{2\pi^{2}}\langle h_{ij}(k)\,h^{ij}(k)\rangle= \frac{k^{3}}{\pi^{2}}\sum_{s=\oplus,\otimes}|h^{s}_{k}|^{2}=\frac{2H^{2}}{\pi ^{2}m_{P}^{2}}=64\pi G\left(\frac{H}{2\pi}\right)^{2}\,. \tag{11}\]
The scalar perturbations corresponding to the PDPs have spectrum
\[{\cal P}_{\zeta}(k)=\frac{H^{2}}{8\pi^{2}\epsilon m_{P}^{2}}\,, \tag{12}\]
where \(\epsilon\equiv-\dot{H}/H^{2}\), with the dot denoting derivative with respect to the cosmic time. From the above, we find the consistency equation \(r\equiv{\cal P}_{h}/{\cal P}_{\zeta}=16\epsilon\), which can be tested in the near future. At the moment, the CMB observations impose only an upper bound on \(r\): \(0\leq r<0.036\)[6].
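For orientation, an illustrative numerical aside (not in the text above): combining Eqs. (11) and (12) with the bound \(r<0.036\) and the observed curvature power spectrum amplitude, assumed here to be \(\mathcal{P}_{\zeta}\simeq 2.1\times 10^{-9}\) (a standard CMB-normalisation value not quoted in this paper), gives an upper bound on the inflationary Hubble scale:

```python
import math

m_P = 2.43e18      # reduced Planck mass in GeV, as defined in the Introduction
P_zeta = 2.1e-9    # assumed amplitude of the curvature power spectrum (CMB normalisation)
r_max = 0.036      # observational upper bound on the tensor-to-scalar ratio

# P_h = 2 H^2 / (pi^2 m_P^2) = r * P_zeta   =>   H = pi * m_P * sqrt(r * P_zeta / 2)
H_max = math.pi * m_P * math.sqrt(r_max * P_zeta / 2)
print(f"H_inf < {H_max:.1e} GeV")   # roughly 5e13 GeV
```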
## 3 Density of the gravitational waves
In view of Eq. (4), the energy-momentum tensor of the GWs is
\[T^{\rm GW}_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_{\rm GW}}{\delta g^{ \mu\nu}}=\frac{\langle\nabla_{\mu}h_{ij}\nabla_{\nu}h^{ij}\rangle}{32\pi G}\,, \tag{13}\]
from which we can read the density \(\rho_{\rm GW}=a^{-2}T_{00}\). Using Eq. (5), we can write
\[\langle\rho_{\rm GW}\rangle=\int_{0}^{+\infty}\frac{k^{3}}{2\pi^{2}}\sum_{s= \oplus,\otimes}\frac{|h^{s\,\prime}_{k}|^{2}+k^{2}|h^{s}_{k}|^{2}}{a^{2}}\,{ \rm d}(\ln k)\,. \tag{14}\]
Similarly, we find the isotropic pressure \(p_{\rm GW}=a^{-2}T_{i}^{i}\), for which
\[\langle p_{\rm GW}\rangle=\int_{0}^{+\infty}\frac{k^{3}}{2\pi^{2}}\sum_{s=\oplus,\otimes}\frac{|{h_{k}^{s}}^{\prime}|^{2}-\frac{1}{3}k^{2}|h_{k}^{s}|^{2}}{a^{2 }}\,{\rm d}(\ln k)\,. \tag{15}\]
The above integrals are dominated by the subhorizon limit \(k\gg aH\). In this limit we have \(h_{k}^{s}\propto e^{-ik\tau}\), which implies \({h_{k}^{s}}^{\prime}=-ik\,h_{k}^{s}\Rightarrow|{h_{k}^{s}}^{\prime}|^{2}=k^{2} |h_{k}^{s}|^{2}\). Therefore, for the barotropic parameter we find
\[w_{{}_{\rm GW}}=\frac{p_{\rm GW}}{\rho_{\rm GW}}=\frac{\frac{2}{3}k^{2}|h_{k} ^{s}|^{2}}{2k^{2}|h_{k}^{s}|^{2}}=\frac{1}{3}\,. \tag{16}\]
Thus, we find that the density of the gravitational waves redshifts as radiation with the Universe expansion \(\rho_{\rm GW}\propto a^{-3(1+w_{{}_{\rm GW}})}=a^{-4}\).
In view of the above, the density parameter of the GWs per logarithmic momentum interval is
\[\Omega_{\rm GW}(k)\equiv\frac{1}{\rho_{c}}\frac{{\rm d}\rho_{\rm GW}}{{\rm d }\ln k}=\frac{k^{3}}{2\pi^{2}}\frac{8\pi G}{3H^{2}}\sum_{s=\oplus,\otimes}\frac {|{h_{k}^{s}}^{\prime}|^{2}+k^{2}|h_{k}^{s}|^{2}}{a^{2}}\,. \tag{17}\]
where \(\rho_{c}=3H^{2}/8\pi G\) is the critical density. The time evolution of \(\Omega_{\rm GW}(\tau,k)\) is given by
\[\Omega_{\rm GW}(\tau,k)=\frac{k^{2}\Delta_{h}^{2}(\tau,k)}{12a^{2}H^{2}}\,, \tag{18}\]
where \(\Delta_{h}^{2}(\tau,k)=T_{h}(\tau,k)\mathcal{P}_{h}(k)\). The transfer function is given by \(T_{h}=\frac{1}{2}(a_{k}/a)^{2}\), where \(a_{k}\) is the value of \(a(\tau)\) at the moment of horizon re-entry of the scale with momentum \(k\). Today we have \(T_{h}^{0}=\frac{1}{2}(a_{k}/a_{0})^{2}=\Omega_{R}\left(\frac{a_{0}H_{0}}{a_{ k}H_{k}}\right)^{2}\), where \(\Omega_{R}\simeq 10^{-4}\) is the density parameter of radiation at present and '0' denotes today.
Switching to frequency \(f\), we employ the relation \(f=\frac{k}{2\pi}\frac{a_{k}}{a_{0}}\). We end up with the expression [7]
\[\Omega_{\rm GW}(f)\propto f^{-2(\frac{1-3w}{1+3w})}\,, \tag{19}\]
where \(w\) is the barotropic parameter of the Universe at the time of horizon reentry. Thus, we see that, for modes which re-enter the horizon during the radiation era, because \(w=\frac{1}{3}\), we have \(\Omega_{\rm GW}(f)=\) constant. That is, the spectrum is flat and unfortunately, its value is unobservable in the near future (see Fig. 1).
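To make the scaling of Eq. (19) concrete, the following short sketch (added for illustration) evaluates the spectral exponent \(n\) in \(\Omega_{\rm GW}(f)\propto f^{n}\) for a few values of \(w\); the stiff cases \(w=1\) and \(w=\frac{1}{2}\) are the ones discussed in the following sections:

```python
from fractions import Fraction

def gw_exponent(w):
    """Exponent n in Omega_GW(f) ~ f^n for modes re-entering the horizon
    while the background barotropic parameter is w, from Eq. (19)."""
    return -2 * (1 - 3 * Fraction(w)) / (1 + 3 * Fraction(w))

for w in (Fraction(1, 3), Fraction(1, 2), Fraction(1)):
    print(f"w = {w}:  Omega_GW ~ f^({gw_exponent(w)})")
# w = 1/3:  f^0   (radiation era: flat spectrum)
# w = 1/2:  f^2/5 (stiff period, Section 5)
# w = 1  :  f^1   (kination, Section 4)
```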
## 4 Kination
The inflationary paradigm suggests that the Universe inflates when dominated by the potential density of a scalar field, called the inflaton field. Non-oscillatory inflation considers a runaway inflaton scalar potential, with its minimum displaced at infinity [8]. In such models, after the inflaton field \(\phi\) rolls away from the inflationary plateau, which is a relatively flat part of its scalar potential \(V(\phi)\), it becomes dominated by its kinetic energy density \(\frac{1}{2}\dot{\phi}^{2}\), because the potential
reduces drastically and becomes negligible. The Universe is still dominated by the inflaton field, but the latter is kinetically dominated, with barotropic parameter \(w=\frac{\frac{1}{2}\dot{\phi}^{2}-V}{\frac{1}{2}\dot{\phi}^{2}+V}\approx 1\). This phase is called _kination_[9]. Eq. (19) suggests that, for the modes that re-enter the horizon during kination we have \(\Omega_{\rm GW}(f)\propto f\).
Therefore, the GW spectrum features a peak. The corresponding frequencies, however, are unobservable at the moment because kination cannot be extended arbitrarily to later times, and therefore to lower frequencies. The reason is that, if kination is prolonged, the GW peak becomes too large and threatens to destabilise the delicate process of Big Bang Nucleosynthesis (BBN). The upper bound to \(\Omega_{\rm GW}\) is obtained as follows.
At the time of BBN we require that \(\Omega_{\rm GW}^{\rm BBN}<10^{-2}\) so that BBN is not harmed. Using that the density of GWs redshifts as radiation we can estimate the corresponding bound at present. We find
\[\Omega_{\rm GW}^{0}=\left.\frac{\rho_{\rm GW}^{0}}{\rho_{c}^{0}}=\left.\frac {\rho_{\rm GW}}{\rho_{r}}\right|_{0}\,\Omega_{R}=\left.\frac{\rho_{\rm GW}}{ \rho_{r}}\right|_{\rm BBN}\Omega_{R}=\Omega_{\rm GW}^{\rm BBN}\Omega_{R}<10^{ -6}\,, \tag{20}\]
where \(\rho_{r}\) is the density of radiation and we used that \(\rho_{\rm GW}/\rho_{r}=\,\)constant. Thus, the sharp peak in \(\Omega_{\rm GW}(f)\) of kination cannot be extended to observable frequencies (see Fig. 1).
## 5 Stiff period
If the peak in \(\Omega_{\rm GW}(f)\) is not so sharp then it might be extended to observable frequencies without disturbing BBN. This may be possible if \(\frac{1}{3}<w<1\). Indeed, in Ref. [10] it is shown that, when \(0.46\leq w\leq 0.56\) and \(1\) MeV\(<T_{\rm reh}<150\,\)MeV, the GW peak can be extended to frequencies low enough to be observable in the near future by Advanced LIGO and LISA, where \(T_{\rm reh}\) is the reheating temperature, that is the temperature of the thermal bath at the onset of the usual radiation era of the hot Big Bang.
How can this possibility be realised? I have presented a concrete model to this end in Ref. [11]. Consider two flat directions \(\varphi\) and \(\sigma\) in field space, which meet at an Enhanced Symmetry Point (ESP) such that they are characterised by the standard hybrid potential [12]
\[V(\varphi,\sigma)=\frac{1}{2}g^{2}\sigma^{2}\varphi^{2}+\frac{1}{4}\lambda( \varphi^{2}-M^{2})^{2}+V(\sigma)\,, \tag{21}\]
where \(g<1\) is a perturbative interaction coupling, \(\lambda<1\) is a perturbative self-coupling and \(V(\sigma)\) is some unknown potential for the inflaton field \(\sigma\), which forces it to vary (roll) to smaller values. In the above \(M\) is the vacuum expectation value (VEV) of the waterfall field \(\varphi\). Below we consider that \(M\sim m_{P}\), which is why \(\varphi\) can be called a flat direction (only lifted by Planck-suppressed interactions).
The waterfall field is non-canonical. Indeed, the Lagrangian density is
\[{\cal L}=-\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma-\frac{\frac{1}{2 }\partial_{\mu}\varphi\partial^{\mu}\varphi}{(1-\varphi^{2}/M^{2})^{2}}-V( \varphi,\sigma)\,, \tag{22}\]
i.e. there are poles at the VEVs of \(\varphi\) due to non-trivial geometry in field space. This is the standard setup in \(\alpha\)-attractors, where the poles could be due to a non-trivial Kahler metric in supergravity [13]. We can define a canonically normalised waterfall field \(\phi\) using the transformation \(\varphi=M\tanh(\phi/M)\). In terms of \(\phi\), the scalar potential in Eq. (21) assumes the form
\[V(\phi,\sigma)=\frac{1}{2}g^{2}M^{2}\sigma^{2}\tanh^{2}(\phi/M)+\frac{\frac{1} {4}\lambda M^{4}}{\cosh^{4}(\phi/M)}+V(\sigma)\,, \tag{23}\]
where the minima along the waterfall direction have been displaced at infinity.
When the inflaton expectation value is large, the waterfall field is heavy and is pushed towards the origin. At the origin, \(\varphi\) is canonical, so the hybrid mechanism operates normally. The waterfall transition occurs when the inflaton reaches the critical value \(\sigma_{c}=(\sqrt{\lambda}/g)M\)[12]. Afterwards, the waterfall field finds itself on top of a potential hill and is released along its runaway direction towards large (absolute) values.
Near the origin, when \(\phi\ll M\) (without loss of generality we assume \(\phi>0\)), the runaway waterfall potential is approximated as
\[V\simeq\frac{\frac{1}{4}\lambda M^{4}}{\left[1+\frac{1}{2}(\phi/M)^{2}\right] ^{4}}\simeq\frac{1}{4}\lambda M^{4}\left[1-2\left(\frac{\phi}{M}\right)^{2} \right]\,. \tag{24}\]
Because \(M\sim m_{P}\) (see below), the waterfall field undergoes a period of quadratic hilltop inflation, while \(\phi\) dominates the Universe [14].
Eventually, \(\phi\gg M\) and the waterfall potential is approximated as
\[V\simeq\frac{\frac{1}{4}\lambda M^{4}}{\left[\frac{1}{2}\exp(\phi/M)\right]^{ 4}}\simeq 4\lambda M^{4}e^{-4\phi/M}\,. \tag{25}\]
During this roll, there is an attractor solution (power-law inflation [15]) in which the barotropic parameter of the rolling scalar field is
\[w=-1+\frac{16}{3}\left(\frac{m_{P}}{M}\right)^{2}\,. \tag{26}\]
Thus, the value \(M\approx 1.88\,m_{P}\) results in \(w\simeq\frac{1}{2}\), which means that there is a stiff period of the Universe history when the GW modes re-entering the horizon correspond to a peak with \(\Omega_{\rm GW}(f)\propto f^{2/5}\) [cf. Eq. (19)], which is not as sharp as the one due to kination and can be extended to observable frequencies (see Fig. 1).
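As a small numerical check of the quoted numbers (illustrative only), Eq. (26) with \(M=1.88\,m_{P}\) indeed gives \(w\simeq\frac{1}{2}\), and inserting this value into Eq. (19) reproduces the \(f^{2/5}\) slope:

```python
M_over_mP = 1.88

w = -1 + (16 / 3) / M_over_mP**2     # Eq. (26): barotropic parameter of the attractor
n = -2 * (1 - 3 * w) / (1 + 3 * w)   # Eq. (19): exponent in Omega_GW(f) ~ f^n

print(f"w = {w:.3f}, n = {n:.3f}")   # w ~ 0.509, n ~ 0.42, i.e. close to 2/5
```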
A number of mechanisms can be responsible for reheating at the appropriate time. In Ref. [11], Ricci reheating is assumed as an example [16], where the Universe is reheated by the decay of a spectator field non-minimally coupled to gravity. It is shown that the appropriate \(T_{\rm reh}\) is obtained with non-minimal coupling \(\xi\simeq 30\).
## 6 Hyperkination
There is another example of creating enhanced primordial GWs by inflation, this time by truncating a peak in the GW spectrum generated by a stiff period. I have investigated this with collaborators in Ref. [17]. It can be done as follows.
In Palatini modified gravity we consider
\[\mathcal{L}=\frac{1}{2}m_{P}^{2}R+\frac{1}{2}\alpha R^{2}+\frac{1}{2}\xi\varphi^ {2}R-\frac{1}{2}\partial_{\mu}\varphi\,\partial^{\mu}\varphi-V(\varphi)\,, \tag{27}\]
where \(\alpha\) and \(\xi\) are non-perturbative coefficients. Switching to the Einstein frame we obtain
\[\mathcal{L}=\frac{1}{2}m_{P}^{2}R-\frac{1}{2}\partial_{\mu}\phi\,\partial^{ \mu}\phi+\frac{1}{4}\alpha\frac{h^{2}+4\alpha V}{h^{2}m_{P}^{4}}(\partial_{\mu }\phi\,\partial^{\mu}\phi)^{2}-\frac{Vm_{P}^{4}}{h^{2}+4\alpha V}\,, \tag{28}\]
where \(h(\varphi)=m_{P}^{2}+\xi\varphi^{2}\) and we employed the field redefinition
\[\frac{\mathrm{d}\phi}{\mathrm{d}\varphi}=\sqrt{\frac{hm_{P}^{2}}{h^{2}+4 \alpha V}}\,. \tag{29}\]
In the above there is a strange quartic kinetic term. Such a term can be considered in general k-inflation models (no need for Palatini modified gravity) [18].
The EoM is
\[\left[1+3\alpha\left(1+\frac{4\alpha V}{h^{2}}\right)\frac{\dot{ \phi}^{2}}{m_{P}^{4}}\right]\ddot{\phi}+3\left[1+\alpha\left(1+\frac{4\alpha V }{h^{2}}\right)\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right]H\dot{\phi}\] \[+3\alpha^{2}\frac{\dot{\phi}^{4}}{m_{P}^{4}}\frac{\mathrm{d}}{ \mathrm{d}\phi}\left(\frac{V}{h^{2}}\right)+\frac{\mathrm{d}}{\mathrm{d}\phi} \frac{Vm_{P}^{4}}{h^{2}+4\alpha V}=0\,. \tag{30}\]
Then, from the energy-momentum tensor we can obtain the energy density and pressure of the field, which read
\[\rho_{\phi} =\frac{1}{2}\left[1+\frac{3}{2}\alpha\left(1+\frac{4\alpha V}{h^{2 }}\right)\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right]\dot{\phi}^{2}+\frac{Vm_{P}^{4 }}{h^{2}+4\alpha V}\,, \tag{31}\] \[p_{\phi} =\frac{1}{2}\left[1+\frac{1}{2}\alpha\left(1+\frac{4\alpha V}{h^{2 }}\right)\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right]\dot{\phi}^{2}-\frac{Vm_{P}^{4 }}{h^{2}+4\alpha V}\,. \tag{32}\]
After exiting the inflationary plateau, the inflaton field \(\phi\) becomes dominated by its kinetic energy density, i.e. it becomes oblivious to the potential \(V\). Then the above reduce to
\[\left(1+3\alpha\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right)\ddot{\phi}+3\left(1+ \alpha\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right)H\dot{\phi}=0 \tag{33}\]
and
\[\rho_{\phi}=\frac{1}{2}\left(1+\frac{3}{2}\alpha\frac{\dot{\phi}^{2}}{m_{P}^{4 }}\right)\dot{\phi}^{2}\quad\text{and}\quad p_{\phi}=\frac{1}{2}\left(1+\frac{ 1}{2}\alpha\frac{\dot{\phi}^{2}}{m_{P}^{4}}\right)\dot{\phi}^{2}\,. \tag{34}\]
Thus, when the quadratic kinetic term dominates one can effectively set \(\alpha=0\) and we have regular kination with \(w=1\). However, when the quartic kinetic term dominates we can effectively consider only the \(\alpha\)-depended terms. Then, \(w=p_{\phi}/\rho_{\phi}=\frac{1}{3}\). We call this period _hyperkination_. Eq. (19) suggests that the corresponding part of the GW spectrum is flat as in the radiation era. This means that the peak generated by kination has been truncated, which implies that the kinetic regime can last longer without disturbing BBN. As such, the GW signal can be amply boosted at observable frequencies as shown in Fig. 1.
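The two limits quoted above can be checked directly from Eq. (34): writing \(x\equiv\alpha\dot{\phi}^{2}/m_{P}^{4}\), the barotropic parameter is \(w=(1+x/2)/(1+3x/2)\), which interpolates between kination and hyperkination. A short symbolic verification (added for illustration):

```python
import sympy as sp

x = sp.symbols('x', positive=True)     # x = alpha * phidot^2 / m_P^4
w = (1 + x / 2) / (1 + 3 * x / 2)      # from Eq. (34); the common factor phidot^2/2 cancels

print(sp.limit(w, x, 0))               # 1   : quadratic kinetic term dominates -> kination
print(sp.limit(w, x, sp.oo))           # 1/3 : quartic kinetic term dominates  -> hyperkination
```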
## 7 Conclusions
Cosmic Inflation resolves the fine-tunings of the hot Big Bang and provides seeds for structure formation. Inflation is spectacularly verified by CMB observations. Another generic prediction of inflation is a superhorizon spectrum of primordial gravitational waves (GWs) generated through particle production. The form of the resulting GW spectrum depends on the post-inflation history. However, when GW modes re-enter the horizon during radiation domination they form a flat spectrum, too faint to be observable at present.
A stiff period in the Universe history enhances primordial GWs forming a peak in their spectrum. Non-oscillatory inflation is followed by such a period, dominated by the inflaton's kinetic energy density, called kination, but the frequencies of the peak are too high. The GW peak can be extended to observable frequencies if the stiff period is milder than that of kination, with \(w\approx 1/2\). A model realisation of this possibility considers two flat directions which intersect at an ESP and give rise to the hybrid mechanism with Planckian waterfall VEV, which is also a kinetic pole of the waterfall field, as in \(\alpha\)-attractors.
Another possibility to obtain a boost in primordial GWs down to observable frequencies is by considering higher order kinetic terms, as with k-inflation. This is possible to realise in Palatini modified gravity. Considering \(R+R^{2}\) gravity and a non-minimally coupled scalar field results in additional quartic kinetic terms. When the quartic kinetic terms dominate, this gives rise to hyperkination. Hyperkination is followed by regular kination, when the kinetic terms become canonical. The resulting truncated GW peak can be extended to observable frequencies without disturbing BBN.
Forthcoming observations of Advanced LIGO, LISA, DECIGO and BBO may well detect the primordial GWs generated by inflation. Detection of primordial GWs will not only confirm a prediction of inflation but offer tantalising evidence of the quantum nature of gravity, because the Bunch-Davies vacuum of virtual gravitons is assumed as an initial condition for the generation of GWs during inflation by particle production.
#### Acknowledgements
This work was funded in part by STFC with the consolidated grant: ST/X000621/1.
Figure 1: Plot of \(\Omega_{\rm GW}(f)\) superimposed with the observational expectations of LISA and Advanced LIGO (taken by Ref. [19]). The frequency range extends up to the scale of inflation, corresponding to the largest possible frequency, of GW which re-enter the horizon right at the end of inflation (assumed at the energy scale of grand unification). The uplift in the spectrum at low frequencies corresponds to the matter era of the hot Big Bang. The case when inflation is directly followed by the radiation era of the hot Big Bang (prompt reheating) is depicted by the low horizontal dashed thick line. As shown, the predicted spectrum is well beyond the observational capabilities of LISA and Advanced LIGO. The case when an early era of kination follows right after inflation corresponds to the purple line. As shown, there is a sharp peak in the spectrum (\(\Omega_{\rm GW}(f)\propto f\)), which however cannot be larger than the upper horizontal dashed line, which is the BBN constraint \(\Omega_{\rm GW}<10^{-6}\). Thus, the kination peak cannot extend to low frequencies and is not near the expected observations of LISA and Advanced LIGO. The case of a stiff period with \(w\approx\frac{1}{2}\) following right after inflation is depicted with the red line. The peak of GW spectrum is milder (\(\Omega_{\rm GW}(f)\propto f^{2/5}\)), which means that it can be extended to lower frequencies without violating the BBN bound. As shown, the spectrum will be detectable by both LISA and Advanced LIGO. Finally, the case when a period of hyperkination (with a flat spectrum) and then regular kination follows the end of inflation is shown with a green line, which well overlaps with the expected observations of LISA and Advanced LIGO. In all cases, after reheating, the usual radiation era of the hot Big Bang begins and the GW spectra become flat as the frequency is lowered. |
2306.08687 | Norm-guided latent space exploration for text-to-image generation | Text-to-image diffusion models show great potential in synthesizing a large
variety of concepts in new compositions and scenarios. However, the latent
space of initial seeds is still not well understood and its structure was shown
to impact the generation of various concepts. Specifically, simple operations
like interpolation and finding the centroid of a set of seeds perform poorly
when using standard Euclidean or spherical metrics in the latent space. This
paper makes the observation that, in current training procedures, diffusion
models observed inputs with a narrow range of norm values. This has strong
implications for methods that rely on seed manipulation for image generation,
with applications to few-shot and long-tail learning tasks. To address this
issue, we propose a novel method for interpolating between two seeds and
demonstrate that it defines a new non-Euclidean metric that takes into account
a norm-based prior on seeds. We describe a simple yet efficient algorithm for
approximating this interpolation procedure and use it to further define
centroids in the latent seed space. We show that our new interpolation and
centroid techniques significantly enhance the generation of rare concept
images. This further leads to state-of-the-art performance on few-shot and
long-tail benchmarks, improving prior approaches in terms of generation speed,
image quality, and semantic content. | Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik | 2023-06-14T18:12:15Z | http://arxiv.org/abs/2306.08687v3 | # Norm-guided latent space exploration for text-to-image generation
###### Abstract
Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios. However, their latent seed space is still not well understood and has been shown to have an impact in generating new and rare concepts. Specifically, simple operations like interpolation and centroid finding work poorly with the standard Euclidean and spherical metrics in the latent space. This paper makes the observation that current training procedures make diffusion models biased toward inputs with a narrow range of norm values. This has strong implications for methods that rely on seed manipulation for image generation that can be further applied to few-shot and long-tail learning tasks. To address this issue, we propose a novel method for interpolating between two seeds and demonstrate that it defines a new non-Euclidean metric that takes into account a norm-based prior on seeds. We describe a simple yet efficient algorithm for approximating this metric and use it to further define centroids in the latent seed space. We show that our new interpolation and centroid evaluation techniques significantly enhance the generation of rare concept images. This further leads to state-of-the-art performance on few-shot and long-tail benchmarks, improving prior approach in terms of generation speed, image quality, and semantic content.
## 1 Introduction
Text-to-image diffusion models have demonstrated an exceptional ability to generate new and unique images. They map random samples (seeds) from a high-dimensional space, conditioned on a user-provided text prompt, to a corresponding image. Unfortunately, the seed space, and the way diffusion models map it into the space of natural images are still poorly understood. This may have a direct effect on generation quality. For example, these models have difficulty generating images of rare concepts, and specialized methods have been proposed to resolve this issue [49]. Our limited understanding of the seed space is further demonstrated by the fact that standard operations on seeds, such as interpolating between two seeds or finding the centroid of a given set of seeds, often result in low-quality images with poor semantic content (Figure 1 left). Thus, methods based on the exploration and manipulation of seed spaces face a considerable challenge.
The aim of this paper is to propose simple and efficient tools for exploring the seed space and to demonstrate how these tools can be used to generate rare concepts. Our main observation is that there is a specific property, the norm of the seed, which plays a key role in analyzing and exploring the seed space. In more concrete terms, since seeds are sampled from a multidimensional Gaussian distribution, the norm of the seeds is determined by the \(\chi\) distribution. For high dimensional Gaussian distributions, such as the ones used by diffusion models, the \(\chi\) distribution is concentrated around a
specific positive number. Consequently, diffusion models are biased toward inputs with this norm, resulting in lower-quality images when the norm of the seed is very different.
To account for this bias, we propose to use a prior distribution over the norms in the seed space based on the \(\chi\) distribution to guide exploration. While prior-based exploration techniques for seed spaces have been proposed before [2; 4], the advantage of our prior is that it does not rely upon an expansive estimation of the empirical data distribution nor on complex computations and hence can be applied to very high dimensional latent spaces. Yet, as we show below, our prior still significantly improves exploration techniques in seed space.
As a first step, we propose a novel method for interpolating between two seeds. In contrast to Linear Interpolation (LERP) or Spherical Linear Interpolation (SLERP) [54], we formulate this problem as finding a likelihood-maximizing path in seed space according to the aforementioned prior. In addition to providing us with an interpolating path, we also demonstrate that the optimal value of this optimization problem defines a new non-Euclidean metric structure over the seed space. Figure 1 compares our interpolation paths to two other frequently used interpolation methods in 2D and in image space. The improvement of the image quality along the path is evident. Specifically, the 2D example (right panel) illustrates that LERP and SLERP paths cross low-probability areas whereas our path maintains a high probability throughout. The same phenomenon is shown for images (left panel) where it is apparent that intermediate points in the paths generated by the baseline methods have a significantly lower quality.
As a next step, we build on our newly defined metric to define a generalized centroid for a set of seeds. In contrast to the standard definition of the centroid in Euclidean spaces, we define the centroid as the point that minimizes the distances to the seeds according to the new distance function (also known as the Frechet mean for that given metric). We show how to discretize the two optimization problems above and solve them using a simple and efficient optimization scheme. We call our approach **NAO** for _Norm-Aware Optimization_.
We evaluate NAO extensively. First, we directly assess the quality of images generated by our methods, showing higher quality and better semantic content. Second, we use our seed space interpolation and centroid finding methods in two tasks: (1) Generating images of rare concepts, and (2) Augmenting semantic data for few-shot classification and long-tail learning. For these tasks, our experiments indicate that seed initialization with our prior-guided approach improves SoTA performance and at the same time has a significantly shorter running time (up to X10 faster) compared to other approaches.
## 2 Related Work
**Text-guided diffusion models.** Text-guided diffusion models involve mapping random seed (noise) \(z_{T}\) and textual condition \(P\) to an output image \(z_{0}\) through a denoising process [7; 47; 41]. The inversion of this process can be achieved using a deterministic scheduler (e.g. DDIM [56]), allowing for the recovery of the latent code \(z_{T}\) from a given image \(z_{0}\). Detailed overview in the supplemental.
Figure 1: (Left) Visual comparison of different interpolation methods between two seeds from high-dim space of StableDiffusion [44]. Images generated using Stable Diffusion [44]. (Right) Linear, spheric, and likelihood-based interpolation methods in 2D space, where the norm of samples has a \(\chi\) distribution (log) PDF. Both linear interpolation and SLERP [54] do not adhere to the structure of the seed space of diffusion models (quantified in Table 1).
**Rare concept generation with text-to-image models.** Diffusion models excel in text-to-image generation [7, 41, 47], but struggle with rare fine-grained objects (_e.g_.payphone or tiger-cat in StableDiffusion [44]) and compositions (_e.g_.shaking hands) [33, 49]. Techniques like pre-trained image classifiers and text-driven gradients have been proposed to improve alignment with text prompts, but they require pre-trained classifiers or extensive prompt engineering [17, 26, 34, 38, 39, 47, 65]. Other approaches using segmentation maps, scene graphs, or strengthening cross-attention units also face challenges with generating rare objects [5, 10, 19, 20, 71]. SeedSelect [49] is a recent approach that optimizes seeds in the noise space to generate rare concepts. However, it suffers from computational limitations and long generation times. This paper aims to address these limitations by developing efficient methods that significantly reduce generation time while improving the quality of generated images.
**Latent space interpolation.** Interpolation is a well-studied topic in computer graphics [55, 23, 58]. Linear Interpolation (LERP) is commonly used for smooth transitions by interpolating between two points in a straight line. Spherical Linear Interpolation (SLERP) [54], on the other hand, computes the interpolation along the arc of a unit sphere, resulting in smoother transitions along curved paths. Image interpolations in generative models are obtained by three main approaches: (1) Linear or spherical interpolation between two latent vectors [1, 42, 53, 72], (2) Image-to-Image translation approaches [36] and (3) Learning an interpolation function or metrics based on the data [4, 11, 29, 53]. [3] observed that linearly traveling a normally distributed latent space leads to suboptimal results and proposed an interpolation based on a Riemannian metric. [2] further proposed a methodology to approximate the induced Riemannian metric in the latent space with a locally conformally flat surrogate metric that is based on a learnable prior. Note that, as opposed to our approach, these priors do not have a closed-form solution, they work on a relatively low dimensional latent space of a VAE (constrained and compact latent space), and they learn the metric from the data itself. In this paper, we do not assume any of the above. We introduce a novel interpolation approach that effectively uses the inherent structure of the latent space to achieve correct interpolation without any additional data, on the high-dimensional seed space of a diffusion model.
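For reference, the two baseline interpolation schemes discussed above can be written in a few lines. This is a generic sketch of the standard formulas (not code from any of the cited works), assuming the seeds are flattened vectors:

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two flattened seeds, for t in [0, 1]."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical linear interpolation (Shoemake) between two flattened seeds."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))   # angle between the seed directions
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)
```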
**Data Augmentation via Latent Space Exploration.** Previous methods, such as [16, 35] and [6], have proposed techniques for semantic data augmentation using the latent space of generative models. These methods involve imposing uniform latent representations and applying linear interpolation or learning mappings to sample specific areas in the latent space. However, these approaches require training generative models from scratch. In contrast, this paper demonstrates a more efficient approach by utilizing the latent space of a pre-trained diffusion model for creating data augmentations without the need for additional model fine-tuning.
## 3 A norm-based prior over seeds
We start with reviewing statistical properties of samples in a seed latent space \(z_{T}\in\mathbb{R}^{d}\), with \(d\) denoting the dimension of the space.
Figure 2: **(a)** Progressively changing the norm of a fixed seed, which initially has a norm of \(\sqrt{d}=128\). Images are generated by Stable Diffusion [44]. The visual quality of generated images degrades as the norm diverges away from \(\sqrt{d}\). **(b)** Mean per-class FID of the generated images in relation to the seed norms. **(c)** Mean per-class accuracy of the generated images, as determined by a state-of-the-art pre-trained classifier, as a function of the seed norm.
In diffusion models, it is common to sample \(z\) from a high-dimensional standard Gaussian distribution \(z_{T}\sim\mathcal{N}(0,I_{d})\). For such multivariate Gaussians, the \(L_{2}\) norm of samples has a \(\chi\) (Chi) distribution: \(||z_{T}||=\sqrt{\sum_{i=1}^{d}(z^{i})^{2}}\sim\chi^{d}\), with density \(\chi^{d}(||z_{T}||)=||z_{T}||^{d-1}e^{-||z_{T}||^{2}/2}/(2^{d/2-1}\Gamma(\frac{d}{2}))\), where \(\Gamma(\cdot)\) is the Gamma function and \(||\cdot||\) is the standard Euclidean norm. Importantly, as the dimension grows, the distribution of the norm becomes highly concentrated around its mean, since at high dimension the variance approaches a constant 0.5.
This strong concentration is illustrated in the inset figure on the right, for \(d=16384=128^{2}\), the seed dimension used by Stable Diffusion [44]. At this dimension, the mean is also very close to the mode of the distribution, and both approach \(\sqrt{d}=128\). This property means that samples drawn from a multi-variate high-dimensional Gaussian distribution are concentrated around a specific value \(r=\text{mode}(\chi^{d})\approx\sqrt{d}\). Our key observation is that diffusion models are trained with inputs sampled from the above normal distribution, and therefore the models are only exposed to inputs with norm values close to \(r\) during training. We hypothesize that this causes the model to be highly biased toward inputs with similar norm values.
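The concentration claim is easy to reproduce numerically. The following sketch (ours, for illustration) evaluates the \(\chi\) mean in log-space to avoid overflowing Gamma functions and draws a small batch of Gaussian seeds at the Stable Diffusion seed dimension:

```python
import numpy as np
from scipy.special import gammaln

d = 128 ** 2   # 16384, the seed dimension quoted above

# Mean of chi_d: sqrt(2) * Gamma((d+1)/2) / Gamma(d/2), evaluated via log-Gamma for stability
mean_norm = np.sqrt(2) * np.exp(gammaln((d + 1) / 2) - gammaln(d / 2))
print(mean_norm, np.sqrt(d - 1))    # mean ~ 127.998, mode = sqrt(d-1) ~ 127.996

# Empirical check: norms of Gaussian seeds cluster within about one unit of sqrt(d) = 128
z = np.random.randn(1_000, d).astype(np.float32)
norms = np.linalg.norm(z, axis=1)
print(norms.mean(), norms.std())    # ~ 128 and ~ 0.71 (variance ~ 0.5)
```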
We conducted several experiments to validate this bias. First, we inspected the visual quality of images generated with different norm values, all sharing the same direction of the seed vector. Figure 2(a) visually illustrates the sensitivity of the Stable Diffusion model to the input norm, showing that quality degrades as the norm drifts away from the mode. Second, we conducted a systematic quantitative experiment and measured the impact of seed norm on image quality in terms of FID and classification accuracy scores using an ImageNet1k pre-trained classifier. Figures 2(b)-2(c), show again that image quality depends on the seed having a norm close to the mode. Full details of these experiments are given in Section 5 and supplemental material.
We conclude that the norm of the seed constitutes a key factor in the generation of high-quality images. Below we discussed how this fact is used for seed optimization and interpolation.
## 4 Norm-guided seed exploration
Based on the above results, we define a prior over the seed space as \(\mathcal{P}(z_{T}):=\chi^{d}(||z_{T}||)\). This probability density function represents the likelihood of a seed with norm \(||z_{T}||\) to be drawn from the Gaussian distribution. We now describe simple and efficient methods for seed interpolation and centroid finding using that prior.
### Prior induced interpolation between two seeds
We first tackle the task of finding an interpolation path between the seeds of two images. The derivation of this interpolation path illustrates the advantages of using the prior in a simple setup, and will also be used later for finding centroids for sets of seeds. As seen in Figure 1 (see also Figure 3), a linear interpolation path between seeds consists of seeds that yield low-quality images. Instead, we define a better path \(\gamma:[0,1]\rightarrow\mathbb{R}^{d}\) as the solution to the following optimization problem: Given two images, \(I_{1}\) and \(I_{2}\), and their corresponding inversion seeds, \(z_{T}^{1}\) and \(z_{T}^{2}\), derived by inversion techniques (e.g. DDIM Inversion [56]), we aim to maximize the log-likelihood of that path under our prior, defined as the line-integral of the log-likelihood of all points on the path.
Equivalently, we minimize the negative log-likelihood of the path, which is strictly positive, yielding
\[\inf_{\gamma}\ -\int_{\gamma}\log\mathcal{P}(\gamma)ds\quad\text{s.t.}\quad \gamma(0)=z_{T}^{1},\gamma(1)=z_{T}^{2}. \tag{1}\]
Here, the infimum is taken with respect to all differentiable curves \(\gamma\) and \(\int_{\gamma}W(\gamma)ds\) denotes the line integral of a function \(W:\mathbb{R}^{d}\rightarrow\mathbb{R}\) over the curve \(\gamma\)1. We denote the optimal value obtained for optimization problem (1) as \(f(z_{T}^{1},z_{T}^{2})\). It turns out that \(f\) defines a (non-euclidean) distance on the seed space. This is stated formally in the following proposition.
Footnote 1: When \(\gamma\) is differentiable, the integral can be calculated using the following formula: \(\int_{0}^{1}W(\gamma(t))||\gamma^{\prime}(t)||dt\).
**Proposition 1**.: _For any distribution yielding strictly positive negative log-likelihood, and specifically when \(\mathcal{P}\) is the \(\chi^{d}\) distribution, then \(f(\cdot,\cdot)\) is a distance function on \(\mathbb{R}^{d}\)._
See the supplementary material for proof. It is important to note that the optimization problem does not only provide us with a path that maximizes the log-likelihood of our prior, but it also defines a new metric structure on the seed space that will prove useful in Section 4.2.
To approximate the solution to problem (1) in practice, we discretize the path into a sequence of piece-wise linear segments, that connect a series of points \(z_{T}^{1}=x_{0},\ldots,x_{n}=z_{T}^{2}\) and replace the integral with its corresponding Riemann sum over that piece-wise linear path:
\[\begin{split}\underset{x_{0},\ldots,x_{n}}{\text{minimize}}& -\sum_{i=1}^{n}\log\mathcal{P}\bigg{(}\frac{x_{i}+x_{i-1}}{2} \bigg{)}\|x_{i}-x_{i-1}\|\\ &\text{s.t.}\quad x_{0}=z_{T}^{1},x_{n}=z_{T}^{2},\quad\|x_{i}-x_ {i-1}\|\leq\delta,i\in\{1,\ldots,n\}\end{split} \tag{2}\]
To facilitate a good approximation of the continuous integration, we also constrain consecutive path points to be close (see implementation at the end of Sec. 4.2).
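One direct way to write the discretized objective of Eq. (2) in code is sketched below (our illustrative PyTorch snippet with hypothetical names, not the authors' released implementation; the \(\chi\) normalising constant is kept so that the per-segment weight stays strictly positive, and double precision is advisable given the large intermediate values):

```python
import math
import torch

def chi_neg_log_pdf(r, d):
    """-log chi_d(r), normalising constant included so the integrand is strictly positive."""
    log_norm = (d / 2 - 1) * math.log(2.0) + math.lgamma(d / 2)
    return r ** 2 / 2 - (d - 1) * torch.log(r) + log_norm

def path_energy(points, d):
    """Riemann-sum objective of Eq. (2) for a piecewise-linear path.
    `points` has shape (n+1, d); the two endpoints are assumed to be held fixed by the caller."""
    mids = 0.5 * (points[1:] + points[:-1])                        # segment midpoints
    seglen = torch.linalg.norm(points[1:] - points[:-1], dim=-1)   # segment lengths
    return (chi_neg_log_pdf(torch.linalg.norm(mids, dim=-1), d) * seglen).sum()
```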
Figures 1-3 illustrate paths resulting from the optimization of the discretized optimization problem. Our optimized path consistently produces higher-quality images compared to other methods. A quantitative evaluation is given in Section 5.
### Prior induced centroid
Having defined a new metric structure in seed space, we are now ready to tackle the problem of finding a centroid of multiple seeds. To this end, we assume to have a set of images \(\{I_{1},I_{2},\ldots,I_{k}\}\) with their inversions \(\{z_{T}^{1},z_{T}^{2},\ldots,z_{T}^{k}\}\) and we wish to find the centroid of these images. For example, one possible use of such a centroid would be to find a good initialization for a seed associated with a rare concept based on a few images of that concept. This enables us to merge information between seeds and generate a more reliable fusion of the images that would not be achievable through basic interpolation between two seeds.
Recall that the centroid of a set of points is defined as the point that minimizes the sum of distances between all the points. Perhaps the simplest way to define a centroid in our case is by using the Euclidean distance, where the centroid definition boils down to a simple formula - the average of the seeds. Unfortunately, as we show in Section 3 this definition of a centroid is not suitable for our purposes due to the incompatibility of the centroid's norm with the diffusion model. Instead, we propose to use a generalization of the Euclidean centroid, called the Frechet mean, which is induced by the distance function \(f\) defined above. To achieve this, we formulate an optimization problem that seeks the centroid \(c\in\mathbb{R}^{d}\) that minimizes the distances \(f(c,z_{T}^{i})\) to all the seeds:
Figure 4: Comparing different centroid finding methods in 2D space on the contour of the \(\chi\) distribution (log) PDF.
Figure 3: Qualitative comparison between different interpolation methods between two image seeds. ”Jeep” is a common concept while ”Tiger cat” is a rare concept. Images generated using SD [44].
\[c^{*},\gamma_{1}^{*},\ldots,\gamma_{k}^{*}=\underset{c,\gamma_{1},\ldots,\gamma_{k} }{\text{argmin}}\left(-\sum_{l=1}^{k}\int_{\gamma_{i}}\log\mathcal{P}(\gamma_{i} )ds\right), \tag{3}\]
where \(\gamma_{i}\) are paths between the common centroid \(c\) and the inversion \(z_{T}^{i}\). This optimization problem can be discretized in the same way we discretized Equation (1), i.e. by defining all the paths as piecewise linear paths defined by a sequence of points and adding constraints on the distances between successive points in the path. See the supplemental for a discretized version.
Figure 4 illustrates an example of a centroid in 2D found with our approach. Figure 5 shows images resulting from the centroids found in seed space. Section 5 provides quantitative results.
**Application to seed optimization methods.** Diffusion models often encounter significant imbalances in the distribution of concepts within their training data [49]. This imbalance presents a challenge for standard pre-trained diffusion models, leading to difficulties in accurately generating rare concepts. One proposed solution, as suggested by [49], involves employing a seed optimization technique. SeedSelect begins with a randomly generated seed that produces an incorrect image and progressively optimizes the seed until a plausible image is generated. However, the method described in [49] is time-consuming, requiring up to 5 minutes to generate a single image. Our approach can serve as an initial point for SeedSelect to reduce substantially its optimization time by generating initial seeds that result in realistic and quality images.
In our case, given a few images, new data can be generated in the following manner: First, find a centroid \(c^{*}\) and the interpolation paths \(\gamma_{i}^{*}\) between the image inversions in the seed space. Next, new data is generated by sampling points along the paths from the given seeds to the centroid, and using them as initializations for SeedSelect.
**Implementation.** We implemented a simple optimization algorithm that optimizes the discretized problems using PyTorch and the Adam optimizer. To speed up convergence we initialize the optimization variables: the centroid is initialized with the Euclidean centroid, and path variables are initialized with the values of the linear path between the points and the centroid. We implement the constraints \(c(x)\leq 0\) using a soft penalty term of the form \(\alpha\cdot\text{ReLU}(c(x))\), where \(\alpha\) is a hyper-parameter. We note that there is no guarantee that this optimization scheme converges to the optimal value of the optimization problem; however, in practice we observe that high-quality paths are obtained.
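A minimal driver for the path problem might look as follows (our sketch with hypothetical names, not the released code); it inlines the same objective as the snippet given after Eq. (2) and implements the soft ReLU penalty and the linear-path initialization described above:

```python
import math
import torch

def chi_neg_log_pdf(r, d):
    # -log chi_d(r), normalising constant included (keeps the path weight strictly positive)
    return r ** 2 / 2 - (d - 1) * torch.log(r) + (d / 2 - 1) * math.log(2.0) + math.lgamma(d / 2)

def optimize_path(z_a, z_b, d, n=10, steps=500, alpha=10.0, lr=1e-2):
    """Optimize the interior points of a piecewise-linear path between two fixed, flattened seeds."""
    t = torch.linspace(0, 1, n + 1, dtype=z_a.dtype).unsqueeze(1)
    inner = ((1 - t) * z_a + t * z_b)[1:-1].clone().requires_grad_(True)   # init: linear path
    delta = 1.5 * torch.linalg.norm(z_b - z_a) / n                         # loose per-segment budget
    opt = torch.optim.Adam([inner], lr=lr)
    for _ in range(steps):
        pts = torch.cat([z_a[None], inner, z_b[None]], dim=0)
        mids = 0.5 * (pts[1:] + pts[:-1])
        seglen = torch.linalg.norm(pts[1:] - pts[:-1], dim=-1)
        energy = (chi_neg_log_pdf(torch.linalg.norm(mids, dim=-1), d) * seglen).sum()  # Eq. (2)
        loss = energy + alpha * torch.relu(seglen - delta).sum()   # soft penalty on segment lengths
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.cat([z_a[None], inner.detach(), z_b[None]], dim=0)
```

The centroid objective of Eq. (3) can be handled with the same loop by adding the centroid, initialized at the Euclidean mean of the seeds, as an additional optimization variable shared by all paths.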
Figure 5: Comparing different centroid optimization approaches on common and rare concepts of ImageNet1k. We further initialized SeedSelect [49] with the centroids and run it for up to 5 iterations (\(\sim\)20 sec on a single A100 GPU).
Experiments
We evaluate our approach in terms of image _quality_ and also consider generation _time_. We start by studying the quality and semantic content of images generated with our seeding approach and evaluate them in two applications: (1) rare concept generation and (2) semantic data augmentation, aimed at enhancing few-shot classification and long-tail learning. An ablation study can be found in the supplementary material.
**Direct evaluation of interpolation and centroid finding:** Table 1 compares FID scores and the accuracy of images generated using different interpolation and centroid finding methods. For the interpolation experiment, we randomly selected a class from ImageNet1k, and obtained a pair of images and their corresponding seeds through inversion [56]. For interpolation methods that require path optimization, we used paths with 10 sampled points. We then select three seeds along the path (also for LERP and SLERP), with uniform intervals, and feed them into StableDiffusion [44], to generate 3 new images per pair. We repeated the process above to obtain 100 images per class. For the centroid experiment, we used 3-25 seed points obtained from the inversion of additional images (randomly selected) from the train set. We repeated this process for 50 random ImageNet1k classes. Mean FID scores were then calculated (against real ImageNet1k images), along with mean per-class accuracy using a pre-trained classifier. See supplementary material for more details. Our optimized path consistently produces higher-quality images compared to other methods.
We compared our approach to the following baselines:
**Euclidean** is the standard Euclidean centroid calculated as the mean of the seeds. **Normalized Euclidean** is the same as Euclidean, but the centroid is projected to the sphere induced by the \(\chi\) distribution. **Sphere Projection** first normalizes the seed to a sphere with radius \(r=Mode(\chi)\), then finds the centroid on a sphere by optimizing a point that minimizes the sum of geodesic paths between the seeds to the centroid, as presented in [9]. See supplemental for more details. **NAO-path** and **NAO-centroid** are our methods presented in sections 4.1 and 4.2, respectively. The high accuracy and low FID levels of **NAO** in Table 1 demonstrate that our interpolation approach outperforms other baselines in terms of image quality and content. We further put these results to test in downstream tasks (in Sec. 5.1-5.3).
### Rare-concept generation
Following [49] we compared different centroid optimization strategies in rare-concept generation.
**Dataset.** The evaluation is performed on ImageNet1k classes ordered by their prevalence in the LAION2B dataset [51]. LAION2B [51] is a massive "in the wild" dataset that is used for training foundation diffusion models (_e.g_. Stable Diffusion [44]).
**Compared Methods.** We conducted a comparative evaluation between different centroid optimization strategies. **SeedSelect [49]** is a baseline method where a seed is randomly sampled and no centroid is calculated. Other baselines were presented at the beginning of Section 5.
**Experimental Protocol.** For every class in ImageNet, we randomly sampled subsets of training images, calculated their centroid in seed space, and generated an image using Stable Diffusion directly or as input to SeedSelect [49]. The class label was used as the prompt. This process is repeated until 100 images are generated for each class. We then used a SoTA pre-trained classifier [60] to test if the generated images are from the correct class or not (more details can be found in the supplementary). We use this measure to evaluate the quality of the generated image, verifying that a strong classifier correctly identifies the generated image class. We also report Mean FID score between the generated images and the real images, mean centroid initialization time \(\hat{T}_{Init}\) and mean SeedSelect optimization time until convergence \(T_{Opt}\) on a single NVIDIA A100 GPU. The results summarized in Table 2 show that our NAO method substantially outperforms other baselines, both in accuracy and in FID
\begin{table}
\begin{tabular}{l c c} & **Acc** & **FID** \\ \hline \hline
**Interpolation methods** & & \\ \hline LERP & 0.0 & 50.59 \\ SLERP [54] & 30.41 & 18.44 \\
**NAO-path (ours)** & 51.59 & **6.78** \\ \hline
**Centroid computation methods** & & \\ \hline Euclidean & 0.0 & 54.88 \\ Normalized Euclidean & 27.95 & 37.04 \\ Sphere Projection & 40.81 & 14.28 \\
**NAO-centroid (ours)** & 67.24 & **5.48** \\ \hline \end{tabular}
\end{table}
Table 1: Comparing FID and accuracy of images generated by SD through sampling from different interpolation and centroid computation methods.
score. Furthermore, NAO gives a better initialization point to SeedSelect [49], yielding significantly faster convergence without sacrificing accuracy or image quality.
Next, we evaluate NAO as a semantic data augmentation method on two learning setups: (1) Few-shot classification, and (2) Long-tail classification. We aim to show that our approach not only achieves faster generation speed but also attains state-of-the-art accuracy results on these benchmarks.
total of 100 classes, with 64 classes used for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing. The dataset includes 50,000 training images and 10,000 testing images, with an equal number of images distributed across all classes. **(3) CIFAR-FS [8]:** Created from CIFAR-100 [31] using the same sampling criteria as miniImageNet. It has 64 classes for meta-training, 16 classes for meta-validation, and 20 classes for meta-testing, with each class containing 600 images.
Following all previous baselines, we report classification accuracy as the metric. We report our results with 95% confidence intervals on the meta-testing split of the dataset.
**Compared Methods.** We conducted a comparative evaluation of our approach with several state-of-the-art methods for few-shot learning. These methods fall into three categories based on their approach. (A) Methods that do not use pre-training nor use class labels during training: **Label-Hallucination [27]** and **FeLMi [45]**; (B) Methods that use class labels during training: **SEGA [67]**; and (C) Methods that utilize a classifier pre-trained on external data and also use class labels during training: **SVAE [66]**, **Vanilla Stable Diffusion (Version 2.1) [44]**, **Textual Inversion [21]**, **DiffAlign [46]**, and **SeedSelect [49]**. The last four methods are semantic data augmentation methods.
**Experimental Protocol.** For a fair comparison with prior work, we follow the training protocol in [46] and [49]. We generated 1,000 additional samples for each novel class using SeedSelect. It was initialized with seeds found with NAO using the centroid and interpolation samples of the few-shot images provided during meta-testing and prompted it with the corresponding class name. We used a ResNet-12 model for performing N-way classification and trained it using cross-entropy loss on both real and synthetic data.
**Results.** Tables 2(a) and 2(b) compare NAO with SoTA approaches on few-shot classification benchmarks: CUB, miniImageNet, and CIFAR-FS. NAO consistently outperforms all few-shot methods on CUB [63] and miniImageNet [62], and reaches comparable results to SeedSelect [49] on CIFAR-FS [8]. Table 4 further compares the mean run time of SeedSelect with and without NAO on these datasets, on a single NVIDIA A100 GPU. The results highlight the competence of our approach in generating rare and fine-grained classes, to reach top accuracy with a five-fold reduction in the runtime.
### Long-tail learning
**Datasets.** We further evaluated NAO on the long-tailed recognition task using the **ImageNet-LT**[37] benchmark. ImageNet-LT [37] is a long-tailed variant of the ImageNet dataset [15]. It was created by sampling a subset of the original dataset using the Pareto distribution with a power value of \(\alpha=6\). The dataset contains 115,800 images from 1,000 categories, with the number of images per class ranging from 5 to 1,280.
**Compared Methods.** We compared our approach with several state-of-the-art long-tail recognition methods. These methods fall into three categories based on their approach. (A) Long-tail learning methods that do not use any pretraining nor employ class labels for training: **CE** (naive training with cross-entropy loss), **MetaSAug [32]**, **smDragon [48]**, **CB LWS [28]**, **DRO-LT [50]**, **Ride [64]** and **Paco [13]**. (B) Methods that use class labels as additional information during training: **DRAGON [48]**. (C) Methods that were pre-trained on external datasets and use class labels as additional information during training: **VL-LTR [59]**, **Vanilla Stable Diffusion (Version 2.1) [44]** and **SeedSelect [49]**. **MetaSAug [32]**, **Vanilla Stable Diffusion** and **SeedSelect [49]** are semantic augmentations methods. Note that **VL-LTR [59]**, compared to other models, further fine-tuned the pre-trained model (CLIP [40]) on the training sets.
**Experimental Protocol.** Following previous methods, we use a ResNet-50 model architecture, train it on real and generated data, and report the top-1 accuracy over all classes on class-balanced test sets.
**Results.** Table 2(c) evaluates our approach on the long-tail recognition benchmark. Albeit simple, NAO reaches SoTA results compared to other, more complex baselines.
## 6 Conclusion
This paper proposes a set of simple and efficient tools for exploring the seed space of text-to-image diffusion models. By recognizing the role of the seed norm in determining image quality based on the \(\chi\) distribution as prior, we introduce a novel method for seed interpolation and define a non-Euclidean
metric structure over the seed space. Furthermore, we redefine the concept of a centroid for a set of seeds and present an optimization scheme based on the new distance function. Experimental results demonstrate that these optimization schemes, biased toward the \(\chi\) distribution mode, generate higher-quality images compared to other approaches. Despite the simplicity and effectiveness of our approach, there are several limitations to be aware of. Firstly, compared to standard interpolation and centroid calculation, it involves an additional optimization step. Secondly, our centroid and/or the samples along our interpolation paths may not produce plausible and semantically correct images on their own, necessitating the use of SeedSelect optimization. Lastly, although our method is expected to be applicable to all diffusion models, we specifically evaluated it with the open-source Stable Diffusion [44] model in this study.
|
2303.12001 | ViC-MAE: Self-Supervised Representation Learning from Images and Video
with Contrastive Masked Autoencoders | We propose ViC-MAE, a model that combines both Masked AutoEncoders (MAE) and
contrastive learning. ViC-MAE is trained using a global featured obtained by
pooling the local representations learned under an MAE reconstruction loss and
leveraging this representation under a contrastive objective across images and
video frames. We show that visual representations learned under ViC-MAE
generalize well to both video and image classification tasks. Particularly,
ViC-MAE obtains state-of-the-art transfer learning performance from video to
images on Imagenet-1k compared to the recently proposed OmniMAE by achieving a
top-1 accuracy of 86% (+1.3% absolute improvement) when trained on the same
data and 87.1% (+2.4% absolute improvement) when training on extra data. At the
same time ViC-MAE outperforms most other methods on video benchmarks by
obtaining 75.9% top-1 accuracy on the challenging Something something-v2 video
benchmark . When training on videos and images from a diverse combination of
datasets, our method maintains a balanced transfer-learning performance between
video and image classification benchmarks, coming only as a close second to the
best supervised method. | Jefferson Hernandez, Ruben Villegas, Vicente Ordonez | 2023-03-21T16:33:40Z | http://arxiv.org/abs/2303.12001v3 | # Visual Representation Learning from Unlabeled Video using
###### Abstract
Masked Autoencoders (MAEs) learn self-supervised representations by randomly masking input image patches and a reconstruction loss. Alternatively, contrastive learning self-supervised methods encourage two versions of the same input to have a similar representation, while pulling apart the representations for different inputs. We propose ViC-MAE, a general method that combines both MAE and contrastive learning by pooling the local feature representations learned under the MAE reconstruction objective and leveraging this global representation under a contrastive objective across video frames. We show that visual representations learned under ViC-MAE generalize well to both video classification and image classification tasks. Using a backbone ViT-B/16 network pre-trained on the Moments in Time (MiT) dataset, we obtain state-of-the-art transfer learning from video to images on Imagenet-1k by improving 1.58% in absolute top-1 accuracy from a recent previous work. Moreover, our method maintains a competitive transfer-learning performance of 81.50% top-1 accuracy on the Kinetics-400 video classification benchmark. In addition, we show that despite its simplicity, ViC-MAE yields improved results compared to combining MAE pre-training with previously proposed contrastive objectives such as VicReg and SiamSiam.
## 1 Introduction
Self-supervised visual representation learning has led to great success in image benchmarks [10, 28, 8, 27]. This success has been mainly driven by two paradigms: joint-embedding methods and masked image modeling (MIM). Joint-embedding methods learn representations that are invariant to specific transformations; these methods are either contrastive [10, 28, 8] or negative-free [12, 4]. More recently, masked image modeling has emerged as a successful alternative to joint-embedding methods. These methods work by randomly masking out parts of the input and forcing a model to predict the masked parts [3, 27, 21, 55].
Self-supervised methods from the image domain have been replicated for _video_ representation learning with remarkable success [21, 55, 45, 22]. These methods yield strong video feature representations that transfer to a range of downstream video recognition tasks. However, there is still a gap in performance in the _video-to-image_ transfer learning setting, where it is difficult to obtain good image features by relying solely on video pre-training. Learning from video should also yield good image representations, since videos naturally contain complex changes in pose, viewpoint, and deformations, among others. These variations cannot be simulated through the standard image augmentations used in joint-embedding methods or in MIM methods. In this work, we propose Video Contrastive Masked AutoEncoding (ViC-MAE) and show that our method improves _video-to-image_ transfer performance while maintaining performance on video representation learning.
Figure 1: ViC-MAE operates over video frames using masked image modeling at the frame level and contrastive learning at the temporal level. Since our model operates over video frames, it can take advantage of viewpoint and temporal consistency which are absent in data augmentations over isolated images.
The work proposed by Gordon et al. [24] uses two distinct frames from a video as augmentations for instance discrimination, similar to contrastive methods, obtaining good results on video benchmarks but still relatively modest results on image benchmarks, i.e., ImageNet. Feichtenhofer et al. [21] use a simple masked image modeling objective (pixel reconstruction) that obtains state-of-the-art results on video benchmarks and very strong results on ImageNet, but still below the same method applied only to images. More recently, Parthasarathy et al. [42] become the first to obtain results that rival ImageNet pre-training by modifying the MoCLR [49] framework for videos with a larger crop size, temporal augmentations, and multi-scale contrastive pooling; most importantly, this work devises a data collection methodology to address the domain mismatch. Given these encouraging results, we take a step back and ask the following questions: Do we really need to collect more data to obtain good image representations from video? Are negative examples as used in [24, 42] actually needed to learn good video-to-image representations? Can we combine the simplicity of masked image modeling over the same frame and contrastive learning over different frames to learn global video representations?
With these questions in mind, we propose to leverage contrastive learning and masked image modeling for videos in a single framework, which we refer to as ViC-MAE (**V**ideo **C**ontrastive MAE). As illustrated in Figure 1, we sample two frames from a single video and use contrastive learning over the time dimension to make the representations learned by the encoder similar. This forces the encoder to learn a global representation for videos; we then use masked image modeling over single frames with a simple reconstruction loss to encourage the encoder to also learn local features of the video frames. We also attempted to combine MAE with standard contrastive methods such as VicReg [4] and SiamSiam [12] by using the \([\text{CLS}]\) token as a global video representation, but found that this simple strategy is insufficient to obtain good image recognition performance. Instead, we propose to aggregate the local features learned by the MAE encoder using a global pooling layer and apply the contrastive loss over this aggregated global video representation. We found this approach to be superior to using the \([\text{CLS}]\) token. We use the ViT architecture [18] as our base model, as this is the standard architecture used for previous masked image modeling methods [21, 55]. Our models are then finetuned for various image recognition tasks to demonstrate the transfer capabilities of our method. Based on our experiments, we report the following findings:
1. Training with large frame gaps improves image classification performance. Joint-embedding methods usually require strong augmentations, which in our video pre-training setting come naturally from choosing large gaps in between sampled frames.
2. Training with negative pairs surpasses methods that only train with positive samples. This is in line with results for other methods that train on videos and evaluate on images [24, 42].
3. Training with strong image transformations as augmentations is not necessary. This is in contrast to other works that still need to apply strong color and view augmentations to achieve good results [24, 42].
Our contributions can be summarized as follows: (1) We obtain the best _video-to-image_ transfer learning results on the Imagenet-1k benchmark, (2) We propose ViC-MAE by combining contrastive learning with masked image modeling and show that our proposed method achieves superior accuracy to strong alternatives based on existing methods (VicReg, SiamSiam), and (3) We show superior transfer learning accuracy on a wide array of downstream image classification tasks compared to a baseline MAE pre-trained network.
## 2 Related Work
Our work is related to general self-supervised learning methods from video, and methods specifically targeting image representation learning from video in some form. In this section, we provide a summary for representative works.
Self-supervised learning from videos.Self-supervised learning in the video domain provides a way to exploit the time dimension as a learning signal that encourages models to learn a richer representation in comparison to learning only from images. In the past few years, this has involved the creation of pretext training tasks that leverage prior knowledge about videos such as frame continuity and forecasting [47, 52, 51, 37, 35, 17], object tracking [1, 53, 43, 54], and others. Other approaches also leverage facts about videos, but design the training in a contrastive learning paradigm [4, 12, 56, 58, 24, 42]. These methods leverage the temporal continuity of videos to sample negative and positive pairs for training under a contrastive learning objective. More recently, self-supervised learning approaches based on masked auto encoders (MAE) [27] rely on masked image modeling adapted to video data to pre-train models. These models can be later used for transfer learning to downstream tasks [21, 55, 50]. These methods train models by learning to reconstruct missing patches from a video in the form of either spatiotemporal patches or random patches. In contrast to the aforementioned works, our approach combines the discriminative representation learning of contrastive methods with the generative learning of masked image modeling methods in a unified pre-training strategy that is applicable to image and video downstream tasks.
Learning image representations from video.While datasets such as ImageNet provide a large and diverse source of data for the development of perceptual systems, they are still an incomplete representation of the world and of how it is experienced by visual recognition models at test time. Image datasets lack a temporal dimension, which provides a richer source of information for intelligent agents in the form of object deformations, temporal occlusions, multiple views, lighting changes, and more. This missing information causes models developed from image datasets to lack robustness when used in real-world applications in which the inputs are a continuous stream of frames in the form of video. To this end, there have been recent works that focus on learning robust image representations from video data. Video Contrastive Noise Estimation (VINCE) [24] argues that video provides natural image augmentations for free, and that these can improve performance over artificially produced augmentations and even over pretraining on ImageNet. Video Frame-level Similarity (VFS) [58] shows that using the time dimension to learn correspondences can produce models learned on video datasets that transfer to downstream image tasks. In [56], cycle consistency is used to first map an image from a video to a similar image in another video, and then map that image back to the closest frame within the initial video, which could vary slightly from the origin of the cycle. Feichtenhofer et al. [21] use masked visual modeling for video representation learning, but the model is also shown to be useful on image-level tasks. More recently, Piergiovanni et al. [44] proposed models that can simultaneously learn from image and video datasets, while Parthasarathy et al. [42] propose a video dataset curation procedure that addresses the domain mismatch between video and image datasets. In contrast, our method aims to learn representations from any video dataset using ViC-MAE, which learns representations from video that generalize to both image and video datasets.
## 3 Method
We propose ViC-MAE for space-time feature learning, which works using contrastive learning at the time level and masked image modeling at the space level.

Figure 2: **ViC-MAE** inputs two distant frames from a video using a siamese backbone (shared weights), and randomly masks them, before passing them through a ViT-Base model which learns a representation of local features using masked image modeling. A global representation of the video is then constructed by global pooling of the local features learned by the ViT-Base model trained to reconstruct individual patches using an \(\ell_{2}\) loss. A standard predictor and a target encoder are used with a contrastive learning loss over the batch dimension to pull global representations closer for frames in the same video and push apart representations from frames in different videos. The use of an aggregation layer before the predictor network helps to avoid collapse of the learned global representations.
### Background
We provide here some background terminology and review of closely related methods that we build upon.
Masked image modeling.Masked image modeling provides a way to learn visual representations in a self-supervised manner. These methods learn representations by first masking out parts of the input and then training a model to fill in the blanks using a simple reconstruction loss. In order to do this, we use an encoder \(f_{\theta}\) that takes the non-masked input and learns a representation \(x\), such that a decoder \(d_{\phi}\) can reconstruct the masked part of the input. More formally, let \(x=f_{\theta}(I\odot M)\) be the representation learned by the encoder for masked image \(I\) with mask \(M\). The decoder \(d_{\phi}\) is then applied to produce the reconstruction \(d_{\phi}(x)\) over masked and unmasked tokens. This defines the following reconstruction loss, which is only computed over masked tokens:
\[\mathcal{L}_{I}^{\text{MASK}}=\left\|d_{\phi}(f_{\theta}(I\odot M))\odot(1- M)-I\odot(1-M)\right\|_{2}^{2}. \tag{1}\]
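As a rough illustration (not the authors' released code), this reconstruction objective can be sketched in PyTorch as follows; `encoder` and `decoder` are placeholder callables standing in for \(f_{\theta}\) and \(d_{\phi}\), and masked tokens are zeroed rather than dropped from the encoder input as in the actual MAE implementation.

```python
import torch

def masked_reconstruction_loss(patches, mask, encoder, decoder):
    """Minimal sketch of Eq. 1.

    patches: (B, L, D) patchified image; mask: (B, L) with 1 = visible, 0 = masked.
    encoder/decoder stand in for f_theta and d_phi.
    """
    visible = patches * mask.unsqueeze(-1)            # I (x) M
    recon = decoder(encoder(visible))                 # d_phi(f_theta(I (x) M))
    masked = (1.0 - mask).unsqueeze(-1)               # loss only on masked patches
    err = ((recon - patches) * masked) ** 2
    return err.sum() / (masked.sum() * patches.size(-1)).clamp(min=1.0)
```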
Contrastive learning.In common image-level contrastive methods, learning with negatives is achieved by pushing the representations of positive pairs (different augmented views of the same image) to be close to each other while pulling the representations of negative pairs further apart. More formally, let \(I\) and \(I^{\prime}\) be two augmented views of the same image. Contrastive learning uses a siamese network with a predictor encoder \(\mathcal{P}\) and a target encoder \(\mathcal{T}\)[58, 10]. The outputs of these networks are \(l_{2}\)-normalized to be \(p=\mathcal{P}(I)/\|\mathcal{P}(I)\|_{2}\) and \(z=\mathcal{T}(I^{\prime})/\|\mathcal{T}(I^{\prime})\|_{2}\). Given a positive pair from a minibatch of size \(N\), the other \(2(N-1)\) examples are treated as negative examples. The objective then is to minimize the InfoNCE loss as defined in [40]. When learning with negatives, \(\mathcal{P}\) and \(\mathcal{T}\) typically share the same architecture and weights.
Negative-free representation learning.Global visual representation learning without negative examples has recently been achieved using a variety of methods that reach performance similar to contrastive learning methods. By not using negative examples, the objective becomes simpler: just minimizing the cosine feature distance for two different views of the same input. The issue with this type of optimization is that it can lead to representation collapse [26, 12]. There are several ways to avoid representation collapse, such as the methods proposed by Chen et al. [12] (SiamSiam) and Bardes et al. [4] (VicReg). SiamSiam introduces an asymmetry between the predictor encoder \(\mathcal{P}\) and the target encoder \(\mathcal{T}\) by adding one extra multi-layer perceptron to the predictor encoder and stopping the gradients that are backpropagated from the loss of the target network. VicReg instead uses two regularization terms: (i) a term that maintains the variance of each embedding dimension above a threshold, and (ii) a term that decorrelates each pair of variables. The variance term (i) forces the embedding vectors of samples within a batch to be different, while the covariance term (ii) prevents the collapse of the representations.
### Combining MAE with Contrastive Methods.
One trivial way to combine MAE with contrastive learning methods is to use the \([\text{CLS}]\) token of the transformer as a global video feature representation. This representation allows us to use any contrastive learning loss without modifications to the underlying ViT-B/16 transformer encoder.
This combination works as follows: Sample two frames \(I_{i},I_{j}\) from a video and perform patch-level masking. The two frames are processed by the ViT-B/16 model \(f_{\theta}\) producing token representations \(f_{\theta}(I_{i})=\{x_{i}^{\text{CLS}},x_{i}^{1},x_{i}^{2},\cdots,x_{i}^{L}\}\), where \(L\) is the sequence length of the transformer model. This is divided into two disjoint sets. The set \(\{x_{i}^{1},x_{i}^{2},\cdots,x_{i}^{L}\}\) represents the local features of the frame \(i\) and are used for masked image modeling following Eq. 1. Then, the \(x_{i}^{\text{CLS}}\) token can be used as a global representation with a contrastive loss.
We experiment with this approach using the SiamSiam loss [12] and the VicReg loss [4]. We review here these methods and how to combine them with MAEs, but the reader is referred to the original works for a more in-depth explanation of these methods [12, 4].
SiamSiam.A combination of SiamSiam and MAE, which we refer to as _MAE + SiamSiam_, uses the \(x_{i}^{\text{CLS}}\) token, which represents the global video representation, as follows: We pass \(x_{i}^{\text{CLS}}\) to a projector network \(\mathcal{P}\) to obtain \(p_{i}\triangleq\mathcal{P}(x_{i}^{\text{CLS}})/\|\mathcal{P}(x_{i}^{\text{CLS}})\|_{2}\). A similar procedure is followed for frame \(j\), but the global representation is not passed to the projector network \(\mathcal{P}\), in order to obtain \(z_{j}\triangleq x_{j}^{\text{CLS}}/\|x_{j}^{\text{CLS}}\|_{2}\). The SiamSiam objective is then applied as follows:
\[\mathcal{L}_{p_{i},z_{j}}^{\text{SiamSiam}}=\|p_{i}-z_{j}\|_{2}^{2}=2(1-p_{i} \cdot z_{j}). \tag{2}\]
VicReg.A combination of VicReg and MAE, which we refer to as _MAE + VicReg_, uses the \(x_{i}^{\text{CLS}}\) token representing the global video representation as follows: We pass it to a projector network \(\mathcal{P}\) to obtain \(p_{i}\triangleq\mathcal{P}(x_{i}^{\text{CLS}})/\|\mathcal{P}(x_{i}^{\text{CLS}})\|_{2}\), and we repeat this procedure for frame \(j\) using the target network \(\mathcal{T}\) to obtain \(z_{j}\triangleq\mathcal{T}(x_{j}^{\text{CLS}})/\|\mathcal{T}(x_{j}^{\text{CLS}})\|_{2}\). The loss is calculated at the embedding level on \(p_{i}\) and \(z_{j}\). The video frames are processed in batches; let us denote \(P=[p^{1},\cdots,p^{n}]\) and \(Z=[z^{1},\cdots,z^{n}]\), where each \(p^{m}\) and \(z^{m}\) are the global representations of video \(m\) after the projector network and target network, respectively, in a batch of \(n\) vectors of dimension \(d\). Let us denote by \(p_{l}\) the vector composed of
each value at dimension \(l\) in all vectors in \(P\). The variance loss of VicReg is then calculated as follows:
\[v(P)=\frac{1}{d}\sum_{l=1}^{d}\text{max}(0,\gamma-S(p_{l},\epsilon)), \tag{3}\]
where \(S(z,\epsilon)=\sqrt{\text{Var}(z)+\epsilon}\) and \(\gamma\) is a constant target value for the standard deviation, fixed to 1. The covariance loss of VicReg can be calculated as:
\[c(P)=\frac{1}{d}\sum_{l\neq k}[\text{Cov}(P)]_{l,k}^{2}, \tag{4}\]
where \(\text{Cov}(P)=\frac{1}{n-1}\sum_{m}(p^{m}-\bar{p})(p^{m}-\bar{p})^{T}\) and \(\bar{p}\) is the batch mean. The final VicReg loss over the batch is defined as:
\[\mathcal{L}_{p_{i},z_{j}}^{\text{VicReg}}=\frac{\lambda}{n}\|p_{i}-z_{j}\|_{2}^{2}+\mu\left[v(P)+v(Z)\right]+\nu\left[c(P)+c(Z)\right]. \tag{5}\]
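For reference, a minimal PyTorch sketch of this combined objective (Eqs. 3-5) is given below; the default loss weights \(\lambda\), \(\mu\), \(\nu\) and \(\epsilon\) follow the original VicReg paper and are assumptions here, not values reported in this work.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(p, z, lam=25.0, mu=25.0, nu=1.0, gamma=1.0, eps=1e-4):
    """Sketch of Eq. 5 for batches P, Z of shape (n, d)."""
    inv = F.mse_loss(p, z)  # invariance term, averaged over the batch

    def variance(x):                                   # Eq. 3
        std = torch.sqrt(x.var(dim=0) + eps)           # S(., eps) per dimension
        return F.relu(gamma - std).mean()

    def covariance(x):                                 # Eq. 4
        n, d = x.shape
        x = x - x.mean(dim=0)
        cov = (x.T @ x) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))   # zero out the diagonal
        return (off_diag ** 2).sum() / d

    return lam * inv + mu * (variance(p) + variance(z)) + nu * (covariance(p) + covariance(z))
```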
We perform experiments using these two combinations of MAE and contrastive losses as baseline comparisons for our method, but found them to underperform methods that use only a contrastive or only a masked objective. In other words, it is not trivial to adapt contrastive learning methods to be used in combination with masked autoencoders.
### ViC-Mae
Building on masked image modeling and image-level similarity learning, we propose to learn spatio-temporal representations by using masked image modeling at the frame level and image-level similarity at the time level. This means that each video frame is pulled towards a global video representation in the latent space. This can lead to representations that are invariant to object deformations, appearance changes and viewpoint variations. See Figure 2 for a general overview of our model.
Given a video with \(T\) frames \(\{I_{1},I_{2},\cdots,I_{T}\}\), we sample two frames \(I_{i},I_{j}\) as a positive pair input during one step of training. After an input image tokenizer layer, we obtain a set of patch-level token representations \(X_{i}\) and \(X_{j}\) for each frame. Then, we apply token masking by generating different random masks \(M_{i}\) and \(M_{j}\) and applying them to the corresponding input frames to obtain subsets of visible input tokens \(X_{i}^{(v)}\) and \(X_{j}^{(v)}\). These visible token sets are then forwarded to a ViT encoder which computes the representations \(f_{\theta}(X_{i}^{(v)})\) and \(f_{\theta}(X_{j}^{(v)})\), respectively. Finally, for the first frame we compute \(\hat{I}_{i}=d_{\phi}(f_{\theta}(X_{i}^{(v)})+f_{m})\), where we have added a mask token \(f_{m}\) to let the decoder know which patches were masked, allowing it to predict patch-shaped outputs \(\hat{I}_{i}\). These output patches are then trained to minimize the \(\ell_{2}\) loss with the true patches in the input image:
\[\mathcal{L}_{i}^{\text{MASK}}=\|\hat{I}_{i}-I_{i}\|_{2}^{2}. \tag{6}\]
So far we have described only a standard masked autoencoder (MAE). In order to apply contrastive pre-training, we use a separate prediction branch in the network by applying a global pooling operator \(\Omega\) over the output representations \(f_{\theta}(X_{i}^{(v)})\) from the main branch and \(f_{\theta}(X_{j}^{(v)})\) from the siamese copy of the network. This step simplifies the formulation of our method and avoids additional complicated losses or the gradient-stop operator for preventing feature representation collapse, since the pooled features cannot default to the zero vector as they are also being trained to reconstruct patches. We experiment with various aggregation methods, including _mean_ pooling, _max_ pooling, and _generalized mean_ (GeM) pooling [46].
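A minimal sketch of the aggregation operator \(\Omega\) is shown below; the GeM exponent initialization (\(p=3\)) and the clamping of features to be positive before the power operation follow common GeM practice and are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class GeMPool1d(nn.Module):
    """Generalized-mean pooling over the token dimension of (B, L, D) features;
    p -> 1 recovers mean pooling and large p approaches max pooling."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=1).pow(1.0 / self.p)

def aggregate(tokens, kind="mean"):
    """Omega: pool local MAE features (B, L, D) into one global vector (B, D)."""
    if kind == "mean":
        return tokens.mean(dim=1)
    if kind == "max":
        return tokens.max(dim=1).values
    return GeMPool1d()(tokens)
```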
These global representations are then forwarded to a predictor encoder \(\mathcal{P}\) and a target encoder \(\mathcal{T}\) to obtain frame representations:
\[p_{i}\triangleq\mathcal{P}(\Omega(f_{\theta}(X_{i}^{(v)})))/\|\mathcal{P}(\Omega(f_{\theta}(X_{i}^{(v)})))\|_{2},\]
and
\[z_{j}\triangleq\mathcal{T}(\Omega(f_{\theta}(X_{j}^{(v)})))/\|\mathcal{T}(\Omega(f_{\theta}(X_{j}^{(v)})))\|_{2}\]
respectively. The predictor network \(\mathcal{P}\) and target network \(\mathcal{T}\) are symmetrical and we use standard blocks designed for contrastive learning [4, 10, 12]. These blocks consist of a Linear \(\rightarrow\) BatchNorm1d \(\rightarrow\) ReLU block repeated \(2\) times. From these representations, we apply the InfoNCE contrastive learning loss as follows:
\[\mathcal{L}_{p_{i},z_{j}}^{\text{NEG}}=-\log\frac{\text{exp}(p_{i}\cdot z_{j}/ \tau)}{\sum_{k=1}^{2N}\mathbbm{1}\left[p_{i}\neq z_{k}\right]\text{exp}(p_{i} \cdot z_{k}/\tau)}, \tag{7}\]
where the denominator includes a set of negative pairs with representations \(z_{k}\) computed for frames from other videos in the same batch, \(\mathbbm{1}\left[p_{i}\neq z_{k}\right]\in\{0,1\}\) is an indicator function evaluating to \(1\) when \(p_{i}\neq z_{k}\), and \(\tau\) denotes a temperature parameter.
The final loss is \(\mathcal{L}=\mathcal{L}^{\text{MASK}}+\lambda\mathcal{L}^{\text{NEG}}\), where \(\lambda\) is a hyperparameter controlling the relative influence of both losses. In practice, we use a schedule to gradually introduce the contrastive loss and let the model learn good local features at the beginning of training.
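A simplified in-batch version of Eq. 7 together with the weighted total loss can be sketched as follows; the temperature, the linear warm-up fraction for \(\lambda\), and placing positives on the diagonal of the similarity matrix are illustrative choices, while \(\lambda_{\max}=0.025\) is taken from the pooling ablation (Table 5).

```python
import torch
import torch.nn.functional as F

def info_nce(p, z, tau=0.1):
    """Simplified sketch of Eq. 7: p, z are L2-normalized global features (N, D);
    (p[i], z[i]) come from the same video, every other pair acts as a negative."""
    logits = p @ z.t() / tau
    targets = torch.arange(p.size(0), device=p.device)
    return F.cross_entropy(logits, targets)

def vic_mae_loss(recon_loss, p, z, step, total_steps, lam_max=0.025):
    # Hypothetical linear warm-up for lambda: learn local (reconstruction)
    # features first, then phase in the contrastive term.
    warmup = max(1, int(0.1 * total_steps))
    lam = lam_max * min(1.0, step / warmup)
    return recon_loss + lam * info_nce(p, z)
```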
## 4 Experiment Settings
We perform experiments to demonstrate the performance of our method on fine-tuning tasks on the ImageNet benchmark, as well as on other image recognition datasets. For reference, we also evaluate our method on the Kinetics dataset for action recognition to show that our model is able to maintain performance on video benchmarks. Full details are in Appendix A.
**Architecture**. We use the standard Vision Transformer (ViT) architectures [18] and conduct experiments fairly across
benchmarks and methods using the ViT-B/16 configuration. For masked image modeling we use a small decoder as proposed by He et al. [27]. For the contrastive learning part we experiment with two alternatives.
* _MAE + \(\{\)SiamSiam or VicReg\(\}\)_. The predictor consists of the backbone network \(f_{\theta}\) and a projector followed by a predictor as in Bardes et al. [4]. The target encoder consists of the backbone \(f_{\theta}\) and the projector, which are shared between the two encoders.
* _ViC-MAE_. The predictor and the target networks share the same architecture consisting of the backbone network \(f_{\theta}\) and a projector following Bardes et al. [4].
When using the MAE + \(\{\)SiamSiam or VicReg\(\}\) combinations, we use the \([\)CLS\(]\) token from the ViT architecture which is typically used to capture a global feature from the transformer network and is used to fine-tune the network for downstream tasks such as classification.
**Pre-Training.** We adopt the Moments in Time [38] and the Kinetics-400 [30] datasets for self-supervised pre-training. They consist of \(\sim\)1000K and \(\sim\)300K videos of varied length, respectively. We sample frames from these videos using distant sampling, which consists of splitting the video into non-overlapping sections and sampling one frame from each section. Frames are resized to 224px; horizontal flipping and random cropping with a scale range of \([0.5,1]\) are used as the only data augmentations, unless specified otherwise.
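A sketch of the distant sampling strategy described above (one frame drawn uniformly from each of \(n\) non-overlapping sections) might look as follows; the function name and interface are illustrative rather than part of the released code.

```python
import random

def distant_sample(num_frames, n=2, rng=random):
    """Split [0, num_frames) into n equal non-overlapping sections and draw
    one frame index uniformly from each section."""
    bounds = [round(i * num_frames / n) for i in range(n + 1)]
    return [rng.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
            for i in range(n)]

# e.g. distant_sample(300, n=2) -> one index from [0, 150) and one from [150, 300)
```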
**Settings**. ViC-MAE pre-training follows previously used configurations [27, 21]. We use the AdamW optimizer with a batch size of 512. We evaluate the pre-training quality by end-to-end finetuning. When evaluating on video datasets we follow the common practice of multi-view testing: taking \(K\) temporal clips (\(K=7\) on Kinetics) and for each clip taking 3 spatial views to cover the spatial axis (this is denoted as \(K\times 3\)). The final prediction is the average of all views.
## 5 Results and Ablations
We first perform experiments to analyze the different elements of the ViC-MAE framework. All the experiments are under the _learning with negative pairs_ setting using mean pooling over the ViT-B/16 features unless specified otherwise. Linear evaluation and end-to-end finetuning runs are done over 100 epochs.
### Main result
Our main result evaluates _video-to-image_ transfer learning, and we use the ImageNet-1K benchmark as our testbed. We present our results in Table 1, along with downstream video accuracy on Kinetics-400. We compare ourselves fairly to previously reported results in the literature that also use the ViT-B/16 backbone. The previously reported state of the art comes from the work of Piergiovanni et al. [44], which uses a novel Tube sampling methodology that allows training on video and images at the same time and obtains 81.40% top-1 accuracy on end-to-end finetuning when transferring from the Kinetics-600 dataset. Our method surpasses this result by an absolute improvement of 1.58% points of accuracy when transferring from the Moments in Time dataset. However, TubeViT is still the best model under a ViT-B/16 architecture on the Kinetics-400 benchmark. Another key result from this table is that our method, even when trained on Kinetics-400, still performs better than other methods in _video-to-image_ transfer. Other previous results on the same problem that use different backbones include the works of Gordon et al. [24], Xu & Wang [58], and Wu & Wang [56]. However, these all use a ResNet-50 backbone and obtain 54.5%, 33.8%, and 55.6% top-1 accuracies on linear evaluation. Since those works do not use the same setting, we chose not to include them alongside the others.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{**ImageNet-1K**} & \multicolumn{2}{c}{**Kinetics-400 \(\uparrow\)**} \\ \cline{2-6}
**Method** & **Pre-train.** & Top-1 & Top-5 & Top-1 & Top-5 \\ \hline Scratch & - & 71.39 & 88.45 & - & - \\ TubeViT [44] & K600 & 81.40 & - & **88.6** & **97.6** \\ MAE [21, 27]\({}^{*}\) & K400 & 81.34 & 95.4 & 81.3 & 94.9 \\ ViC-MAE (ours) & K400 & 82.80 & 96.6 & 81.5 & 95.1 \\ ViC-MAE (ours) & MiT & **82.98** & **96.8** & 81.0 & 94.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Transfer learning results from video pre-training to the ImageNet dataset**. The pre-training data is a video dataset (MiT, K600 or K400). All self-supervised methods are evaluated end-to-end with supervised finetuning on IN1K. Best results are in bold. \(\dagger\)Kinetics-400 results are from models trained on any of the aforementioned video datasets and evaluated on Kinetics-400. (*) The transfer results from Kinetics-400 to Imagenet-1K for MAE were obtained by replicating the results based on correspondence with the original authors.
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**ImageNet-1K**} \\ \cline{2-3} & Top-1 & Top-5 \\ \hline MAE [27] + SiamSiam [12] & 58.58 & 82.88 \\ MAE [27] + VicReg [4] & 63.86 & 84.07 \\ ViC-MAE (ours) & **67.66** & **86.22** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Combining MAE and contrastive methods is not trivial.** Linear evaluation on the ImageNet-1K dataset using different types of contrastive learning. We use the \([\)CLS\(]\) token as the global video representation and apply common contrastive methods, but these do not result in the best performance, which is obtained with our method.
Combining MAE with contrastive learning is non-trivial, and we test this by comparing our model with MAE models that use the alternative contrastive learning objectives SiamSiam [12] and VicReg [4]. We present our results using linear evaluation in Table 2. We use the \([\text{CLS}]\) token as the global video representation for contrastive pre-training. We can see in this table that competing methods underperform compared to our model, which pools the local features, by an absolute margin of \(>3\%\) over the _MAE + VicReg_ model. See Appendix B for an evaluation of our method against baselines that combine MAE with contrastive learning on the problem of semi-supervised learning on ImageNet.
### Transfer learning performance.
We evaluate the transfer learning performance of our model across a diverse array of 12 downstream image classification tasks [7, 32, 5, 57, 31, 36, 14, 41, 20, 39]. Table 3 shows the results of four models based on a ViT-B/16 backbone. We perform linear evaluation (see the appendix for details on the metrics used to evaluate each of these models). We train two model variants on each of two video datasets. The first is a baseline MAE model pre-trained on frames randomly sampled from the videos of the Moments in Time dataset and of the Kinetics-400 dataset. The second is our full ViC-MAE model pre-trained on each of the same two datasets. Our model significantly outperforms the other baselines on 9 out of 12 datasets, whereas the MAE trained on Kinetics is superior on only 3 (i.e., Cars, Aircraft and Pets).
### Ablations
We investigate in this section the effect of various frame-level image transformations used to augment the data, the effect of our choice of frame separation, and the choice of pooling operator.
**Augmentations.** We perform an ablation study to check whether the use of strong color augmentations on the target encoder is necessary, as it is crucial in standard self-supervised methods for images. The results are presented in Table 6. Using only color augmentations, meaning that the sampled frame in the target encoder is color-augmented but not spatially augmented, the performance is reduced by \(>2\%\) on linear evaluation on the Imagenet dataset. Using a combination of strong color augmentations and spatial augmentations, although it increases performance, is not superior to using only strong spatial augmentations. This is in stark contrast with previous methods that require strong color augmentations to be able to learn using contrastive learning. In the following experiments, we only use strong spatial augmentations and discard the use of color augmentations entirely.
**Frame separation**. This is an essential design component of our framework, and in this experiment we aim to see the effect of frame separation on the performance of our method. We follow the two methods of sampling frames from Xu et al. [58]. Results are shown in Table 4. The first method is _continuous sampling_, which consists of selecting a starting index \(i\) and then sampling a frame in the interval \((i,i+\delta]\), where \(\delta\) is the frame separation. A frame separation of \(0\) indicates that the predictor and the target networks receive the same frame. The second method is _distant sampling_, where the video is split into \(n\) intervals of the same size, where \(n\) is the number of frames to use for contrastive learning, and then one frame is selected randomly from each interval. In our experiment, we observe that increasing the frame separation when using _continuous sampling_ increases the performance of the model.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Frame** & \multicolumn{2}{c}{**ImageNet-1K**} \\ \cline{2-3}
**separation** & Top-1 & Top-5 \\ \hline
0 & 63.25 & 83.34 \\
2 & 64.47 & 84.31 \\
4 & 65.25 & 84.64 \\
8 & 65.89 & 84.91 \\ \hline D & 67.66 & 86.22 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation on frame separation**. Linear evaluation on the ImageNet-1K dataset using different frame separations. 0 means the same frame is sampled twice. D stands for distant sampling; the rest use continuous sampling.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline
**Model** & Pre-train. & Food & CIFAR10 & CIFAR100 & Birdsnap & SUN397 & Cars & Aircraft & VOC2007 & DTD & Pets & Caltech101 & Flowers \\ \hline MAE [27]\(\ddagger\) & K400 & 74.54 & 94.86 & 79.49 & 46.51 & 64.33 & **60.10** & **63.24** & 83.07 & 78.01 & **89.49** & 93.28 & 93.38 \\ MAE [27]\(\ddagger\) & MiT & 76.23 & 94.47 & 79.50 & 47.98 & 65.32 & 59.48 & 60.67 & 83.46 & 78.21 & 88.42 & 93.08 & 94.17 \\ ViC-MAE (ours) & K400 & 76.56 & 93.64 & 78.80 & 47.56 & 64.75 & 58.96 & 60.14 & 83.74 & 78.53 & 87.65 & 92.27 & 93.35 \\ ViC-MAE (ours) & MiT & **77.39** & **94.92** & **79.88** & **48.21** & **65.64** & 59.76 & 60.96 & **84.77** & **79.27** & 88.85 & **93.53** & **94.62** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison of transfer learning performance of our approach** with supervised baselines across 12 natural image classification datasets. All results correspond to linear evaluation. Best results are shown in bold. \(\ddagger\)MAE trained on MiT and K400 randomly sample a frame from the video to compute a reconstruction loss; these models are trained and evaluated by us.
We observe the best performance using _distant sampling_ with \(n=2\) (labelled \(D\) in Table 4). We posit that further increasing frame separation offers potentially stronger augmentations. In the following experiments, we only use strong spatial augmentations combined with distant frame sampling.
**Pooling type**. Since this is an important step in our proposed method, we test which operator \(\Omega\) used to aggregate local features performs best at producing global features. We report our results in Table 5. We try common types of pooling (_mean_, _max_) as well as _generalized mean_ (GeM) pooling. We found _mean_ to be the most effective in creating a global representation for the video, and we use it for all other experiments.
### Limitations
Having shown that ViC-MAE is able to learn useful representations from video data that transfer well to image classification and surpass previous models in the same setup, we contextualize our results by discussing state-of-the-art results on these problems and the limitations of our method.
Comparing with models of a similar computational budget, our model is able to perform on par with previous results on the Kinetics-400 dataset. However, compared to TubeViT [44], our model still underperforms by \(7.1\%\) points in absolute accuracy. A model that works well across both images and video might still need pre-training in both domains. Compared to MaskFeat [55], our model underperforms by \(0.7\%\) points in absolute accuracy (82.2% vs. 81.5%). Our model is nevertheless able to surpass the MViTv1-B model [19], the TimeSformer model [6] and the ViViT-B model [2] by \(0.3\%\), \(0.8\%\), and \(1.5\%\) points in absolute accuracy, respectively (81.2%, 80.7%, and 80% vs. 81.5%). Compared to models that only perform contrastive learning on videos, our model underperforms DINO [8] by \(1\%\) point in absolute accuracy (82.5% vs. 81.5%). These results contextualize ViC-MAE against high-performing models that use either stronger backbones or additional supervision. We posit that a model trained on a combination of video and image data is likely to perform best across domains.
Finally, we compare our ViC-MAE to a number of state-of-the-art in-domain Imagenet-pretrained models trained with a similar computational budget. We found that most models trained on video, including our model, underperform most of the models in this category. The domain gap between any video dataset and the images of Imagenet-1k still seems not to have been closed. Compared to models that use masked image modeling, the original MAE [27] and the MaskFeat model [55], our model underperforms by \(0.7\%\) points in absolute accuracy (83.6% & 83.6% vs. 82.98%, respectively). Compared to models that use contrastive learning, DINO [8], MoCov2 [11], and BeiT [3], our model underperforms by \(1.1\%\), \(1\%\), and \(0.3\%\) points in absolute accuracy (84%, 83.9%, and 83.2% vs. 82.9%, respectively). These results show that the gap from models pre-trained purely on video still exists, but we believe ViC-MAE is a step forward in closing that gap.
## 6 Conclusion
In this work, we have introduced ViC-MAE, a method that uses unlabeled videos to learn useful representations for image recognition tasks. We achieve this by randomly sampling frames from a video and using contrastive learning to pull together frames from the same video and push apart frames from different videos; likewise, we use masked image modeling on each frame to learn good local features of the scene presented in each frame. The main contribution of our work is showing that it is possible to combine masked image modeling and contrastive learning by pooling the local representations of the MAE prediction heads into a global representation that is used for contrastive learning. The design choices we have made when designing ViC-MAE show that our work is easily extensible in various ways. For example, improvements in contrastive learning for images can be directly adapted into our framework. Likewise, pixel reconstruction can be replaced by features that are important for video representation, such as object correspondence or optical flow.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Color** & **Spatial** & \multicolumn{2}{c}{**ImageNet-1K**} \\ \cline{3-4}
**Augm.** & **Augm.** & Top-1 & Top-5 \\ \hline \(\checkmark\) & & 65.40 & 84.03 \\ & \(\checkmark\) & 67.66 & 86.22 \\ \(\checkmark\) & \(\checkmark\) & 66.03 & 85.01 \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Ablation on different augmentations**. Linear evaluation on the ImageNet-1K dataset using different augmentations. Color augs include random color jitter, grayscale conversion and gaussian blur. Spatial augs are random resized crop and horizontal flip.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Pooling** & \multicolumn{2}{c}{**ImageNet-1K**} \\ \cline{3-4} & **type** & Top-1 & Top-5 \\ \hline ViC-MAE (Ours) & GeM & 66.92 & 85.50 \\ ViC-MAE (Ours) & max & 67.01 & 85.59 \\ ViC-MAE (Ours) & mean & 67.66 & 86.22 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Ablation on pooling type**. Linear evaluation on the ImageNet-1K dataset using different types of pooling. The hyperparameter \(\lambda\) is set to \(0.025\) and introduced using a schedule.
## Acknowledgements
The authors would like to thank Google Cloud and the CURE program from Google Research for providing funding for this research effort. We are also thankful for support from the Department of Computer Science at Rice University.
|
2306.06872 | History Semantic Graph Enhanced Conversational KBQA with Temporal
Information Modeling | Context information modeling is an important task in conversational KBQA.
However, existing methods usually assume the independence of utterances and
model them in isolation. In this paper, we propose a History Semantic Graph
Enhanced KBQA model (HSGE) that is able to effectively model long-range
semantic dependencies in conversation history while maintaining low
computational cost. The framework incorporates a context-aware encoder, which
employs a dynamic memory decay mechanism and models context at different levels
of granularity. We evaluate HSGE on a widely used benchmark dataset for complex
sequential question answering. Experimental results demonstrate that it
outperforms existing baselines averaged on all question types. | Hao Sun, Yang Li, Liwei Deng, Bowen Li, Binyuan Hui, Binhua Li, Yunshi Lan, Yan Zhang, Yongbin Li | 2023-06-12T05:10:58Z | http://arxiv.org/abs/2306.06872v1 | # History Semantic Graph Enhanced Conversational KBQA
###### Abstract
Context information modeling is an important task in conversational KBQA. However, existing methods usually assume the independence of utterances and model them in isolation. In this paper, we propose a **H**istory **S**emantic **G**raph **E**nhanced KBQA model (**HSGE**) that is able to effectively model long-range semantic dependencies in conversation history while maintaining low computational cost. The framework incorporates a context-aware encoder, which employs a dynamic memory decay mechanism and models context at different levels of granularity. We evaluate HSGE on a widely used benchmark dataset for complex sequential question answering. Experimental results demonstrate that it outperforms existing baselines averaged on all question types.
## 1 Introduction
In recent years, with the development of large-scale knowledge base (KB) like DBPedia (Auer et al., 2007) and Freebase (Bollacker et al., 2008), Knowledge Base Question Answering (KBQA) (Wang et al., 2020; Ye et al., 2021; Yan et al., 2021; Yadati et al., 2021; Das et al., 2021; Wang et al., 2022) has become a popular research topic, which aims to convert a natural language question to a query over a knowledge graph to retrieve the correct answer. With the increasing popularity of AI-driven assistants (e.g., Siri, Alexa and Cortana), research focus has shifted towards conversational KBQA (Shen et al., 2019; Kacupaj et al., 2021; Marion et al., 2021) that involves multi-turn dialogues.
A common solution to the task of conversational KBQA is to map an utterance to a logical form using a semantic parsing approach (Shen et al., 2019; Guo et al., 2018). The state-of-the-art semantic parsing approach (Kacupaj et al., 2021) breaks down the process into two stages: a logical form is first generated from low-level features, and then the missing details are filled in by taking both the question and templates into consideration. Other approaches (Dong and Lapata, 2016; Liang et al., 2016; Guo et al., 2018) mainly focus on first detecting entities in the question and then mapping the question to a logical form.
Despite the inspiring results of the semantic parsing methods mentioned above, most of them fail to model long-range semantic dependencies in conversation history. Specifically, they usually directly incorporate the immediate two turns of conversation and ignore the conversation history two turns away. To demonstrate the importance of long-range conversation history, Figure 1 shows an example illustrating the task of conversational KBQA. After the question "who is the president of the United States", the user consecutively proposes three questions that involve Coreference and Ellipsis phenomena (Androutsopoulos et al., 1995). Only when the system understands the complete conversation history can it successfully predict the answer. Though existing contextual semantic parsing models (Iyyer et al., 2017; Suhr et al., 2018; Yu et al., 2019) can be used to model conversation history, a survey (Liu et al., 2020) points out that their performance is not as good as simply concatenating the conversation history, which is the most common conversation history modeling technique.
Figure 1: An example illustrating the task of conversational KBQA.
To tackle the issues mentioned above, we propose a **H**istory **S**emantic **G**raph **E**nhanced Conversational KBQA model (HSGE) for conversation history modeling. Specifically, we convert the logical forms of previous turns into history semantic graphs, whose nodes are the entities mentioned in the conversation history and whose edges are the relations between them. By applying a graph neural network to the history semantic graph, the model can capture the complex interactions between the entities and improve its understanding of the conversation history. From a practical perspective, using the history semantic graph to represent the conversation history is also more computationally efficient than directly concatenating the conversation history. Besides, we design a context-aware encoder that addresses the user's conversation focus shift phenomenon Lan and Jiang (2021) by introducing temporal embedding, and allows the model to incorporate information from the history semantic graph at both the token level and the utterance level.
To summarize, our major contributions are:
* We propose to model conversation history using history semantic graph, which is effective and efficient. As far as we know, this is the first attempt to use graph structure to model conversation history in conversational KBQA.
* We design a context-aware encoder that utilizes temporal embedding to address the shift of user's conversation focus and aggregate context information at different granularities.
* Extensive experiments on the widely used CSQA dataset demonstrate that HSGE achieves the state-of-the-art performance averaged on all question types.
## 2 Related Work
The works most related to ours are those investigating semantic parsing-based approaches in conversational KBQA. Given a natural language question, traditional semantic-parsing methods Zettlemoyer and Collins (2009); Artzi and Zettlemoyer (2013) usually learn a lexicon-based parser and a scoring function to produce a logical form. For instance, Zettlemoyer and Collins (2009) propose to learn a context-independent CCG parser, while Long et al. (2016) utilize a shift-reduce parser for logical form construction.
Recently, neural semantic parsing approaches are gaining attention with the development of deep learning Qu et al. (2019); Chen et al. (2019). For example, Liang et al. (2016) introduces a neural symbolic machine (NSM) extended with a key-value memory network. Guo et al. (2018) proposes D2A, a neural symbolic model with memory augmentation. S2A+MAML Guo et al. (2019) extends D2A with a meta-learning strategy to account for context. Shen et al. (2019) proposes the first multi-task learning framework MaSP that simultaneously learns type-aware entity detection and pointer-equipped logical form generation. Plepi et al. (2021) introduces CARTON which utilizes pointer networks to specify the KG items. Kacupaj et al. (2021) proposes a graph attention network to exploit correlations between entity types and predicates. Marion et al. (2021) proposes to use KG contextual data for semantic augmentation.
While these methods have demonstrated promising results, they typically only consider the immediate two turns of conversations as input while neglecting the context two turns away. Though Guo et al. (2018) introduces a Dialog Memory to maintain previously observed entities and predicates, it fails to capture their high-order interaction information. By introducing history semantic graph, our model HSGE can not only memorize previously appeared entities and predicates but also model their interaction features using GNN to gain a deeper understanding of conversation history.
## 3 Method
The structure of our proposed HSGE model is illustrated in Figure 2. The model consists of six components: Word Embedding, TransformerConv Layer, Context-aware Encoder, Entity Recognition Module, Concept-aware Attention Module and Grammar-Guided Decoder.
### Grammar
We predefined a grammar with various actions in Table 4, which can result in different logical forms that can be executed on the KG. Analogous to Kacupaj et al. (2021), each action in this work consists of three components: a semantic category, a function symbol and a list of arguments with specified semantic categories. Amongst them, semantic categories can be classified into two groups depending on the ways of instantiation. One is referred to as entry semantic category (i.e., \(\{e,p,tp,num\}\) for entities, predicates, entity types and numbers) whose instantiations
are constants parsed from a question. Another is referred to as intermediate semantic category (i.e., \(\{set,dict,boolean,number\}\)) whose instantiation is the output of an action execution.
### Input and Word Embedding
To incorporate the recent dialog history from previous interactions, the model input for each turn contains the following utterances: the previous question, the previous answer and the current question. Utterances are separated by a [SEP] token and a context token [CLS] is appended at the beginning of the input as the semantic representation of the entire input.
Specifically, given an input \(u\), we use WordPiece tokenization Wu et al. (2016) to tokenize the conversation context into token sequence \(\{w_{1},...,w_{n}\}\), and then we use the pre-trained language model BERT Devlin et al. (2018) to embed each word into a vector representation space of dimension \(d\). Our word embedding module provides us with an embedding sequence \(\{x_{1},...,x_{n}\}\), where \(x_{i}\in\mathbb{R}^{d}\) is given by \(x_{i}=\texttt{BERT}(w_{i})\).
### History Semantic Graph
To effectively and efficiently model conversation history that contains multiple turns, we design the **History Semantic Graph**, inspired by recent studies on dynamically evolving structures Hui et al. (2021). As the conversation proceeds, more and more entities and predicates are involved, which makes it difficult for the model to capture the complex interactions among them and reason over them. Thus, we store this information in a graph structure and empower the model with strong reasoning ability by applying a GNN to the graph. Considering that we are trying to model the interactions between entities and predicates, which are naturally included in logical forms, one good solution is to directly convert the logical forms into KG triplets, as shown in Figure 3. By doing so, we guarantee the quality of the graph, because the entities and predicates are directly related to the answers of previous questions, while also injecting history semantic information into the graph.
Graph Construction.Specifically, we define the history semantic graph to be \(\mathcal{G}=<\mathcal{V},\mathcal{E}>\), where \(\mathcal{V}=set(e)\cup set(tp)\), \(\mathcal{E}=set(p)\), and \(e,tp,p\) denote entity, entity type and predicate, respectively. We define the following rules to transform the actions defined in Table 4 to the KG triplets:
* For each element \(e_{i}\) in the operator result of \(set\to find(e,p)\), we directly add <\(e_{i},p,e\)> into the graph.
* For each element \(e_{i}\) in the operator result of \(set\to find\_reverse(e,p)\), we directly add <\(e,p,e_{i}\)> into the graph.
* For each entity \(e_{i}\in\mathcal{V}\), we also add the triple <\(e_{i},IsA,tp_{i}\)> to the graph, where \(tp_{i}\) is the entity type of entity \(e_{i}\) extracted from the Wikidata knowledge graph.
* For the \(find\) and \(find\_reverse\) actions that are followed by a \(filter\_type\) or \(filter\_multi\_types\) action for entity filtering, we add the elements of the filtering result to the graph, which prevents introducing unrelated entities into the graph.

Figure 3: Illustration example for history semantic graph construction.

Figure 2: Model architecture of HSGE, which includes Word Embedding, TransformerConv Layer, Context-aware Encoder, Entity Recognition Module, Concept-aware Attention Module and Grammar-Guided Decoder.
It is worth mentioning that we choose to transform these actions because they directly model the relationship between entities and predicates. Besides, as the conversation proceeds and new logical forms are generated, more KG triplets will be added to the graph and the graph will grow larger. However, the number of nodes involved in the graph is still relatively small and is highly controllable by only keeping several recent KG triplets. Considering the \(O(N^{2})\) computational complexity of Transformer encoders Vaswani et al. (2017), it would be more computationally efficient to model conversation history using history semantic graph than directly concatenating previous utterances.
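As a rough illustration of the construction rules above, the graph can be accumulated turn by turn as sketched below; the class and method names are hypothetical and not part of any released implementation.

```python
from dataclasses import dataclass, field

@dataclass
class HistorySemanticGraph:
    """Accumulates KG triples <head, relation, tail> from executed actions."""
    triples: set = field(default_factory=set)

    def add_find(self, e, p, result, keep=None):
        # Rules 1 and 4: for set -> find(e, p), add <e_i, p, e> for each e_i in the
        # result; if a type filter followed, only the filtered entities are kept.
        for e_i in result if keep is None else (set(result) & set(keep)):
            self.triples.add((e_i, p, e))

    def add_find_reverse(self, e, p, result, keep=None):
        # Rule 2: for set -> find_reverse(e, p), add <e, p, e_i>.
        for e_i in result if keep is None else (set(result) & set(keep)):
            self.triples.add((e, p, e_i))

    def add_types(self, entity_types):
        # Rule 3: link every entity to its entity type with an IsA edge.
        for e_i, tp_i in entity_types.items():
            self.triples.add((e_i, "IsA", tp_i))
```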
Graph Reasoning.Given the constructed history semantic graph \(\mathcal{G}\), we first initialize the embeddings of nodes and relations using BERT, i.e., \(\texttt{BERT}(e_{i}/p_{i})\), where \(e_{i}\) and \(p_{i}\) represent the text of the node and relation, respectively. Then we follow TransformerConv Shi et al. (2020) and update the node embeddings as follows:
\[H=\text{TransformerConv}(E,\mathcal{G}) \tag{1}\]
where \(E\in\mathbb{R}^{(|\mathcal{V}|+|\mathcal{E}|)\times d}\) denotes the embeddings of nodes and relations.
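Using PyTorch Geometric, the update in Eq. 1 can be sketched as a single TransformerConv pass with relation embeddings as edge features; a single layer with one attention head, the BERT hidden size of 768, and the toy graph sizes are assumptions made for illustration.

```python
import torch
from torch_geometric.nn import TransformerConv

d = 768                                       # BERT hidden size (assumed)
num_nodes, num_edges = 6, 8                   # toy graph for illustration
x = torch.randn(num_nodes, d)                 # node embeddings (entities / types)
edge_index = torch.randint(0, num_nodes, (2, num_edges))
edge_attr = torch.randn(num_edges, d)         # relation (predicate) embeddings

conv = TransformerConv(in_channels=d, out_channels=d, heads=1, edge_dim=d)
h = conv(x, edge_index, edge_attr)            # updated node states H, shape (num_nodes, d)
```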
### Context-aware Encoder
Temporal Information Modeling.As the conversation continues and further inquiries are raised, individuals tend to focus more on recent entities, which is also known as the Focal Entity Transition phenomenon Lan and Jiang (2021). To incorporate this insight into the model, we introduce a temporal embedding that enables the model to distinguish newly introduced entities. Specifically, given the current turn index \(t\) and the previous turn index \(i\) in which an entity appeared, we define two distance calculation methods:
* **Absolute Distance**: The turn index of the previous turn in which the entities were mentioned, i.e., \(D=i\).
* **Relative Distance**: The difference in turn indices between the current turn and the previous turn in which the entities were mentioned, i.e., \(D=t-i\).
For each method, we consider two approaches for representing the distance: unlearnable positional embedding and learnable positional embedding. For unlearnable positional encoding, the computation is defined using the following sinusoid function Vaswani et al. (2017):
\[\left\{\begin{array}{l}e_{t}(2i)=sin(D/10000^{2i/d}),\\ e_{t}(2i+1)=cos(D/10000^{2i/d}),\end{array}\right. \tag{2}\]
where \(i\) is the dimension and \(D\) is the absolute distance or relative distance.
For learnable positional encoding, the positional encoding is defined as a learnable matrix \(E_{t}\in\mathbb{R}^{M\times d}\), where \(M\) is the predefined maximum number of turns.
Then we directly add the temporal embedding to obtain temporal-aware node embeddings.
\[\bar{h}_{i}=h_{i}+e_{t}, \tag{3}\]
where \(h_{i}\) is the embedding of node \(e_{i}\).
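The fixed (unlearnable) variant of the temporal embedding in Eqs. 2-3 can be sketched as follows, assuming an even embedding dimension; the learnable variant would simply index a trainable matrix of shape \(M\times d\) by the distance.

```python
import torch

def sinusoid_temporal_embedding(distance, d):
    """Sketch of Eq. 2: fixed temporal encoding for a turn distance D (even d assumed)."""
    e = torch.zeros(d)
    idx = torch.arange(0, d, 2, dtype=torch.float)
    div = torch.pow(10000.0, idx / d)
    e[0::2] = torch.sin(distance / div)
    e[1::2] = torch.cos(distance / div)
    return e

# Eq. 3: the temporal code is simply added to the node embedding, h_bar = h + e_t,
# with the distance being e.g. the relative distance D = t - i.
```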
Semantic Information Aggregation.As the conversation progresses, the user's intentions may change frequently, which leads to the appearance of intention-unrelated entities in the history semantic graph. To address this issue, we introduce token-level and utterance-level aggregation mechanisms that allow the model to dynamically select the most relevant entities. These mechanisms also enable the model to capture contextual information at different levels of granularity.
* **Token-level Aggregation**: For each token \(x_{i}\), we propose to attend all the nodes in the history semantic graph to achieve fine-grained modeling at token-level: \[\begin{split} x_{i}^{t}&=\text{MHA}(x_{i},\bar{H},\bar{H}),\\ \bar{x}_{i}&=x_{i}^{t}+x_{i},\end{split}\] (4) where MHA denotes the multi-head attention mechanism and \(\bar{H}\) denotes the embeddings of all nodes in the history semantic graph.
* **Utterance-level Aggregation**: Sometimes the token itself may not contain semantic information, e.g., stop words. We further propose to incorporate history information at the
utterance-level for these tokens:
\[\begin{split} x_{i}^{u}&=\text{MHA}(x_{[\text{CLS}]}, \bar{H},\bar{H}),\\ \bar{x}_{i}&=x_{i}^{u}+x_{i},\end{split} \tag{5}\]
where \(x_{[\text{CLS}]}\) denotes the representation of the [CLS] token.
Then, history-semantic-aware token embeddings are forwarded as input to the encoder of Transformer [20] for deep interaction:
\[h^{(enc)}=\text{Encoder}(\bar{X};\theta^{(enc)}), \tag{6}\]
where \(\theta^{(enc)}\) are encoder trainable parameters.
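A sketch of the two aggregation mechanisms (Eqs. 4-5) using a standard multi-head attention layer is given below; sharing one attention module for both granularities, using eight heads, and treating the first token as [CLS] are assumptions made for illustration.

```python
import torch
import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)

def token_level(x, h_nodes):
    """Eq. 4: every token (B, L, D) attends over all graph nodes (B, M, D)."""
    attended, _ = mha(query=x, key=h_nodes, value=h_nodes)
    return x + attended                        # residual connection

def utterance_level(x, h_nodes):
    """Eq. 5: only the [CLS] summary attends to the graph; its context is
    broadcast and added to every token."""
    cls = x[:, :1, :]                          # [CLS] assumed to be the first token
    attended, _ = mha(query=cls, key=h_nodes, value=h_nodes)
    return x + attended                        # (B, 1, D) broadcasts over all tokens
```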
### Grammar-Guided Decoder
After encoding all the semantic information into the hidden state \(h^{(enc)}\), we utilize stacked masked attention mechanism [20] to generate sequence-formatted logical forms. Specifically, in each decoding step, our model predicts a token from a small decoding vocabulary \(V^{(dec)}=\{start,end,e,p,tp,...,find\}\), where all the actions from the Table 4 are included. On top of the decoder, we employ a linear layer alongside a softmax to calculate each token's probability distribution in the vocabulary. The detailed computation is defined as follows:
\[\begin{split} h^{(dec)}&=\text{Decoder}(h^{(enc)}; \theta^{(dec)}),\\ p_{t}^{(dec)}&=\text{Softmax}(W^{(dec)}h_{t}^{( dec)}),\end{split} \tag{7}\]
where \(h_{t}^{(dec)}\) is the hidden state at time step \(t\), \(\theta^{(dec)},W^{(dec)}\) are decoder trainable parameters, \(p_{t}^{(dec)}\in\mathbb{R}^{|V^{(dec)}|}\) is the probability distribution over the decoding vocabulary at time step \(t\).
### Entity Recognition Module
The entity recognition module aims to fill the entity slots in the predicted logical forms; it consists of an entity detection module and an entity linking module.
Entity Detection.The goal of entity detection is to identify mentions of entities in the input. Previous studies [23] have shown that multiple entities of different types in a large KB may share the same entity text, which is a common phenomenon called Named Entity Ambiguity. To address this issue and inspired by [17], we adopt a type-aware entity detection approach using BIO sequence tagging. Specifically, the entity detection vocabulary is defined as \(V^{(ed)}=\{O,\{B,I\}\times\{TP_{i}\}_{i=1}^{N^{(tp)}}\}\), where \(TP_{i}\) denotes the \(i\)-th entity type label, \(N^{(tp)}\) stands for the number of distinct entity types in the knowledge graph and \(|V^{(ed)}|=2\times N^{(tp)}+1\). We leverage LSTM [1] to perform the sequence tagging task:
\[\begin{split} h^{(ed)}&=\text{LeakyReLU}(\text{ LSTM}(h^{(enc)};\theta^{(l)})),\\ p_{t}^{(ed)}&=\text{Softmax}(W^{(ed)}h_{t}^{(ed)}),\end{split} \tag{8}\]
where \(h^{(enc)}\) is the encoder hidden state, \(\theta^{(l)}\) are the LSTM trainable parameters, \(h_{t}^{(ed)}\) is the LSTM hidden state at time step \(t\), and \(p_{t}^{(ed)}\) is the probability distribution over \(V^{(ed)}\) at time step \(t\).
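A compact sketch of the type-aware BIO tagger in Eq. (8) follows; the hidden size and the number of entity types (3,054 for the Wikidata KG used here) are illustrative.

```python
import torch.nn as nn

class EntityDetector(nn.Module):
    """Sketch of Eq. (8): LSTM-based BIO tagging with type-aware labels."""

    def __init__(self, d_model=768, n_types=3054):
        super().__init__()
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.act = nn.LeakyReLU()
        self.classifier = nn.Linear(d_model, 2 * n_types + 1)  # {B,I} x types + O

    def forward(self, h_enc):
        h_ed, _ = self.lstm(h_enc)
        return self.classifier(self.act(h_ed)).softmax(dim=-1)  # p_t^(ed)
```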
Entity Linking.Once we detect the entities in the input utterance, we perform entity linking to link the entities to the entity slots in the predicted logical form. Specifically, we define the entity linking vocabulary as \(V^{(el)}=\{0,1,...,M\}\) where \(0\) means that the entity does not link to any entity slot in the predicted logical form and \(M\) denotes the total number of indices based on the maximum number of entities from all logical forms. The probability distribution is defined as follows:
\[\begin{split} h^{(el)}&=\text{LeakyReLU}(W^{(el_{1 })}[h^{(enc)};h^{(ed)}]),\\ p_{t}^{(el)}&=\text{Softmax}(W^{(el_{2})}h_{t}^{(el) }),\end{split} \tag{9}\]
where \(W^{(el_{1})},W^{(el_{2})}\) are trainable parameters, \(h_{t}^{(el)}\) is the hidden state at time step \(t\), and \(p_{t}^{(el)}\) is the probability distribution over the tag indices \(V^{(el)}\) at time step \(t\).
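Eq. (9) admits an equally small sketch; the maximum number of entity slots is an assumption.

```python
import torch
import torch.nn as nn

class EntityLinker(nn.Module):
    """Sketch of Eq. (9): map detected entities to entity slots of the logical form."""

    def __init__(self, d_model=768, max_slots=5):
        super().__init__()
        self.proj = nn.Linear(2 * d_model, d_model)          # W^(el_1)
        self.act = nn.LeakyReLU()
        self.classifier = nn.Linear(d_model, max_slots + 1)  # W^(el_2); index 0 = no slot

    def forward(self, h_enc, h_ed):
        h_el = self.act(self.proj(torch.cat([h_enc, h_ed], dim=-1)))
        return self.classifier(h_el).softmax(dim=-1)          # p_t^(el)
```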
### Concept-aware Attention Module
In the Concept-aware Attention Module, we first model the complex interaction between entity types and predicates, then we predict the entity types and predicates for the logical form.
We first develop an entity-to-concept converter to replace the entities in each factual triple of the Wikidata KG with their corresponding concepts (i.e., entity types). Taking the instance in Figure 3 as an example, the factual triple (Joe Biden, IsPresidentOf, USA) can be transformed into two concept-level tuples, (Person, IsPresidentOf) and (IsPresidentOf, Country), in the concept graph. Then, we initialize node embeddings using their texts with BERT and apply Graph Attention Networks (GAT) [17] to project the KG information into the embedding space.
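The entity-to-concept conversion can be illustrated in a few lines; the toy type lookup below is hypothetical.

```python
def to_concept_tuples(triple, entity_type):
    """Turn a factual triple into the two concept-level tuples used in the concept graph."""
    head, predicate, tail = triple
    return [(entity_type[head], predicate), (predicate, entity_type[tail])]

# Toy example mirroring the one in the text.
entity_type = {"Joe Biden": "Person", "USA": "Country"}
print(to_concept_tuples(("Joe Biden", "IsPresidentOf", "USA"), entity_type))
# [('Person', 'IsPresidentOf'), ('IsPresidentOf', 'Country')]
```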
Finally, we model the task of predicting the correct entity type or predicate of the logical form as a classification task. For each time step of decoding, we directly calculate the probability distribution at time step \(t\) as:
\[\begin{split} h_{t}^{(c)}&=\text{LeakyReLU}(W^{(c)}[h _{\text{[CLS]}}^{(enc)};h_{t}^{(dec)}]),\\ p_{t}^{(c)}&=\text{Softmax}(h^{(g)T}h_{t}^{(c)}), \end{split} \tag{10}\]
where \(h^{(g)}\) is the updated entity type and predicate embedding and \(p_{t}^{(c)}\) is the probability distribution over them at time step \(t\).
### Training
The framework consists of four trainable modules: the Entity Detection Module, the Entity Linking Module, the Grammar-guided Decoder, and the Concept-aware Attention Module. Each module has its own loss function for optimizing its parameters. We use the weighted combination of all the losses as our overall loss function:
\[L=\lambda_{1}L^{ed}+\lambda_{2}L^{el}+\lambda_{3}L^{dec}+\lambda_{4}L^{c}, \tag{11}\]
where \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\) are the weights that decide the importance of each component. The detailed loss calculation method is in Appendix B. The multi-task setting enables modules to share supervision signals, which benefits the model performance.
## 4 Experiments
### Experimental Setup
Dataset.We conduct experiments on the CSQA (Complex Sequential Question Answering) dataset1 (Saha et al., 2018). CSQA was built on the Wikidata knowledge graph, which consists of 21.1M triples with over 12.8M entities, 3,054 entity types and 567 predicates. CSQA is the largest dataset for conversational KBQA and contains around 200K dialogues; the training, validation and test sets contain 153K, 16K and 28K dialogues, respectively. Questions in the dataset are classified into different types, e.g., simple questions, logical reasoning, and so on.
Footnote 1: [https://amritasaha1812.github.io/CSQA](https://amritasaha1812.github.io/CSQA)
Metrics.To evaluate HSGE, we use the same metrics employed by the authors of the CSQA dataset as well as by the previous baselines. **F1 score** is used to evaluate questions whose answers are comprised of entities, while **Accuracy** is used for questions whose answers are a number or a boolean value. Following (Marion et al., 2021), we do not report results for the "Clarification" question type, as it can be accurately modeled with a simple classification task.
Baselines.We compare HSGE with five recent baselines: D2A (Guo et al., 2018), S2A-MAML (Guo et al., 2019), MaSP (Shen et al., 2019), OAT (Marion et al., 2021), and LASAGNE (Kacupaj et al., 2021).
### Overall Performance
Table 1 summarizes the results comparing the HSGE framework against the previous baselines. From the result, we have three observations:
(1) The D2A and S2A-MAML models exhibit superior performance on the _Simple Question (Direct)_ question type. This can likely be attributed to their ability to memorize context information previously mentioned in the conversation. However, these models fail to model the complex interaction between entities, resulting in inferior performance on other question types.
(2) OAT achieves superior performance on three question types, which might be attributed to its incorporation of additional KG information. However, its performance is not consistent across all question types, leading to a low overall performance averaged on all question types.
(3) Our method HSGE achieves the new SOTA on the overall performance averaged over all question types. There are two possible reasons for the improvement. First, the incorporation of the HSG allows the modeling of longer dependencies within the context, enabling the model to handle situations where the user asks about entities that were previously mentioned. Second, by utilizing a graph neural network to facilitate information flow in the HSG, the interactions among previously mentioned entities, entity types and predicates are better captured, which endows our model with stronger reasoning ability.
### Ablation Study
In this section, we first conduct experiments to verify the effectiveness of each model component. Then, we investigate the effects of different model choices inside the Context-aware Encoder. Finally, we compare our HSGE with the most widely used concatenation method.
Effect of HSG and TIM.To show the effectiveness of each component, we create two ablations
by directly removing history semantic graph (HSG) and temporal information modeling (TIM), respectively. As shown in Table 2, HSGE outperforms all the ablations across all question types, which verifies the importance of each model component.
It is worth mentioning that after removing the HSG, the performance of our method on some question types that require reasoning (i.e., _Logical Reasoning_ and _Quantitative Reasoning (Count)_) drops significantly. We think the reason might be that applying a graph neural network to the HSG empowers the model with stronger reasoning ability, which further benefits model performance.
Comparison of Internal Model Choice.In the context-aware encoder, we design two distance calculation methods (i.e., absolute distance and relative distance) for temporal information modeling, as well as two information aggregation granularities (i.e., token-level and utterance-level aggregation) for semantic information aggregation. To study their effects, we conduct experiments by fixing one setting while changing the other. The comparison results are shown in Figure 4.
From the results, we can draw the following conclusions: (1) The token-level aggregation method performs better than the utterance-level aggregation method. This is because token-level aggregation allows the model to incorporate context information at a finer granularity, and information unrelated to the target token can be removed. (2) The absolute distance method performs better than the relative distance method. The reason may be that although both distance calculation methods provide temporal information, absolute distance is more informative, since the model can derive the relative distance from the absolute distance while the opposite is not true.
| Question Type | #Example | D2A | S2A-MAML | MaSP | OAT | LASAGNE | HSGE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *F1 Score* | | | | | | | |
| Comparative | 15K | 44.20 | 48.13 | 68.90 | **70.76** | 69.77 | 69.70 |
| Logical | 22K | 43.62 | 44.34 | 69.04 | 81.57 | 89.83 | **91.24** |
| Quantitative | 9K | 50.25 | 50.30 | 73.75 | 74.83 | 86.67 | **87.37** |
| Simple (Coreferenced) | 55K | 69.83 | 71.18 | 76.47 | **79.23** | 79.06 | 78.73 |
| Simple (Direct) | 82K | 91.41 | **92.66** | 85.18 | 82.69 | 87.95 | 89.38 |
| Simple (Ellipsis) | 10K | 81.98 | 82.21 | 83.73 | **84.44** | 80.09 | 80.53 |
| *Accuracy* | | | | | | | |
| Verification (Boolean) | 27K | 45.05 | 50.16 | 60.63 | 66.39 | 78.86 | **82.17** |
| Quantitative (Count) | 24K | 40.94 | 46.43 | 43.39 | 71.79 | 55.18 | **72.88** |
| Comparative (Count) | 15K | 17.78 | 18.91 | 22.26 | 36.00 | 53.34 | **53.74** |
| **Overall** | 260K | 64.47 | 66.54 | 70.56 | 75.57 | 78.82 | **81.38**\*†§ |

Table 1: HSGE’s performance comparison on the CSQA dataset. HSGE achieves the new state of the art on the overall performance averaged over all question types. We use the paired t-test with \(p\leq 0.01\). The superscripts refer to significant improvements compared to LASAGNE (\*), OAT (†), and MaSP (§).
| Question Type | Ours | w/o HSG | w/o TIM |
| --- | --- | --- | --- |
| *F1 Score* | | | |
| Comparative | **69.70** | 69.47 | 69.55 |
| Logical | **91.24** | 87.99 | 89.99 |
| Quantitative | **87.37** | 86.63 | 86.71 |
| Simple (Coref) | **78.73** | 77.78 | 78.17 |
| Simple (Direct) | **89.38** | 88.64 | 88.97 |
| Simple (Ellipsis) | **80.53** | 78.60 | 79.95 |
| *Accuracy* | | | |
| Verification | **82.17** | 79.70 | 78.05 |
| Quantitative (Count) | **72.88** | 69.00 | 71.29 |
| Comparative (Count) | **53.74** | 52.70 | 53.14 |
| **Overall** | **81.38**\*† | 79.87 | 80.36 |

Table 2: Ablation study. We use the paired t-test with \(p\leq 0.01\). The superscripts refer to significant improvements compared to w/o HSG (\*) and w/o TIM (†).
Figure 4: The comparison between token/utterance-level aggregation and between absolute/relative distance on five selected question types.
Comparison with Concatenation Method.One of the most widely used methods for context modeling is to directly concatenate history conversations Liu et al. (2020). To analyze its effectiveness, we remove HSG and observe the performance of seven representative question types using the concatenation of history conversations as input, which is shown in Figure 5.
As we can see, at the initial stages of increasing the number of concatenated turns, the performance on some question types increases slightly while remaining unchanged or even decreasing on others, leading to an almost unchanged overall performance. This is reasonable because history turns contain useful semantic information, which leads to performance gains. However, as more conversation turns are introduced, more noisy tokens are also introduced into the model, which leads to performance degradation. Besides, introducing more context tokens also increases the computational cost, which grows with \(O(N^{2})\) complexity.
It is worth noting that the best setting of the concatenation method still performs worse than HSGE. This is mainly because we use an attention mechanism to dynamically select the most relevant entities from the HSG, which achieves effective history modeling while avoiding the introduction of noisy information. Moreover, as we only extract entities and predicates from history conversations, the size of the graph is relatively small and the increase in computational cost as the conversation progresses is marginal.
### Subtask Analysis
The task of conversational KBQA involves multiple subtasks, each of which can directly impact the final model accuracy. To gain a deeper understanding of HSGE, we compare its performance on each subtask with the current SOTA model LASAGNE in Table 3. We observe that HSGE performs better than LASAGNE on most subtasks and mostly achieves accuracy above 90%. Among them, the improvement in Entity Detection is the largest. We think the main reason is that the token-level aggregation mechanism endows each token with richer semantic information.
### Error Analysis
In this section, we randomly sample 200 incorrect predictions and analyze their error causes:
Entity Ambiguity.Entity ambiguity refers to the situation where there exist multiple entities with the same text and type in the Wikidata knowledge graph. For example, we cannot distinguish multiple people called "Mary Johnson" because we have no more information other than entity text and entity type. We believe that incorporating other contextual information such as entity descriptions may help solve this problem Mulang et al. (2020).
Spurious Logical Form.We follow Shen et al. (2019); Kacupaj et al. (2021) and produce golden logical forms by leveraging BFS to search valid logical forms for questions in training data. This can sometimes lead to wrong golden actions such as two actions with different semantic information but accidentally sharing the same execution result. This may misguide our model during training.
## 5 Conclusion
In this paper, we propose a novel conversational KBQA method, HSGE, which achieves effective history modeling with minimal computational cost. We design a context-aware encoder that introduces temporal embeddings to address the user's conversation-focus-shift phenomenon and aggregates context information at both the token level and the utterance level. Our proposed HSGE outperforms existing baselines averaged over all question types on the widely used CSQA dataset.
Figure 5: The performance of the concatenation method on seven representative question types with regard to the concatenation turn number.
| Task | LASAGNE | HSGE |
| --- | --- | --- |
| Entity Detection | 86.75% | **89.75%** |
| Entity Linking | 97.49% | **98.19%** |
| Logical Form Generation | **98.61%** | 92.76% |
| Type&Predicate Prediction | 92.28% | **93.11%** |

Table 3: Comparison of subtask accuracy in LASAGNE and HSGE. |
2308.10564 | Software Entity Recognition with Noise-Robust Learning | Recognizing software entities such as library names from free-form text is
essential to enable many software engineering (SE) technologies, such as
traceability link recovery, automated documentation, and API recommendation.
While many approaches have been proposed to address this problem, they suffer
from small entity vocabularies or noisy training data, hindering their ability
to recognize software entities mentioned in sophisticated narratives. To
address this challenge, we leverage the Wikipedia taxonomy to develop a
comprehensive entity lexicon with 79K unique software entities in 12
fine-grained types, as well as a large labeled dataset of over 1.7M sentences.
Then, we propose self-regularization, a noise-robust learning approach, to the
training of our software entity recognition (SER) model by accounting for many
dropouts. Results show that models trained with self-regularization outperform
both their vanilla counterparts and state-of-the-art approaches on our
Wikipedia benchmark and two Stack Overflow benchmarks. We release our models,
data, and code for future research. | Tai Nguyen, Yifeng Di, Joohan Lee, Muhao Chen, Tianyi Zhang | 2023-08-21T08:41:46Z | http://arxiv.org/abs/2308.10564v1 | # Software Entity Recognition with Noise-Robust Learning
###### Abstract
Recognizing software entities such as library names from free-form text is essential to enable many software engineering (SE) technologies, such as traceability link recovery, automated documentation, and API recommendation. While many approaches have been proposed to address this problem, they suffer from small entity vocabularies or noisy training data, hindering their ability to recognize software entities mentioned in sophisticated narratives. To address this challenge, we leverage the Wikipedia taxonomy to develop a comprehensive entity lexicon with 79K unique software entities in 12 fine-grained types, as well as a large labeled dataset of over 1.7M sentences. Then, we propose _self-regularization_, a noise-robust learning approach, to the training of our software entity recognition (SER) model by accounting for many dropouts. Results show that models trained with self-regularization outperform both their vanilla counterparts and state-of-the-art approaches on our Wikipedia benchmark and two Stack Overflow benchmarks. We release our models1, data, and code for future research.2
Footnote 1: [https://huggingface.co/taiding/wikiser-bert-base](https://huggingface.co/taiding/wikiser-bert-base); [https://huggingface.co/taiding/wikiser-bert-large](https://huggingface.co/taiding/wikiser-bert-large).
Software Entity Recognition, Datasets, Noise-Robust Learning
## I Introduction
Software entity recognition (SER) is an integral task for acquiring software-related knowledge. It serves as the backbone of many downstream software engineering applications, such as traceability link recovery [1, 2, 3, 4], automated documentation [5, 6, 7, 8, 9], API recommendation [10, 11, 12], and bug fixing [13, 14, 15, 16].
Early work in this research direction employs pattern-matching methods to identify software entities based on pre-defined linguistic patterns or predefined dictionaries [5, 17, 18, 19]. However, these methods lack the flexibility to handle the sophistication and ambiguity in free-form text [20]. Machine learning methods have been increasingly adopted to solve this task [21, 22, 23, 24, 25, 26]. For example, S-NER [21] uses a feature-based Conditional Random Field (CRF) model to recognize software entities in five categories, including _Programming Language Platform_, _API_, _Tool-library-framework_ and _Software Standard_. However, S-NER is trained on a small dataset with 4,646 sentences and 2,404 named entities. It does not generalize well to commonly mentioned entities such as "AMD64" and "memory leak". Furthermore, given the simplicity of its model design, it only achieves a 78% F1 score on a Stack Overflow dataset.
In general text domains, deep learning models, such as BiLSTM-CRF [27] and BERT-NER [28], have emerged as the current paradigms for Named Entity Recognition (NER). However, these models can only detect entities in general text domains, such as person names and locations. Due to the domain shift challenge, simply finetuning them to a highly specialized domain such as software engineering is not sufficient [23]. In recent years, an approach called SoftNER [23] has been proposed to detect fine-grained software entity types with BERTOverflow, a BERT model finetuned on Stack Overflow data. Their evaluation shows it has greatly outperformed BiLSTM-CRF and BERT-base models and can detect various types of software entities, such as operating systems and software libraries.
Despite the great stride, our assessment shows that existing models, including SoftNER, still fall short of addressing domain shift, limited vocabularies, and morphological names in software engineering. In particular, the training data of SoftNER is noisy, since it is constructed synthetically based on Stack Overflow (SO) tags, which can be created by any SO user and suffer from informal naming conventions and all sorts of randomness. Our manual analysis shows that the training data of SoftNER has a high labeling error rate of 17.79% (detailed in Section IV-E). We hypothesize that this may be due to the lack of widespread use of double annotation and metadata for automatic annotation. This motivates us to construct a new dataset with fewer labeling errors but more sentences and named entities.
To address this limitation, we develop an automated pipeline to develop a large, high-quality software entity dataset based on Wikipedia. We call this dataset WikiSER. Compared with Stack Overflow, Wikipedia strives to be a comprehensive online encyclopedia. It generally exhibits better structures, well-formedness, grammaticality, and semantic coherence in its natural language sentences [29]. Since Wikipedia contains articles in numerous domains, our approach first processes the Wikipedia taxonomy starting from the root category "Computing" and performs hierarchical pruning to only retain SE-related categories. Then, it extracts the titles of all articles belonging to these categories as well as their aliases curated
by Wiki authors as the entity lexicon. In this way, we obtain 79K software entities overall in 12 fine-grained categories, e.g., algorithms, data structures, libraries, and operating systems.
Since Wikipedia articles often contain hyperlinks to other articles, each hyperlinked word or phrase can be treated as a mention of another entity, which can be leveraged to curate the text corpus with labeled entities. Based on our observation, Wiki authors typically only add hyperlinks to the first mention of an entity in a Wikipedia article. Thus, we further develop a matching method to automatically propagate entity types, so that we can obtain more sentences with labeled entities. In the end, we curate a large corpus with 1.7M sentences labeled with the 79K entities. Our manual validation demonstrates that the labeling error rate of our dataset is 9.17%, compared with an error rate of 17.79% in the SoftNER dataset (Section IV-E).
Furthermore, we propose a noise-robust learning framework called _self-regularization_ that trains an SER model to be consistent with its predictions under a noisy setting. Specifically, to enhance the robustness of model training, our framework leverages the dropout mechanism to simulate the prediction inconsistency from multiple differently initialized models and incorporates an agreement loss as a regularization mechanism, encouraging prediction consistency in the presence of noisy labels. By relying solely on the training data, our framework offers several advantages over other noise-robust methods and can be easily adapted to any model initialization.
Our evaluation demonstrates that BERT models trained with our self-regularization framework outperform multiple baselines. Specifically, our self-regularized BERT\({}_{base}\) model outperforms a SOTA SER model called SoftNER [23] by 7.1% in terms of F1 score. Furthermore, self-regularization also outperforms co-regularization [30], a SOTA noise-robust learning method, by 2.9% in F1. This performance gain from self-regularization also generalizes to two existing SER datasets [23, 31] obtained from Stack Overflow. Finally, we observe that self-regularization is more effective for smaller models and in-domain training indeed plays a major role in boosting the performance gain of SER in different types of data, e.g., Wikipedia vs. Stack Overflow.
Overall, these findings provide valuable insights into the strengths and limitations of our approach.
To sum up, we make the following contributions:
1. **Dataset.** We leverage Wikipedia to develop a comprehensive lexicon of 79K software entities in 12 fine-grained categories, as well as a large labeled dataset with 1.7M sentences and 3.4M entity labels. We make our dataset publicly available to cultivate future research.
2. **Model.** We propose a new noise-robust learning framework that regularizes the training of SER models via a dropout mechanism to account for labeling errors in SER datasets.
3. **Evaluation.** We conduct a comprehensive evaluation of the proposed approach against the state-of-the-art SER models on multiple datasets.
## II Related Works
### _Software Entity Recognition_
Recognizing software entities from text documents has been a long-standing research problem in Software Engineering [1, 2]. Early approaches rely on keyword or rule-based pattern matching to identify software entities, and they mainly focus on identifying API names [1, 3, 4, 5, 6, 17, 32]. For example, Bacchelli et al. design lightweight regular expressions based on common naming conventions to identify class and function names in email discussions [3]. Rigby et al. encode regular expressions into an island parser to recognize classes, methods, and fields mentioned in Stack Overflow posts [17].
More recently, machine learning, especially deep learning, has been increasingly adopted for software entity recognition [20, 21, 23, 24, 31, 33]. These approaches also recognize a richer set of software entities beyond API names. For example, Ye et al. proposed a Conditional Random Field (CRF) model to identify five types of software entities in Stack Overflow posts, including _programming languages_, _platforms_, _APIs_, _software libraries and frameworks_, _software standards_[21]. They further integrated word embeddings with the CRF model to address the challenges of polysemy and naming variations [31]. Zhou et al. proposed a similar word embedding-based CRF model but focused on identifying software entities in bug reports [33]. More recently, they proposed a BiLSTM-CRF model for software entity recognition [24]. Chen et al. proposed to identify morphological relations between software entities by analyzing and comparing the word embeddings learned from software-related documents and general text documents [34]. Tabassum et al. finetuned BERT with Stack Overflow posts and proposed a BERT-CRF model called SoftNER to detect software entities in Stack Overflow posts [23]. Huo et al. proposed to combine BiLSTM-CRF with a context-aware scoring mechanism to identify API mentions in free-form text [20].
Recently, Chew et al. [35] conducted a comparative evaluation on S-NER [21], Stanford-NER [36], BERT [28], and BERTOverflow [23]. They found that S-NER achieved the best performance while BERTOverflow achieved the worst performance. However, the best model only achieved 78.18% F1-score. Our work advances the state-of-the-art by addressing the data noise challenge in SER. To achieve this, we present a large NER dataset with fewer noisy labels and a noise-robust learning framework to account for data noises coming from annotation errors and ambiguous software entity names. Our evaluation shows that NER models trained with our new dataset and noise-robust learning framework achieved the best performance among six NER baselines.
### _Named Entity Recognition_
Our work is also closely related to the general-domain Named Entity Recognition (NER) task in natural language processing (NLP) [37]. NER models aim to identify named entities in the general text domain, such as persons, organizations, and locations. Recently, deep learning models
have gained dominance in obtaining state-of-the-art results. These models capture complex interactions between words and their contexts by learning from large text corpora labeled with named entities. Huang et al. proposed a BiLSTM-CRF model to encode the contextual information in a sentence for NER [27]. Following this work, many BiLSTM-CRF-based models have been proposed [38, 39, 40, 41, 42, 43, 44, 45, 46, 47].
Recently, Transformer-based language models [48] have become a new standard for developing NER models. Researchers have experimented with a range of pretrained language models for NER, such as BERT [28], RoBERTa [49], LUKE [50] and even autoregressive models such as GPT [51]. Typically, these language models are first pretrained on a large unlabeled text corpus through self-supervised learning and then finetuned for a specific task.
It remains challenging to reuse NER models trained on general text corpora to highly specialized domains such as biology and medicine, as shown by previous studies [52, 53, 54, 55]. Several known challenges present, including domain-specific naming standards, common word polysemy [31, 56], and naming variations [35]. Thus, developers have spent great effort to develop domain-specific NER models, such as BioBERT [57] for biomedical literature, ClinicalBERT [58] for clinical documents, or SciBERT [59] for scientific literature. Our work focuses on doing NER for software documents.
### _Noise-robust Learning_
Training data is often noisy and contains various types of errors, which can degrade the performance of ML models. Noise-robust learning has been widely studied in computer vision [60, 61, 62, 63, 64, 65, 66, 67]. Recently, several approaches have investigated noise-robust learning in NLP tasks [68, 30, 69, 70, 71]. Wang et al. [68] proposed CrossWeigh, a method that partitions the training data into several folds and trains independent NLP models to identify potential noisy labels. However, this approach requires training multiple models on different data folds and is thus computationally expensive and only supports fold-level noise estimation. Xiao et al. [69] proposed a Bayesian Neural Network (BNN) method that quantifies model and data uncertainties for NER and sentiment analysis tasks. Wang et al. [70] proposed NetAb, presupposing that noise can be simulated by flipping clean labels randomly. However, this presupposition is overturned by Cheng et al. [71], indicating that different datasets have different noise rates. Zhou and Chen [30] proposed a co-regularization framework that consists of two or more neural networks with the same structure but different initializations, which is particularly effective at reducing the impact of noise in the training data and improving the accuracy of the NER model. Inspired by this approach, we propose _Self-regularization_, a noisy-robust learning approach for NER in the SE domain. Self-regularization outperforms co-regularization in our evaluation while requiring training of a single model instead of simultaneously many models, making it more computationally efficient.
## III Problem Formulation
This section defines the research problem of recognizing software entities from text documents.
**Definition 1. (Software Entity):** Software entities are nouns and noun phrases that describe specific objects, concepts, and procedures related to software engineering, such as an algorithm name and a data structure name. To effectively perform software entity recognition, it is crucial to establish a well-defined and easily interpretable inventory of entity types. For software entities, our primary objective is to construct a domain-specific inventory of entity types that comprehensively cover various aspects of software engineering knowledge. Therefore, we design our inventory of entity types to cover software engineering concepts exclusively. In future work, one can extend our dataset with more software-related entities, such as software engineering conference names and computer scientist names, based on the downstream applications.
With this domain focus, three of the authors conduct an iterative process involving a focus group over three 2-hour sessions. Each author independently annotated 50 samples of software entities from our corpus. Then, they compared notes and reconciled differences through discussions. After three iterations of sampling, annotation, and consensus building, they ultimately reached an agreement to center our attention on the following 12 fine-grained software entity types that cover key software entities while balancing specificity versus coverage. These 12 types are _Algorithm_, _Application_, _Architecture_, _Data structure_, _Device_, _Error name_, _General concept_, _Language_, _Library_, _License_, _Operating system_, and _Protocol_. We provide a definition and examples for each software entity type below.
* **Algorithm.** This type includes computational procedures, algorithms, and paradigms that take inputs and perform defined operations to produce outputs, e.g., Bubble Sort, Auction Algorithm, and Collaborative Filtering.
* **Application.** This type includes computer software and programs designed to perform specific user-oriented tasks, e.g., Adobe Acrobat, Microsoft Excel, and Zotero.
* **Architecture.** This type includes computer architectures and other related computer system designs, e.g., IBM POWER architecture, Skylake (microarchitecture), and Front-side Bus.
* **Data structure.** This type includes standardized ways of organizing and accessing data in computer programs, e.g., Array, Hash table, and mXOR linked list.
* **Device.** This type includes physical computing components designed for specific functions, e.g., Samsung Gear S2, iPad, and Intel T5300.
* **Error name.** This type includes program errors, exceptions, and anomalous behaviors in computer software, e.g., Buffer Overflow, Memory Leak, and Year 2000 Problem.
* **General concept.** This type includes a broad range of programming strategies, paradigms, concepts, and design principles, e.g., Memory Management, Adversarial Machine Learning, and Virtualization.
* **Language.** This type includes programming languages and domain-specific languages designed to communicate instructions to computers, e.g., C++, Java, Python, and Rust.
* **Library.** This type includes software libraries, packages, frameworks, and other types of APIs, e.g., Beautiful Soup, FFmpeg, and FastAPI.
* **License.** This type includes legal terms governing the usage and distribution of software, e.g., Cryptix General License, GNU General Public License, and MIT License.
* **Operating system.** This type includes system software responsible for managing computer hardware and software resources and providing services for computer programs, e.g., Linux, Ubuntu, Red Hat OS, and MorphOS.
* **Protocol.** This type includes rules and standards that define communication between electronic devices, e.g., TLS, FTPS, and HTTP.
**Definition 2. (Software Entity Recognition):** We formulate the task of software entity recognition as a token-level classification problem. Given \(T\), a free-form text in the context of software engineering, the software entity recognition task is to identify every span of words \(s=<w_{1}w_{2}\cdots w_{n}>\) that refers to a software entity from \(T\) and classify each \(s\) into one of the 12 entity types we defined.
In our problem setting, we consider the IOB [72] scheme for entity labeling. IOB is a commonly used tagging format for annotating tokens in NER. It provides a simple way to identify entity boundaries. In the IOB scheme, each token in a sequence is labeled as either \(B\) (i.e., beginning of an entity), \(I\) (i.e., in the middle of an entity), or \(O\) (i.e., not an entity). Figure 1 illustrates the IOB labeling scheme with an example sentence from Wikipedia. In this sentence, "Windows XP" is labeled as _Operating System_ and "Internet Explorer 6" is labeled as _Application_. Since both spans contain multiple words, their first words are labeled with _B-OPERATING_SYSTEM_ and _B-APPLICATION_ respectively, while the remaining words are labeled as _I-OPERATING_SYSTEM_ and _I-APPLICATION_. In the software entity recognition task, an entity is considered correctly recognized _only if_ the labels of all words in the entity span are correctly predicted.
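As a concrete illustration, a sentence mentioning these two entities could be tagged as follows; the token and label lists are illustrative only and do not reproduce the released dataset format.

```python
# Hypothetical IOB tagging for a sentence mentioning "Windows XP" and "Internet Explorer 6".
tokens = ["Windows", "XP", "shipped", "with", "Internet", "Explorer", "6"]
labels = ["B-OPERATING_SYSTEM", "I-OPERATING_SYSTEM", "O", "O",
          "B-APPLICATION", "I-APPLICATION", "I-APPLICATION"]
assert len(tokens) == len(labels)  # one label per token
```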
## IV Dataset Construction
In this section, we outline the process of identifying and categorizing software entities in Wikipedia corpora. Figure 2 illustrates the data construction pipeline. The resulting dataset, which we refer to as WikiSER, comprises 1.7M sentences labeled with 79K unique software entities (3.4M labels in total).
### _Pruning Wikipedia Taxonomy_
Since articles on Wikipedia cover a variety of domains, we first need to prune the taxonomy to retain only the categories related to software engineering. Wikipedia provides a hierarchical classification of its articles based on domain and topic. In the hierarchy, more general categories appear closer to the root, while more specific categories appear at the bottom. To collect SE categories, we start from Category:Computing3, which is the most general category related to SE. We use the MediaWiki API to recursively find all descending subcategories of Category:Computing in the taxonomy (a total of 2M categories).
Footnote 3: [https://en.wikipedia.org/wiki/Category:Computing](https://en.wikipedia.org/wiki/Category:Computing)
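A sketch of this recursive traversal using the public MediaWiki `categorymembers` API is shown below; the function names and the breadth-first strategy are illustrative, not the authors' released script.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def subcategories(category):
    """Yield the direct subcategories of a Wikipedia category via the MediaWiki API."""
    params = {"action": "query", "list": "categorymembers", "cmtitle": category,
              "cmtype": "subcat", "cmlimit": "500", "format": "json"}
    while True:
        data = requests.get(API, params=params, timeout=30).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow API continuation tokens

def collect_descendants(root="Category:Computing"):
    """Breadth-first traversal collecting all descending subcategories of the root."""
    seen, frontier = {root}, [root]
    while frontier:
        current = frontier.pop(0)
        for sub in subcategories(current):
            if sub not in seen:
                seen.add(sub)
                frontier.append(sub)
    return seen
```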
### _Collecting Software Entities_
On Wikipedia, each article is labeled with one or more Wikipedia categories by default. We leverage this categorization to find Wikipedia articles that may describe software entities. Specifically, we write a script to automatically extract Wiki articles labeled with at least one of the 7,524 SE categories identified from the previous step. 139,752 Wiki articles are extracted after this step.
However, we notice that this corpus is noisy. While many articles are labeled with an SE category, they do not actually describe a specific software entity. For example, "Software studies" is a Wiki article under the Software category, but it does not refer to a specific software entity. After manually examining 30 articles, we find that those articles are labeled with not only an SE category but also some categories not closely related to SE. For example, the "Software studies" article is labeled with Computing culture, Cultural studies, and Digital humanities, in addition to Software.
Based on this insight, we experiment with several heuristics that filter Wikipedia articles by the number or percentage of SE categories among all their labeled categories. Table I describes each heuristic. We measure the precision of each heuristic; here, precision refers to how many articles classified as SE-related are actually related to SE. Two of the authors sample 385 articles from all the extracted Wiki articles. This sample size is considered statistically significant with a 95% confidence level and a margin of error of 5%. We then manually label whether the articles are SE-related as the ground truth and compare them with those obtained by filtering based on the various heuristics. Consequently, we find that the heuristic of selecting articles labeled with two or more SE categories achieves the highest precision at 92.8% among all heuristics. The filtered number of articles is 79,899, which has not decreased substantially compared to the number of articles before filtering.
### _Labeling Software Entity Spans_
As explained in Section III, we need to identify the span of each software entity mentioned in a Wiki article. We leverage the hyperlinks in a Wiki article, as well as keyword matching, to identify the mentions of software entities in a Wiki article. Specifically, we treat the title of each of the 79,899 articles found in the previous step as a software entity. If a word or a phrase in a sentence of a Wiki article is hyperlinked to a Wiki article in the 79,899 articles, we consider that it mentions a software entity.
We observe that not all mentions of a software entity are hyperlinked in a Wiki article. Specifically, many Wiki articles only hyperlink the first mention of an entity to its corresponding article. Based on this observation, we further develop a keyword-matching method to identify the mentions of software entities.
A major challenge in this step is that the same entity can be expressed in different forms. For example, "Long Short-Term Memory" is often written as "LSTM". To address this challenge, we leverage the page redirection mechanism in Wikipedia to recognize aliases, which is commonly adopted in prior work [74, 75, 76, 77]. In Wikipedia, accessing an article can sometimes automatically send visitors to another article about the same concept but with a different name, which is called a redirect. For example, "LSTM" is a redirect of "Long Short-Term Memory": when a user visits the Wikipedia article "LSTM", they are automatically redirected to the article "Long Short-Term Memory". Since they both point to the same article, we can safely assume that "LSTM" is an alias for "Long Short-Term Memory". Using this mechanism, we obtain aliases for all software entities in our dataset. Given a Wikipedia article, we first perform lemmatization to handle words in different forms and then identify the mentions of a software entity or its aliases via exact keyword matching.
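A sketch of collecting aliases through the MediaWiki `redirects` property follows; the details are assumptions and only illustrate the mechanism described above.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def redirect_aliases(title):
    """Return the redirect titles (aliases) pointing to a given Wikipedia article."""
    params = {"action": "query", "prop": "redirects", "titles": title,
              "rdlimit": "500", "format": "json"}
    data = requests.get(API, params=params, timeout=30).json()
    page = next(iter(data["query"]["pages"].values()))
    return [r["title"] for r in page.get("redirects", [])]

# e.g., redirect_aliases("Long short-term memory") should include "LSTM".
```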
In total, we obtain 3.4M sentences from the 79,899 articles. Many of these sentences do not mention any software named entities and provide little value for model training and evaluation. Thus, we remove over 1.7M of these instances along with
| Heuristics | Article # | Precision |
| --- | --- | --- |
| Contains 1 SE category | 139,752 | 79.3% |
| **Contains 2 or more SE categories** | 79,899 | **92.8%** |
| 20% of labeled categories are SE | 105,793 | 83.5% |
| 50% of labeled categories are SE | 38,043 | 88.2% |
| 60% of labeled categories are SE | 15,528 | 90.6% |

TABLE I: Heuristics for Identifying Software Entities in the Wikipedia Taxonomy
Fig. 2: WikiSER Construction Pipeline
duplicated sentences. Finally, the dataset contains 1,663,431 sentences that mention at least one software entity. Figure 3 shows the distribution of the number of named entities per sentence in WikiSER.
### _Labeling Entity Types_
As the final labeling step, we need to further assign each entity span to the corresponding entity type as defined in Section III. Note that under the Wikipedia taxonomy, an article belongs to one or more categories. Thus, an intuitive idea is to leverage the categories to infer the type of the entity an article refers to. Based on this idea, we first establish a mapping between the 7,524 categories from Section IV-A and the 12 software entity types defined in Section III. Then, we infer the entity types of an article based on the types of categories they belong to. We elaborate on these two steps below.
**Map Wiki categories to entity types.** To do this, two co-authors first collectively label the SE categories up to the 5th level in the Wikipedia taxonomy, resulting in a total of 1,160 categories. Similar to the manual labeling process in Section IV-A, they first discuss the categorization criteria for 1 hour and practice together on 50 categories to enhance consensus on categorization. Then, each of them is assigned half of the categories and is asked to report any category they are uncertain about. The Cohen's Kappa [73] score is 0.82, indicating substantial agreement. They further discuss and resolve any disagreement. In this way, we manually establish a mapping between the 1,160 categories up to the 5th level and the 12 software entity types. Then, we develop an automated script that performs a breadth-first traversal of the category hierarchy, starting from categories at level 6, and automatically assigns an entity type to each descending category based on the type of its parent category. Ultimately, we establish a mapping between all 7,524 categories and the 12 entity types.
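The automated propagation step can be sketched as a breadth-first pass over the category hierarchy; the data structures below are illustrative.

```python
from collections import deque

def propagate_types(child_map, seed_types):
    """Propagate entity types from the manually labeled categories (levels 1-5)
    to their descendants, breadth-first."""
    types = dict(seed_types)          # category -> one of the 12 entity types
    queue = deque(seed_types)
    while queue:
        parent = queue.popleft()
        for child in child_map.get(parent, []):
            if child not in types:    # keep the first (shallowest) assignment
                types[child] = types[parent]
                queue.append(child)
    return types
```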
**Inferring Entity Types.** Having obtained the inferred types for all SE categories, our goal is to classify the Wikipedia article of each entity based on its categories. However, as explained in Section IV-B, each article can have multiple categories, which may belong to different entity types. For example, consider the software entity "ChromeOS", which should be categorized as an _Operating System_. However, while it has categories that belong to the _Operating System_ type, such as Category: ARM operating systems and Category: Google operating systems, it also has a category that belongs to the _Application_ type (i.e., Category: Google Chrome).
To address this issue, we design three heuristics to decide the final entity type of an article with multiple categories. First, we simply assign the entity type of the most fine-grained category of the article, which is measured by the distance to the root Category: Computing. Second, if all categories of the article are at the same granularity in the hierarchy, we assign the entity type that the majority of categories belong to. Third, if there is still a tie, we infer the entity type of the article by prompting a large language model. Specifically, for a Wikipedia article \(P\), we prompt Flan-T5 XL [78] with the format shown in Figure 4. Specifically, we prepend the prompt with the first sentence extracted from \(P\). We substitute the second mask token with each candidate type and use Flan-T5 XL to calculate the perplexity of each completed prompt. Perplexity measures the degree of uncertainty of the language model when generating a new token. The candidate with the lowest perplexity is selected, indicating the highest confidence from the language model. These three heuristics apply to 39%, 56%, and 5% of the 79K software entities in WikiSER respectively.
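A sketch of the perplexity-based tie-breaking with Flan-T5 XL is given below; the prompt string is a placeholder, since the exact template appears only in Figure 4.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl").eval()

def perplexity(prompt, completion):
    """Perplexity of generating a candidate type given the prompt."""
    inputs = tok(prompt, return_tensors="pt")
    labels = tok(completion, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(**inputs, labels=labels).loss
    return torch.exp(loss).item()

def infer_type(first_sentence, candidate_types):
    prompt = f"{first_sentence} This entity is a"   # placeholder prompt, not Figure 4's template
    return min(candidate_types, key=lambda t: perplexity(prompt, t))
```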
Figure 5 shows the distribution of different types of software entities in our dataset. _Application_ is the most frequently occurring entity type on Wikipedia, comprising 41% of all software entities present in our dataset. The following are _Device_, _Algorithm_, _General Concept_, _Language_ and _Protocol_, respectively accounting for 12.5%, 11.4%, 6.74%, 6.57%, and 6.48% of all entities.
### _Manual Validation_
To evaluate the labeling accuracy of our method, we manually validated a random sample from the 1,663,431 sentences identified in Section IV-B. While selecting a sample that is too large is expensive and time-consuming, selecting a sample that is too small can lead to inaccurate conclusions. We use two sampling statistics--confidence level and margin of error--to decide on the proper sample size. We choose a 95% confidence level and a 5% margin of error, which are commonly used in empirical software engineering research [79]. This leads to a sample of 387 sentences and 807 corresponding entity labels in the IOB format.
To validate the correctness of these labels, three co-authors hold a 1-hour discussion to establish the span detection and entity categorization criteria and practice on 20 sentences collaboratively to ensure their mutual agreement on the criteria.
Fig. 4: Prompt input to Flan-T5 to infer Entity Type (Content in curly brackets are placeholders)
Fig. 3: Distribution of Entity Spans per Sentence
Then, each manually labels one-third of the 387 sentences. For the tokens that they feel uncertain about, they mark them as "uncertain" for later discussion. Then, through cross-validation, they eventually reach a consensus and complete the manual annotation of all sentences. For "uncertain" sentences, the three authors conduct a joint discussion to arrive at a final determination for these labels.
By comparing the manual annotations and the auto-generated labels, we find that 74 auto-generated labels (9.17%) are incorrect. Though this error rate is higher than the 5.38% error rate of CoNLL [80], the most widely used NER dataset in the general text domain, it is reasonable given WikiSER's fine-grained nature. CoNLL only contains 4 general entity types, while WikiSER includes 12 granular, domain-specific software entity types. The increased specificity makes entity disambiguation more challenging. Furthermore, software entities have high name overlap and aliasing, presenting additional difficulty. Considering WikiSER's more complex fine-grained distinctions, the labeling quality achieved is acceptable, especially given that there are no existing fine-grained software NER datasets to compare against.
Likewise, we manually validate two notable SER datasets, S-NER [21] and SoftNER [23], and compute their labeling error rates. For each dataset, we randomly sample 387 sentences, which ensures a 95% confidence level and a 5% margin of error, and follow the same procedure to manually label them. Our analysis reveals that S-NER and SoftNER have labeling error rates of 13.93% and 17.79%, respectively, as shown in Table II. Thus, compared with S-NER and SoftNER, our new WikiSER dataset not only has the lowest error rate of 9.17% but also includes the most unique software entities (79,899, compared to 1,015 in S-NER and 7,438 in SoftNER) and the most labeled sentences (1.7M). Though SoftNER has more entity types than WikiSER, 8 of the 20 types in SoftNER are code-related entity types, e.g., _class_, _variable_, _inline-code_, _function_, etc. Code-related entities are easier to detect compared with other types of software entities, such as library names and protocols, which have more aliases and naming ambiguity. There is also a large body of literature on recognizing code-related entities [1, 3, 4, 5, 6, 17, 20, 32]. State-of-the-art techniques such as ARCLIN [20] have achieved high accuracy in detecting code-related entities. Thus, code-related entities are not of interest in this work.
## V Noise-Robust Learning
Although WikiSER has a lower labeling error rate compared with other benchmarks, it is not free of labeling errors. Thus, we propose a noise-robust learning framework to account for such labeling errors during model training. The key insight is that, compared to clean-label settings, noisy-label settings can benefit from a "delayed" learning curve [30]. This is due to the fact that neural models tend to learn quickly from clean instances that are more compatible with the task's inductive bias at the early stages of training. While doing so, they can become overly confident and less likely to learn from noisier instances that might diverge from the task's inductive bias in later epochs [81]. We can solve this problem by adding a loss term that discourages premature convergence and prevents the model from overfitting uncertain, noisy labels.
Based on this insight, we propose _self-regularization_, a noisy learning approach that leverages the dropout function to reduce overfitting. In deep learning, dropout [67] has long been used to improve generalization in neural networks by randomly dropping parts of a network layer. Different versions of the same model induced by the dropout might make different prediction distributions, especially in the presence of noisy instances. We can control this randomness by regularizing the model over its prediction divergence. In R-Drop, Liang et al. [82] were the first to use dropout as the main mechanism for regularization in noisy-label settings. Self-regularization differs from R-Drop in that it relaxes the number of forward passes through the model to be _multiple_ instead of just two.
Recall that our approach adds a loss term of self-regularization to the model learning objective. This function accounts for the prediction inconsistency of the model's forward passes during training. We explain more details of our training framework below.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & Ye et al. [21] & Tabassum et al. [23] & WikiSER \\ \hline Entity types & 5 & **20** & 12 \\ Sentences & 4,646 & 6,510 & **1.7M** \\ Unique entities & 1,015 & 7,438 & **79,899** \\ Labeling errors & 13.93\% & 17.79\% & **9.17\%** \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison between WikiSER and two SER Datasets from Stack Overflow
Fig. 5: Distribution of Software Entities by Type
Let \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\) be a dataset with pairs of an input sequence \(x_{i}\) and a label sequence \(y_{i}\). In the NER setting, an instance in \((x_{i},y_{i})\) could be considered mislabeled if a token in \(x_{i}\) is wrongly typed (i.e., iPhone labeled as _Algorithm_ instead of _Device_), or incorrectly assigned the non-entity label ("O"). The goal is to learn a noise-robust model \(M\) that tolerates the inevitable training noise.
We initialize \(M\) from a pretrained language model such as BERT\({}_{base}\), where dropout is incorporated by default. At each training step, we sample a batch \(\mathcal{B}=\{(x_{i},y_{i})\}_{i=1}^{|\mathcal{B}|}\) from \(\mathcal{D}\) for inference on \(M\). After each random dropout, \(M\) becomes a new submodel with a fraction of the original units in the network. The same input instance \(x_{i}\) can therefore produce different outputs when passing through \(M\). The left-most block in Figure 6 illustrates this process.
Training of the NER model with self-regularization optimizes two objectives. The first objective is the divergence loss \(\mathcal{L}_{kl}\). Given the dropout randomness, we obtain a set of \(\mathcal{P}=\{P_{j}\}_{j=1}^{K}\) distributions over the label space when inputting an instance to \(M\) over \(K\) forward passes. In noisy settings, \(\mathcal{P}\) is likely to have high variance. We can control for such variance by taking the bidirectional Kullback-Leibler (KL) divergence between the average target probability distribution of \(\mathcal{P}\) and each \(P_{j}\):
\[\mathcal{L}_{kl}=\frac{1}{K}\sum_{j=1}^{K}D_{KL}(P_{j}||\frac{1}{K}\sum_{j=1}^ {K}P_{j}) \tag{1}\]
where \(\frac{1}{K}\sum_{j=1}^{K}P_{j}\) is the average of \(K\) probability distributions obtained from the Softmax, denoted as \(P_{Avg}\) in Figure 6.
The second objective optimizes the cross-entropy task loss \(\mathcal{L}_{task}\) for NER label classification on \(x_{i}\):
\[\mathcal{L}_{task}=-\frac{1}{K}\sum_{j=1}^{K}\sum_{l=1}^{|x_{i}|}y_{i,l}\log P _{j,l} \tag{2}\]
where \(y_{i,l}\) is the true label of the \(l\)-th token in input \(x_{i}\) and \(P_{j,l}\) is the probability distribution over the label space for the \(l\)-th token obtained from the \(j\)-th forward pass. We achieve the standard task loss by averaging the cross-entropy loss over all forward passes.
Finally, the combined agreement loss accounts for both Equation 2 and Equation 1 as the single learning objective to optimize \(M\):
\[\mathcal{L}_{agree}=\mathcal{L}_{task}+\alpha\times\mathcal{L}_{kl} \tag{3}\]
where \(\alpha\) is a positive multiplier used to weight the agreement loss. \(\mathcal{L}_{agree}\) is high when the model is uncertain about its prediction and gets smaller when the output probability distributions are more consistent. Algorithm 1 describes the full pipeline of self-regularization.
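A minimal PyTorch sketch of one self-regularization training step is shown below, assuming a HuggingFace-style token-classification model whose output exposes a `logits` attribute. It implements Eqs. 1-3 directly; a full implementation would additionally mask padded tokens in the KL term and wrap this function in the usual optimizer loop.

```python
import torch
import torch.nn.functional as F

def self_regularization_loss(model, input_ids, attention_mask, labels, K=3, alpha=10.0):
    """One self-regularization step (sketch): K stochastic forward passes,
    averaged cross-entropy (Eq. 2) plus KL agreement with the mean (Eq. 1)."""
    model.train()  # keep dropout active so each pass samples a different submodel
    log_probs = []
    for _ in range(K):
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
        log_probs.append(F.log_softmax(logits, dim=-1))        # (batch, seq_len, n_labels)

    # Task loss: cross-entropy averaged over the K passes (Eq. 2).
    task_loss = torch.stack([
        F.nll_loss(lp.flatten(0, 1), labels.flatten(), ignore_index=-100)
        for lp in log_probs
    ]).mean()

    # Agreement loss: D_KL(P_j || P_avg) averaged over the passes (Eq. 1).
    p_avg = torch.stack([lp.exp() for lp in log_probs]).mean(dim=0)
    log_p_avg = p_avg.clamp_min(1e-12).log()
    kl_loss = torch.stack([
        (lp.exp() * (lp - log_p_avg)).sum(dim=-1).mean()
        for lp in log_probs
    ]).mean()

    return task_loss + alpha * kl_loss                          # Eq. 3
```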
Self-regularization has a few benefits over previous methods. First, compared to existing noisy-label learning methods such as co-regularization and CrossWeigh [30, 68], self-regularization only requires training a single model instead of multiple. This translates to less training time and memory overhead. Second, compared to previous NER models in the software domain, our approach demands neither the use of
Fig. 6: Overview of Self-regularization Framework
an external gazetteer [21], nor auxiliary models [23]. Last, the method is versatile in that it can synergize well with any pretrained or randomly initialized models.
## VI Experiments
We design multiple experiments to evaluate our dataset and noisy label learning method. We aim to answer the following research questions:
* RQ1: How well does self-regularization work as a denoising measure on WikiSER?
* RQ2: How does self-regularization generalize to Stack Overflow benchmarks?
* RQ3: What entity types are most difficult to learn?
* RQ4: How does the number of forward passes impact self-regularization?
* RQ5: How efficient is self-regularization compared to co-regularization?
### _Experimental Setup_
**Dataset.** Given the massive size of the WikiSER dataset, we create a subset of it, WikiSER\({}_{small}\), to train and test SER models. This subset strives for a uniform distribution of each entity type. The resulting WikiSER\({}_{small}\) consists of 50K sentences for _training_, 8K for _validation_, and 8K for _test_.
**Baselines.** We describe the following baselines to compare with our noisy label learning method.
1. **SoftNER.** SoftNER is the state-of-the-art SER model proposed in [23]. It uses an architecture that combines three embedding attention layers: BERTOverflow, an auxiliary code classifier, and an auxiliary segmentation model to identify name spans. The segmentation model uses extra information such as HTML tags in a SO post, which does not apply to our Wikipedia data. We maintain the code classifier and segmentation model as given, but finetune the entire model on WikiSER\({}_{small}\).
2. **Co-regularization.** Zhou and Chen [30] propose a co-regularization framework that reduces overfitting by regularizing the output divergence of many models, outperforming many methods in information extraction for the general domain. We finetune BERT\({}_{base}\) with their co-regularization denoising objective as a comparison baseline for self-regularization.
3. **BERT\({}_{base}\).** We finetune pretrained BERT\({}_{base}\) cased version on our WikiSER\({}_{small}\). BERT\({}_{base}\) commonly serves as the standard baseline for many downstream language tasks in the general domain [30, 83].
4. **RoBERTa\({}_{base}\).** RoBERTa [49] is another Transformer-based language model that is pretrained from large-scale text corpora. It improves over BERT on many benchmarks by having more diverse training data and modified architecture. We finetune pretrained RoBERTa on WikiSER\({}_{small}\).
5. **BERTOverflow.** Initialized from BERT\({}_{base}\), BERTOverflow is trained on an additional 152M sentences from Stack Overflow [23]. In contrast to general-purpose pretrained models, BERTOverflow is in-domain for software engineering. We finetune its checkpoint from [23] on WikiSER\({}_{small}\).
6. **Larger model.** BERT\({}_{large}\)[28] is a bigger variant of BERT\({}_{base}\) with 340M parameters, whereas the base model has 110M parameters. As the models discussed thus far share the same size, we include a finetuned BERT\({}_{large}\) baseline to demonstrate the performance of a bigger model on WikiSER and the gains from self-regularization.
**Training details.** All models use the Adam optimizer, learning rate \(1e-5\), batch size \(16\), a dropout rate of \(10\%\), and are trained on an NVIDIA RTX A6000 for 30 epochs. For self-regularization, we choose a warm-up rate of \(10\%\) and \(\alpha=10\) (see Footnote 5).
Footnote 5: We tune \(\alpha\) over \(\{10,30,50\}\) on WikiSER\({}_{small}\) and find \(\alpha=10\) to work best. Thus, we use \(\alpha=10\) for all models where self-regularization and co-regularization apply.
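For reference, the reported hyperparameters can be collected into a simple configuration. The number of forward passes \(K\) is not fixed in this paragraph and is included here only as an assumed value (see RQ4); the surrounding training-loop code is not specified in the paper and is omitted.

```python
# Training configuration reported in the text (values only; illustrative).
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-5,
    "batch_size": 16,
    "dropout_rate": 0.10,
    "epochs": 30,
    "warmup_ratio": 0.10,      # warm-up rate for the learning-rate schedule
    "self_reg_alpha": 10.0,    # weight on the KL agreement loss, tuned over {10, 30, 50}
    "self_reg_K": 3,           # assumed number of forward passes; see RQ4
}
```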
### _RQ1: SER Accuracy on WikiSER_
Table III shows the results of models trained with our self-regularization framework in comparison to baseline models. Overall, models trained with self-regularization outperform all baselines, including BERT\({}_{base}\) trained with co-regularization, which they exceed by \(2.9\%\) in F1. The efficacy of both denoising methods suggests that we are able to reduce overfitting while learning on WikiSER\({}_{small}\), which comes with a certain level of noise. We highlight that the BERT\({}_{base}\) model trained with our self-regularization framework outperforms SoftNER, the SOTA SER model [23], by 7.1% in F1 score. SoftNER also performs worse than its pretrained base model, BERTOverflow. A plausible reason is that, despite an adaptation to WikiSER\({}_{small}\) via finetuning, SoftNER's auxiliary models might have provided irrelevant signals on clean texts from Wikipedia.
We provide more in-depth analyses of the results.
**Impact of pretrained models.**
Many NLP studies point to the importance of having pretraining data more closely aligned with the distribution of the downstream task, allowing the model to adapt more easily to the target domain. For SE, BERTOverflow is a pretrained language model finetuned from BERT\({}_{base}\) on 152M Stack Overflow posts. Surprisingly, Table III shows that BERTOverflow performs the worst, while BERT\({}_{base}\) performs the best. Notably, BERT\({}_{base}\) is trained on many general-domain corpora [28] that also include Wikipedia, the same source from which we construct WikiSER. We suspect that BERT\({}_{base}\)'s good performance can be attributed to the fact that its underlying
\begin{table}
\begin{tabular}{l c c c} \hline \hline & P & R & F1 \\ \hline SoftNER [23] & 64.4 & 69.1 & 66.6 \\ BERTOverflow [23] & 66.4 & 68.5 & 67.4 \\ RoBERTa\({}_{base}\) & 68.2 & 71.0 & 69.6 \\ BERT\({}_{base}\) & 68.1 & **73.1** & 70.5 \\ BERT\({}_{base}\) + Co-reg. [30] & 72.7 & 69.1 & 70.8 \\ BERT\({}_{base}\) + Self-reg. & **74.9** & 72.0 & **73.7** \\ \hline \hline BERT\({}_{large}\) & 69.8 & **74.5** & 72.1 \\ BERT\({}_{large}\) + Self-reg. & **73.3** & **74.5** & **73.9** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Evaluation results on WikiSER
distribution is closer to WikiSER than that of BERTOverflow. However, BERTOverflow is still effective as a base model for Stack Overflow data, as our experiments demonstrate in Section VI-C.
**Impact of model size.** Compared to BERT\({}_{base}\) (110M parameters) and RoBERTa\({}_{base}\) (125M), BERT\({}_{large}\) (340M) trained with self-regularization sees an improvement over its vanilla counterpart. This suggests that gains are possible when the model size increases, though self-regularization shows more effectiveness for smaller models. Specifically, in comparison with the gains for BERT\({}_{large}\), BERT\({}_{base}\) with self-regularization sees an improvement of \(3.2\%\) (instead of \(1.8\%\)) in F1.
### _RQ2: SER Accuracy on Stack Overflow_
Since Wikipedia imposes strict guidelines and standards for editing and verifying written content, WikiSER tends not to suffer from data noise such as spelling mistakes and inconsistent naming conventions [21]. In this section, we investigate how our approach generalizes to noisier benchmarks such as SoftNER [23] and S-NER [21].
**Datasets.** We use two different Stack Overflow corpora annotated by SoftNER and S-NER. As explained at the end of Section IV, SoftNER has 8 code-related entity types, which are not of interest in this work. Thus, we do not consider them in this experiment. For the remaining 12 entity types, we map them to our entity types for consistency. Please see the Supplementary Materials for the full mapping. This results in 9 final entity types, since 3 types are very fine-grained and are merged with other types, e.g., _website_ merging into _application_. For S-NER, which has just 5 entity types (_API_, _Language_, _Platform_, _Framework_, and _Software Standard_), we keep the dataset as it comes, and randomly sample \(15\%\) of its data as the test set. We finetune all baseline models separately on these two datasets.
**Results.** Table IV shows that the positive gains from self-regularization also apply to SoftNER and S-NER. BERTOverflow trained with self-regularization performs the best on both datasets. Specifically, it outperforms the self-regularized BERT\({}_{base}\) model by \(11.3\%\) on SoftNER-9 and \(3.8\%\) on S-NER in F1. This result makes sense since BERTOverflow is fine-tuned on 152M SO posts and its distribution is more aligned with SoftNER and S-NER than BERT\({}_{base}\). This implies that in-domain pretraining is still helpful. Additionally, Table IV also suggests that SoftNER performs worse than vanilla BERTOverflow. Since SoftNER adopts auxiliary models to recognize code entities, this implies that these auxiliary models are not as helpful in the absence of code-related entities.
### _RQ3: SER Accuracy by Entity Type_
Table V shows the results of BERT\({}_{base}\) trained with self-regularization across all entity types. Overall, the model performs fairly well on _License_, _Error Name_, _Data Structure_, _Library_, and _Operating Systems_. Other entity types, such as _Application_, _Algorithm_, and _General Concept_, appear to be more challenging than others. One possible reason is that entities in those types share more common words with non-SE words, making them more ambiguous to recognize. In contrast, _License_ and _Error Name_ tend to be more unique and standardized, making them easier to detect.
### _RQ4: Choice of \(K\) for Self-regularization_
How does increasing the number of forward passes in self-regularization affect model performance? Results from Table VI show a major improvement for BERT\({}_{base}\) when the model regularizes over \(K=3\) outputs instead of \(K=2\). At \(K=4\), BERT\({}_{base}\) sees a marginal improvement of \(0.1\%\). For BERT\({}_{large}\), model performance does not vary greatly with different values of \(K\) (within \(0.2\%\)). Here, we note a potential tradeoff between performance and computational resources. The choice of forward passes can vary by the task and model architecture, and a high \(K\) does not necessarily lead to better results compared to smaller values of \(K\).
### _RQ5: Training Time and GPU Memory Usage_
Table VII shows the training time and GPU memory usage of models trained with self-regularization compared to co-regularization [30]. Results show that self-regularization incurs minimal memory overhead (almost zero additional cost), whereas co-regularization requires 2x the GPU memory when trained with two models and 3x when trained with three models. In wall-clock time, self-regularization also compares favorably to co-regularization while achieving higher F1. It is important to note that the Test F1 on WikiSER for BERT\({}_{base}\) + Self-reg outperforms vanilla BERT\({}_{large}\), suggesting that the former
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & \# Spans & P & R & F1 \\ \hline General Concept & 2,456 & 67.0 & 62.2 & 64.5 \\ Algorithm & 2,018 & 67.7 & 66.2 & 66.9 \\ Application & 6,861 & 67.9 & 69.7 & 68.7 \\ Device & 3,299 & 73.6 & 69.3 & 71.4 \\ Language & 2,525 & 73.2 & 74.4 & 73.8 \\ Protocol & 2,629 & 74.5 & 73.5 & 74.0 \\ Architecture & 1,538 & 78.0 & 73.8 & 75.8 \\ Operating System & 2,765 & 80.1 & 78.6 & 79.3 \\ Library & 991 & 81.2 & 84.8 & 82.9 \\ Data Structure & 1,051 & 83.1 & 87.4 & 85.2 \\ Error Name & 1,088 & 86.0 & 90.8 & 88.3 \\ License & 1,140 & 86.6 & 90.9 & 88.7 \\ \hline Micro Avg. & - & 73.8 & 73.5 & 73.7 \\ Macro Avg. & - & 76.6 & 76.8 & 76.6 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Results by entity type. Second column shows the number of entity label occurrences in the test set of WikiSER\({}_{small}\)
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline & \multicolumn{2}{c|}{**SoftNER-9 [23]**} & \multicolumn{2}{c}{**S-NER [21]**} \\ & P & R & F1 & P & R & F1 \\ \hline BERT\({}_{base}\) & 64.7 & 64.2 & 64.4 & 77.0 & 80.9 & 78.9 \\ BERT\({}_{base}\)+Self-reg. & 65.2 & 62.4 & 64.8 & 81.8 & 81.1 & 81.4 \\ SoftNER & 74.6 & 72.9 & 73.7 & 81.3 & **84.6** & 82.9 \\ BERTOverflow & 65.2 & 73.1 & 74.0 & **84.3** & 83.9 & 84.1 \\ BERTOverflow+Self-reg. & **75.8** & **76.5** & **76.1** & **86.0** & 84.3 & **85.2** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Evaluation results on two Stack Overflow datasets
approach is both better and cheaper. When GPU memory is a concern, self-regularization can be cheaply applied to improve model robustness in noisy-label settings.
## VII Discussion
### _Threats to Validity_
We discuss the validity of our approach in both data construction and denoising method. First, labeling software entities in sentences could be subjective. Despite our efforts to narrow down a Wikipedia tree for the software engineering domain, we recognize that our annotation still cannot guarantee perfect precision or recall of all relevant named entities from Wikipedia. We estimate the precision of our WikiSER in Section IV and compare our dataset against previous work, which shows that WikiSER's size and comparatively low label error rate position it as a beneficial contribution for both the software engineering and NLP community. In addition, we understand the limitations of the Wikipedia taxonomy as a rich but not exhaustive source of relevant software entities. Besides Wikipedia, data sources such as GitHub and source code documentation could provide fruitful information.
In model training, we use mostly the same hyperparameters for baseline methods without individual tuning. A more thorough grid search could improve the results for some models in our evaluation. However, we note that while our main method experiments with different \(\alpha\), an important hyperparameter for self-regularization, we minimally tune other hyperparameters.
The task of software entity recognition poses many challenges in entity confusion and ambiguity, noisy user-input texts, and constant distribution shifts [21]. While we demonstrate the efficacy of the self-regularization framework in recognizing entities for clean (Wikipedia) and noisier user-input texts (Stack Overflow), it is difficult to guarantee that our approach would generalize well to _any_ domain. However, we highlight that our framework can be easily adapted to new domains and any pretrained language models without requiring heavy supervision from auxiliary resources.
### _Limitations & Future Work_
We evaluate our proposed method on WikiSER\({}_{small}\) rather than the entire WikiSER dataset, which is too large to train and evaluate in our GPU server. Future work could look into training and evaluating SER models with a larger sample from WikiSER or even the entire dataset on more GPUs.
Future work can also look into ways to improve upon more domain-specific methods for noisy label learning and further leverage the massive source of labeled data from WikiSER. The fact that our corpus contains 1.7M sentences makes it an attractive resource for exploring language model pretraining and multi-task learning [84]. Furthermore, there are many downstream software engineering tasks that could benefit from our noise-robust learning methods for SER, such as traceability link recovery [1, 2, 3, 4], automated documentation [5, 6, 7, 8, 9], API recommendation [10, 11, 12], and bug fixing [13, 14, 15, 16]. It is worthwhile to augment existing solutions in these downstream tasks with our SER model. Finally, given the recent advancement in large language models (LLMs) such as ChatGPT, it is interesting to investigate how well LLMs perform in software entity recognition tasks.
## VIII Conclusion
In this work, we construct WikiSER, a large and high-quality software entity recognition dataset by leveraging the Wikipedia corpus. To account for labeling errors in SER datasets, we propose a new noise-robust learning method called self-regularization. Compared with multiple baseline models, including a SOTA SER model [23] and a SOTA noise-robust learning method [30], models trained with self-regularization perform the best while being more computationally efficient. Furthermore, self-regularization also generalizes well to two existing SER datasets from Stack Overflow. Finally, we highlight several improvement opportunities and outline future work.
## Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable comments. This research was in part supported by an Amazon Research Award and a Cisco Research Award.
|
2302.11621 | Isolating the linear signal when making redshift space distortion
measurements | Constraints on the linear growth rate, $f\sigma_8$, using small scale
redshift space distortion measurements have a significant statistical advantage
over those made on large scales. However, these measurements need to carefully
disentangle the linear and non-linear information when interpreting redshift
space distortions in terms of $f\sigma_8$. It is particularly important to do
this given that some previous measurements found a significant deviation from
the expectation based on the $\Lambda$CDM model constrained by Planck CMB data.
We construct a new emulator-based model for small scale galaxy clustering with
scaling parameters for both the linear and non-linear velocities of galaxies,
allowing us to isolate the linear growth rate. We train the emulator using
simulations from the AbacusCosmos suite, and apply it to data from the extended
Baryon Oscillation Spectroscopic Survey (eBOSS) luminous red galaxy sample. We
obtain a value of $f\sigma_8(z=0.737)=0.368\pm0.041$, in 2.3-$\sigma$ tension
with the Planck 2018 $\Lambda$CDM expectation, and find less dependence on the
minimum measurement scale than previous analyses. | Michael J. Chapman, Zhongxu Zhai, Will J. Percival | 2023-02-22T19:56:26Z | http://arxiv.org/abs/2302.11621v1 | # Isolating the linear signal when making redshift space distortion measurements
###### Abstract
Constraints on the linear growth rate, \(f\sigma_{8}\), using small scale redshift space distortion measurements have a significant statistical advantage over those made on large scales. However, these measurements need to carefully disentangle the linear and non-linear information when interpreting redshift space distortions in terms of \(f\sigma_{8}\). It is particularly important to do this given that some previous measurements found a significant deviation from the expectation based on the \(\Lambda\)CDM model constrained by Planck CMB data. We construct a new emulator-based model for small scale galaxy clustering with scaling parameters for both the linear and non-linear velocities of galaxies, allowing us to isolate the linear growth rate. We train the emulator using simulations from the AbacusCosmos suite, and apply it to data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) luminous red galaxy sample. We obtain a value of \(f\sigma_{8}(z=0.737)=0.368\pm 0.041\), in 2.3-\(\sigma\) tension with the Planck 2018 \(\Lambda\)CDM expectation, and find less dependence on the minimum measurement scale than previous analyses.
keywords: cosmology: cosmological parameters - cosmology : observations - cosmology : large-scale structure of Universe - galaxies : distances and redshifts
## 1 Introduction
Precise measurement of the Cosmic Microwave Background (CMB) has fundamentally changed the way we understand our Universe. We now have tight constraints on the core cosmological parameters and find good agreement with a cosmological model with a cold dark matter component that dominates the matter density and a cosmological constant that dominates the energy density (\(\Lambda\)CDM). However, there exist several tensions between measurements of the early universe through the CMB and some late time probes. In particular, the expansion rate of the Universe at the present day, \(H_{0}\), measured in the local Universe from type Ia supernovae (Riess et al., 2022), and the amplitude of fluctuations in the matter density field, parameterized as \(S_{8}\), measured by weak lensing surveys (Asgari et al., 2021; Abbott et al., 2022), are in disagreement with the values measured from the CMB by the Planck satellite (Planck Collaboration et al., 2020, 2020). The focus of many ongoing cosmological observations is to build on the current concordance cosmology using additional measurements that are independent of the CMB observations or have complementary parameter degeneracies. To that end, redshift space distortion (RSD) measurements provide a unique test of cosmological constraints derived from the matter density field by probing the velocity field.
RSD is an apparent effect observed in spectroscopic galaxy clustering surveys caused by the peculiar velocities of galaxies. In a spectroscopic galaxy survey the radial distances to the galaxies are usually determined from the redshifts, assuming that the recession velocities are caused entirely by the expansion of the Universe. However, because galaxies have an additional peculiar velocity caused by structure growth and primarily sourced from gravity, their radial positions as determined by the survey, called redshift space positions, will be offset from their true positions in real space (Kaiser, 1987). In the linear regime the amplitude of the velocity field is directly proportional to two cosmological parameters. The first is the logarithmic growth rate of density perturbations, \(f\). The second is the amplitude of density fluctuations, which can be normalized using the standard deviation of density fluctuations in a sphere of \(8\,h^{-1}\)Mpc, defined as \(\sigma_{8}\). Due to the degeneracy between these parameters RSD constraints are given in terms of the parameter combination \(f\sigma_{8}\)(Guzzo et al., 2008; Song and Percival, 2009). RSD measurements can therefore be used to constrain \(f\sigma_{8}\) in a way that is complementary to probes of the density distribution (Huterer and Shafer, 2018).
RSD measurements are most easily interpreted on linear scales where the density field can be easily modelled analytically (see e.g. Bautista et al., 2021). Models can be extended to quasi-linear scales using Lagrangian perturbation theory (LPT), which models the evolution of the density field by the displacement of dark matter fluid elements (Taruya et al., 2010; Reid and White, 2011; Carlson et al., 2013; Wang et al., 2014). The perturbation theory expansion breaks down
at the shell-crossing scale. An alternative method is the effective field theory (EFT) approach, which makes use of the relatively weak link between the small scale non-linear structure of galaxy formation and the typical separation of galaxies in large scale structure surveys (Baumann et al., 2012; Carrasco et al., 2012). By integrating out short-wavelength perturbations it becomes possible to solve the resulting smoothed field with a high degree of accuracy into the quasi-linear regime by extending the perturbation theory calculations to arbitrarily high order (d'Amico et al., 2020; Ivanov et al., 2020; Chen et al., 2021). While these methods are successful at modelling the distribution of matter in the linear and quasi-linear regimes, they cannot provide an analytic basis for the formation of galaxies or the non-linear motion of virialized structures. These effects are instead included as additional correction terms whose functional form can be predicted from perturbation theory, but with unknown amplitudes that must either be calibrated on simulations or fit from the data (Cabass et al., 2022).
Previous works have attempted to extract RSD information from small scales by modelling the formation of non-linear structure with N-body simulations. Reid et al. (2014) used an N-body simulation at a single fixed cosmology to model the clustering of galaxies within the Baryon Oscillation Spectroscopic Survey (BOSS) CMASS sample between \(0.8-32\)\(h^{-1}\)Mpc, and found a factor of 2.5 improvement in precision over the perturbation theory RSD analysis on large scales of the same sample. This method has been expanded through the use of machine learning emulators to allow for varying cosmology without needing to run additional N-body simulations for each new point in parameter space, finding similar improvements in precision over perturbation theory approaches (Chapman et al., 2022; Zhai et al., 2022; Yuan et al., 2022; Kobayashi et al., 2022).
A key aspect of the Reid et al. (2014) analysis was the introduction of a velocity scaling parameter, \(\gamma_{f}\), that multiplied all halo velocities in the simulation. Scaling the amplitude of the velocity field is directly equivalent to a proportional change in \(f\sigma_{8}\) in linear theory, allowing Reid et al. (2014) to specifically assess deviations in the growth rate within a \(\Lambda\)CDM framework, since the growth rate is normally fixed by the other cosmological parameters. Chapman et al. (2022) analyzed the eBOSS LRG sample using a Gaussian process based emulator with a velocity scaling parameter (Zhai et al., 2019), however they were forced to restrict their analysis with a minimum scale cut to match the scale where changing \(\gamma_{f}\) no longer directly matched the expectation for a change in \(f\sigma_{8}\). While the small-scale, non-linear velocities are certainly affected by a change in the growth rate, it is no longer necessary that that change be directly proportional, so there is a potential for a systematic bias in applying a linear velocity scaling to non-linear velocities.
This highlights a larger issue in the area of small-scale RSD measurements: how to measure a linear quantity in the non-linear regime without allowing the non-linear velocity evolution to bias the results. This is the primary motivation for this work. We build on the previous model by splitting the velocity scaling parameter \(\gamma_{f}\) into two parameters: \(\gamma_{l}\) to scale the linear component of the velocity, and \(\gamma_{n}\) to scale the non-linear component. This new parameterization allows us to interpret a change in \(\gamma_{l}\) as a change in the amplitude of the linear velocity field consistent with a change in \(f\sigma_{8}\) within a \(\Lambda\)CDM framework, while \(\gamma_{n}\) allows enough freedom for the non-linear velocity to vary without directly matching the scaling of the linear velocity.
This paper is structured as follows. In Sec. 2 we expand on the model of Chapman et al. (2022) to isolate the linear signal in the non-linear regime using our new velocity scaling parameters. Then we refit the eBOSS LRG data using the new emulator, and present the results in Sec. 3. Finally, in Sec. 4 we discuss the significance of our new results and compare to the work of the previous emulator and other related measurements.
## 2 Modelling RSD including velocity scaling
### Building an emulator
In order to access RSD information on small scales we need to model the clustering of galaxies into the non-linear regime. The solution we choose is to construct an emulator for the small scale clustering, trained and validated using N-body simulations. We apply machine learning with a Gaussian process to emulate the correlation function measurements in each separation bin, as a function of the set of parameters specifying the cosmology and HOD model. First, we use a set of _training_ data to specify the value of the emulator at a series of points in parameter space. These are the means of the Gaussian distributions. We then use a different set of _test_ data to optimise the width and shape of an "interpolation kernel", such that the final model given a set of model parameters is the linear sum of the means coming from the training data, weighted by this kernel. Details of the kernel and optimisation are available in Zhai et al. (2019). Our training data is generated from N-body simulations, where we use a halo occupation distribution to connect galaxies to halos. While the training data can only have a limited number of possible values in our parameter space, the trained emulator is very effective at interpolating within this parameter space to produce accurate clustering measurements.
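As a rough sketch of the emulation step (not the Zhai et al. 2019 implementation, which uses its own kernel and training procedure), a Gaussian process can be fit independently to each separation bin over the 15-dimensional cosmology, HOD, and velocity-scaling parameter space, for example with scikit-learn:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

n_params = 15                                  # 5 cosmology + 8 HOD + 2 velocity-scaling parameters
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(40, n_params))     # stand-in for the 40 training-box parameter vectors
y_train = rng.normal(size=40)                  # stand-in for xi measured in one separation bin

# One GP per measurement bin; this generic anisotropic RBF kernel is illustrative only.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_params))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

x_new = rng.uniform(size=(1, n_params))        # any new point in parameter space
xi_pred, xi_std = gp.predict(x_new, return_std=True)
```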
In this work we build on the emulator used in Chapman et al. (2022), originally based on Zhai et al. (2019). The emulator used a 5-parameter cosmological model consisting of \(\Omega_{M}\), \(\Omega_{b}\), \(\sigma_{\rm S}\), \(h\), and \(n_{s}\), as well as an 8-parameter HOD model to connect galaxies to halos in the simulation, described by the parameters \(f_{\rm max}\), \(\sigma_{\rm log}\,M\), \(\log M_{\rm sat}\), \(\alpha\), \(\log M_{\rm cut}\), \(c_{\rm vir}\), \(v_{\rm bc}\), and \(v_{\rm bs}\).
The final parameter of the Chapman et al. (2022) emulator was a velocity scaling parameter, \(\gamma_{f}\). Physically, \(\gamma_{f}\) rescaled all halo bulk velocities in the simulation, where we define 'bulk velocities' to mean the velocity of the halo as a single unit, rather than the velocity of the individual particles making up the halo or the internal velocity dispersion of the halo. In the linear regime the amplitude of the velocity field is directly proportional to \(f\sigma_{8}\), so a scaling of the velocity field has the same effect as scaling the logarithmic growth rate \(f\) (Reid et al., 2014). However, an issue highlighted in Chapman et al. (2022) is the question of what velocities can be considered as linear for the purposes of the growth rate. While a change in the growth rate will affect all components of the velocity, the relation between the amplitude of the non-linear velocity field and \(f\) may not be directly proportional. Chapman et al. (2022) investigated the effect of varying \(\gamma_{f}\) on the correlation function and identified a scale of \(\sim 7\,h^{-1}\) Mpc as the transition between the quasi-linear and non-linear regimes, so they restricted their measurement of \(f\sigma_{8}\) to between \(7-60\,h^{-1}\) Mpc to isolate the linear signal when using a single scaling parameter.
We improve on the Chapman et al. (2022) emulator using the method described in Sec. 2.2 to model the linear and non-linear velocity components. In order to apply this new method we require access to the initial conditions of the simulation, which are not publicly available for the Aemulus suite of simulations (DeRose et al., 2019) used by the Chapman et al. (2022) emulator. For our new emulator we use the AbacusCosmos suite of simulations (Garrison et al., 2018), with available first-order initial conditions generated from the
zeldovich-PLT code. AbacusCosmos consists of 40 variable cosmology 1100 \(h^{-1}\) Mpc simulation boxes with \(1440^{3}\) particles that we use to train the emulator, as well as 20 simulation boxes at the Planck 2015 cosmology (Planck Collaboration et al., 2016) that are used for testing. Since the AbacusCosmos and Aemulus suites are similar in terms of number of boxes, box size, and number of particles we use the same method to estimate the emulator uncertainty as Zhai et al. (2019), adapted to the boxes available in AbacusCosmos. We use the 20 AbacusCosmos boxes with Planck cosmology to estimate the sample variance, and assess the performance of the emulator throughout the cosmological parameter space by retraining the emulator with one variable cosmology box excluded at a time, and comparing emulator predictions to measurements from the excluded box.
Footnote 1: [https://github.com/abacusorg/zeldovich-PLT](https://github.com/abacusorg/zeldovich-PLT)
### Isolating the linear signal
In order to ensure that our results are not biased by the assumption that all components of the velocity will be scaled in the same way by a change in \(f\) we split the velocity of halos into two components: a linear and a non-linear component. We scale each component by an independent parameter: \(\gamma_{l}\) for the linear component and \(\gamma_{n}\) for the non-linear component. If these parameters are constrained such that \(\gamma_{l}=\gamma_{n}\) then all velocities are scaled by the same amount and the model reduces to the single scaling parameter, \(\gamma_{f}\), used in Chapman et al. (2022). The split is performed on halo velocities rather than galaxy velocities because the velocity bias of galaxies is implemented by other independent parameters in the emulator. Galaxies are assigned the velocity of their host halo with an additional velocity term calculated as \(\sigma_{\rm gal}=v_{\rm gal}\sigma_{\rm halo}\), where \(v_{\rm gal}\) is the velocity bias parameter for that galaxy type (\(v_{\rm bc}\) and \(v_{\rm bs}\) for centrals and satellites respectively), and \(\sigma_{\rm halo}\) is the velocity dispersion of the halo calculated from its mass using the virial theorem. The additional velocity term is calculated independently of the velocity scaling by \(\gamma_{l}\) and \(\gamma_{n}\) so that it is controlled entirely by \(v_{\rm bc}\) and \(v_{\rm bs}\). This choice reduces the degeneracy between the velocity scaling and velocity bias parameters while still allowing for sufficient freedom in the model to address both a change in the growth rate and the presence of velocity bias (Guo et al., 2015).
The challenge of this new model is determining what component of the velocity is linear at late time. While this is difficult to do for the halo velocities, we can make use of the fact that the initial conditions of the emulator provide a method for calculating particle linear velocities, which can then be combined to provide an estimate of the linear velocity of the halo. The AbacusCosmos initial conditions were generated by calculating Zel'dovich approximation displacements for a grid of particles at \(z=49\) using the zeldovich-PLT code. The Zel'dovich approximation provides a first order calculation of the displacements and velocities of particles, so \(z=49\) is chosen as an arbitrarily large redshift where the motion of particles will very closely follow linear theory. We can use these initial particle linear velocities to predict the particle linear velocities at the \(z=0.7\) simulation slice by evolving them using the linear theory prediction for the amplitude of the velocity field,
\[\mathbf{v_{k}}=\frac{i\mathbf{k}}{k^{2}}Ha\delta_{\mathbf{k}}f(\Omega_{m}). \tag{1}\]
The velocity scaling of the initial conditions is simply the ratio of Eq. 1 between the redshift of the initial conditions and the desired final redshift,
\[\mathbf{v}(z_{2})=\frac{\left[Haf\sigma_{8}\right](z_{2})}{\left[Haf\sigma_{8}\right](z_{1})}\mathbf{v}(z_{1}). \tag{2}\]
We define the non-linear velocity as all components of the total velocity not included in the linear velocity, and calculate it by subtracting the linear velocity vector from the total velocity vector. By separately scaling the linear velocity by \(\gamma_{l}\) and the non-linear velocity by \(\gamma_{n}\) we allow the non-linear velocity of the data to deviate from the \(\Lambda\)CDM expectation of the simulations without biasing the value of \(f\sigma_{8}\) we infer from \(\gamma_{l}\). \(\gamma_{l}\) and \(\gamma_{n}\) will, in general, be correlated with each other. For example, this will be true for quasi-linear velocity evolution that happens along the direction of the linear velocity.
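Schematically, the two scaling parameters act on the halo velocities as in the sketch below; this is illustrative only, and the variable names are ours rather than the pipeline's.

```python
import numpy as np

def scaled_velocity(v_total, v_linear, gamma_l=1.0, gamma_n=1.0):
    """Apply separate scalings to the linear and non-linear velocity components.

    v_total, v_linear: (N, 3) arrays of halo velocities; the linear component is
    the (smoothed) initial-condition velocity evolved to the snapshot redshift.
    """
    v_nonlinear = v_total - v_linear          # residual, by definition
    return gamma_l * v_linear + gamma_n * v_nonlinear

# gamma_l = gamma_n recovers a single overall scaling (the old gamma_f);
# gamma_l = gamma_n = 1 recovers the unscaled simulation velocities.
```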
### Smoothing the linear velocity field
Pairs of galaxies with small separation in collapsed objects have lost all dependence on the initial linear velocities. This approximately occurs at shell crossing and means that our split into linear and non-linear components is ineffective on such scales - a portion of the velocity ascribed to non-linear motion simply cancels out the linear one (see Appendix A). In an extreme situation, if two objects are located sufficiently close to each other along the line of sight and have a large enough infall velocity, the shift in position in redshift space reverses the orientation of the pair along the line of sight. In this situation scaling the velocity will increase the pair separation, leading to damping of the correlation function. We therefore elect to smooth the particle linear velocity field around the shell crossing scale, which from our previous analysis we know to occur at approximately 5 \(h^{-1}\)Mpc. This smoothing reduces the pairwise linear velocity of nearby objects, transferring the component of the velocity that provokes shell crossing to what we have termed the 'non-linear' component, since total velocity is still conserved. Meanwhile, the linear pairwise velocity of more distant objects is unaffected, preserving the signal we wish to extract with our linear velocity scaling parameter.
To illustrate the smoothing effect we use a projected 5 \(h^{-1}\)Mpc thick slice of the Abacus Planck 00-0 box to demonstrate the arrangement of the different particle velocity components in a high density region, shown in Fig. 1. While the velocity of field particles is largely unchanged between total, linear, and smoothed linear velocities, the behaviour of particles in the cluster differs greatly. The unsmoothed linear velocity displays a distinct preferred direction when compared to the total velocity, however some scatter persists. The smoothed velocity is significantly more collimated so that close particles will maintain their separation in redshift space, as intended. The non-linear velocities show the difference between the total velocity and smoothed linear velocities. As expected, the non-linear velocities are significantly larger in collapsed structures compared to the field, and do not show an obvious preferred direction.
Our process of smoothing and assigning halo velocities is as follows. First, we construct a 3D grid with side length 1 \(h^{-1}\)Mpc over the simulation box, and assign to each grid cell a linear velocity equal to the mean linear velocity of the particles contained within the cell. Next, we smooth the grid using a 3D spherical tophat kernel of radius 5 \(h^{-1}\)Mpc, equally weighting each grid cell. Finally, halos are assigned the smoothed linear velocity of the cell they inhabit. The smoothing radius of 5 \(h^{-1}\)Mpc was chosen to match the approximate scale found in Chapman et al. (2022) where increasing the velocity scaling parameter, \(\gamma_{f}\), transitioned from amplifying the monopole to damping the monopole. A tophat kernel was chosen because of
the small width of this transition, and because it reduces the number of calculations required for the smoothing compared to other possible kernel choices, such as a Gaussian kernel. The grid spacing was chosen to balance the resolution of the grid and the memory requirements of the computation. Testing these choices is discussed below.
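A simplified numpy/scipy sketch of this gridding and smoothing step is given below. It is illustrative only (the actual implementation and its memory handling are not specified in the text) and assumes periodic boundary conditions for the simulation box.

```python
import numpy as np
from scipy import ndimage

def smoothed_linear_velocity_grid(pos, v_lin, box=1100.0, cell=1.0, radius=5.0):
    """Mean particle linear velocity per cell, smoothed with a spherical tophat.

    pos: (N, 3) particle positions in Mpc/h; v_lin: (N, 3) particle linear velocities.
    Returns a (ngrid, ngrid, ngrid, 3) array of smoothed cell velocities.
    """
    ngrid = int(round(box / cell))
    idx = np.floor(pos / cell).astype(int) % ngrid          # nearest-grid-point assignment

    counts = np.zeros((ngrid,) * 3)
    vsum = np.zeros((ngrid,) * 3 + (3,))
    np.add.at(counts, tuple(idx.T), 1)
    np.add.at(vsum, tuple(idx.T), v_lin)
    vmean = np.where(counts[..., None] > 0, vsum / np.maximum(counts, 1)[..., None], 0.0)

    # Spherical tophat kernel of radius 5 Mpc/h, equally weighting each cell inside the sphere.
    r = int(np.ceil(radius / cell))
    x = np.arange(-r, r + 1)
    kernel = (x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2) <= (radius / cell)**2
    kernel = kernel / kernel.sum()

    smoothed = np.stack(
        [ndimage.convolve(vmean[..., k], kernel, mode="wrap") for k in range(3)], axis=-1)
    return smoothed

# A halo is then assigned the smoothed velocity of the grid cell containing its centre.
```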
In Fig. 2 we investigate the effect of scaling the smoothed halo linear velocity on the monopole of the halo correlation function and compare to the results of scaling the unsmoothed halo velocities. For the unsmoothed linear velocity field we define the linear halo velocity as the mean linear velocity of the constituent particles. Scaling both the smoothed and unsmoothed velocities has a nearly identical effect on the large scales of the monopole for both the linear velocity scaling parameter, \(\gamma_{l}\), and the non-linear velocity scaling parameter, \(\gamma_{n}\). This result is expected since the velocity smoothing primarily affects the pairwise velocity of small separation objects by construction,
Figure 1: A slice of one of the Abacus Planck boxes showing the particle positions and velocities. Blue points show the position of particles from a uniform 10% down sampling, and black arrows show the velocities of the particles where the size of the arrow is proportional to the amplitude of the velocity. _Upper left:_ The total particle velocity. _Upper right:_ The linear velocity calculated from the initial conditions. _Lower left:_ The smoothed linear velocity calculated using a tophat smoothing kernel with radius \(5~{}h^{-1}\)Mpc. _Lower right:_ The non-linear velocity component, calculated as the difference between the total velocity and the linear velocity.
and desired because the large scale behaviour follows the expectation from linear theory in that the amplitude of the monopole is proportional to \(f\), and scaling up the velocities increases the amplitude of the correlation function. However, around \(\sim 2\,h^{-1}\)Mpc scaling up the unsmoothed linear velocities changes behaviour and damps the monopole due to the shell crossing issue discussed above. Scaling up the smoothed linear velocity increases the amplitude at all scales, although the effect is reduced below the smoothing scale. This matches our desired behaviour for the linear velocity field, which was visualized in Fig. 1, that close pairs that have already collapsed maintain their separation as the linear growth rate is increased, rather than being spread apart. When scaling \(\gamma_{n}\) the effect is similar for both methods of calculating the velocity components, although the smoothed velocity field shows a greater change in amplitude. The quadrupole is not included in this plot because the change in sign makes these trends more difficult to see intuitively, but the same behaviour of the scaling parameters is seen in quadrupole as displayed in the monopole. The projected correlation function is largely insensitive to the radial velocity by construction, and the difference between smoothed and unsmoothed velocities is insignificant.
Fig. 2 also shows the results of varying the parameters used to smooth the linear velocity field. Faint, coloured lines show the effects of smoothing using a tophat radius of \(3.0\,h^{-1}\)Mpc and \(7.0\,h^{-1}\)Mpc instead of the default \(5.0\,h^{-1}\)Mpc, using a grid of side length \(2.0\,h^{-1}\)Mpc or \(0.8\,h^{-1}\)Mpc instead of the default \(1.0\,h^{-1}\)Mpc, and of using a Gaussian kernel with standard deviation \(2.0\,h^{-1}\)Mpc. In all cases the effect is quite similar to our default choice of parameters at all scales and for both scaling parameters, indicating that our smoothing method is robust to varying these choices.
### Testing the improved emulator
We validate our emulator by performing an MCMC fit to a subsample of the measurements of the Planck 2015 boxes used for determining the emulator uncertainty. We randomly select 10 test HOD models and measure the redshift space galaxy correlation for all 20 simulation boxes with line-of-sight along each of the three axes, giving a total of 60 measurements. We average the results of these 60 measurements for each HOD model and fit the data using our improved emulator. For the covariance matrix we use our data covariance matrix, scaled along the diagonal to match the volume of the mock measurements without modifying the correlation structure. While the true effective volume of our measurement will be between 20-60 simulation boxes because we use 20 independent boxes each measured along three independent lines-of-sight, we choose a volume of 20 simulation boxes as our fiducial amount to be conservative.
Figure 2: The mean change in the monopole of the redshift space halo correlation functions after velocity scaling from the 20 Planck cosmology boxes. The left panel shows the effect of scaling \(\gamma_{l}\), while the right panel shows the effect of scaling \(\gamma_{n}\). Solid lines show the effect of scaling by \(\gamma=1.2\), while dashed lines show the scaling by \(\gamma=0.8\). The black lines show the result using the unsmoothed linear velocity, while the thick blue line shows the result of our fiducial smoothing method: a tophat kernel with radius \(5\,h^{-1}\)Mpc on a grid of side length \(1\,h^{-1}\)Mpc. Faint coloured lines show the results of variations on the smoothing method. The orange and green lines show the results of varying the tophat smoothing radius to \(3\) and \(7\,h^{-1}\)Mpc respectively while keeping the grid size fixed, while the red and purple lines show the result of varying the grid size to \(2\) and \(0.8\,h^{-1}\)Mpc while keeping the smoothing kernel fixed to the fiducial method. Finally, the brown line shows the result of smoothing using a Gaussian kernel with standard deviation \(2\,h^{-1}\)Mpc on a \(1\,h^{-1}\)Mpc grid.
For all 10 models we recover the known value of \(\gamma_{l}\) and the expected value of \(f\sigma_{8}\) to within the 68% confidence interval. This is expected given our conservative choices for the emulator uncertainty, which lead to slightly inflated confidence intervals while ensuring that our parameter inference is not biased. Likewise the known cosmological and HOD parameters are recovered for the majority of the models. The HOD parameters that are least often recovered are \(\log M_{\rm cut}\), \(\sigma_{\log M}\), and \(f_{\rm max}\), however none are degenerate with our key cosmological parameters and there is no significant impact on the \(f\sigma_{8}\) constraints, so there is no concern for our measurement of the eBOSS data.
We also investigate the scale dependence of the constraints from the 10 test HOD models. For each model we perform a fit to the full separation range of the model, \(0.1-60\,h^{-1}\)Mpc, as well as four additional fits restricted to the separation ranges \(0.1-7\,h^{-1}\)Mpc, \(0.8-7\,h^{-1}\)Mpc, \(0.8-60\,h^{-1}\)Mpc, and \(7-60\,h^{-1}\)Mpc, matching the methodology used to test the data in Sec. 3.3. For each model we find all separation ranges give a mutually consistent value of \(f\sigma_{8}\) at the \(1\sigma\) level, with approximately half of the models showing a slight offset between the \(0.1-7\,h^{-1}\)Mpc and \(0.8-7\,h^{-1}\)Mpc results and the remaining separation ranges. The offset is equally likely to occur to larger and smaller values and is within the measurement uncertainty, so it is not a concern for our cosmological inference.
Finally, we validate our entire pipeline using a subhalo abundance matching (SHAM) mock constructed from the Uchuu simulation (Ishiyama et al., 2021). Uchuu is a \((2000\,h^{-1}\mathrm{Mpc})^{3}\), \(12800^{3}\) particle simulation using the Planck 2015 cosmology and a mass resolution of \(m_{p}=3.27\times 10^{8}\,h^{-1}\,M_{\odot}\). Using a different galaxy halo connection model and simulation is a necessary test of the robustness of our model in order to be able to confidently apply it to the eBOSS data. Fitting the correlation function of the SHAM mock using our new emulator we are able to recover the known cosmological parameters within the 68% confidence interval for all parameters, and find all well constrained HOD parameters to be within their respective prior ranges. We recover \(\gamma_{l}=1.00\pm 0.08\) and \(\gamma_{n}=0.90\pm 0.14\), both consistent with their expected values of 1 since the mock contained a \(\Lambda\)CDM growth rate and no velocity scaling.
Footnote 1: [http://skiesanduniverses.org/Simulations/Uchuu/](http://skiesanduniverses.org/Simulations/Uchuu/)
## 3 Measuring the eBOSS LRG RSD
### eBOSS LRG Sample
We fit our new emulator model to the extended Baryon Oscillation Spectroscopic Survey (eBOSS) (Dawson et al., 2016) luminous red galaxy (LRG) sample analyzed in Chapman et al. (2022). Targets were selected for the eBOSS LRG sample (Prakash et al., 2016) from a combination of SDSS DR13 photometry (Albareti et al., 2017) and infrared observations from the WISE satellite (Lang et al., 2016). Spectroscopic observations were made using the BOSS spectrographs (Smee et al., 2013) mounted on the 2.5-meter Sloan telescope (Gunn et al., 2006). The eBOSS LRG sample consists of 174 816 objects over 4242 deg\({}^{2}\) in the redshift range \(0.6<z<1.0\). The sample has an effective volume of 1.28 Gpc\({}^{3}\), an effective redshift of \(z=0.737\), and a peak number density of \(n=1\times 10^{-4}\,(\mathrm{Mpc}^{-1}h)^{3}\)(Ross et al., 2020).
We apply the standard eBOSS weights to the data, which correct for variations in obtaining reliable redshifts and observational contaminants, as well as optimizing the signal obtained from the data. We also apply the pairwise inverse probability weights combined with angular upweighting (PIP+ANG) (Bianchi & Percival, 2017; Percival & Bianchi, 2017) calculated in Mohammad et al. (2020) to correct the fibre collision issue. Fibre collision occurs when the physical size of the fibres prevents simultaneously targeting multiple close objects within a single pointing of the instrument, leading to a biased sample that is particularly concerning for small scale observations. Reid et al. (2014) identified fibre collision as the most significant issue for analyzing small scale clustering of SDSS data. Most analyses use an approximate correction that involves transferring the weight from the missing object to a nearby observed object. This type of correction approximately recovers the true clustering on large scales, however the performance degrades on smaller scales and all information below the fibre collision scale is lost. By contrast PIP weights are theoretically unbiased on all scales, allowing for a full recovery of the true clustering. Mohammad et al. (2020) calculated PIP+ANG weights for all three eBOSS samples, which we apply when measuring the eBOSS LRG clustering.
We measure the clustering using the two-point correlation function, which represents the excess probability of finding two galaxies at a given separation compared to a randomly distributed sample. We calculate the correlation function using the Landy-Szalay estimator, which has been shown to be the least-biased and least-variance estimator (Landy and Szalay, 1993). We use a random catalogue matching the angular and radial distribution of the LRG sample with a factor of 50 times more points in order to reduce the impact of shot noise. The difference in the number of data and random points is taken into account in the normalization of the pair counts.
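For reference, a minimal sketch of the Landy-Szalay estimator from raw pair counts in a single separation bin is shown below; in practice the counts are weighted by the eBOSS and PIP+ANG weights described above, which this sketch omits.

```python
def landy_szalay(dd, dr, rr, n_data, n_rand):
    """Landy-Szalay estimator from raw pair counts in one separation bin.

    dd, dr, rr: data-data, data-random and random-random pair counts;
    n_data, n_rand: numbers of data and random points (the random catalogue
    here is 50x denser than the data, which the normalisation absorbs).
    """
    dd_norm = dd / (n_data * (n_data - 1) / 2.0)
    dr_norm = dr / (n_data * n_rand)
    rr_norm = rr / (n_rand * (n_rand - 1) / 2.0)
    return (dd_norm - 2.0 * dr_norm + rr_norm) / rr_norm
```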
To reduce the number of bins, we compress the full 2D correlation function in two ways. We use the first two even multipoles of the correlation function, which contain most of the RSD information. It is common to also use the next even multipole, the hexadecapole, in RSD analyses. However, due to the increased noise in the hexadecapole and required increase in complexity of the emulator we choose to exclude it. The second compression method is the projected correlation function,
\[w_{p}\,(r_{\perp})=2\int_{0}^{r_{\parallel,\max}}\xi^{s}(r_{\perp},r_{\parallel} )dr_{\parallel}, \tag{3}\]
where \(r_{\perp}\) and \(r_{\parallel}\) are the components of the pair separation \(s\) perpendicular and parallel to the line of sight. We limit the integral to \(r_{\parallel,\max}=80\,h^{-1}\)Mpc, which is sufficient to remove the majority of the RSD signal. While not sensitive to RSD, \(w_{p}\) is useful for constraining the HOD parameters of our model because it has different parameter degeneracies than the multipoles.
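With \(r_{\parallel}\) binned in \(1\,h^{-1}\)Mpc slices, Eq. 3 reduces to a simple sum; a minimal sketch:

```python
import numpy as np

def projected_correlation(xi_grid, dr_par=1.0, r_par_max=80.0):
    """Discretised Eq. (3): w_p(r_perp) = 2 * sum_j xi(r_perp, r_par_j) * dr_par.

    xi_grid: 2D array of xi^s with shape (n_rperp_bins, n_rpar_bins), where the
    r_parallel bins have width dr_par = 1 Mpc/h starting from zero.
    """
    n_par = int(round(r_par_max / dr_par))
    return 2.0 * np.sum(xi_grid[:, :n_par], axis=1) * dr_par
```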
We bin both \(r_{\perp}\) and \(s\) in 9 logarithmically spaced bins between \(0.1-60\,h^{-1}\)Mpc, while \(r_{\parallel}\) and \(\mu\) are binned using linear bins of width \(\Delta r_{\parallel}=1\,h^{-1}\)Mpc and \(\Delta\mu=0.1\). These measurement statistics and binning schemes are matched to those used in our emulator. The spacing of the separation bins is chosen to sample a range of scales commonly excluded in traditional measurements, while limiting the number of bins in order to reduce the training complexity of the Gaussian process emulator. Additionally, in order to ensure a match between the model and the data we scale the separations of the model correlation functions by the Alcock-Paczynski parameters (Alcock and Paczynski, 1979) to account for the difference between the cosmology of the fit and the fiducial cosmology used to convert redshifts to distances for the data (see Sec. 3.6 of Chapman et al., 2022).
We estimate the covariance of our measurements using jackknife resampling of the data in 200 angular regions. To select the regions we apply equal area angular tiles to the data footprint, and remove
the lowest occupation regions to arrive at 200 equally sized and equally weighted angular regions. The covariance is then estimated by removing regions one at a time, measuring the clustering of the remaining regions, and comparing to the measurement over the whole survey according to
\[C_{i,j}=\frac{n-1}{n}\sum_{k}^{n}(\xi_{i,k}-\tilde{\xi}_{i})(\xi_{j,k}-\tilde{ \xi}_{j}), \tag{4}\]
where the \(i,j\) indices are over the elements of the data vector, \(n=200\) is the number of jackknife regions, and \(k\) is an index over the jackknife realisations. We then rescale the diagonals of the covariance matrix to account for the difference in volume between the 200 jackknife regions and the full catalogue while preserving the correlation structure. For more details, see Chapman et al. (2022). In this work we also apply the \(v_{\rm match}\) weighting scheme detailed in Mohammad and Percival (2022), which corrects the ratio of auto-pairs and cross-pairs removed by the jackknife sampling for a low density of galaxies. Applying these weights we find a minor reduction in the covariance of large separation bins, matching what was seen in Mohammad and Percival (2022), although there is no significant change to the correlation between different separation bins.
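A compact numpy version of Eq. (4) is given below; it uses the mean of the jackknife realisations as the reference vector, and omits the volume rescaling of the diagonal and the \(v_{\rm match}\) weighting described above.

```python
import numpy as np

def jackknife_covariance(xi_jk):
    """Eq. (4) from delete-one jackknife measurements.

    xi_jk : array of shape (n_regions, n_bins); row k is the data vector
    measured with jackknife region k removed.
    """
    n = xi_jk.shape[0]
    diff = xi_jk - xi_jk.mean(axis=0)      # deviations from the reference vector
    return (n - 1) / n * diff.T @ diff     # (n_bins, n_bins) covariance matrix
```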
### Headline results
Our analysis of the eBOSS sample with the new velocity-split emulator yields a value of \(f\sigma_{8}(z=0.737)=0.368\pm 0.041\), with \(\chi^{2}=16.2\) from 27 data points and 15 free parameters. This value is 2.3\(\sigma\) below the expectation for a \(\Lambda\)CDM universe with the Planck 2018 cosmology, and an increase in tension from the 1.4\(\sigma\) offset found in Chapman et al. (2022). Chapman et al. (2022) found \(f\sigma_{8}(z=0.737)=0.408\pm 0.038\) when using a single velocity scaling parameter and limiting their measurement scales to \(7-60\,h^{-1}\)Mpc, so this increase in tension is caused by a shift to a lower value of \(f\sigma_{8}\) rather than an increase in precision, although the two results are mutually consistent.
In Fig. 3 we compare the best fit models for various choices of scaling parameters to the eBOSS data. All models are able to accurately fit the data on all scales, although our baseline model of allowing both \(\gamma_{l}\) and \(\gamma_{n}\) to vary results in the lowest \(\chi^{2}\) value. The performance is slightly improved over a single scaling parameter model in the large scales of the quadrupole and projected correlation function. This is likely caused by improved flexibility in simultaneously fitting the smallest and largest measurement scales by decoupling the scaling of the velocity terms, with the non-linear velocity dominating on the smallest scales and the linear velocity dominating on the largest scales. It should be noted that while the fixed \(\gamma_{l}=1\) model is restricted to match the linear velocity amplitude expected for a \(\Lambda\)CDM universe from the AbacusCosmos simulations, it does not indicate agreement with the value of \(f\sigma_{8}\) expected from the Planck 2018 observations because \(\Omega_{m}\) and \(\sigma_{8}\) are still allowed to vary. That model results in values of \(\sigma_{8}=0.791\pm 0.027\) and \(f\sigma_{8}=0.450\pm 0.016\), with \(\chi^{2}=20.4\).
Our fit to the data only weakly constrains the amplitude of the non-linear velocity field, giving a value of \(\gamma_{n}=0.694\pm 0.29\), where a value of \(\gamma_{n}=1\) indicates agreement between the data and the expectation for a \(\Lambda\)CDM universe from the model. This constraint is limited by the lower edge of the prior at \(\gamma_{n}=0.5\), but does show a clear preference for \(\gamma_{n}<1\). The poor constraint is likely due to the lower magnitude of the non-linear velocity field compared to the linear velocity field (see Appendix A), as well as the degeneracy between \(\gamma_{n}\) and \(v_{\rm bc}\) (Fig.4). It may also indicate that our parameterization of \(\gamma_{n}\) needs further refinement in order to fully describe the behaviour of the actual non-linear velocity field. We have implemented \(\gamma_{n}\) as a uniform scaling for all components of the velocity that do not match the initial linear velocity. It is possible that there are multiple contributions to the non-linear velocity, requiring a more nuanced parameterization to capture the deviations between the data and the best fit \(\Lambda\)CDM+HOD model. Non-linear velocity scaling is also not necessarily uniform for all galaxies, and may be dependent on characteristics such as galaxy mass and environment. Investigating these alternatives is a possible avenue for future research. While this result is independent of our cosmological constraint by construction, it does indicate that non-linear velocities in the data are lower than those generated by combining our HOD model with a CDM-only simulation.
### Testing the dependence on the data fitted
A key motivating factor for constructing our new velocity-split model was the scale dependence observed when using a single velocity scaling parameter by Chapman et al. (2022). That analysis found that fitting to various measurement scales found lower values of \(f\sigma_{8}\) at smaller scales, although all measurements were consistent with each other and below the expectation for a \(\Lambda\)CDM universe with Planck 2018 cosmology. Using our updated emulator we find the smallest measurement scales to be in better agreement with the larger scales of our analysis. A small offset still exists between the quasi-linear scales and transition scales, as shown in Fig. 5. A comparison of the constraints on \(f\sigma_{8}\) using various measurement scales between the new emulator and the result of the single velocity scaling parameter emulator used in Chapman et al. (2022) is shown in Table 1.
This result follows our expectation for splitting the velocity parameters into \(\gamma_{n}\) and \(\gamma_{l}\). \(\gamma_{n}\) is more important on the smallest scales, which are fit to the lowest velocity amplitude. Introducing an additional degree of freedom for the non-linear velocities through \(\gamma_{n}\) reduces the tension in \(\gamma_{l}\), but leaves the constraints from the scales around the transition from non-linear to quasi-linear (\(\sim 0.8-7\,h^{-1}\)Mpc) and quasi-linear scales largely unaffected. Our lower overall value of \(f\sigma_{8}\) from the new analysis is caused by the inclusion of these transition scales, which also preferred a low value of \(f\sigma_{8}\) in Chapman et al. (2022) but could not be definitively attributed to the linear signal until the introduction of our new model.
While the introduction of \(\gamma_{n}\) and \(\gamma_{l}\) has not fully removed the scale dependence of our measurement, it has significantly improved the agreement of different measurement probes, as shown in the right panel of Fig. 5. The results of fitting to the multipoles alone, the monopole with projected correlation function, and all three measurements are in close agreement. This result is a significant improvement over Chapman et al. (2022), which found some tension between the different measurements due to the degeneracy between the combined velocity scaling parameter and \(v_{bc}\).
\begin{table}
\begin{tabular}{l c c} \hline Measurement Scales & Chapman et al. (2022) & This Work \\ \hline \(0.1-7\,h^{-1}\)Mpc & \(0.334\pm 0.061\) & \(0.335\pm 0.105\) \\ \(0.8-60\,h^{-1}\)Mpc & \(0.373\pm 0.031\) & \(0.368\pm 0.041\) \\ \(7-60\,h^{-1}\)Mpc & \(0.408\pm 0.038\) & \(0.412\pm 0.048\) \\ \(0.1-60\,h^{-1}\)Mpc & \(0.365\pm 0.025\) & \(0.368\pm 0.041\) \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of \(f\sigma_{8}\) constraints from different scales between the velocity-split emulator and a single velocity scaling parameter emulator.
### Comparison to previous emulator
In Fig. 6 we compare the constraints of key parameters between our new velocity-split model and the single velocity scaling parameter model used in Chapman et al. (2022). To ensure an accurate comparison between the two methods we produce a new fit using our current emulator by setting \(\gamma_{l}=\gamma_{n}\), which is equivalent to scaling all velocities by a single value. We find that all parameters are consistent between the two methods, with the most significant differences occurring in the velocity parameters \(\gamma_{n}\), \(\gamma_{l}\) and \(v_{\rm bc}\), as expected. The new method slightly increases the uncertainty on \(\gamma_{l}\), which follows through to the \(f\sigma_{8}\) constraint, since splitting the velocity scaling parameters causes \(\gamma_{l}\) to have a smaller impact on the fit, particularly at small scales. \(\gamma_{n}\) and \(v_{\rm bc}\) show significant degeneracy since they both contribute dispersive components to the galaxy velocity, and there are lesser degeneracies between \(\gamma_{l}\) and \(\gamma_{n}\), and \(\gamma_{l}\) and \(v_{\rm bc}\).
## 4 Discussion and conclusion
Using an emulator-based model with individual scaling parameters for the linear velocity, \(\gamma_{l}\), and non-linear velocity, \(\gamma_{n}\), we measure \(f\sigma_{8}(z=0.737)=0.368\pm 0.041\) from clustering between \(0.1\) - \(60\,h^{-1}\)Mpc. Chapman et al. (2022) measured the same sample using an emulator with a single parameter scaling for the total velocity, but restricted their range of measurement to \(7-60\,h^{-1}\)Mpc in order to isolate the linear signal, and found \(f\sigma_{8}(z=0.737)=0.408\pm 0.038\). The shift to lower values in the updated emulator is caused by the inclusion of smaller scale clustering, and is very similar to the measurement from the same scales using the older emulator.
The consistency of the \(f\sigma_{8}\) constraints between the two models gives confidence that our cosmological constraint is robust to the form of the velocity scaling. The advantage of the new model is that by isolating the linear signal we can now confidently extend our fitted data to small scales, which gives an increased tension with the expectation from Planck+\(\Lambda\)CDM. By splitting the velocity scaling parameter to isolate the linear signal we can identify where the information for our constraint comes from, and be sure that we are optimally extracting linear information from the small-scale RSD signal without contamination from non-linear structure growth. This theory is borne out by the consistency between the results of the two emulators given the difference in modelling choices, which indicates that the non-linear velocities are not significantly affecting the linear measurements. Therefore, the most significant advancement of the new emulator is removing non-linear contamination as a potential source of systematic. In addition to the change in parameters, the older emulator was trained on the Aemulus simulation suite while the updated emulator was trained on the AbacusCosmos suite. The consistency between two different simulation suites, generated using different codes, indicates the reliability of the training data. Combined, these factors place severe limits on potential systematic biases in the analysis that could produce the low value of \(f\sigma_{8}\) found from the data.
The results of both emulators, as well as other measurements of \(f\sigma_{8}\) from SDSS galaxy samples, are shown in Fig. 7. Our result is still consistent with the large scale analysis of the same sample at the \(1\sigma\) level, but is now in \(2.3\sigma\) tension with the expectation for a \(\Lambda\)CDM universe with a Planck 2018 cosmology. There remains a consistent trend in small scale RSD measurements to lower values of \(f\sigma_{8}\). This trend is now remarkable when considering the differences in modelling, data, and simulations between these analyses, shown in Table 2.
The consistency of these various small-scale RSD analyses leaves limited options to explain the tension with the \(\Lambda\)CDM expectation without modifying the cosmological model given the variations in data, simulations, and model. However, there are several common tools shared by all these analyses. All have models based on CDM-only simulations, with an HOD model to connect galaxies to halos, and all are used to analyse data composed mainly of LRGs observed
Figure 3: Best fit models compared to the eBOSS LRG measurement data for several choices of scaling parameters. Our baseline fit, allowing both \(\gamma_{l}\) and \(\gamma_{n}\) to vary, is shown in blue. A single velocity scaling parameter model, constrained so that \(\gamma_{l}=\gamma_{n}\) and equivalent to the model used in Chapman et al. (2022), is shown in orange. The green line shows the result of allowing \(\gamma_{l}\) to vary while fixing \(\gamma_{n}=1\), and the red line shows the result of allowing \(\gamma_{n}\) to vary while fixing \(\gamma_{l}=1\). The left, centre, and right columns show the monopole, quadrupole, and projected correlation function respectively. The top row of panels directly compares the model to the data, while the lower row shows the difference between model and data in units of the data uncertainty, with the grey-shaded region indicating the \(1\sigma\) region.
\begin{table}
\begin{tabular}{l c c c} \hline Analysis & Data & Simulations & Model \\ \hline This Work & eBOSS LRG & AbacusCosmos & Emulator+\(\gamma_{l},\gamma_{n}\) \\ Chapman et al. (2022) & eBOSS LRG & Aemulus & Emulator+\(\gamma_{f}\) \\ Lange et al. (2022) & BOSS LOWZ & Aemulus & Cosmological Evidence Modelling \\ Zhai et al. (2022) & BOSS LOWZ+CMASS & Aemulus & Emulator+\(\gamma_{f}\) \\ Yuan et al. (2022) & BOSS CMASS & AbacusSummit & Constrained HOD Emulator \\ \hline \end{tabular}
\end{table}
Table 2: Data, simulations, and models used by a variety of small scale RSD analyses.
Figure 4: 1D and 2D contours of the parameters used in our baseline fit of the eBOSS LRG \(\xi_{0}\), \(\xi_{2}\), and \(w_{p}\) in the separation range \(0.1-60\,h^{-1}\)Mpc. The constraint on \(f\sigma_{8}\) is calculated as \(f\sigma_{8}=\gamma_{l}f_{\Lambda{\rm CDM}}\sigma_{8}\). The dashed lines highlight \(\gamma_{l}=1\) and \(\gamma_{n}=1\), which would indicate that no velocity scaling is needed to match the data to the \(\Lambda\)CDM expectation of the emulator.
using the BOSS spectrograph. A non-cosmological solution could take the form of an overlooked systematic related to one of these three shared tools that is able to affect all analyses. However, it should be stated that each analysis has attempted to test these factors, and none have given evidence of an unknown systematic. In order to address these possible biases it is important to test the small scale clustering against simulations including baryonic physics (Amon & Efstathiou, 2022). Extending these analyses to DESI would also reduce the possibility of an observational bias since DESI uses a different target selection and significantly improved instrument (DESI Collaboration et al., 2016, 2014). Measuring the DESI emission line galaxy (ELG) sample would be particularly interesting, since ELGs are expected to have a different HOD from LRGs, allowing an independent test of the HOD model. These factors could also explain the low value of \(\gamma_{n}\) obtained in this analysis, which indicates a discrepancy between the model and the data in both the linear and non-linear velocity fields.
While our model has shown promise in isolating the linear signal, there remain a number of possible areas of improvement. The smoothing of the linear velocity field, while empirically motivated and tested, could be directly connected to a physical phenomenon (Hollinger & Hudson, 2021). The optimal smoothing scale is likely related to some physical characteristic of the density field, such as the radius for shell crossing. The linear velocity parameter is poorly constrained, and several significant degeneracies exist between parameters in the fit. Refining these parameters could lead to more informative and precise results. Finally, while the model successfully separates linear growth and random motions, quasi-linear growth along the direction of the linear velocity remains a point of degeneracy between \(\gamma_{l}\) and \(\gamma_{n}\), and a potential bias in the model.
Figure 5: 2D and 1D marginalized constraints on \(\gamma_{n}\) and \(f\)\(\sigma_{8}\) for fits to different scales and measurements. _Left_: Constraints from the three largest separation bins (orange), six largest separation bins (green), six smallest separation bins (red), and all nine separation bins (blue) for all three measurements. The dotted line shows the value of \(f\)\(\sigma_{8}\) expected from the Planck 2018 results assuming a \(\Lambda\)CDM cosmological model. _Right_: Constraints from the joint fit to the monopole and projected correlation function (orange), monopole and quadrupole (green), and all three measurements (blue).
Figure 6: 2D and 1D marginalized constraints of several key parameters from the eBOSS LRG data using the independent velocity-split scaling parameters introduced in this paper (blue) compared to the results from a single scaling parameter (orange), such as that used in Chapman et al. (2022). Both fits were made using the updated emulator described in Sec. 2. For the single scaling parameter fit we constrain \(\gamma_{n}=\gamma_{l}\) in order to mimic the effect of a single scaling parameter for all components of the velocity.
## Acknowledgements
WP acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-03908]. MC and WP acknowledge funding from the Canadian Space Agency.
Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
This research was enabled in part by support provided by Compute Ontario (computeontario.ca) and the Digital Research Alliance of Canada (alliancecan.ca).
## Data Availability
The eBOSS galaxy and random catalogues are publicly available at: [https://data.sdss.org/sas/dr16/eboss/lss/catalogs/DR16/](https://data.sdss.org/sas/dr16/eboss/lss/catalogs/DR16/), with a description here: [https://www.sdss.org/dr16/spectro/lss/](https://www.sdss.org/dr16/spectro/lss/). We used the Cobaya package (Torrado & Lewis, 2021; Lewis & Bridle, 2002; Lewis, 2013; Neal, 2005), which is available here: [https://github.com/CobayaSampler](https://github.com/CobayaSampler)
|
2308.04511 | MT-IceNet -- A Spatial and Multi-Temporal Deep Learning Model for Arctic
Sea Ice Forecasting | Arctic amplification has altered the climate patterns both regionally and
globally, resulting in more frequent and more intense extreme weather events in
the past few decades. The essential part of Arctic amplification is the
unprecedented sea ice loss as demonstrated by satellite observations.
Accurately forecasting Arctic sea ice from sub-seasonal to seasonal scales has
been a major research question with fundamental challenges at play. In addition
to physics-based Earth system models, researchers have been applying multiple
statistical and machine learning models for sea ice forecasting. Looking at the
potential of data-driven approaches to study sea ice variations, we propose
MT-IceNet - a UNet based spatial and multi-temporal (MT) deep learning model
for forecasting Arctic sea ice concentration (SIC). The model uses an
encoder-decoder architecture with skip connections and processes multi-temporal
input streams to regenerate spatial maps at future timesteps. Using bi-monthly
and monthly satellite retrieved sea ice data from NSIDC as well as atmospheric
and oceanic variables from ERA5 reanalysis product during 1979-2021, we show
that our proposed model provides promising predictive performance for per-pixel
SIC forecasting with up to 60% decrease in prediction error for a lead time of
6 months as compared to its state-of-the-art counterparts. | Sahara Ali, Jianwu Wang | 2023-08-08T18:18:31Z | http://arxiv.org/abs/2308.04511v1 | # MT-IceNet - A Spatial and Multi-Temporal Deep Learning Model for Arctic Sea Ice Forecasting
###### Abstract
Arctic amplification has altered the climate patterns both regionally and globally, resulting in more frequent and more intense extreme weather events in the past few decades. The essential part of Arctic amplification is the unprecedented sea ice loss as demonstrated by satellite observations. Accurately forecasting Arctic sea ice from sub-seasonal to seasonal scales has been a major research question with fundamental challenges at play. In addition to physics-based Earth system models, researchers have been applying multiple statistical and machine learning models for sea ice forecasting. Looking at the potential of data-driven approaches to study sea ice variations, we propose MT-IceNet - a UNet-based spatial and multi-temporal (MT) deep learning model for forecasting Arctic sea ice concentration (SIC). The model uses an encoder-decoder architecture with skip connections and processes multi-temporal input streams to regenerate spatial maps at future timesteps. Using bi-monthly and monthly satellite retrieved sea ice data from NSIDC as well as atmospheric and oceanic variables from ERA5 reanalysis product during 1979-2021, we show that our proposed model provides promising predictive performance for per-pixel SIC forecasting with up to 60% decrease in prediction error for a lead time of 6 months as compared to its state-of-the-art counterparts.
spatiotemporal data mining, neural networks, UNet, sea ice forecasting, climate change
## I Introduction
The Arctic is a region with unique climate features. For instance, during the polar night the Sun never rises above the horizon, so the seasonal variations between polar day and night are extreme. The enormous areas of Arctic ice and snow are responsible for reflecting sunlight back to space, which keeps the planet cool and regulates global and regional weather patterns. However, the Arctic sea ice has seen a continuous decline since 1979 and is now only half of what it was in 1970. Understanding Arctic amplification and forecasting sea ice is therefore a key research topic of climate science. Predicting fluctuations in Arctic sea ice by modeling weather patterns is important, as it can improve our understanding of the potential changes facing the global climate.
To study climate change, environmentalists and domain experts rely greatly on dynamic forecasting systems [12] that are mainly based on coupled Earth System Models. However, over the last few years, researchers have shifted their focus to data-driven Artificial Intelligence (AI) approaches like machine learning and deep learning. Since climate data present high spatiotemporal correlations, machine learning models have shown promising results in spatiotemporal data mining, leading to short- and long-term weather forecasting. Machine Learning (ML) can provide valuable tools to tackle climate change. For example, ML approaches can be used to forecast El Niño events, hurricanes, and ocean eddies, and to understand the role of greenhouse gases and aerosols in climate trends and events.
Recent works on climate analytics include Convolutional and Recurrent Neural Network [6] based models and some hybrid modeling approaches like Convolutional LSTM [16, 20] and GraphCNN [4]. However, due to the unique nature of the problem of forecasting Arctic sea ice, there are several limitations to the existing solutions and multiple challenges. Listed below are some of the prevailing challenges in forecasting Arctic sea ice:
* Performance versus lead-time trade-off while predicting per-pixel sea ice variations from sub-seasonal (two weeks to three months) to seasonal (three months to two years) scales.
* Inability to capture the annual minimum and maximum peak values of sea-ice in the non-stationary time-series datasets.
* Small data problem owing to the availability of only few decades worth of observational data.
In this paper, we propose a modeling framework, MT-IceNet, to tackle the aforementioned challenges with promising results. Our implementation code can be accessed at the Big Data Analytics Lab GitHub repository1.
Footnote 1: github.com/big-data-lab-umbc/sea-ice-prediction/tree/main/mt-icenet
### _Problem Definition_
In all the research works conducted (details in Section II), no one-size-fits-all solution has been proposed to tackle the problem of simultaneously detecting, monitoring and predicting sea ice variations. Therefore, in this paper, we propose MT-IceNet - a fast converging UNet-based [19] spatial and multi-temporal (MT) regression model for forecasting Arctic sea ice concentration (SIC) at sub-seasonal to seasonal scales. More formally, given:
**Input**
* \(X1_{i}\), monthly observational and reanalysis data where \(i=[1,5]\) with a rolling window of 12 monthly records equivalent to one year.
* \(X2_{i}\), bi-monthly observational and reanalysis data where \(i=[1,5]\) with a rolling window of 24 bi-monthly records equivalent to one year.
MT-IceNet learns from past values of atmospheric and oceans variables (details in Table I), along with past SIC spatial maps to forecast:
**Output**
* \(Y\), monthly per-pixel Sea Ice Concentration (SIC) values at lead time of \(N\) months, where \(N=[1,6]\).
Here, lead time represents future forecasts of SIC values with a lag of one to six months between the input predictors \(X\) and outcome predictand \(Y\).
### _Contributions_
In light of the aforementioned background information, the main goal of this research is to develop a spatiotemporal deep learning model that forecasts Arctic sea ice concentration (SIC) at future months, given spatial data at multiple sub-seasonal scales i.e. bi-monthly (15 days) and monthly levels. Our major contributions are:
* We combine reanalysis and observational meteorological data from multiple sources into two self-curated spatiotemporal datasets of uniform geographic grid and multi-temporal resolutions.
* We propose MT-IceNet - a spatial and multi-temporal deep learning model that incorporates a multi-stream learning approach for multi-temporal data and forecasts sea ice on a monthly seasonal scale of up to 6 months.
* We perform a thorough comparative analysis between MT-IceNet, baseline models and recently proposed SIC prediction models for forecasting Arctic Sea ice at seasonal scale.
The rest of the paper is organized as follows. Some of the important related work is reported in Section II. Section III describes the details of our dataset. Our proposed model is presented in Section IV. Section V provides results and analysis of our experimental study and comparative analysis. Finally, we conclude our paper and share future directions in Section VI.
## II Related Work
Majority of the recent works on climate analytics either include Convolutional and Recurrent Neural Network based models or some hybrid modeling approaches like Convolutional LSTM [20] and GraphCNN [4]. [6] proposed a fully data-driven Long Short-Term Memory (LSTM) model based approach for Arctic Sea-ice forecasting and compared it with a traditional statistical model; they found that the LSTM showed good performance for 1-month sea ice concentration (SIC) prediction, with less than \(9\times 10^{6}\)\(km^{2}\) of average monthly Root Mean Square Error (RMSE) and around \(11\times 10^{6}\)\(km^{2}\) of mean absolute error during the melting season. [15] developed a 2D-CNN model that takes as input 8 atmospheric predictors to predict SIC with 1 month's lead time. They compared the performance with Random Forest baseline model, achieving RMSE of \(5.76\times 10^{6}\)\(km^{2}\). [16] worked on daily prediction of the Arctic Sea Ice Concentration using reanalysis data based on a Convolutional LSTM Network. They proposed a ConvLSTM model to predict SIC for \(T\) timestep given \(T-1\) and \(T-2\) 25 km resolution observational data from National Snow and Ice Data Center (NSIDC) (2008-2018). They compared their model with a 2DCNN model that takes in a spatial map with pixel grids from \(T-1\) timestep. Their model achieved an RMSE of \(11.2\times 10^{6}\)\(km^{2}\) as compared to the 2DCNN with RMSE of \(13.7\times 10^{6}\)\(km^{2}\).
Ensembling is another hybrid modeling approach where outputs from multiple models are combined to improve performance, whereas it also reduces variance and generalization errors. [14] worked on an MLR \(+\) DNN ensemble model using Bayesian Model Averaging to predict sea-ice concentrations for the next 10-20 years. They evaluated their model using correlation co-efficient (\(R^{2}\) score) and achieved normalized RMSE of 0.8. [1] proposed an attention-based LSTM ensemble that takes in multi-temporal, daily and monthly, data and predicts sea ice extent (SIE) for \(T+1\) timestep, achieving an RMSE of \(4.9\times 10^{6}\)\(km^{2}\). To explore the potential of probabilistic modeling approaches for forecasting sea ice and to aid uncertainty quantification, [2] performed a thorough comparative analysis of four probabilistic and two baseline machine learning and deep learning models and published benchmarking results for sea ice forecasting for multiple lead times on these models. They evaluated these models performance using RMSE error and \(R^{2}\) scores and reported Gaussian Process Regression (GPR) to achieve the most competent results. Our work takes inspiration from IceNet proposed by [3]. IceNet is a U-Net [19] based probabilistic model for seasonal sea-ice forecasting. Their model takes in images as input and forecasts as output Sea Ice Probabilities (SIP) for three classes (open-water region SIC \(<15\%\), ice-edge region \(15\%<\) SIC \(<80\%\), and confident ice region SIC \(>80\%\)) for next 6 months. Through probabilistic deep learning, they showed their forecasted values to be competent with the physics-based ECMWF seasonal forecast system SEAS5 [12]. IceNet is pretrained using Coupled Model Intercomparison Project (CMIP6) 2,220 years (1800-2011) simulation data and is fine-tuned on NSIDC's observational data from 1979 to 2011. They evaluated their model performance on observational data from 2012-2017 using integrated ice edge error (IIEE) and binary accuracy (BACC). Following IceNet, [18] proposed SICNet, based on a Temporal Spatial Attention Module (TSAM) that captures SIC variations for a lead time of 7 to 28 days. They evaluated their work using Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and BACC. However, there are two major differences in our proposed MT-IceNet, IceNet and SICNet. One, MT-IceNet produces spatial patterns through per-pixel prediction for SIC values contrary to SIP classification of IceNet. Second, MT-IceNet shows promising results in the prediction of SIC on greater lead times i.e. 1
to 6 months whereas SICNet predicts SIC on a weekly i.e. subseasonal scale.
## III Dataset
For this study, we use observational sea-ice and reanalysis atmospheric and meteorological data which is available from 1979 till the present. The reanalysis data is available with open access and can be obtained from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-5 global reanalysis product [8], whereas the sea-ice concentration (SIC) values are obtained from the Nimbus-7 SMMR and DMSP SSM/I-SSMIS passive microwave data version 1 [5] provided by the National Snow and Ice Data Center (NSIDC). These variables along with their spatiotemporal resolution details are listed in Table I.
For the Arctic region, the SIC observational dataset contains an uncertainty of about +15% during the summer season due to a high number of melt ponds that can skew the data [5]. During the winter months, this uncertainty decreases to about ±5% as the sea ice tends to reach its peak in concentration levels. However, for modeling purposes, this concentration data can be considered as the ground truth.
The inclusion of these variables is based on their causal links with sea ice variations [11] and also based on their physical impact on weather trends in the Arctic. For instance, sea surface temperature provides information on oceanic heat. Similarly, earlier rainfalls during spring trigger earlier Arctic ice and snow melt [7, 17]. Further, as highlighted by [9, 10], regional differences in atmospheric pressure cause an increase in Arctic humidity, which in turn enables higher levels of longwave radiation to reach the sea surface. Consequently, this can lead to earlier melting of sea ice. In short, each of the chosen predictors impacts Arctic sea ice through complex oceanic and atmospheric physical interactions.
### _Data Preprocessing_
Each of the chosen data variables comes in a different spatial and temporal resolution. For instance, NSIDC provides daily sea-ice concentrations in 25 km resolution, that is 448\(\times\)304, whereas the reanalysis data is available in \(1^{\circ}\) resolution, i.e. 360\(\times\)180, as hourly and daily records. For our proposed model, we required 5-dimensional inputs of shape _(samples, timesteps, height, width, features)_. To achieve this, the first step after downloading raw data was to regrid individual \(1^{\circ}\) ERA5 variables corresponding to the Arctic geolocation of 90N, 60N, 180E, 180W into the NSIDC polar projections, that is the 25 km spatial (448 \(\times\) 304) resolution of the sea ice concentration (SIC). To begin with, an empty array with the latitude and longitude geolocation index was created. Next, the values from the previous dimensions were interpolated to the new dimensions and stored in the empty grid with new \(lat\times lon\) dimensions using the XESMF Python API. The variables acquired as hourly data were aggregated to the daily timescale. After the spatial and temporal rescaling was performed on individual variables, they were combined into a single h5 file with \(D\times H\times W\times F\) dimensions. Here \(D\) is the total number of days, \(H\) is the height of images corresponding to 448 latitude values, \(W\) represents the width, which corresponds to 304 longitude values, and \(F\) is the number of features, which is 5.
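A minimal sketch of the regridding and daily-aggregation step is shown below, assuming the NSIDC polar-stereographic latitude/longitude arrays are available; the file names and the variable name "sst" are illustrative, not the exact ones used in this work.

```python
import numpy as np
import xarray as xr
import xesmf as xe

# ERA5 field on the 1-degree grid, e.g. sea surface temperature (time, lat, lon).
# xESMF expects coordinates named 'lat'/'lon'; rename them if the file uses
# 'latitude'/'longitude'.
era5 = xr.open_dataset("era5_sst.nc")                 # illustrative file name

# Target grid: 2D lat/lon arrays of the 448 x 304 NSIDC 25 km polar projection.
nsidc_lat = np.load("nsidc_lat.npy")                  # illustrative file names
nsidc_lon = np.load("nsidc_lon.npy")
target = xr.Dataset({"lat": (("y", "x"), nsidc_lat),
                     "lon": (("y", "x"), nsidc_lon)})

# Build a regridder once and apply it to the whole time series.
regridder = xe.Regridder(era5, target, "bilinear")
sst_polar = regridder(era5["sst"])                    # dims (time, y, x) = (time, 448, 304)

# Aggregate hourly records to daily means before stacking the 5 features.
sst_daily = sst_polar.resample(time="1D").mean()
```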
#### III-A1 Monthly Data
To generate the monthly dataset for our model, we averaged the daily 30, 31 or 28 values corresponding to the different months. Special care was taken for leap years, e.g. in case of a leap year, we averaged 29 entries for February. This gave us 504 monthly records. Then we sequentially divided the data into training and testing sets of 408 and 96 months. To reshape the data, a stateless rolling window was applied to the training and testing data, creating 384 samples of 12 months each. Sample one contained months 1-12, sample two contained months 2-13, and the last sample contained months 372-384. Finally we got our training data in the shape \(M\times T\times H\times W\times F\), where \(M=\) 384 samples, \(T=\) 12 months, \(H\times W=\) 448\(\times\)304 pixel images and \(F=\) 5 features. Similarly, the final shape of the test set was \(84\times 12\times 448\times 304\times 5\).
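The rolling-window construction can be sketched in a few lines of numpy; shapes are kept tiny here for illustration (the real maps are 448 x 304 with 5 features), and the lead-time offsets are omitted, which is why the sample count below differs slightly from the 384 used in the paper.

```python
import numpy as np

def make_rolling_samples(data, window=12):
    """Turn a (months, H, W, F) array into overlapping (samples, window, H, W, F)
    sequences: months 1-12, months 2-13, and so on."""
    n = data.shape[0]
    return np.stack([data[i:i + window] for i in range(n - window + 1)])

train = np.zeros((408, 8, 8, 5), dtype=np.float32)   # tiny spatial dims for the demo
x_train = make_rolling_samples(train, window=12)     # -> (397, 12, 8, 8, 5)
```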
#### III-A2 Bi-Monthly Data
Deep learning models require a large volume of diverse training data to generalize well on the unseen test data. However, our monthly dataset is comprised of only 504 records. To counter this small data problem, we generated the second temporal resolution of semi-monthly or bi-monthly data that not only increases the dataset size but also helps our model focus on sub-seasonal patterns. From our previous work [1], we observed high frequency data captures sub-seasonal fluctuations better and in turn helps the model learn the seasonal patterns. For this, similar to aggregating 30 or 31 daily values, we aggregated samples of 15, 16 or 14 daily records depending on the annual months of the year. For example, for January the two bi-monthly records were calculated by taking the average of 15 days and 16 days respectively. Similar to the rolling window applied to monthly data, we applied a 24 timestep rolling window to bi-monthly data to correspond to the same annual cycle as its monthly counterpart. The final dimensions for bi-monthly training and test sets were \(384\times 24\times 448\times 304\times 5\) and \(84\times 24\times 448\times 304\times 5\)
respectively.
## IV Method: MT-IceNet
Our proposed MT-IceNet model is a UNet-based spatial and multi-temporal (MT) deep learning model for forecasting per-pixel Arctic sea ice concentrations (SIC). The model uses an encoder-decoder architecture with skip connections and multi-temporal data to regenerate spatial maps at future timesteps. As shown in Figure 1, we started off by downloading the raw data from the multiple sources mentioned in Section III. Next, we preprocessed the data to bring it into uniform spatial and temporal resolutions. We then reshaped the data into 5 dimensions and sequentially split it into training and testing sets. Sequential splitting is performed to retain the seasonality patterns in the data. We then built our baseline models and our proposed MT-IceNet model, and finally evaluated the performance of all models using the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE) and R-squared (\(R^{2}\)) score. The details of our baseline models and the constituent blocks of our proposed model are as follows.
### _Baseline Models_
To design our baseline models, we utilized two widely used spatiotemporal deep learning techniques that are, the Convolutional Neural Network (CNN) and Convolutional Long Short Term Memory (ConvLSTM) models. One reason for choosing these two models is that they have been used in previous solutions proposed for this problem. Another reason is that they also work as the constituent blocks of our proposed model MT-IceNet.
#### IV-A1 Convolutional Neural Network (CNN)
We designed a simple CNN model with three 2D convolutional layers using the Keras API. Each of these layers was followed by a 2D Max Pooling layer for dimensionality reduction. We flattened the output from the third CNN layer and appended two fully connected (Dense) layers for regression. Finally, we reshaped the output from the final Dense layer into 448 \(\times\) 304 to retrieve the predicted spatial maps. The input to this model was mini-batches of 3D tensors of shape \(H\times W\times F\) corresponding to \(448\times 304\times 5\) dimensional images, whereas the output was monthly SIC percentage values in the shape of 448 \(\times\) 304 images for multiple lead times. To incorporate the lead time in monthly forward predictions, a lag (offset) of 1 to 6 months was created between input features and target SIC values by removing the first 1 to 6 rows from the SIC column and the last 1 to 6 rows from the 5 input features, so that a January 1979 data sample would correspond to February 1979 SIC values for a lag of 1 month, to March 1979 SIC values for a lag of 2 months, and so on. The model was trained six times, each time for a different lag value.
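A Keras sketch of such a baseline is given below; the filter counts, dense width, and loss are illustrative assumptions since they are not listed above, and the 1-6 month lag is assumed to be applied to the input/target arrays before training.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_baseline(height=448, width=304, n_features=5):
    """Three Conv2D + MaxPooling2D blocks, two Dense layers, reshape to a SIC map."""
    inp = layers.Input(shape=(height, width, n_features))
    x = inp
    for filters in (16, 32, 64):                               # illustrative sizes
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)                # illustrative width
    x = layers.Dense(height * width, activation="linear")(x)
    out = layers.Reshape((height, width))(x)
    return models.Model(inp, out)

model = build_cnn_baseline()
model.compile(optimizer="adam", loss="mse")                    # loss choice is an assumption
```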
#### IV-A2 Convolutional Long Short Term Memory (ConvLSTM)
We further designed a ConvLSTM model by appending one ConvLSTM layer at the beginning of our baseline CNN model. Since the ConvLSTM model requires 4D input tensors, we used the same input train and test datasets generated for our proposed MT-IceNet model for training this ConvLSTM baseline model, that is, \(T\times H\times W\times F\). This model was also developed using the Keras API. We trained four ConvLSTM models using the monthly dataset for lead times of 1 to 6 months. To incorporate the lag, an offset was added to the train and test sets in a similar manner as done for our CNN baseline model.
### _MT-IceNet Model_
Our proposed Multitemporal (MT)-IceNet is a U-Net based model that comprises two paths of neural network layers, the contractive path that we represent as encoder, and the expansive path represented as the decoder. The architecture diagram is shown in Figure 2.
#### IV-B1 U-Net
A U-Net based model was first introduced for image segmentation of bio-medical imagery [19]. It comprises three constituent blocks: the encoder, the decoder and the bottleneck block that acts as a bridge between the encoder and decoder. What distinguishes a U-Net architecture from a transformer-based model is the use of skip connections between different layers of the encoder and decoder. These skip connections provide the upsampling layers with important features
Fig. 1: End-to-end pipeline of our predictive model.
from the downsampling layers that are lost due to the depth of the network.
#### IV-B2 Encoder
Our encoder comprises two downsampling blocks. The first block consists of a ConvLSTM layer that takes in monthly data as input in the shape \(T\times H\times W\times F\), here \(T=12\). The output of the ConvLSTM layer is passed to two 2D convolution layers, followed by a batch normalization layer and a 2D max pooling layer. The second block follows the same architecture with the difference of input shape. Here, the ConvLSTM layer is given bi-monthly data of the shape \(24\times H\times W\times F\). In every successive layer of the encoder, we increment the output channels by a multiplicative factor of 2, as shown in Figure 2. All CNN layers use the same \(3\times 3\) kernel size filters whereas the ConvLSTM layer uses a \(5\times 5\) kernel. The activation function ReLU is used in all the encoder layers. The encoder part of our model helps learn low-level spatiotemporal dependencies in the data and identifies patterns needed for predicting SIC spatial maps.
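A Keras sketch of one such downsampling block is shown below; the filter counts are illustrative, and how the two streams are merged before the bottleneck follows Figure 2 rather than this snippet.

```python
from tensorflow.keras import layers

def encoder_block(seq_input, filters):
    """MT-IceNet-style downsampling block (sketch): ConvLSTM2D with a 5x5 kernel
    over the temporal stream, two 3x3 convolutions, batch norm, and 2x2 max
    pooling. Returns the pooled map and the pre-pooling skip feature map."""
    x = layers.ConvLSTM2D(filters, kernel_size=5, padding="same",
                          activation="relu", return_sequences=False)(seq_input)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    skip = x
    return layers.MaxPooling2D(2)(x), skip

# Two temporal input streams: 12 monthly and 24 bi-monthly maps.
monthly_in = layers.Input(shape=(12, 448, 304, 5))
bimonthly_in = layers.Input(shape=(24, 448, 304, 5))
enc_m, skip_m = encoder_block(monthly_in, filters=16)      # illustrative filter counts
enc_b, skip_b = encoder_block(bimonthly_in, filters=32)
```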
#### IV-B3 Decoder
The purpose of the decoder block is to upsample the low-level features learnt from the data and help reconstruct the spatial map in the same dimension as the input but at a future timestep. Similar to the encoder, the decoder comprises two upsampling blocks. Every block comprises a \(2\times 2\) upsampling layer using the nearest interpolation method and a \(2\times 2\) kernel size filter. The skip connection is built by concatenating the output of each upsampling layer with the output from a corresponding downsampled feature map generated by the encoder, as shown in Figure 2. Once the outputs are concatenated, they are passed through two 2D convolutional layers. The output channel size of every CNN layer is reduced by a factor of 2 in order to regain the initial input dimension. Finally, a \(1\times 1\) convolution with linear activation is applied to the decoder's output to generate the predicted spatial map.
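A matching Keras sketch of one upsampling block and the output head follows; again the exact channel counts are illustrative assumptions.

```python
from tensorflow.keras import layers

def decoder_block(x, skip, filters):
    """MT-IceNet-style upsampling block (sketch): 2x2 nearest-neighbour
    upsampling, concatenation with the corresponding encoder feature map
    through a skip connection, and two 3x3 convolutions."""
    x = layers.UpSampling2D(size=2, interpolation="nearest")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def output_head(x):
    """Final 1x1 convolution with linear activation giving the per-pixel SIC map."""
    return layers.Conv2D(1, 1, activation="linear")(x)
```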
### _Postprocessing_
We performed two post-processing steps on the predictions generated by our model. Since our predictions correspond to sea ice concentration values, which are percentage values between 0 and 100, we rescale the values predicted by the model to [0,100] by clipping all predictions less than 0 to 0 and all predictions greater than 100 to 100. This helps interpret the regression results. Further, to help visualize the predictions, we multiplied the predicted spatial maps with a binary land mask. Since we are not interested in land-area predictions, this multiplicative step discards the land area predictions by
Fig. 2: Model Architecture of the Multi-temporal (MT) IceNet Model.
assigning them zero-weightage while all ocean and water body predictions are retained. This also helps in evaluating the model using the evaluation metrics discussed in Section V.
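These two steps amount to the following numpy sketch, where the mask is assumed to be 1 over ocean and water-body pixels and 0 over land.

```python
import numpy as np

def postprocess(pred_sic, ocean_mask):
    """Clip raw regression outputs to valid SIC percentages and zero out land."""
    sic = np.clip(pred_sic, 0.0, 100.0)   # SIC is a percentage in [0, 100]
    return sic * ocean_mask               # discard land-area predictions
```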
## V Evaluation
We first present the experimental setup for our research work in Section V.A. We then move forward to compare our performance with the baseline CNN and ConvLSTM models. We also compared our work with two recently proposed solutions to SIC forecasting using 1) multitask-ConvLSTM method [13] and 2) IceNet [3]. We present the results of this comparative analysis in Section V.B. Finally we perform the qualitative analysis of our MT-IceNet predictions in Section V.C.
### _Experimental Setup_
All our experiments are performed using the Amazon Web Services (AWS) cloud-based Elastic Compute Cloud (EC2) accelerated computing instances with high frequency 2.5 GHz (base) Intel Xeon Scalable Processor, 32 vCPUs and 64 GBs of GPU memory. The total storage space required for our experiments is around 600 GBs which includes our train and test datasets, model computations and storage and visual illustrations of our results. Our MT-IceNet model is trained using Keras Functional API with a Tensorflow backend and has around 148,000 trainable parameters. This is 99% less than the 44 million trainable weights of the IceNet [3] model. Through a less complex architecture, we also show how simple approaches can generate better results.
We trained our model using Adam optimizer, Root Mean Squared Error (RMSE) loss and trained it on 100 epochs using the Early stopping criteria with a learning rate of 0.0001. Due to the high dimensionality of input data and a limited RAM, we could only process mini-batches of 4 samples each.
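The training configuration above corresponds to a Keras call along these lines, where `model`, `x_monthly`, `x_bimonthly`, and `y_sic` stand in for the model and arrays described earlier; the validation split and early-stopping patience are illustrative assumptions.

```python
import tensorflow as tf

def rmse_loss(y_true, y_pred):
    """Root Mean Squared Error loss used for training."""
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                              restore_best_weights=True)

model.compile(optimizer=optimizer, loss=rmse_loss)
model.fit([x_monthly, x_bimonthly], y_sic,
          epochs=100, batch_size=4,
          validation_split=0.1,            # illustrative split
          callbacks=[early_stop])
```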
**Evaluation Metrics:** We report the RMSE, MAE and \(R^{2}\) performance evaluation scores for our model in Tables II, III and IV. Since it is a spatiotemporal 3D dataset corresponding to latitude and longitude values, we customized the RMSE and MAE metrics for our model's evaluation using the following formulas:
\[RMSE_{SIC}=\sqrt{\frac{\sum_{i}\sum_{j}\left(Y[i,j]-\hat{Y}[i,j]\right)^{2}}{N}} \tag{1}\]
\[MAE_{SIC}=\frac{\sum_{i}\sum_{j}\left|Y[i,j]-\hat{Y}[i,j]\right|}{N} \tag{2}\]
Here, \(Y\) represents ground truth while \(\hat{Y}\) represents predicted SIC values, \(i\) runs over the 448 latitude values, \(j\) runs over the 304 longitude values, and \(N\) represents the total number of test samples. While both RMSE and MAE are metrics to calculate error, RMSE gives relatively higher weightage to large errors and can help in capturing the variance in error magnitudes.
\[R^{2}=1-\frac{RSS}{TSS} \tag{3}\]
We further evaluated our model using the \(R^{2}\) score. As shown in Eq. 3, \(RSS\) represents the sum of squares of residuals and \(TSS\) represents the total sum of squares. Higher \(R^{2}\) score represents better performance.
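Written exactly as in Eqs. (1)-(3), the three metrics correspond to the following numpy sketch; whether land pixels are excluded before evaluation is handled by the masking step described above.

```python
import numpy as np

def rmse_sic(y_true, y_pred):
    """Eq. (1): squared errors summed over all pixels, divided by N samples."""
    n = y_true.shape[0]
    return np.sqrt(np.sum((y_true - y_pred) ** 2) / n)

def mae_sic(y_true, y_pred):
    """Eq. (2): absolute errors summed over all pixels, divided by N samples."""
    n = y_true.shape[0]
    return np.sum(np.abs(y_true - y_pred)) / n

def r2_score(y_true, y_pred):
    """Eq. (3): 1 - RSS / TSS."""
    rss = np.sum((y_true - y_pred) ** 2)
    tss = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - rss / tss
```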
### _Comparative Analysis_
For the comparative analysis, we first trained the baseline models CNN and ConvLSTM to predict SIC values for a lead time of 1 to 6 months. We then trained the Multitask-ConvLSTM and IceNet on our dataset to predict SIC values for the same lead times. We are grateful to the authors for providing open access to their code on GitHub. Since IceNet is a computationally expensive classification model that takes in 50 input features and requires 1TB of memory, we customized their model to a light-weight version comprising only 2 downscaling and 2 upscaling blocks. We further tweaked their output layer by removing the classification layer and replacing it with a regression layer to generate \(448\times 304\) spatial maps at multiple lead times. We refer to this modified version as IceNet\({}^{\dagger}\). The multitask-ConvLSTM was proposed to jointly predict per-pixel SIC values and the total sea-ice extent corresponding to the entire spatial region. All these models were trained and evaluated using the same train and test split to have a fair comparison of their performance. In our comparison with other models, we only took into account the SIC prediction and ignored the sea-ice extent values. We first analyzed the performance of baseline models for SIC prediction and compared it with our MT-IceNet predictions.
#### V-B1 Quantitative Analysis
In Tables II and IV, it is evident from the increasing RMSE and MAE scores that both CNN and ConvLSTM have poorer predictive performance than MT-IceNet: our model decreased the RMSE error by 51% compared to CNN, by 40% compared to ConvLSTM and by 58% compared to IceNet\({}^{\dagger}\), for all lead times. The same trend was observed in MAE error, where MT-IceNet reduced the MAE error by more than 60% compared to its Multi-task ConvLSTM and IceNet counterparts. We further noticed that MT-IceNet has a significantly better \(R^{2}\) score, with a notable lead of around 10% for all lead times compared to the CNN and ConvLSTM models. The best results have been highlighted in bold in all our result tables.
As seen in Table II, both multitask-ConvLSTM and IceNet\({}^{\dagger}\) take a sharp increase in RMSE score after a lead time of one month, whereas MT-IceNet still shows a trivial increase in the RMSE scores, with only a 2-point increase in RMSE and a 1-point increase in MAE from a lead time of 1 month to the sixth month. To our surprise, the highest reported errors are from the IceNet\({}^{\dagger}\) model, whose RMSE and MAE errors are significantly higher and whose \(R^{2}\) scores are very low. It is evident from all three metric results that MT-IceNet outperforms all baseline and recently proposed models for SIC forecasting by showing a promising and persistent predictive performance at greater lead times. The second best performance is achieved by the baseline ConvLSTM model. An interesting observation here is that all
Figure 4: MT-IceNet Summer Prediction difference plots for multiple lead times.
Figure 5: NSIDC Observed Sea Ice Concentration (%) vs MT-IceNet Predictions for Winter 2020.
Figure 3: NSIDC Observed Sea Ice Concentration (%) vs MT-IceNet Predictions for Summer 2020.
Figure 6: MT-IceNet Winter Prediction difference plots for multiple lead times.
models show an improvement in performance after the lead time of 4 months, as evident by the RMSE, \(R^{2}\) and MAE scores. Though this is an interesting finding, the actual cause of this performance improvement is yet to be known.
#### V-B2 Qualitative Analysis
To evaluate the quality of our per-pixel predictions, we plot the spatial maps generated by the MT-IceNet model over the Arctic region using Python's cartopy API for geospatial projections. Figures 3 and 5 show the forecasted spatial plots, where every pixel value lies in [0,100]. Here 100 represents 100% ice concentration whereas 0 represents the absence of ice in that specific pixel. Since each pixel corresponds to a \(25\times 25\)\(km^{2}\) area, any value ranging between 0 and 100 represents the percentage of that region covered with ice. Looking at Figure 3, it is observed that MT-IceNet overpredicts September sea ice, which is the trickiest to predict, at greater lead times. Nonetheless, our model shows great performance throughout March predictions, which is the peak Winter time in the Arctic, as shown in Figure 5.
To have a clear identification of regions with incorrect predictions, we plot the differences in the actual SIC observations and the predicted values for multiple lead times, both for Summer and Winter peak months, i.e., September and March, as shown in Figures 4 and 6. Upon inspecting Figure 4, we see that model performs poorly only near the coastal areas of Greenland. For March, we notice that the model underpredicts
Fig. 7: Time-series for derived Sea Ice Extent from MT-IceNet predictions at multiple lead times.
the sea ice over the coastal areas. This can be considered a minor performance flaw as edge predictions are usually the trickiest for spatiotemporal models. For September predictions, as shown in Figure 4, we notice how model overpredicts Summer sea ice at greater lead times. This is due to the concept of seasonal barrier, according to which the seasonality patterns are hard to identify from a distance of more than 3 months. Using the SIC values, we calculated the overall sea ice extent for the entire region by calculating the area-weighted sum of the Arctic region using the per-pixel area map provided by NSIDC. We plotted these sea ice extent values as a time-series plot for multiple lead times, as shown in Figure 7. We noticed how our model overpredicts summer sea ice and underpredicts winter sea ice at greater lead times. We also noticed the performance improvement in lead times 5 (red) and 6 (lime) where the model predictions once again come closer to the actual observations (blue). Overall, we did not find any sharp increase or decrease in the SIC model predictions as the lead time increases. This means our model can overcome the performance versus lead-time tradeoff that is faced by most of the models proposed for seasonal predictions.
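The derived extent can be sketched as below. The paper describes an area-weighted sum over the NSIDC per-pixel area map without stating a concentration threshold, so both the thresholded variant (the standard 15% extent convention) and the fraction-weighted variant (sea ice area) are shown as assumptions.

```python
import numpy as np

def sea_ice_extent(sic_map, pixel_area_km2, threshold=15.0):
    """Extent: total area of pixels whose SIC exceeds the threshold (in %)."""
    return np.sum(pixel_area_km2[sic_map >= threshold])

def sea_ice_area(sic_map, pixel_area_km2):
    """Area: SIC-fraction-weighted sum of the per-pixel areas."""
    return np.sum(pixel_area_km2 * sic_map / 100.0)
```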
## VI Conclusions & Future Work
In this paper, we presented our work on a spatiotemporal deep learning model that jointly learns from multi-temporal inputs to forecast Arctic sea ice at lead times of 1 to 6 months. Through experiment and ablation study, we showed how our model outperforms the baseline and recent state-of-the-art approaches using a U-Net based architecture by overcoming the small data problem and seasonality barrier challenge. Our MT-IceNet not only outperforms the baseline and other recent work but also shows a consistency in forecasting SIC values at greater lead times. We believe our proposed model can substantially improve our ability in predicting the future Arctic sea ice changes at sub-seasonal to seasonal scales, which is fundamental for forecasting transportation routes, length of open water, resource development, coastal erosion, and threats to Arctic coastal communities and wildlife.
In the future, we plan to extend our work to multi-scale spatiotemporal modeling in order to jointly process fine and coarse resolutions of geolocation information that can be vital in solving similar Earth Science problems. We further plan to incorporate the attention mechanism in our model to identify important contributing factors to the prediction. Lastly, we plan to work on data-driven causal discovery to study variations in Arctic sea ice using spatiotemporal deep learning models.
## Acknowledgement
This work is supported by NSF grants: CAREER: Big Data Climate Causality (OAC-1942714) and HDR Institute: HARP - Harnessing Data and Model Revolution in the Polar Regions (OAC-2118285). We thank Dr. Yiyi Huang (NASA Langley Research Lab) for her assistance in introducing the dataset.
|
2307.14187 | ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation | Forecasting future trajectories of agents in complex traffic scenes requires
reliable and efficient predictions for all agents in the scene. However,
existing methods for trajectory prediction are either inefficient or sacrifice
accuracy. To address this challenge, we propose ADAPT, a novel approach for
jointly predicting the trajectories of all agents in the scene with dynamic
weight learning. Our approach outperforms state-of-the-art methods in both
single-agent and multi-agent settings on the Argoverse and Interaction
datasets, with a fraction of their computational overhead. We attribute the
improvement in our performance: first, to the adaptive head augmenting the
model capacity without increasing the model size; second, to our design choices
in the endpoint-conditioned prediction, reinforced by gradient stopping. Our
analyses show that ADAPT can focus on each agent with adaptive prediction,
allowing for accurate predictions efficiently. https://KUIS-AI.github.io/adapt | Görkay Aydemir, Adil Kaan Akan, Fatma Güney | 2023-07-26T13:41:51Z | http://arxiv.org/abs/2307.14187v1 | # ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation
###### Abstract
Forecasting future trajectories of agents in complex traffic scenes requires reliable and efficient predictions for all agents in the scene. However, existing methods for trajectory prediction are either inefficient or sacrifice accuracy. To address this challenge, we propose ADAPT, a novel approach for jointly predicting the trajectories of all agents in the scene with dynamic weight learning. Our approach outperforms state-of-the-art methods in both single-agent and multi-agent settings on the Argoverse and Interaction datasets, with a fraction of their computational overhead. We attribute the improvement in our performance: first, to the adaptive head augmenting the model capacity without increasing the model size; second, to our design choices in the endpoint-conditioned prediction, reinforced by gradient stopping. Our analyses show that ADAPT can focus on each agent with adaptive prediction, allowing for accurate predictions efficiently. [https://KUIS-AI.github.io/adapt](https://KUIS-AI.github.io/adapt)
## 1 Introduction
A self-driving agent needs to be able to anticipate the future behavior of other agents around it to plan its trajectory. This problem, known as trajectory forecasting, is an important requirement for safe navigation. There are multiple challenges to solving this problem. First of all, traffic scenes are highly dynamic. The behavior of an agent depends not only on the scene properties, such as configurations of lanes but also on other agents, such as yielding to another vehicle that has priority. Second, multiple futures need to be predicted due to the inherent uncertainty in future predictions. While these two challenges are studied in the literature, one challenge remains mostly unresolved: The future is shaped according to _all_ agents in the scene acting together. Therefore, trajectories of all agents need to be predicted as opposed to the current practice of predicting only the trajectory of a selected agent [11, 6].
The progress in trajectory forecasting has focused mainly on scene representations for predicting the trajectory of a single agent. Typically, the existing methods [32, 49, 21] follow an agent-centric reference frame where the scene is centered around the agent of interest, and everything else is positioned relative to it. This way, the prediction network is provided with the same initial state regardless of the agent's location or orientation, providing _pose-invariance_. In other words, the scene is observed from the viewpoint of the agent of interest. In multi-agent setting, each agent has a different view of the world, and one cannot be prioritized over another as in the case of the agent-centric approach. A straightforward extension of an agent-centric approach to multi-agent is iterating the process for each agent in its own reference frame (Fig. 2). This is achieved by transforming the scene according to each agent to obtain pose-invariant features. However, this solution scales linearly with the number of agents and causes a variable inference time that cannot be afforded in the real-time setting of driving. As a solution, SceneTransformer [37] introduces a global frame that is shared across agents. In their scene-centric approach, all agents are positioned with respect to
Figure 1: **Accuracy vs. Efficiency. We plot the accuracy in terms of error (brier-mFDE) vs. the number of parameters (a) and inference time (b) on the test set of the Argoverse dataset [11]. Our method achieves one of the lowest reported errors with a small number of parameters, leading to highly efficient inference time compared to AutoBot [22], LaneGCN [32], mm-Transformer [33], DenseTNT [23], SceneTransformer [37], LTP [48], PAGA [13], and HiVT [53].**
the same reference point but at the cost of pose-invariance.
Ideally, pose-invariance is a desirable property, but for multi-agent prediction, a scene-centric approach can be preferred in the real world due to efficiency concerns [37]. The question, then, is how to avoid the problems of a scene-centric approach without sacrificing efficiency. In this paper, we propose a solution that adapts to the situation of each agent with dynamic weight learning [28, 45, 43]. Dynamic networks can adjust the model structure based on input by adapting network weights according to the changes in the input states [25]. Therefore, they are well-suited for the multi-agent prediction task where each agent has a different initial state. Additionally, dynamic networks are capable of expanding the parameter space without increasing computation cost, therefore meeting the real-time requirements of our task. We learn the weights of the network that predicts the endpoints so that they can change and adapt to each agent's reference frame. With dynamic weights, we can efficiently adapt the prediction head to each agent in a scene-centric approach without iterating over agents.
Our method is not only the first to achieve multi-agent prediction accurately and efficiently in a scene-centric approach but also one of the smallest and fastest among all trajectory prediction models, including single-agent ones. Using a goal-conditioning approach, we can easily switch between single and multi-agent prediction settings. To further enhance the performance of our model, we employ gradient stopping to stabilize the training of trajectory and endpoint prediction. This technique enables us to achieve good performance by fully leveraging the capacity of a small decoder with simple MLP layers rather than a complex one.
We show that our method outperforms the state-of-the-art methods with a fraction of their parameters in both single-agent setting of the Argoverse [11] and multi-agent setting of the Interaction [51]. On Interaction, specifically designed for evaluating multi-agent predictions, our method achieves a \(1\%\) miss rate in comparison to \(5\%\) which was the lowest achieved so far [21]. Our contributions can be summarized as follows:
* We propose a novel approach for predicting the trajectories of all agents in the scene. Our adaptive head can predict accurate trajectories by dynamically adapting to various initial states of multiple agents.
* We achieve state-of-the-art results efficiently with one of the smallest and fastest models including the ones in single-agent setting. We validate our design choices in endpoint prediction and trajectory prediction with gradient stopping for stabilized training.
* We have created a unified prediction process that can be used for both single and multi-agent settings with the same backbone by utilizing endpoint conditioning. Our method allows for easy switching between scene-centric and agent-centric reference frames, achieving state-of-the-art in both settings.
## 2 Related Work
### Single-Agent Prediction
**Scene Representation:** In dynamic traffic scenes, representing the scene elements and modeling interactions between them play a crucial role in performance. In single-agent prediction, previous works focus on the representation of the scene and interaction modeling from the viewpoint of the agent of interest. Earlier works [12, 9, 26, 4, 31, 34] create a rasterized image to represent both the context and the interactions. The previous locations of agents are typically encoded with sequential models such as RNNs [35, 29, 1, 24, 38, 40]. Combining the two, the following works explore more specialized scene representations [44, 10, 15, 19, 39, 5, 38, 36]. In contrast to rasterized representation, Graph Neural Networks (GNNs) enable a more explicit way of modeling interactions such as with a lane graph [32] or a vectorized representation of the scene [17]. Based on their success in capturing hierarchical representations, recent works continue adapting GNNs for interaction modeling [52, 23, 50, 2]. Towards the same purpose, more recent works [53, 33, 41, 37, 21] use transformers with multi-head attention [47], whose success has been proven repeatedly [16, 8, 14, 7]. We also use a scene representation based on multi-head attention.
**Attention-Based Encoding:** Transformers are widely used for interaction modeling due to their ability to capture the interaction between different scene elements. VectorNet [17] uses the same attention weights for different types of elements, i.e. agents and lanes. LaneGCN [32] categorizes interactions and uses a different type of attention for
Figure 2: **Scene-Centric vs. Agent-Centric Representation. In scene-centric representation (a), all elements are encoded according to the same reference point. In agent-centric representation (b), each agent is encoded in its own reference frame, leading to a complexity linear in the number of agents in multi-agent prediction.**
each, leading to a more specialized representation. Due to its success, the following works [33, 48, 53] continue modeling different types of interactions. Recently, factorized attention [37, 22] has been proposed to model temporal relations between scene elements efficiently. Instead of performing attention on the entire set of agents and time steps, factorized attention separately processes each axis, i.e. agent, time, and road graph elements. A similar factorization over time and agents is explored in Autobot [22] with a smaller and more efficient architecture. We also use different types of attention to model different types of interactions but enable updated information flow between different scene elements with iterative updates.
### Multi-Agent Prediction
**Multi-Agent Prediction:** Due to the evaluation setting on publicly available datasets [11, 6], most existing works focus on predicting the trajectory of a single agent. While it led to great progress in scene representations and modeling of dynamic interactions, in real-life scenarios, the agent needs to account for the future trajectories of other agents as well. Multi-agent prediction has been recently addressed by SceneTransformer [37] with a unified architecture to jointly predict consistent trajectories for all agents. While this method can perform inference for all agents in a single forward pass, HiVT [53] iterates over agents in an agent-centric representation, leading to the aforementioned inefficiency issues. LTP [48] follows a different approach with a lane-oriented representation to predict the most likely lane for each agent in a single pass. However, lane classification may not be as precise as regression. Our method can regress the trajectories of each agent efficiently in a single pass.
**Reference Frame:** The existing works represent the scene either from the viewpoint of an agent or from a fixed reference point as illustrated in Fig. 2. In the agent-centric representation [53, 22, 49, 23, 52, 32], the scene is transformed so that the agent of interest is positioned at the origin of the scene. In contrast, all elements are positioned with respect to the same reference point in a scene-centric representation [37]. This shared context representation is especially helpful for multi-agent prediction [51, 42] while the agent-centric works better for single-agent prediction [11, 6] due to the simplification of the problem. It allows focusing on a single agent without worrying about other agents except for their relation to the agent of interest. Multi-agent prediction can be performed with sequential agent-centric predictions, i.e. one agent at a time [22, 48, 23, 53, 21]. However, this straightforward extension scales linearly with the number of agents in the scene and raises efficiency concerns. We use a scene-centric representation for multi-agent predictions but adapt to each agent with dynamic weights to benefit from agent-specific features as in agent-centric representation.
## 3 Methodology
Given the past trajectories of all agents on a High-Definition (HD) map of the scene, our goal is to predict the future trajectories of agents in the scene. In a vectorized scene representation, we model different types of interactions between the agents and the map to obtain a representation for agents (Section 3.1). Following goal-conditioned approaches [52, 23], we first predict a possible set of endpoints. We then refine each endpoint by predicting an offset (Section 3.2). Finally, we predict the full trajectories conditioned on endpoints (Section 3.3). We stabilize training by separating endpoint and trajectory prediction with gradient detaching. Our pipeline is illustrated in Fig. 3. Our model uses small MLPs in endpoint and trajectory prediction, keeping model complexity low.
### Feature Encoding
**Polyline Encoding:** We represent the map and the agents using a vectorized representation in a structured way. The vectorized representation initially proposed in VectorNet [17] creates a connected graph for each scene element independently. The input consists of the past agent trajectories \(\mathcal{A}=\{\mathbf{a}_{i}\}\), where \(\mathbf{a}_{i}\in\mathbb{R}^{T\times 2}\) denotes the locations of agent \(i\) over the previous \(T\) time steps, and the HD map \(\mathcal{M}=\{\mathbf{m}_{i}\}\), where \(\mathbf{m}_{i}\in\mathbb{R}^{l_{i}\times 2}\) denotes lane \(i\) with its \(l_{i}\) consecutive points. We encode each scene element, i.e. a polyline, with a polyline subgraph. We use two separate subgraphs for the agents and lanes (Fig. 3, left), resulting in a feature vector of length \(d\), \(\mathbf{f}_{i}\in\mathbb{R}^{d}\), for each polyline.
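For illustration, a minimal PyTorch sketch of a polyline subgraph in the spirit of VectorNet [17]; the layer sizes and the omission of the per-layer aggregate-feature concatenation used in the original subgraph are simplifying assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class PolylineSubgraph(nn.Module):
    """Encode each polyline (a set of vectors) into one feature of length d."""
    def __init__(self, in_dim: int = 4, d: int = 128, num_layers: int = 3):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(num_layers):
            layers.append(nn.Sequential(nn.Linear(dim, d), nn.LayerNorm(d), nn.ReLU()))
            dim = d
        self.layers = nn.ModuleList(layers)

    def forward(self, vectors: torch.Tensor) -> torch.Tensor:
        # vectors: (num_polylines, num_vectors, in_dim)
        x = vectors
        for layer in self.layers:
            x = layer(x)
        # permutation-invariant aggregation over the vectors of each polyline
        return x.max(dim=1).values  # (num_polylines, d)

# toy usage: 10 agent polylines, each with 20 per-step vectors of 4 features
agent_features = PolylineSubgraph()(torch.randn(10, 20, 4))  # (10, 128)
```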
**Interaction Modelling:** We model different types of interactions between the scene elements (Fig. 3, left). Following LaneGCN [32], we model four types of relations: agent-to-lane (**AL**), lane-to-lane (**LL**), lane-to-agent (**LA**), and agent-to-agent (**AA**). Using attention, we update each feature \(\mathbf{f}_{i}\) extracted by the polyline subgraph.
In contrast to previous work using a simple attention operation [32], and given the importance of multi-head attention and feed-forward networks in understanding the relations [18], we use a Multi-Head Attention Block (MHAB) as proposed in [22]. Specifically, we update self-relations (**AA, LL**) using a self-attention encoder followed by a feed-forward network (FFN) and cross-relations (**AL, LA**) using a cross-attention encoder followed by the FFN:
\[\begin{split}\text{MHA}(\mathbf{f_{q}},\ \mathbf{f_{kv}})&=\text{softmax}\left(\frac{\mathbf{Q}\,\mathbf{K}^{T}}{\sqrt{dim_{k}}}\right)\mathbf{V}\\ \text{where}\ \ \mathbf{Q},\mathbf{K},\mathbf{V}&=\mathbf{W}^{q}\mathbf{f_{q}},\ \mathbf{W}^{k}\mathbf{f_{kv}},\ \mathbf{W}^{v}\mathbf{f_{kv}}\end{split} \tag{1}\]
where each \(\mathbf{W}\) is a learned projection. Similar to the original block in [47], our Multi-Head Attention Block is formally defined as follows:
\[\begin{split}\text{MHAB}(\mathbf{f_{q}},\ \mathbf{f_{kv}})& =\text{norm}(\mathbf{\tilde{f}}+\text{FFN}(\mathbf{\tilde{f}}))\\ \text{where}\ \mathbf{\tilde{f}}&=\text{norm}(\mathbf{f_{q} }+\text{MHA}(\mathbf{f_{q}},\ \mathbf{f_{kv}}))\end{split} \tag{2}\]
where norm is Layer Normalization [3]. Unlike LaneGCN [32], which applies each interaction type \(L\) times sequentially, one type at a time, we model each interaction once in order and repeat the whole process \(L\) times. This way, intermediate features can be updated at each iteration, and then the updated features are used to compute attention in the next iteration. Each scene element can be informed by different types of relations \(L\) times. See Supplementary for the experiment comparing the two design choices.
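To make the block concrete, here is a minimal PyTorch sketch of an MHAB as defined in Eqs. (1)-(2); the layer sizes and the use of PyTorch's built-in multi-head attention are our assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class MHAB(nn.Module):
    """Multi-Head Attention Block of Eq. (2): attention and FFN with residual LayerNorms."""
    def __init__(self, d: int = 128, heads: int = 8, ffn_mult: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, ffn_mult * d), nn.ReLU(),
                                 nn.Linear(ffn_mult * d, d))
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, f_q, f_kv):
        # f_q: (B, Nq, d) queries; f_kv: (B, Nkv, d) keys/values
        h, _ = self.attn(f_q, f_kv, f_kv)
        f_tilde = self.norm1(f_q + h)                 # norm(f_q + MHA(f_q, f_kv))
        return self.norm2(f_tilde + self.ffn(f_tilde))

# self-relations (AA, LL) use f_kv = f_q; cross-relations mix agents and lanes
agents, lanes = torch.randn(2, 10, 128), torch.randn(2, 40, 128)
agents = MHAB()(agents, agents)   # agent-to-agent (AA) update
lanes = MHAB()(lanes, agents)     # illustrative agent-to-lane (AL) update: lanes attend to agents
```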
### Endpoint Prediction
For endpoint prediction, we use either a single MLP, when an agent-centric reference frame is used (which may be preferred for its advantages in single-agent prediction), or an adaptive head with dynamic weights, when a scene-centric reference frame is used (which may be preferred for its efficiency in multi-agent prediction). Our model uses simple linear layers for endpoint prediction rather than the sophisticated modules used in previous work [37, 22].
**Endpoint Prediction Head:** We predict a possible set of endpoints for each agent based on the agent features from previous attention layers. We utilize two different types of heads to predict the future trajectory of a single agent in an agent-centric reference frame and future trajectories of multiple agents in a scene-centric frame. In single-agent setting, we predict the endpoints with a simple MLP, which we call a _static head_. In multi-agent setting, we train an _adaptive head_ to dynamically learn the weights that predict the endpoints. Dynamic weight learning [28, 45] enables the prediction head to adapt to the situation of each agent.
\[\begin{split}\mathbf{W}_{1}&=\mathbf{W}_{d_{1}}\mathbf{\tilde{f}}\\ \mathbf{W}_{2}&=\mathbf{W}_{d_{2}}\mathbf{\tilde{f}}\\ \mathbf{F}_{d_{1}}&=\text{ReLU}(\text{norm}(\mathbf{W}_{1}\mathbf{f}))\\ \mathbf{\hat{y}}_{\text{pred}}&=\mathbf{W}_{2}\mathbf{F}_{d_{1}}\end{split} \tag{3}\]
We visualize the adaptive head in Fig. 3 and describe it mathematically in (3), where \(\mathbf{W}_{d_{1}}\) and \(\mathbf{W}_{d_{2}}\) are trainable parameters and norm is layer normalization. We process the encoded agent features concatenated with meta info, \(\mathbf{f}\), with an MLP to obtain \(\mathbf{\tilde{f}}\). Meta info includes the direction and location information of the agent at prediction time.
Figure 3: **Overview. Our scene encoding approach involves separate polyline encoders that interact in feature encoding (left). To predict endpoint proposals, we utilize the endpoint head, which employs the adaptive head with dynamic weights for multi-agent prediction, without the need to transform the scene for each agent. Conversely, we use static head (simple MLP) for single-agent prediction. Then we perform endpoint refinement to improve accuracy (middle). Finally, we interpolate the full trajectory for each agent using the refined endpoints (right). By utilizing gradient detaching for endpoint and trajectory prediction modules, we achieve better performance with a small and fast architecture.**
By providing the current state information to the prediction head as input, we allow the dynamic weights to adjust to the state of the agent while predicting the endpoints.
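A minimal PyTorch sketch of the adaptive head of Eq. (3); the hidden size, the number of modes \(K\), and the bias-free formulation are assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class AdaptiveEndpointHead(nn.Module):
    """Per-agent weight matrices are generated from the agent feature (Eq. 3)
    and then applied to that feature to predict K candidate endpoints."""
    def __init__(self, d: int = 128, hidden: int = 64, k: int = 6):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(d, d), nn.ReLU())  # f -> f_tilde
        self.make_w1 = nn.Linear(d, hidden * d)                 # plays the role of W_d1
        self.make_w2 = nn.Linear(d, 2 * k * hidden)             # plays the role of W_d2
        self.norm = nn.LayerNorm(hidden)
        self.hidden, self.k, self.d = hidden, k, d

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (N, d) agent features, already concatenated with meta info upstream
        f_tilde = self.embed(f)
        w1 = self.make_w1(f_tilde).view(-1, self.hidden, self.d)      # per-agent W1
        w2 = self.make_w2(f_tilde).view(-1, 2 * self.k, self.hidden)  # per-agent W2
        h = torch.relu(self.norm(torch.bmm(w1, f.unsqueeze(-1)).squeeze(-1)))
        endpoints = torch.bmm(w2, h.unsqueeze(-1)).squeeze(-1)        # (N, 2K)
        return endpoints.view(-1, self.k, 2)                          # K endpoints per agent

endpoints = AdaptiveEndpointHead()(torch.randn(32, 128))               # (32, 6, 2)
```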
**Refinement:** We further refine the endpoints by predicting an offset to the initial endpoint proposals from the prediction head. Given the endpoint proposals and the features of the agent, we predict the corresponding offset for each proposal with a simple MLP. We detach the gradients of endpoints before passing them as input to decouple the training of the endpoint prediction and refinement. Intuitively, offsets that are supposed to correct the endpoints can receive an independent gradient update from the endpoints. A similar approach is used to update queries in [27].
### Trajectory Prediction
**Trajectory Interpolation:** After obtaining the refined endpoint for each agent, we interpolate future coordinates between the initial point and the endpoint with an MLP. We detach the endpoints to ensure that weight updates for full trajectory prediction are separated from endpoint prediction. Similarly, we predict a probability for each trajectory using detached endpoints. We provide the pseudo-code for endpoint and trajectory prediction in Fig. 4. We train static and adaptive heads as the endpoint head for the agent-centric and the scene-centric reference frames, respectively.
**Training:** For training, we predict \(K\) trajectories and apply variety loss to capture multi-modal futures by back-propagating the loss only through the most accurate trajectory. As we predict the full trajectories conditioned on the endpoints, the accuracy of endpoint prediction is essential for full trajectory prediction. Therefore, we apply a loss on endpoints to improve the endpoint prediction. The final term in our loss function is classification loss to guide the probabilities assigned to trajectories. In summary, we train our model using the endpoint loss, the full trajectory loss, and the trajectory classification loss. Please see Supplementary for the details of our loss functions.
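A compact PyTorch sketch of the loss structure described above — variety (winner-takes-all) regression, an endpoint term, and a classification term on the mode scores; the unit loss weights and the choice of smooth-L1 regression are our assumptions (see the paper's Supplementary for the actual definitions):

```python
import torch
import torch.nn.functional as F

def trajectory_loss(pred_traj, pred_scores, gt_traj):
    """pred_traj: (N, K, T, 2), pred_scores: (N, K) mode logits, gt_traj: (N, T, 2)."""
    endpoint_err = torch.norm(pred_traj[:, :, -1] - gt_traj[:, None, -1], dim=-1)  # (N, K)
    best = endpoint_err.argmin(dim=1)                       # mode closest to the GT endpoint
    idx = torch.arange(pred_traj.size(0))
    best_traj = pred_traj[idx, best]                        # (N, T, 2)
    reg_loss = F.smooth_l1_loss(best_traj, gt_traj)         # variety loss: best mode only
    end_loss = F.smooth_l1_loss(best_traj[:, -1], gt_traj[:, -1])   # endpoint loss
    cls_loss = F.cross_entropy(pred_scores, best)           # push probability to the best mode
    return reg_loss + end_loss + cls_loss

loss = trajectory_loss(torch.randn(8, 6, 30, 2), torch.randn(8, 6), torch.randn(8, 30, 2))
```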
## 4 Experiments
### Experimental Setup
**Datasets:** We evaluate our method in single-agent setting on Argoverse v1.1 [11] and in multi-agent setting on Interaction [51]. Argoverse, with 323,557 scenarios, is the commonly used benchmark for single-agent motion forecasting. Given the HD map and the history of agents for 2s, the goal is to predict the future locations of the agent of interest for the next 3s. Interaction contains 62,022 multi-agent scenarios with up to 40 agents per scenario. The goal is to predict the future for all agents in the scene.
**Metrics:** We use standard metrics including minimum Average Displacement Error (mADE\({}_{k}\)), minimum Final Displacement Error (mFDE\({}_{k}\)), Miss Rate (MR\({}_{k}\)), and brier minimum Final Displacement Error (brier-mFDE\({}_{k}\)). These metrics are calculated based on the trajectory with the closest endpoint to the ground truth over \(k\) trajectory predictions. mADE\({}_{k}\) measures the average \(\ell_{2}\) difference between the full prediction and the ground truth, mFDE\({}_{k}\) measures the difference between the predicted endpoint and the ground truth. MR\({}_{k}\) is the ratio of scenes where mFDE\({}_{k}\) is higher than 2 meters. The brier-mFDE\({}_{k}\) is calculated as \((1-p)^{2}+\) mFDE\({}_{k}\) where \(p\) is the probability predicted for the trajectory. In multi-agent setting, each metric is computed per agent and then averaged over all agents.
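The metric definitions above can be summarized in a short sketch (our own illustration; the official benchmark code should be used for reported numbers). In the multi-agent setting, these per-agent values are then averaged over agents:

```python
import numpy as np

def forecasting_metrics(pred, probs, gt, miss_threshold=2.0):
    """pred: (K, T, 2) predicted trajectories, probs: (K,) mode probabilities, gt: (T, 2)."""
    fde = np.linalg.norm(pred[:, -1] - gt[-1], axis=-1)           # endpoint error per mode
    best = int(fde.argmin())                                      # mode closest to GT endpoint
    made = float(np.linalg.norm(pred[best] - gt, axis=-1).mean()) # mADE_k
    mfde = float(fde[best])                                       # mFDE_k
    miss = float(mfde > miss_threshold)                           # contributes to MR_k
    brier_mfde = mfde + (1.0 - float(probs[best])) ** 2           # brier-mFDE_k
    return made, mfde, miss, brier_mfde

made, mfde, miss, bfde = forecasting_metrics(
    np.random.randn(6, 30, 2), np.full(6, 1 / 6), np.random.randn(30, 2))
```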
**Training Details:** We set the number of layers \(L\) to 3 for both polyline subgraphs and interaction modeling. We train our models with a batch size of 64 for 36 epochs. We use Adam optimizer [30] with an initial learning rate of \(1\times 10^{-4}\) and \(2\times 10^{-4}\) for Argoverse and Interaction experiments, respectively. We anneal the learning rate with a factor of \(0.15\) at the \(70^{th}\) and \(90^{th}\) percentiles. We generate lane vectors for lanes that are closer than \(50\)m to any available agent. For data augmentation, we use random scaling in the range of \([0.75,1.25]\) for Argoverse and random agent drop with the probability of \(0.1\) for both Argoverse and Interaction experiments. We also use other agents on Argoverse as additional training data as done in previous work [53]. Specifically, we only consider agents that move by at least \(6\)m following previous work [22, 37]. In agent-centric reference frame, we translate and rotate the scene with respect to the agent of interest. In scene-centric reference frame, we orient the scene based on the mean location of all agents.
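A minimal sketch, assuming PyTorch, of the optimizer, learning-rate schedule, and scaling augmentation described above; the exact milestone epochs are our reading of the "70th and 90th percentiles" and are therefore assumptions:

```python
import torch

def build_optimizer(model, epochs=36, base_lr=1e-4):
    """Adam with the learning rate multiplied by 0.15 at ~70% and ~90% of training."""
    opt = torch.optim.Adam(model.parameters(), lr=base_lr)
    milestones = [int(0.7 * epochs), int(0.9 * epochs)]
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=milestones, gamma=0.15)
    return opt, sched

def random_scale(scene_coords, low=0.75, high=1.25):
    """Random scaling augmentation applied to all coordinates of a scene."""
    return scene_coords * torch.empty(1).uniform_(low, high)

opt, sched = build_optimizer(torch.nn.Linear(4, 2))   # toy usage
```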
Figure 4: **Pseudo-code for Trajectory Prediction. Given features of \(N\) agents, we first predict endpoint proposals using the endpoint head. Later, we apply a refinement on the endpoints by adding an offset. We then predict a trajectory for \(T\) steps conditioned on each endpoint. We also predict a score associated with each trajectory.**
### Quantitative Results
**Single-Agent Prediction on Argoverse:** We compare our method in single-agent setting using an agent-centric reference frame to the state-of-the-art on test (Table 1) and validation (Table 2) sets of Argoverse. We report results without ensembles, except for Multipath++ [46] (only the result of the ensemble is reported). Our method achieves comparable results to the top-performing methods on the test set in all metrics. In particular, we approach the performance of the state-of-the-art PAGA [13] in the official metric, i.e. brier-mFDE\({}_{6}\), and reach the performance of HiVT [53] in other metrics using only \(56\%\) of its parameters. On the validation set, our method performs the best in terms of mFDE\({}_{6}\) and MR\({}_{6}\). Impressively, ADAPT achieves these results with one of the smallest and fastest models (Fig. 1).
In Table 2, we report the average runtime per scene on the validation set of Argoverse using a Tesla T4 GPU. To align the settings between different approaches and alleviate implementation differences in parallelism, we set the batch size to 1 and predict the future only for the agent of interest per scene. Our method is the second fastest, behind only mmTransformer [33], but with significantly better results than mmTransformer on both validation and test sets. Note that HiVT [53] suffers significantly in terms of inference time due to its agent-centric approach, where the scene is normalized for each agent iteratively. Our approach achieves similar, and even slightly better, results on the test set with only \(18\%\) of HiVT's inference time. We provide a comparison of methods in terms of computational complexity in Supplementary to justify our design choices in feature
\begin{table}
\begin{tabular}{l|c c|c c|c|c} \hline \hline
 & \multicolumn{2}{c|}{mADE\({}_{k}\)} & \multicolumn{2}{c|}{mFDE\({}_{k}\)} & \multicolumn{1}{c|}{brier-} & \multicolumn{1}{c}{\#Prm} \\ \cline{2-7}
 & k=1 & k=6 & k=1 & k=6 & mFDE\({}_{6}\) & (M) \\ \hline
AutoBot [22] & - & 0.89 & - & 1.41 & - & 1.5 \\
HO+GO [20] & - & 0.92 & 3.68 & 1.29 & - & - \\
LaneGCN [32] & 1.71 & 0.87 & 3.78 & 1.36 & 2.05 & 3.7 \\
mmTr [33] & 1.77 & 0.84 & 4.00 & 1.34 & 2.03 & 2.6 \\
D-TNT [23] & 1.68 & 0.88 & 3.63 & 1.28 & 1.98 & **1.1** \\
THOMAS [21] & 1.67 & 0.94 & 3.59 & 1.44 & 1.97 & - \\
SceneTr [37] & 1.81 & 0.80 & 4.05 & 1.23 & 1.89 & 15.3 \\
LTP [48] & 1.62 & 0.83 & 3.55 & 1.30 & 1.86 & **1.1** \\
HiVT [53] & 1.60 & **0.77** & 3.53 & **1.17** & 1.84 & 2.5 \\
MP++* [46] & 1.62 & 0.79 & 3.61 & 1.21 & 1.79 & 125 \\
PAGA [13] & **1.56** & 0.80 & **3.38** & 1.21 & **1.76** & 1.6 \\
Ours (Single) & 1.59 & 0.79 & 3.50 & **1.17** & 1.80 & 1.4 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: **Results on Argoverse (Test).** This table shows single-agent results on Argoverse. The number of parameters is reported or calculated using the official repositories. Models with ensembles are marked with “*”.
\begin{table}
\begin{tabular}{l|c c c|c} \hline \hline
 & mADE\({}_{6}\) & mFDE\({}_{6}\) & MR\({}_{6}\) & Inf. (ms) \\ \hline
TPCN [49] & 0.73 & 1.15 & 0.11 & - \\
mmTrans [33] & 0.71 & 1.15 & 0.11 & **7.66** \\
LaneGCN [32] & 0.71 & 1.08 & - & 38.37 \\
LTP [48] & 0.78 & 1.07 & - & - \\
DenseTNT [23] & 0.73 & 1.05 & 0.10 & 444.66 \\
HiVT [53] & **0.66** & 0.96 & 0.09 & 64.45 \\
PAGA [13] & 0.69 & 1.02 & - & - \\
Ours (Single) & 0.67 & **0.95** & **0.08** & 11.31 \\ \hline
Ours (Multi) & 0.65 & 0.97 & 0.08 & 11.31 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: **Results on Argoverse (Validation).** This table shows results on the Argoverse validation set. The bottom row shows the multi-agent evaluation in scene-centric reference frame. The inference time is calculated using the official repositories in the same setting.
Figure 5: **Qualitative Results.** We visualize multi-agent predictions on Interaction (a) and single-agent on Argoverse (b). The predicted trajectories are shown in green, ground truth in red, past in cyan, and the trajectories of other agents in black.
encoding, contributing to our method's efficiency. A full comparison in terms of brier-mFDE\({}_{6}\) vs. inference time is provided in Fig. 1(b). ADAPT achieves the best performance without sacrificing efficiency.
**Multi-Agent Predictions on Argoverse:** We extend the Argoverse setting from single-agent in an agent-centric reference frame to multi-agent in a scene-centric reference frame by modifying the reference point to be the same for all agents. In this case, we use ADAPT with the adaptive head instead of the static head. For evaluating multi-agent predictions, we only consider the agents that are visible at all timestamps. As shown in the bottom row of Table 2, ADAPT can predict the future trajectories of all agents in the scene-centric reference frame with similar accuracy to the single-agent case, which has the advantage of an agent-centric reference frame. Please note that the inference time remains the same from the single-agent to the multi-agent case since we predict all future trajectories in a single pass.
**Multi-Agent Prediction on Interaction:** We compare our method in multi-agent setting using a scene-centric reference frame to other methods on the Interaction validation set in Table 3. Our method significantly outperforms other methods with a large gap in all metrics. Impressively, it reaches a \(1\%\) miss rate, showing that our method can predict future trajectories accurately for all agents in the scene.
**Robustness of Adaptive Head:** To evaluate the effect of noisy input data, we conducted an experiment in multi-agent setting on Interaction. Specifically, we perturb input coordinates with noise \(\mathcal{N}(0,\sigma)\) where \(\sigma\in\{0.4,0.8,1.6,3.2\}\), corresponding to an average of \(\{0.32,0.64,1.28,2.56\}\) meters deviation in 0.1 seconds, respectively. As shown in Table 4, the performance is quite robust to increasing noise levels in input coordinates.
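For reference, the perturbation used in this robustness test can be written as a one-line sketch (our own illustration, with hypothetical array shapes):

```python
import numpy as np

def perturb_history(coords, sigma):
    """Add i.i.d. Gaussian noise N(0, sigma) to the observed input coordinates (Table 4)."""
    return coords + np.random.normal(0.0, sigma, size=coords.shape)

noisy = perturb_history(np.zeros((10, 20, 2)), sigma=0.8)   # 10 agents, 20 past steps
```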
### Qualitative Analysis
In Fig. 5, we visualize the predictions of our model in multi-agent setting on Interaction (a) and in single-agent setting on Argoverse (b). Our model can predict accurate multi-modal trajectories for all agents in complex intersection scenarios on Interaction. In the single-agent case, our model can vary predictions for the agent of interest in terms of intention and speed. Our model can successfully capture the interactions between the agents and predict futures by considering other agents, e.g. on the top left in Fig. 5(b).
**Visualizing Effect of Adaptive Head:** To understand the importance of adaptive head, we compare the predictions of adaptive head (left) to the predictions of static head (right) in the same scenario in Fig. 6. The adaptive head significantly improves the consistency and accuracy of predictions by allowing the model to adapt to the initial state of the agent, including its rotation, location, and speed.
**Understanding Dynamic Weights:** To understand how the proposed adaptive prediction head changes according to each agent, we visualize dynamically learned weights for each agent by projecting them into the 3D hypersphere as shown in Fig. 7. Specifically, we project the \(\mathbf{W}_{2}\) matrix (Eq. 3) for each agent to a 3-dimensional vector with PCA and normalize it to unit length as shown on top of each scene. Despite differences in absolute position, the
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \(\sigma\) & mADE\({}_{6}\) & mFDE\({}_{6}\) & MR\({}_{6}\) \\ \hline
0 & 0.161 & 0.344 & 0.010 \\
0.4 & 0.161 & 0.347 & 0.010 \\
0.8 & 0.163 & 0.349 & 0.010 \\
1.6 & 0.166 & 0.357 & 0.010 \\
3.2 & 0.169 & 0.361 & 0.011 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Robustness of Adaptive Head on Interaction (Validation).** This table shows the effect of adding noise to input coordinates on multi-agent performance.
\begin{table}
\begin{tabular}{l|c c c|c} \hline \hline
 & mADE\({}_{6}\) & mFDE\({}_{6}\) & MR\({}_{6}\) & Inf. (ms) \\ \hline
AutoBot (J.) [22] & 0.21 & 0.64 & 0.06 & 25.29 \\
SceneTr [37] & 0.26 & 0.47 & 0.05 & - \\
THOMAS [21] & 0.26 & 0.46 & 0.05 & - \\
Ours (Multi) & **0.16** & **0.34** & **0.01** & **11.10** \\ \hline \hline
\end{tabular}
\end{table}
Table 3: **Results on Interaction (Validation).** This table shows multi-agent results on Interaction. The inference time is calculated using the official repositories. The results of SceneTransformer are based on a re-implementation by authors of THOMAS [21].
Figure 6: **Visualizing Effect of Adaptive Head.** We visualize the predictions of adaptive head (**left**) and static head (**right**). The predicted trajectories are shown in green, and ground truth in red.
learned weights for agents moving in the same lane map to similar vectors on the hypersphere. For example, the brown and green agents on the left column map to almost identical vectors, although they are spatially far from each other. A separating factor is the orientation of agents, which is preserved in the mapping. For example, the green and purple agents on the right column map along the same direction, while the orange, red, brown, and cyan agents on the opposite lane map to the opposite direction.
### Ablation Study
We conduct ablation studies on the validation split of the Argoverse for single-agent setting and the validation split of the Interaction dataset for multi-agent setting.
**Ablations on Single-Agent:** In Table 5, we perform an ablation study on our architectural choices in single-agent setting of the Argoverse. First, using other agents in a scene as done in previous work [53, 37, 22] improves the performance in all metrics as it provides more samples for training. Second, gradient stopping in the trajectory predictor enhances the performance by providing independent updates for endpoint refinement and trajectory scoring. Third, refinement improves the accuracy of both the endpoint and the full trajectory by improving the accuracy of the initial endpoint as well. Fourth, data augmentation increases the diversity of the training data, leading to better performance. Finally, combining all results in the best performance, proving the importance of each component and design choice.
**Ablations on Multi-Agent:** In Table 6, we analyze the effect of adaptive head with dynamic weights on multi-agent prediction. The results show that the performance is improved significantly with the adaptive head. This indicates that the adaptive head can adjust the weights according to the initial state of each agent.
## 5 Conclusion and Future Work
We propose a novel efficient framework for predicting future trajectories in both multi-agent and single-agent settings where switching between the two requires only changing the endpoint prediction head. We propose dynamic weight learning to accurately predict the endpoints of multiple agents in the same scene reference frame. We demonstrate that our model reaches state-of-the-art performance in both single-agent and multi-agent settings without increasing the model size and consequently without sacrificing efficiency in inference time.
An interesting direction for future work might be incorporating stochastic latent variables into the endpoint prediction to improve the uncertainty in future predictions, e.g. with separate latent variables for short-term and long-term goals. Another promising direction is learning the temporal dynamics of the scene to understand the relations better and improve efficiency without limiting assumptions of factorized attention. Like most of the existing work in trajectory forecasting, we assume the availability of an HD map where the past locations of agents are marked. The effect of imperfect perception on predicting future trajectories needs to be studied in future work to deploy these solutions.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
 & mADE\({}_{6}\) & mFDE\({}_{6}\) & MR\({}_{6}\) \\ \hline
w/o Adaptive Head & 0.244 & 0.425 & 0.017 \\
ADAPT & **0.161** & **0.344** & **0.010** \\ \hline \hline
\end{tabular}
\end{table}
Table 6: **Multi-Agent Ablation Study on Interaction (Validation).** This table shows the effect of using the adaptive head with dynamic weights for endpoint prediction on the performance of multi-agent prediction on Interaction.
Figure 7: **Visualization of Dynamic Weights.** We project the dynamic weights for each agent into the 3D hypersphere **(top)** for two scenes on Interaction **(bottom)**. We use the same color for the agents and their corresponding projections on the hypersphere. |
2303.06074 | Susceptibility to Influence of Large Language Models | Two studies tested the hypothesis that a Large Language Model (LLM) can be
used to model psychological change following exposure to influential input. The
first study tested a generic mode of influence - the Illusory Truth Effect
(ITE) - where earlier exposure to a statement (through, for example, rating its
interest) boosts a later truthfulness test rating. Data was collected from 1000
human participants using an online experiment, and 1000 simulated participants
using engineered prompts and LLM completion. 64 ratings per participant were
collected, using all exposure-test combinations of the attributes: truth,
interest, sentiment and importance. The results for human participants
reconfirmed the ITE, and demonstrated an absence of effect for attributes other
than truth, and when the same attribute is used for exposure and test. The same
pattern of effects was found for LLM-simulated participants. The second study
concerns a specific mode of influence - populist framing of news to increase
its persuasion and political mobilization. Data from LLM-simulated participants
was collected and compared to previously published data from a 15-country
experiment on 7286 human participants. Several effects previously demonstrated
from the human study were replicated by the simulated study, including effects
that surprised the authors of the human study by contradicting their
theoretical expectations (anti-immigrant framing of news decreases its
persuasion and mobilization); but some significant relationships found in human
data (modulation of the effectiveness of populist framing according to relative
deprivation of the participant) were not present in the LLM data. Together the
two studies support the view that LLMs have potential to act as models of the
effect of influence. | Lewis D Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly T Mai, Maria Vau, Matthew Caldwell, Augustine Marvor-Parker | 2023-03-10T16:53:30Z | http://arxiv.org/abs/2303.06074v1 | # Susceptibility to Influence of Large Language Models
###### Abstract
Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence - the Illusory Truth Effect (ITE) - where earlier exposure to a statement (through, for example, rating its interest) boosts a later truthfulness test rating. Data was collected from 1000 human participants using an online experiment, and 1000 simulated participants using engineered prompts and LLM completion. 64 ratings per participant were collected, using all exposure-test combinations of the attributes: truth, interest, sentiment and importance. The results for human participants reconfirmed the ITE, and demonstrated an absence of effect for attributes other than truth, and when the same attribute is used for exposure and test. The same pattern of effects was found for LLM-simulated participants. The second study concerns a specific mode of influence - populist framing of news to increase its persuasion and political mobilization. Data from LLM-simulated participants was collected and compared to previously published data from a 15-country experiment on 7286 human participants. Several effects previously demonstrated from the human study were replicated by the simulated study, including effects that surprised the authors of the human study by contradicting their theoretical expectations (anti-immigrant framing of news _decreases_ its persuasion and mobilization); but some significant relationships found in human data (modulation of the effectiveness of populist framing according to relative deprivation of the participant) were not present in the LLM data. Together the two studies support the view that LLMs have potential to act as models of the effect of influence.
## 1 Introduction
Human beliefs, attitudes and values can be held absolutely ('dinosaurs roamed the Earth', 'I love my children', 'family first') but are often modal or graded ('COVID19 may have an artificial origin', 'I mostly trust the BBC', 'I try to follow my religion'). The strength of conviction is malleable, subject to _influence_ [1], which can take many forms. Some forms are generic, independent of the content: logical deduction from agreed premises, or rhetorical devices such as rapid speech [2]; others require a mobilization of specific factors: manipulating beliefs of feared or desired outcomes [3, 4], encouraging conformity [5], distorting the weighting of pro and con arguments [6], provision of false information [7], and more.
An improved scientific understanding of influence:
\(\circ\) could start to account for an important aspect of the human condition,
\(\circ\) would have applications ranging from the clearly malign (e.g. national scale disinformation, election manipulation), through the ambivalent (e.g. consumer advertising, political campaigning), to the arguably beneficial (e.g. encouraging healthy behaviours, de-radicalization), and
\(\circ\) would help in making AI systems immune to unwanted influence.
Investigating the effects of influence on human psychology by using experiments with human participants is slow, expensive and subject to ethical restrictions [8]. Similar difficulties bedevil the study of the effect of drugs on human physiology. In that domain, physiological models (e.g. mice) have proven utility despite their limitations. We propose that **Large Language Models (e.g. GPT-3 [9]) can be useful models of human psychology for investigating influence**, just as mice are useful models of human physiology for investigating the effect of drugs. This is a bold proposal since Large Language Models (LLMs) were not devised to model human psychology, nor do they have a shared evolutionary history like mice and humans. On the other hand, they do share linguistic input, and many have been struck by the human-like responses of LLMs [10] and so the proposal is not implausible.
Recent studies (reviewed in section 2.2) have investigated whether LLMs have human-like psychological responses, but it has not yet been reported whether LLMs can be influenced to change these responses like humans. Here we report two empirical studies that test this.
The paper is structured as follows. Section 2 introduces Large Language Models and reviews studies comparing them to human psychology. Section 3 details a study looking at a generic mode of influence - the Illusory Truth Effect (ITE) - whereby earlier exposure to a statement makes it seem more truthful later [11]. Our results demonstrate that GPT-3 is subject to the same ITE as humans. Section 4 details a study looking at a more specific mode of influence - Populist Framing of News (PFN) - to make it more persuasive and politically mobilizing. A previously published large scale human experiment on PFN is simulated with GPT-3, which is found to respond like humans in some respects, including ones which contradicted the theory-based expectations of the authors of the human study, but not all. Sections 5-7 summarize, discuss and conclude.
## 2 Large Language Models
Large Language Models (LLMs) are artificial neural networks that operate with natural language text broken into tokens (short words or chunks of longer words). State-of-the-art LLMs, such as OpenAI's GPT-3 [9], make use of a transformer neural network architecture which uses a self-attention mechanism to adaptively determine the semantically relevant context of each token, rather than the simpler assumption that it is only nearby tokens which are relevant.
LLMs are trained with a single simple core auto-regressive objective (predict the next token from the context of preceding ones), using datasets of natural language text. This training sets the values of the weights of the network, encoding the conditional statistics of the natural language that it has been trained on. The trained network can then process unseen incomplete texts and estimate probabilities for the identity of the next token.
The auto-regressive capability gives rise to a range of capabilities reviewed in section 2.1. The potential uses of LLMs are very wide and still being explored, but the interest in this paper is the extent
to which their capabilities align with a particular aspect of human psychology - its susceptibility to influence. In section 2.2 we review other results on alignment between LLMs and human psychology.
The examples and experiments in this paper make use of GPT-3, in particular the text-davinci-003 model, which is representative of the state-of-the-art (2022/23) in auto-regressive, transformer-based LLMs trained purely on language data without additional training using human feedback. We accessed GPT-3 through its web 'playground' interface for pilot experiments, and through its API for larger scale experiments.
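For illustration, prompt completions of the kind used throughout this paper can be collected programmatically through the legacy OpenAI Completions API roughly as follows; this is a sketch, not the authors' actual harness, and the API key is a placeholder:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real credential

def complete(prompt: str, temperature: float = 0.0, max_tokens: int = 512) -> str:
    """Send a prompt to text-davinci-003 and return the completion text
    (legacy Completions endpoint of the pre-1.0 openai Python package)."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response["choices"][0]["text"]

print(complete("John took his hat and"))
```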
### 2.1 Capabilities
Given a stretch of text called a _prompt_, an LLM estimates probabilities for what the next token would be, under the assumption that the text is a sample from the same distribution of natural language fragments upon which it has been trained. For example, given the prompt:
**John took his hat and**
GPT-3 estimates the probabilities of the most likely next tokens to be:
left (31%), went (19%), coat (13%), put (12%) and so on.
LLMs can _complete_ text by picking a next token, with a temperature parameter controlling stochasticity, between the extremes of picking the most likely (temp=0.0) or sampling according to the probabilities (temp=1.0). LLMs complete one token at a time, but the prompt with completion appended can be fed back into the LLM as a new prompt to generate the next token, and so on iteratively.
The John example completes (using bold to indicate the prompt), at temp=0.0, to:
**John took his hat and** left
John put on his hat and left the room
The LLM has completed the text under the assumption that the given partial sentence was the first of a list of such sentences. With a longer prompt the nature of the text is more clearly established and ideas and structures expressed in the prompt can steer the completion. For example:
**John had been feeling out of sorts all day and the feeling was getting worse...//200 words omitted//...John took his hat and groceries home and went straight to bed. He slept soundly that night and when he awoke the next morning, the strange sensation in his stomach was gone**
An emergent capability of LLMs is that their completions are not only plausible given the prompt, but also show some consistency with established facts [12], both concrete:
**Three reasons for WW1.**
**1. The rise of nationalism in Europe**
**2. The alliance system**
**3. The assassination of Austrian Archduke Franz Ferdinand**
and abstract:
**thirty-two times twenty-nine is 928**
Whether LLMs, such as GPT-3, have knowledge in a human sense, or merely the appearance of knowledge, is a debatable issue; but whatever its status, it is not grounded in direct experience, nor gained by an error-correcting interaction with a language-using community. LLMs have what seems
like knowledge as a side-effect of learning the statistics of natural language! This is an extraordinary extension of the older, already remarkable, idea [13-15] that semantics are available within linguistic statistics.
The capabilities of LLMs have been explored in a flurry of 'prompt engineering', including the useful discovery that LLMs can make sense of instructions written in loose pseudo-code [16] - presumably because of the inclusion of code in their training corpus. For example:
<< sentence list >> # quel gatto si sedettes sul tappeto # dashuf ososo volko zoz # oh for a muse of fire << end list >>
Process << statement list >> deciding what language each is written in. Output your results in the format: ${sentence from list in speech marks} is written in ${language|an unknown language}. ${process next statement in list} Begin.
"quel gatto si sedette sul tappeto" is written in Italian. "dashuf ososo volko zoz" is written in an unknown language. "Oh for a muse of fire" is written in English.
The text-davinci-003 version of GPT-3 we use in this paper has a maximum prompt length of 4000 tokens, a substantial increase on previous models, allowing complex tasks to be specified. This will be essential for the studies we report in sections 3 and 4. It is worth emphasizing that while a long prompt makes more complex completions possible, the capacity to process a long prompt does not in itself explain the startling performance of GPT-3 and similar LLMs. That performance arises from the _accuracy_ of their estimates of the probabilities of the next token conditioned on the preceding prompt. That accuracy derives from:
* The good fit between the inductive biases that arise from the transformer architecture [17] used in the network and the structures/regularities/redundancies of natural language.
* the great number of free parameters of the network (~2x10\({}^{11}\) in GPT-3) into which the conditional probability structure of natural language can be encoded.
* the volume of the training dataset: ~5x10\({}^{11}\) words; equivalent to ~10 millennia of fast speech.
### 2.2 LLMs and Human Psychology
**Personality**. [18] administered a personality questionnaire (HEXACO) to GPT-3, measuring the BIG-5/OCEAN dimensions plus the honesty-humility dimension. The instructions and items of that questionnaire (e.g. '_Rate your agreement with... I sometimes feel that I am a worthless person_') were administered with prompt completion to replicate the procedure of testing human participants. The authors found that GPT-3's personality profile was somewhat similar to the average profile from a large representative study with human participants. Using similar methods, [19] showed that the personality of the LLM could be conditioned by preceding testing with a self-description ('_You are a very friendly and outgoing person..._') which enhanced or diminished a targeted OCEAN dimension and correctly manifested in the LLM's open responses to questions about how it would behave in different scenarios.
**Values**. [18] used the Human Values Scale to assess the importance that GPT-3 attaches to specific values (e.g., power, achievement, hedonism). Using prompt completion, GPT-3 indicated on a 6-point
scale how strongly it likened itself to a described person (e.g. _'It is important to them to be rich. They want to have a lot of money and expensive things.'_). When GPT-3 could access its previous answers (i.e. was given a response memory for sequential prompts), the values profile became correlated like human responses, but with a tendency towards more extreme responses than humans. The values universalism, benevolence, self-direction and stimulation were scored particularly high.
**Political Views.** [8] propose that LLMs "can be used as surrogates for human respondents in a variety of social science tasks." They show that if an LLM is conditioned with a demographic self-description, e.g. _'Ideologically, I describe myself as conservative. Politically, I am a strong Republican. Racially, I am white. I am male. Financially, I am upper-class. In terms of my age, I am young.'_, it would then give responses to probes of political views closely matching the responses of humans with the same demographic traits. This alignment was shown in elicited descriptors of citizens of different political stripe (_'Give four words to describe Democrats'_), in voting intention (_'In [year] I voted for...'_), and in the correlational structure of the topics mentioned in simulated interview.
**Creativity.** [20] collected LLM responses to the 'Alternative Uses Test' [21] in which participants produce as many original uses for an everyday object (e.g. a brick) as possible. LLM responses scored marginally lower than humans for originality, surprise and creativity, and marginally higher for utility. They concluded that the difference between LLM and human responses could be expected to close soon.
**Moral Judgment.** [22] examine how LLMs answer moral puzzles about when rule breaking is permissible. They devised a chain-of-thought prompting method [23] implementing a 'contractualist' theory [24] of moral reasoning. The prompt for each task starts with a set-up: a norm, a vignette and an action e.g. setting up a scenario with possibly justified queue jumping. The prompt then guides the LLM to state whether the action breaks the rule. The completion to this is then appended to the prompt, followed by text which solicits consideration of what the purpose of the rule is, and so on. The final step of the prompting asks if the action is justified. This chain-of-thought method produced answers in agreement with human judgements 66% of the time (vs 50% random baseline). Impressive given the complexity of the task, but still a considerable gap to human judgement.
**Theory of Mind (ToM)** is the capability to infer and reason about the mental states of others. In classic experiments participants observe scenes where a mismatch arises between the beliefs of an agent in the scene and the observing participant. A participant with a developed ToM will be able to answer questions about the scene that demonstrate appreciation of this mismatch. [25] tested whether LLM-simulated participants demonstrate apparent ToM capabilities by using prompt adaptations of two classic experiments and found that the most recent LLMs (davinci-003 as also tested in this paper) achieved 93% correct performance, matching that of a typical 9 year-old child. However, a different ToM study [26] found only 60% correct performance.
**Social Intelligence**, the ability to reason about feelings, was tested in GPT-3 and found to be limited [26], trailing the human gold standard by more than 30%. For example, for the situation _'Casey wrapped Sasha's hands around him because they are in a romantic relationship. How would you describe Casey?'_ GPT-3 selected the answer _'Wanted'_ whereas humans preferred _'Very loving towards Sasha'_.
These studies suggest that a range of aspects of human psychology are modelled by LLMs, some more closely than others. However, the quality and limits of this modelling is far from clear, since some studies have conflicting results, and not all have yet been peer-reviewed. Experimental methods for probing the 'psychology' of LLMs are in their infancy; so, as with human experiments, there may well be subtle pitfalls to be avoided.
In our view, all the reviewed studies use LLMs as models of _static_ aspects of psychology - current views, values, etc. Some, such as the Personality and Political Views studies, _condition_ the LLM before querying it; but that conditioning does not model a psychological change, rather it is intended to steer the LLM towards modelling a person with particular demographic or psychological traits. In contrast, the studies we report in the next two sections consider _dynamic_ aspects of psychology - how beliefs and views can be changed - and assess whether LLMs are able to model such changes.
## 3 Study 1: Illusory Truth Effect
Demagogues understand and exploit the Illusory Truth Effect (ITE). Hitler's operating principles, for example, were said to include: 'if you repeat it frequently enough people will sooner or later believe it' [27]. First experimentally demonstrated in 1977 [28], the ITE - that mere exposure to a statement, without provision of evidence, increases its subsequent apparent truthfulness - has been reconfirmed numerous times; not only for innocuous statements [11], but even for contentious claims [29].
A typical test of the ITE [30] uses a bank of statements devised to be neither obviously false nor obviously true - for example 'orchids grew wild in every continent'. In an _engaged exposure_ phase participants attend to the statements, for example by rating how interesting each one is; then, after an interval (from minutes to weeks), they rate the truthfulness of a new set of sentences, amongst which are some to which they were previously exposed. The truthfulness ratings for a statement are compared between those from participants previously exposed to it versus those from participants seeing it fresh for the first time. The ITE is confirmed by a significant increase, from fresh to exposed.
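The core comparison can be sketched as a simple analysis step; this is an illustration of the logic only, not the paper's exact statistical model, and the example ratings are made up:

```python
import numpy as np
from scipy import stats

def ite_effect(exposed_ratings, fresh_ratings):
    """Compare truth ratings (1-6 scale) for statements previously exposed vs seen fresh.
    A positive mean difference with a significant test is evidence for the ITE."""
    exposed = np.asarray(exposed_ratings, dtype=float)
    fresh = np.asarray(fresh_ratings, dtype=float)
    t, p = stats.ttest_ind(exposed, fresh)
    return exposed.mean() - fresh.mean(), t, p

diff, t, p = ite_effect([4, 5, 4, 5, 3, 4], [3, 4, 3, 4, 3, 3])
```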
Many aspects of the experimental paradigm have been investigated, with some reliable conclusions: repeated exposures gives a stronger effect [31]; a longer interval between statement exposure and truth rating gives a weaker effect [30]; if participants are exposed to statements by soliciting truth ratings then later truth ratings in the test phase are not enhanced [32]. The ITE is typically explained as a fluency effect - initial exposure makes processing during the later truth rating phase more fluent, and fluency is taken as an indicator of truth [33].
The ITE is an interesting phenomenon with respect to the hypothesis of this paper - that LLMs can be useful models of how human beliefs change in response to influence. The ITE can be considered an example of influence operating beyond the principles of logic, evidence and argument, and it is an important test whether an LLM is vulnerable to such a mode.
We have devised an experiment suitable for human and GPT-3 participants, allowing a direct comparison of results. Our experiment includes a variation that has not previously been reported in ITE experiments - the use of four attributes (the standard truth and interest, plus sentiment and importance; see Table 1) - used in all combinations for exposure and test rating. We call it _same_ when the exposure and test attributes are identical, and _mixed_ when different. By testing on all combinations of attributes we will be able to determine whether we have found an Illusory Truth Effect (ITE) or merely an Illusory Rating Effects (IRE). By testing on same-exposure conditions we can test the previously reported ineffectiveness of truth exposure to boost truth ratings, and analogously for other attributes.
Our hypotheses are:
* HITE: The standard ITE boost for truth rating resulting from mixed-exposure.
* HITE: No analogy of the ITE for other attributes e.g. mixed-exposure does not increase importance ratings.
* Hsame: Same-exposure has no effect on test ratings for any attribute.
* HGPT-3: GPT-3 shows the same effects as humans for all attributes, for both same- and mixed-exposure.
### Measuring ITE in GPT-3 Participants
The authors devised a dataset of ~200 novel statements. Based on their own rating on the four attribute scales, these were reduced to 100 statements that were diverse on those scales. Table 2 shows examples.
To test whether GPT-3 exhibits the ITE we need an experimental protocol that can be equally administered to GPT-3 and human participants which implements an exposure phase followed by a test phase. Prompt completion is well suited to gathering statement ratings, less obvious is how to implement the test phase _following_ the exposure phase. To justify the scheme we implemented, we first describe some rejected schemes.
**Rejected Scheme 1**. Split the experiment into an exposure prompt & complete followed by a test prompt & complete. This certainly will _not_ exhibit an ITE since GPT-3 does not have a memory which can carry a causal trace from the earlier prompt & complete to the later.
**Rejected Scheme 2**. Fine-tune GPT-3 on statements (without accompanying ratings) as an exposure to them, leaving a trace in the changed weights of the network; and then use prompt & complete to gather test phase ratings. This is worth investigating, but it seems a better analogy to eliciting truth ratings for facts learnt at school than to the standard ITE experiment.
**Rejected Scheme 3**. Use prompt & complete for the exposure phase; then append the test task to that prompt plus completion to get a new prompt, and then generate a completion to that to gather test ratings.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Attribute & Lowest Rating & Highest Rating \\ \hline \hline Interest & 1 = Very uninteresting & 6 = Very interesting \\ \hline Sentiment & 1 = Very sad & 6 = Very cheerful \\ \hline Truth & 1 = Definitely false & 6 = Definitely true \\ \hline Importance & 1 = Very unimportant & 6 = Very important \\ \hline \end{tabular}
\end{table}
Table 1: Statement attributes and scales used in the experiment.
\begin{table}
\begin{tabular}{|l|} \hline The Philippines has a tricameral legislature \\ \hline London is closer to New York than to Rome \\ \hline Mark Chapman assassinated JFK \\ \hline The Slateford Aqueduct has 100 arches \\ \hline Death Metal is very popular in Finland \\ \hline The population of Andhra Pradesh score high life satisfaction \\ \hline Harrison and Harrison Ltd make pipe organs \\ \hline The Ohio Penguins are a baseball team \\ \hline A small number of women have tetrachromatic vision, so see more colours \\ \hline John McCartney and Paul Lennon were in the Ruttles \\ \hline \end{tabular}
\end{table}
Table 2: Example statements used for testing the ITE.
This is a fine approach but has a technical issue. The issue arises because to get GPT-3 to reliably rate multiple statements in a single prompt & complete it is necessary for the completion to repeat each statement as the rating is generated. Consequently statements would be doubly exposed in the test prompt, which would need to be replicated in the human experiment; and the test prompt becomes very long, limiting how many statements can be exposed and tested in each cycle of prompt & complete.
**Implemented Scheme.** An initial prompt & complete solicits statement ratings (for example interest ratings, as per the classic ITE paradigm). A new test prompt is then constructed, the first part of which recaps the exposure prompt & complete (e.g. _"Earlier you rated the interest of 'Most frogs are green' as 2: quite uninteresting"_), followed by a new request to rate statements (e.g. _"rate the truthfulness of 'Most frogs are green'"_). The details of this are shown in figures 1 and 2.
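To make the scheme concrete, the sketch below assembles a toy exposure prompt and a test prompt that recaps the earlier ratings. The wording, scale labels and statements are illustrative placeholders, not the exact templates shown in figures 1 and 2.

```python
# Minimal sketch of the implemented scheme: an exposure prompt & complete,
# whose ratings are then recapped verbatim at the top of the test prompt.
# Scale labels and statements here are placeholders for illustration only.

def exposure_prompt(pairs):
    """pairs: list of (statement, attribute) to be rated during exposure."""
    lines = ["Rate each statement on the requested attribute (1-6)."]
    lines += [f"- {attr} of '{stmt}':" for stmt, attr in pairs]
    return "\n".join(lines)

def test_prompt(exposure_pairs, exposure_ratings, test_pairs):
    """Recap the exposure ratings, then request ratings for the test pairs."""
    lines = ["Earlier you gave the following ratings:"]
    lines += [
        f"- You rated the {attr} of '{stmt}' as {rating}."
        for (stmt, attr), rating in zip(exposure_pairs, exposure_ratings)
    ]
    lines.append("Now rate each of the following statements (1-6).")
    lines += [f"- {attr} of '{stmt}':" for stmt, attr in test_pairs]
    return "\n".join(lines)
```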
Figure 1: An example exposure-phase prompt & complete. Text on white is prompt, on green is completion. The two-column format is only for visibility in the figure. a) introduces the rating scales. b) describes the task in ordinary language. c) uses pseudo-code to describe how the task should be completed to ensure that responses are in a consistent format. d) is a list of statements together with the attribute on which each should be rated. This list is different for each simulated participant, as is the pairing of statements and attributes. For visibility of the figure only six statement-attribute pairs are shown. In the actual experiment there were 32. e) the completion generated by GPT-3. Comparing d and e shows that GPT-3 has completed the task without error.
Given the format of prompts shown in figures 1 and 2, and given the 4000-token limit for prompt plus completion, we are able to expose GPT-3 to 32 statements, and then test it on 32 statements. We construct the prompts for each participant as follows: 16 statements appear in the exposure phase but not the test phase, 4 paired with each of the 4 attributes; 16 statements appear only in the test phase but not the exposure phase, 4 paired with each of the 4 attributes; 16 statements occur in both phases, between them covering each combination of exposure-attribute and test-attribute. Thus, for each participant: exposed statements are as likely to reappear in test as not; test statements are as likely to have been previously exposed as not; and all combinations of exposure- and test-attribute are equally common.
We construct random Latin Squares [34] to choose statements and attributes for participants, and their order of presentation, so that these are balanced across a block of 100 participants. We simulate 10 blocks of 100 participants (different Latin square for each) for a total of 1000 participants, costing ~$200 for API usage. The resulting dataset consists of 10 test-ratings for each triple <statement, exposure-attribute, test-attribute>, and 40 test-ratings for each pair <statement, unexposed, test-attribute>.
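As an illustration of this design, the sketch below assembles one participant's exposure and test lists: 16 exposure-only items, 16 test-only items and 16 doubly used items that cover all exposure/test attribute combinations. It uses simple random assignment as a stand-in for the Latin-square balancing across blocks of 100 participants, and the statement texts are placeholders.

```python
import random

ATTRIBUTES = ["interest", "sentiment", "truth", "importance"]

def participant_design(statements, rng):
    """Sketch of one participant's design: 48 statements split into
    exposure-only, test-only and exposed-and-tested thirds, with attributes
    assigned so that all 4x4 exposure/test combinations occur once."""
    chosen = rng.sample(statements, 48)
    exposure_only, test_only, both = chosen[:16], chosen[16:32], chosen[32:48]

    exposure, test = [], []
    for i, stmt in enumerate(exposure_only):
        exposure.append((stmt, ATTRIBUTES[i % 4]))
    for i, stmt in enumerate(test_only):
        test.append((stmt, ATTRIBUTES[i % 4]))
    # the 16 doubly used statements cover every exposure/test attribute pair
    combos = [(e, t) for e in ATTRIBUTES for t in ATTRIBUTES]
    for stmt, (e_attr, t_attr) in zip(both, combos):
        exposure.append((stmt, e_attr))
        test.append((stmt, t_attr))
    rng.shuffle(exposure)
    rng.shuffle(test)
    return exposure, test

rng = random.Random(0)
statements = [f"statement {i}" for i in range(100)]
exposure_pairs, test_pairs = participant_design(statements, rng)
```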
### Measuring ITE in Human Participants
We used the Prolific platform (www.prolific.co) to recruit 1000 participants constrained to be 21-65 years old (\(\mu\)=38, \(\sigma\)=11), UK resident, English as first language, roughly gender-balanced (51% female), and with some track record of successfully completed Prolific studies (\(\geq\)100). Each participant completed a multi-screen questionnaire which started with a screen on ethics permission (granted after review by the Computer Science, UCL Research Ethics Committee and Head of Department approval) and collected consent. Statements were shown on individual screens as in figure 3.
Figure 2: An example test-phase prompt & complete following on from the example in figure 1. a & b) same as in figure 1. c) recaps the ratings made in the completion of the exposure phase. d) pseudo-code instructions (same as c in figure 1). e) statements and attributes to be used (same format as d in figure 1). Only six statements are shown for visibility; in the experiment there were 32. Note how some statements occur in both c and e, paired with the same attribute in some instances, and different attributes in others. f) the error-free completion to the prompt generated by GPT-3.
We used the same sequence of statement and attribute pairs as for the GPT-3 simulated participants. Into those trials we inserted attention trials (two per block) requiring participants to give specified responses, and appended an attention quiz in which participants indicated which of 10 statements they had seen during the test. Results of attention checks and quizzes, and completion timings, were used to reject and replace 9% of the participants. Participants took a median time of ~10mins to complete the survey and were paid at a rate of ~£9/hr for this. Participants were recruited in the period 16-23/feb/2023.
### Comparison between ITE in Humans and GPT-3
We first compare the ratings given by GPT-3 and humans to unexposed statements. Figure 4 shows that the distributions of ratings produced by humans and GPT-3 are similar, except for truth where humans are much less likely than GPT-3 to rate a statement as 6='definitely true'. The correlations between human and GPT-3 ratings are significantly positive for all four attributes, but the per-statement confidence intervals make it clear that there are instances of significant mismatch (table 3 shows examples).
Figure 4: Mean ratings made during the exposure phase, compared between human (x) and GPT-3 data (y) – one point for each of the 100 statements. Error bars show 95% confidence intervals. Green line is y=x. Correlations are given above each plot with a 95% confidence interval. Symbols in the truth plot are coloured according to whether the statement is actually true (blue), false (red) or uncertain (black).
Figure 3: Rating response screen used in human data collection.
We now consider how ratings are changed by previous exposure. Let \(r\) and \(r^{\prime}\) be the mean rating of a statement without and with previous exposure respectively. In the typical ITE literature the relationship is modelled as a constant boosting effect (i.e. \(r^{\prime}=r+\textit{offset}\)), independent of the unexposed rating; the ITE is then that the offset parameter is significantly greater than zero. However, the plots in figure 5 show that, for truth ratings at least, the boosting effect is not independent of the unexposed rating - initially more truthful statements are boosted less. To capture this we fit a general linear function, which for interpretability we parameterize as:
\[r^{\prime}=r+\textit{offset}+\textit{tilt}\times(r-3.5),\qquad(1)\]
3.5 being the midpoint of the 1-6 rating scales used. Figure 5 suggests that both human and GPT-3 data exhibit an ITE with a similar linear trend, though the GPT-3 data is markedly more variable around the best fit than the human data.
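As an illustration of how equation (1) can be fitted, the sketch below rearranges it as a linear model in the offset and tilt parameters and solves it by least squares; the numeric ratings in the usage line are invented for the example.

```python
import numpy as np

def fit_offset_tilt(r_unexposed, r_exposed):
    """Least-squares fit of equation (1): r' = r + offset + tilt * (r - 3.5),
    given per-statement mean ratings without (r) and with (r') prior exposure."""
    r = np.asarray(r_unexposed, dtype=float)
    r_prime = np.asarray(r_exposed, dtype=float)
    # rearranged: r' - r = offset + tilt * (r - 3.5), linear in (offset, tilt)
    X = np.column_stack([np.ones_like(r), r - 3.5])
    y = r_prime - r
    (offset, tilt), *_ = np.linalg.lstsq(X, y, rcond=None)
    return offset, tilt

# toy illustration with made-up per-statement means
offset, tilt = fit_offset_tilt([2.1, 3.0, 4.2, 5.1], [2.6, 3.3, 4.3, 5.0])
```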
Table 4 presents complete results for mixed-exposure and all attributes, for humans and GPT-3, together with the results of tests of whether regression coefficients were significantly different from zero. Confidence intervals and p-values were computed using \(10^{4}\) bootstrap re-samplings of the participants and statements. Bonferroni correction (n=16) of both was used to prevent excess false positives due to multiple comparisons. The values in the Human column of the first row show that our data reconfirms the standard ITE (H\({}_{\text{ITE}}\)). The significantly negative tilt coefficient in the row below adds the nuance that truth boosts are smaller for initially more truthful statements (as per figure 5, right). Values in the other rows of the Human column confirm that the ITE is not merely an IRE (H\({}_{\text{IRE}}\)).
\begin{table}
\begin{tabular}{|l|l|c|c|} \hline \multirow{2}{*}{attribute} & \multirow{2}{*}{statement} & \multicolumn{2}{c|}{Mean Rating} \\ \cline{3-4} & & Human & GPT-3 \\ \hline \multirow{2}{*}{truth} & The Ohio Penguins are a baseball team & 3.7 & 1.6 \\ \cline{2-4} & Spiders have exactly six legs & 2.0 & 6.0 \\ \hline \multirow{2}{*}{interest} & Millions of children die annually through house fires & 4.5 & 1.9 \\ \cline{2-4} & Most American homes have a fridge & 2.3 & 5.5 \\ \hline \multirow{2}{*}{sentiment} & The Orange-tufted Spiderhunter is a type of fish & 4.0 & 2.5 \\ \cline{2-4} & Loyal Huskies will pull a sled till they drop & 2.6 & 5.4 \\ \hline \multirow{2}{*}{importance} & Gravity is a social construct & 3.0 & 1.1 \\ \cline{2-4} & Spiders have exactly six legs & 3.0 & 5.5 \\ \hline \end{tabular}
\end{table}
Table 3: Statements with greatest differences between human and GPT-3 mean ratings. Ratings are on a scale from 1 to 6.
Figure 5: Mean truth ratings before (x) and after mixed-exposure (y). Error bars are 95% confidence intervals. The dashed red line is the identity function, the solid blue line is the best linear fit.
The values in the GPT-3 column confirm H\({}_{\text{GPT-3}}\) for mixed-exposure. Table 5 presents results for _same_ exposure. The results confirm H\({}_{\text{same}}\), and confirm H\({}_{\text{GPT-3}}\) for same-exposure.
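A minimal sketch of the resampling scheme described above is given below. The data layout (a participants-by-statements array of test ratings plus a matching exposure flag) is an assumption made for illustration, not the exact structure of our dataset.

```python
import numpy as np

def bootstrap_offset_tilt(ratings, exposed, n_boot=10_000, n_tests=16, seed=0):
    """Sketch of the bootstrap behind tables 4 and 5. `ratings` is a
    (participants x statements) array of test-phase ratings (np.nan where a
    statement was not tested for that participant); `exposed` is a boolean
    array of the same shape flagging prior exposure. Participants and
    statements are resampled jointly, equation (1) is refit on each resample,
    and percentile intervals are Bonferroni-corrected."""
    rng = np.random.default_rng(seed)
    n_p, n_s = ratings.shape
    draws = []
    for _ in range(n_boot):
        p_idx = rng.integers(0, n_p, n_p)
        s_idx = rng.integers(0, n_s, n_s)
        sub_r = ratings[np.ix_(p_idx, s_idx)]
        sub_e = exposed[np.ix_(p_idx, s_idx)]
        r = np.nanmean(np.where(sub_e, np.nan, sub_r), axis=0)        # unexposed means
        r_prime = np.nanmean(np.where(sub_e, sub_r, np.nan), axis=0)  # exposed means
        keep = ~np.isnan(r) & ~np.isnan(r_prime)
        X = np.column_stack([np.ones(keep.sum()), r[keep] - 3.5])
        coef, *_ = np.linalg.lstsq(X, (r_prime - r)[keep], rcond=None)
        draws.append(coef)
    draws = np.array(draws)
    alpha = 0.05 / n_tests  # Bonferroni-corrected level
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return draws.mean(axis=0), lo, hi
```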
In summary:
* Although correlated, there are considerable differences between the unexposed ratings given to statements by humans and GPT-3 for all attributes.
* For humans:
* The ITE has been reconfirmed.
* The ITE has been shown to not merely be an IRE, since there is no illusory effect for interest, sentiment or importance.
* Same-exposure has been shown to be ineffective for all attributes.
* For GPT-3:
* The same pattern of effects as in humans is demonstrated for all attributes and all combinations of same- and mixed-exposure.
## 4 Study 2: Populist Framing of News
Bos et al. [35] investigated whether populist framing (emphasizing in-group vs out-group divisions) of a news article modulated its effect on a reader. In section 4.1 we review their study design; in section 4.2 we present an adaptation of the study suitable for GPT-3 rather than human participants; and in section 4.3 we compare the results of the human and GPT-3 studies.
\begin{table}
\begin{tabular}{|l|l|c|c|} \hline Test Attribute & & Human & GPT-3 \\ \hline \multirow{3}{*}{Truth} & offset & 0.26 [0.12, 0.39]\({}^{**}\) & 0.54 [0.22, 0.95]\({}^{**}\) \\ \cline{2-4} & tilt & -0.15 [-0.32,–0.03]\({}^{**}\) & -0.18 [-0.38,–0.04]\({}^{**}\) \\ \hline \multirow{3}{*}{Interest} & offset & -0.03 [-0.29, 0.21] & -0.20 [-0.41, 0.04] \\ \cline{2-4} & tilt & -0.13 [-0.39, 0.01] & -0.12 [-0.36, 0.06] \\ \hline \multirow{3}{*}{Sentiment} & offset & -0.04 [-0.16, 0.08] & 0.03 [-0.12, 0.20] \\ \cline{2-4} & tilt & -0.06 [-0.19, 0.01] & -0.19 [-0.34, -0.09] \\ \hline \multirow{3}{*}{Importance} & offset & -0.11 [-0.27, 0.08] & 0.00 [-0.17, 0.20] \\ \cline{2-4} & tilt & -0.01 [-0.23, 0.10] & -0.19 [-0.35, -0.07] \\ \hline \end{tabular}
\end{table}
Table 4: Parameter estimates for the relationship between unexposed and exposed ratings, modelled by equation 1. For each test attribute, all types of _mixed_ exposure are pooled together for results in this table, but data for _same_ exposure is excluded. Bonferroni-corrected (n=16) bootstrap-computed 95% confidence intervals are shown after least-squares best fit estimates. Significantly non-zero estimates are colour-coded: red for positive, blue for negative. Superscripts indicate significance: *p<0.05, **p<0.01, ***p<0.001.
\begin{table}
\begin{tabular}{|l|l|c|c|} \hline Test Attribute & & Human & GPT-3 \\ \hline \multirow{3}{*}{Truth} & offset & -0.07 [-0.27, 0.13] & 0.00 [-0.36, 0.44] \\ \cline{2-4} & tilt & 0.05 [-0.18, 0.19] & 0.02 [-0.22, 0.17] \\ \hline \multirow{3}{*}{Interest} & offset & -0.04 [-0.30, 0.30] & 0.02 [-0.26, 0.39] \\ \cline{2-4} & tilt & 0.00 [-0.38, 0.23] & 0.10 [-0.23, 0.35] \\ \hline \multirow{3}{*}{Sentiment} & offset & -0.13 [-0.31, 0.06] & -0.05 [-0.21, 0.14] \\ \cline{2-4} & tilt & -0.01 [-0.19, 0.11] & 0.06 [-0.08, 0.16] \\ \hline \multirow{3}{*}{Importance} & offset & -0.16 [-0.41, 0.11] & 0.15 [-0.08, 0.43] \\ \cline{2-4} & tilt & 0.11 [-0.19, 0.29] & -0.02 [-0.21, 0.10] \\ \hline \end{tabular}
\end{table}
Table 5: Same conventions as table 4, but here the exposure phase uses the same ratings scale as the test phase. No coefficients are significantly different from zero.
### 4.1 Measurement of PFN in Humans
In 2017 Bos et al. [35] recruited 7286 participants in roughly equal numbers from each of 15 countries, with demographic balancing within each country. Using online surveying, demographic traits were queried and the relative deprivation of each participant was assessed; relative deprivation is a subjective feeling of economic, social and political vulnerability. Participants were then shown one of four mocked-up news articles, and asked questions about their agreement with the content of the article and their willingness to act upon it.
Each version of the article (translated into the participant's mother tongue) concerned a study from a fictional nongovernmental organization warning of a likely future decline in purchasing power. The baseline version of the article reported the study neutrally while the other versions used 'populist identity framing', portraying ordinary citizens as an in-group threatened by the actions and attitudes of out-groups. One version drew attention to politicians as an elitist out-group; another to immigrants; and the final version blamed both groups, and additionally the support of politicians for immigrants. Based on Social Identity Theory [36] the authors predicted that all forms of framing would make the articles more persuasive and mobilizing than the unframed article, and this influence would be greater on more relatively deprived participants.
In a pre-test phase participants provided demographic information (age, gender, education, political interest, political alignment) and rated agreement with three statements (e.g. 'I never received what I in fact deserved') to allow their 'relative deprivation' to be quantified. Following exposure to the article, presented as a generic online news item complete with a photo of hands opening a wallet, the participants rated agreement with each of two statements (e.g. 'The economy will face a decline in the near future') to gauge how _persuaded_ they were of the issue reported in the article, and rated their willingness to perform three actions (e.g. 'Share the news article on social media') to gauge how _mobilized_ they were.
### 4.2 Measurement of PFN in GPT-3
Each human participant completed a survey in the sequence: 1) demographic information; 2) relative deprivation ratings; 3) exposure to news article; 4) rating of probe statements. To adapt this for GPT-3 participants we _simulate_ steps 1-3, providing answers generated from Bos et al.'s summary statistics of their respondents' demographics, and then use GPT-3 completion for step 4 to generate ratings for the probe statements _given the earlier responses (1+2) and news article exposure (3)_. This is shown in figure 6.
The demographic information included in the prompt is sampled from the data provided by Bos et al. [35] on the number of participants per country, and the per-country distribution of gender, age, education, political interest and political ideology ratings. We use the provided per-country parameters for the distributions and make the simplifying assumption that those distributions are independent.
Bos et al. [35] state that the three relative deprivation ratings are highly correlated and provide the mean and standard deviation (4.30 and 1.61 respectively) for (what we take to be) the per-participant mean of those three ratings - this is a single distribution, not per-country. We generate simulated deprivation ratings by real-valued sampling from that distribution, generating three perturbations of that sample, and rounding each to an integer 1,...,7 - yielding three ratings. The perturbation magnitude was chosen so that three identical ratings resulted ~50% of the time. We made the assumption that relative deprivation ratings are independent of the demographic information.
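The sketch below illustrates how one simulated participant could be generated under these assumptions. The country key, trait scales and perturbation magnitude are placeholders chosen for illustration; the real values come from Bos et al.'s summary statistics.

```python
import random

def simulate_participant(country_stats, rng):
    """Sketch of simulating one PFN participant. `country_stats` maps a
    country to placeholder per-country parameters; traits are drawn
    independently, as assumed in the text."""
    country = rng.choices(list(country_stats),
                          weights=[s["n"] for s in country_stats.values()])[0]
    stats = country_stats[country]
    demo = {
        "country": country,
        "gender": rng.choices(["female", "male"], weights=stats["gender"])[0],
        "age": int(rng.gauss(*stats["age"])),
        "education": rng.choice(stats["education_levels"]),
        "political_interest": rng.randint(1, 7),   # placeholder scale
        "political_ideology": rng.randint(0, 10),  # placeholder scale
    }
    # relative deprivation: one real-valued draw from N(4.30, 1.61), three
    # perturbed copies, each rounded and clipped to the 1-7 scale; the
    # perturbation scale 0.35 is illustrative, tuned so that the three
    # ratings coincide roughly half of the time
    base = rng.gauss(4.30, 1.61)
    deprivation = [min(7, max(1, round(rng.gauss(base, 0.35)))) for _ in range(3)]
    return demo, deprivation

# usage with a single placeholder country entry
stats = {"country_01": {"n": 500, "gender": [0.51, 0.49], "age": (38, 11),
                        "education_levels": ["secondary", "vocational", "degree"]}}
demo, deprivation = simulate_participant(stats, random.Random(1))
```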
Each GPT-3 participant is shown a random choice from Bos et al.'s four versions of the news article. Figure 6 shows the version with anti-elitist _and_ anti-immigrant framing, the three other versions (single outgroup framing and no framing) are text reductions of the example shown.
Figure 6: Format of prompts used to implement the Bos et al. [35] study with GPT-3 participants. The prompt is intended to read like an incomplete survey with written-in answers. The central block of text on white shows an example prompt; the “5” on green shows the completion provided by GPT-3. Key parts of the prompt are indicated by letters. a) Demographic information for the simulated participant. b) The simulated participant’s simulated agreement ratings for statements to gauge relative deprivation. c) The version of the news article shown to this simulated participant – this is the version with an anti-elitist _and_ anti-immigrant framing. d) The final instruction for a rating, following the format used in part b; in this example to gauge agreement with the news content of the article.
The final part of the prompt is to collect a rating for a _single_ probe statement. Following Bos et al., five probe statements were employed: two that assessed the persuasion of the article, and three that assessed the political mobilization that resulted from reading it. Each simulated participant thus has five prompt completions collected - holding the initial parts of the prompt constant and varying the final probe. Prompts were completed using a temperature of 1.0, i.e. full probabilistic sampling. An overall persuasion score for a participant was calculated as the mean of their two persuasion ratings, and an overall mobilization score as the mean of their three mobilization ratings.
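The following sketch shows this probe loop. The `complete` argument is a stand-in for whatever LLM completion call is used, and the probe wordings, apart from the two quoted earlier, are placeholders rather than the exact statements of Bos et al. [35].

```python
from statistics import mean

def collect_ratings(base_prompt, probes, complete, temperature=1.0):
    """Sketch of the probe loop: the conditioning part of the prompt
    (demographics, deprivation ratings, news article) is held fixed while the
    final probe varies. `complete` is an assumed stand-in that returns the
    generated text for a prompt at the given temperature."""
    ratings = {}
    for name, probe in probes.items():
        text = complete(base_prompt + "\n" + probe, temperature=temperature)
        ratings[name] = int(text.strip()[0])  # completions are single 1-7 ratings
    return ratings

# placeholder probe texts (the real wordings follow Bos et al. [35])
PROBES = {
    "persuasion_1": "Rate your agreement (1-7): 'The economy will face a decline in the near future':",
    "persuasion_2": "Rate your agreement (1-7): 'Purchasing power is under threat':",
    "mobilization_1": "Rate your willingness (1-7): 'Share the news article on social media':",
    "mobilization_2": "Rate your willingness (1-7): 'Sign a petition about this issue':",
    "mobilization_3": "Rate your willingness (1-7): 'Attend a demonstration about this issue':",
}

def scores(ratings):
    """Per-participant persuasion and mobilization scores as rating means."""
    persuasion = mean(v for k, v in ratings.items() if k.startswith("persuasion"))
    mobilization = mean(v for k, v in ratings.items() if k.startswith("mobilization"))
    return persuasion, mobilization
```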
We intended to collect data for 7286 GPT-3 simulated participants, matching the size of the Bos et al. study, but due to other usage hit our monthly cap for GPT-3 queries after 2153 participants. This number was however sufficient to get stable estimates for regression parameters and their uncertainties as described in the next section. Data was collected using the OpenAI API in early February 2023, costing ~$100.
### 4.3 Comparison between Human and GPT-3 PFN
Table 6 shows that there are substantial differences between the ratings of probe statements produced by Humans and GPT-3. The mean persuasion ratings match well, but GPT-3 mobilization ratings are on average nearly 2 units (on a 7-point scale) higher than humans. For both types of probe, GPT-3 ratings have roughly half the variability of human ratings.
Bos et al. [35] were concerned not with the absolute level of ratings but with checking their predictions that persuasion and mobilization ratings would be increased by populist framing, and that this increase would be modulated by the relative deprivation of the participant. To that end they compute linear regressions of persuasion (\(P\)) and mobilization (\(M\)) ratings based on a pair of Boolean variables \(E,I\in\{0,1\}\) which indicated whether the exposed news article made use of anti-elitist and/or anti-immigrant framing, a continuous variable \(D\in[1,7]\) coding the relative-deprivation score for a participant, and 14 Boolean country flags \(C_{i}\in\{0,1\}\) indicating country of residence (the 15th country being coded by all flags being zero). Robust standard errors (clustered by country) of regression coefficients were reported, with t-tests being performed to determine when significantly non-zero. We performed the exact same analysis on the GPT-3 data. Human and GPT-3 results are shown in Table 7.
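For concreteness, a sketch of one of these regressions (the persuasion model behind H1a and H1b) is given below using statsmodels. The column names and data layout are assumptions made for illustration, but the country fixed effects and country-clustered robust standard errors mirror the analysis described above.

```python
import statsmodels.formula.api as smf

def fit_framing_model(df):
    """Sketch of the H1a/H1b persuasion regression. `df` is assumed to be a
    pandas DataFrame with one row per participant and columns P (mean
    persuasion rating), E and I (0/1 framing flags) and country; country
    dummies enter as fixed effects, and standard errors are clustered by
    country."""
    model = smf.ols("P ~ E + I + C(country)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

# result.params["E"] and result.params["I"] (with their p-values) correspond to
# the H1a and H1b rows of table 7; analogous formulas with D and interaction
# terms (e.g. "M ~ E + I + D + D:E + D:I + C(country)") give the remaining rows.
```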
\begin{table}
\begin{tabular}{|l|l|l|} \hline & Human & GPT-3 \\ \hline \hline Persuasion (P) & 5.11 (1.37) & 5.28 (0.72) \\ \hline \multicolumn{1}{|l|}{Mobilization (M)} & 3.81 (1.76) & 5.74 (0.82) \\ \hline \end{tabular}
\end{table}
Table 6: Mean and standard deviations of per-participant mean ratings compared for Human data (from Bos et al. [35]) and the GPT-3 model.
Hypothesis H1a - that anti-elitist framing increases persuasion - was supported by Bos et al.'s human data and was also found in the GPT-3 data. Hypothesis H1b - that anti-immigrant framing increases persuasion - was contradicted by the human data and by the GPT-3 data. This was presented by Bos et al. [35] as an unexpected result at odds with their predictions from theory. Seeking to explain it they speculated that the immigrant-blaming articles may have seemed far-fetched, triggering counter-arguing; or that the result was due to 'socially desirable responding' causing respondents to self-censor responses. It is remarkable that this unexpected result is replicated by GPT-3. Hypothesis H1c, that blaming both groups would have an additional persuasive effect, was not supported or contradicted by the human data, but is supported in the GPT-3 data.
The pattern of results for mobilization (H2a, H2b and H2c) is similar to persuasion. The surprising reduction in mobilization for anti-immigrant framing that was found for human participants was also found for GPT-3. Anti-E framing had an insignificant effect on mobilization for humans, but was significantly positive for GPT-3 (as per the expectations of Bos et al. [35]). I+E-framing had no significant additional impact on mobilization for humans, but was significantly positive for GPT-3.
Both the human data and the GPT-3 data exhibit a significant increase in persuasion and mobilization ratings as a function of relative deprivation (significance of the D coefficients). This relationship was not an explicit hypothesis of Bos et al. since it is not predictive of the effect of exposure to populist framing (i.e. it is a pure D term rather than DxE, DxI or DxExl). We include it because it shows that the GPT-3 responses _are_ affected by the simulated relative deprivation ratings provided in the prompts. This makes the failure of the GPT-3 results to exhibit the positive interaction between relative deprivation and populist framing on mobilization that is significantly present for humans (H4a and H4b) disappointing.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Hyp.**} & \multirow{2}{*}{**Dep.**} & \multirow{2}{*}{**Model**} & \multirow{2}{*}{**prediction \&**} & \multirow{2}{*}{**Human**} & \multirow{2}{*}{**GPT-3**} \\ & & & & & & \\ & & & & & & \\ \hline \hline H1a & \(P\) & \(E\) & \(C_{i}+(E+I)\to P\) & \(>\)0, confirmed & \(\approx\)0.079\({}^{\ast}\) & \(+\)0.478\({}^{\ast\ast\ast}\) \\ \hline H1b & \(P\) & \(I\) & \(C_{i}+(E+I)\to P\) & \(>\)0, **contradicted** & \(-\)0.118\({}^{\ast}\) & \(-\)0.527\({}^{\ast\ast\ast}\) \\ \hline H1c & \(P\) & \(E\)\(\times I\) & \(C_{i}+(E+I+E\times I)\to P\) & \(>\)0, unsupported & \(-\)0.140 & \(\approx\)0.541\({}^{\ast\ast\ast}\) \\ \hline H2a & \(M\) & \(E\) & \(C_{i}+(E+I)\to M\) & \(>\)0, unsupported & \(+\)0.037 & \(\approx\)0.463\({}^{\ast\ast\ast}\) \\ \hline H2b & \(M\) & \(I\) & \(C_{i}+(E+I)\to M\) & \(>\)0, **contradicted** & \(\geq\)0.243\({}^{\ast\ast\ast}\) & \(-\)0.090\({}^{\ast\ast\ast}\) \\ \hline H2c & \(M\) & \(E\)\(\times I\) & \(C_{i}+(E+I)\times I\to M\) & \(>\)0, unsupported & \(+\)0.146 & \(\approx\)0.234\({}^{\ast\ast\ast}\) \\ \hline & \(P\) & \(D\) & \(C_{i}+(E+I)+D\to P\) & & \(\approx\)0.279\({}^{\ast\ast\ast}\) & \(\approx\)0.149\({}^{\ast\ast\ast}\) \\ \hline & \(M\) & \(D\) & \(C_{i}+(E+D)\to M\) & & \(\approx\)0.219\({}^{\ast\ast\ast}\) & \(\approx\)0.125\({}^{\ast\ast\ast}\) \\ \hline H3a & \(P\) & \(D\)\(\times E\) & \(C_{i}+(E+I)+D\to(D\)\(\times E+D\times I)\to P\) & \(>\)0, unsupported & \(+\)0.032 & \(+\)0.048 \\ \hline H3b & \(P\) & \(D\)\(\times I\) & \(C_{i}+(E+I)+D\to(D\)\(\times E+D\times I)\to P\) & \(>\)0, unsupported & \(+\)0.031 & \(-\)0.029 \\ \hline H3c & \(P\) & \(D\)\(\times E\)\(\times I\) & \(C_{i}+(E+I)+D\to D\) & \(>\)0.1\({}^{\ast\ast\ast}\) & \(\approx\)0.105\({}^{\ast\ast\ast}\) & \(\approx\)0.015\({}^{\ast\ast\ast}\) \\ \hline H4a & \(M\) & \(D\)\(\times E\) & \(C_{i}+(E+I)+D\to(D\)\(\times E+D\times I)\to M\) & \(>\)0, **contradicted** & \(\approx\)0.062\({}^{\ast\ast\ast}\) & \(+\)0.000 \\ \hline H4b & \(M\) & \(D\)\(\times I\) & \(C_{i}+(E+D)+D\to(D\)\(\times E+D\times I)\to M\) & \(>\)0, **confirmed** & \(\approx\)0.086\({}^{\ast\ast\ast}\) & \(-\)0.025 \\ \hline H4c & \(M\) & \(D\)\(\times I\) & \(C_{i}+(E+I+E\times I)+D\to(D\)\(\times E+D\times I)\to M\) & \(>\)0, unsupported & \(-\)0.077 & \(+\)0.096 \\ \hline \hline \end{tabular}
\end{table}
Table 7: _Hypothesis_ uses the labelling in Bos et al. [35]; the two unlabelled rows are not influence effects, since they are a function only of the participant’s traits and not of framing (\(E\), \(I\)), but are included as they are relevant to the discussion of hypotheses H4a and H4b. _Dependent Variable_ indicates whether the hypothesis concerned an effect on Persuasion (\(P\)) or Mobilization (\(M\)). _Regressor_ shows the particular term, featuring in the _Model_, whose coefficient pertains to the hypothesis. _Prediction & finding_ shows what sign the regression coefficient was hypothesized to have in [35], and the status of the hypothesis in light of the human data. _Human_ (from [35]) and _GPT-3_ columns show values of the regression coefficient. Asterisks indicate significance of the non-zero result: *p<0.05, **p<0.01, ***p<0.001. Colour-coding shows significantly positive coefficients (red) and significantly negative (blue).
In summary, the GPT-3 and Human results differ in the absolute level and variability of persuasion and mobilization ratings, but there is good agreement on how these ratings depend on the presence of anti-elitist and/or anti-immigrant framing, and on relative deprivation. There are no contradictory results where the signs of regression coefficients are significant in both data sources but opposite in polarity. Most impressively the GPT-3 data finds significant _negative_ effects on persuasion and mobilization resulting from anti-immigrant framing, in agreement with the results reported as surprising by Bos et al. [35]. The positive modulation on mobilization due to relative deprivation found in humans was not present in the GPT-3 data, even though GPT-3 was demonstrated to be sensitive to relative deprivation in a non-modulating way, just as humans are. Overall this is a mixed score card - surprising human results (H1b and H2b) were modelled by GPT-3, but some other human results of interest (H4a and H4b) were not, and there were GPT-3 results (H1c, H2a & H2c) that were not seen in human data.
## 5 Summary
Given suitable prompts Large Language Models (LLMs) can provide answers to posed questions. This has allowed researchers to use an LLM to model a human experimental participant, undergoing tests of _static_ aspects of their psychology (section 2.2). In some of the reviewed studies the LLM models an unspecified generic participant, while in others the LLM is _conditioned_ by including a self-description within the prompt so that the completions it generates take account of demographic or psychological traits of the simulated participant.
We hypothesized that LLMs could also model _dynamic_ belief change in response to influencing input. This required us to devise methods to _expose_ the simulated participant to earlier influencing input, and measure the effect of that on later responses. In one study - the Illusory Truth Effect (ITE, section 3) - we applied influencing exposure to generic LLM participants; in the other study - Populist Framing of News (PFN, section 4) - we applied influencing exposure to conditioned LLM participants. The two studies also differed in that the ITE is a _generic_ mode of influence that can be applied the same to any message, while the PFN leverages _specific_ pre-dispositions within participants to have its influencing effect.
In the ITE study, while there were mismatches between humans and GPT-3 in the absolute attribute ratings of truth, interest, etc. given to statements, there was excellent agreement in how prior exposure influenced participants to give higher ratings of truthfulness. This agreement covered the presence of an ITE, how it was eliminated when prior exposure was via rating for truth, and the absence of analogous effects for other attributes (e.g. exposure through an earlier rating of importance does not affect a later rating of interest). Although we found significant ITEs of similar magnitude in both human and GPT-3 responses, the per-statement effect was more variable for GPT-3 than for humans. Overall, the findings suggest a good match between humans and GPT-3 with respect to the ITE.
In the PFN study, out of 12 influence effects tested (Table 7): four were absent in human and GPT-3 responses; three were significant in both and of matching sign; two were present in humans but not GPT-3; and three were present in GPT-3 but not in humans. The three consistent effects included ones expected from theory (positive effects of anti-elitist framing) and ones counter to theory (negative effect of anti-immigrant framing). The human effects that were absent in GPT-3 concerned the modulating effect of a participant's relative deprivation on the effectiveness of framing. Overall this is a mixed result - some impressive agreement, and some disappointing failure to replicate, but no actual mis-matches.
## 6 Discussion
### Shortcomings
#### Illusory Truth Effect (ITE)
The statement set used for the study was generated by the authors. As figure 5 shows, the ITE is very consistent across statements for humans, but more variable for GPT-3. We have ensured our analysis is robust to this variability by using bootstrap resampling of participants _and statements_ when computing confidence intervals and p-values, but even so we cannot be fully confident that our findings would hold were the statement list generated by different researchers. We experimented with a more reproducible procedure (randomly sampled Wikipedia sentences that were manually classified to be truth-value bearing statements without ambiguous referents) but found the resulting statements rather homogeneous. This deserves further research.
#### Populist Framing of News (PFN)
We generated simulated participants based on the statistical characterisation provided by Bos et al. [35]. We used the characterization broken down by country for all traits apart from relative deprivation for which only a global characterization was given. We did not simulate any correlation between traits, or between traits and relative deprivation, since no information on this was provided. We suspect that such correlations do exist, and our failure to model them could explain some of the mis-matches between human and GPT-3 results (see section 6.2).
In Bos et al. [35] participants completed the task with materials translated into their native language - we did everything in English, the only cues to nationality being its specification in the conditioning block of demographic traits and its mention in the news article. This may account for the small variation we saw by nationality - specifically, the \(C_{i}\) regression coefficients we obtained were an order of magnitude smaller than those reported by Bos et al. and were uncorrelated with their values. It would be better to simulate the language variation aspect of the study, and very interesting to learn what effect that had.
### Explanations
We introduce terms for four, not mutually exclusive, types of explanation for LLM influence effects:
**Mechanistic.**: Influence of an LLM explained in terms of the processing of an input on a case-by-case basis. To understand the LLM we will need to open it up.
**Meaning.**: Influence leverages, possibly subtle and buried, aspects of the meaning of terms (where meaning is understood as patterns of use [37]). To understand the LLM we need to talk to it.
**Parrot.**: A trained LLM mimics being influenced by reproducing conditional dependencies in the statistics of natural language which were present in its training corpus as traces of influence operating on humans [38, 39]. To understand the LLM we need to study what it has read.
**Accidental.**: A variant of parrot, but the reproduced dependencies do not really exist in the data; the LLM has made an error by modelling a dependency which does not exist, possibly
due to its inductive biases. To understand the LLM we will need to open it up _and_ study what it has read.
#### Illusory Truth Effect (ITE)
The standard explanation of the ITE in humans is as a fluency effect [33] i.e. prior exposure to a statement makes it easier to process when encountered later, and fluency is taken as a cue to truthfulness. A fluency-type **mechanistic** explanation is feasible for an LLM, since presumably there are cues available within the statistics of the internal activity of an LLM mid-processing that give some indication of whether a statement has previously been encountered; and conceivably the weights of the network could be such that these statistics could influence its inclination to apply the label 'true'.
Mechanistic explanations which do _not_ make use of some LLM version of fluency are also possible. Perhaps previous exposure could change network weights (if exposure was during training) or internal activity (if earlier in the prompt) influencing the applicability of 'true', without activity statistics playing a role.
A different perspective would consider such mechanisms to be the implementation details of a **meaning**-type explanation i.e. part of the meaning of 'true' is that it is often an appropriate label for statements that have been heard previously. If this seems odd, recall that we are working with a 'meaning is use' characterization - though it may still seem odd.
The existence of a mechanism giving rise to the ITE would need to be explained. A **parrot**-type explanation could do this i.e. a regularity within the training corpus (crudely, frequently repeated statements often being taken to be true) had caused the LLM to develop the mechanism so that it could reproduce the regularity in its predictions. Or the mechanism could exist as an **accident**, the regularity imposed on generated text being a hallucination.
#### Populist Framing of News (PFN)
Satisfying **mechanistic** explanations seem difficult to obtain for the PFN findings. A route to them would be to map out which nodes of the LLM's network are influential in the PFN effect. The resulting map would probably leave one none the wiser; but possibly an atlas, with more explanatory power, could be charted based on a diversity of studies, though it might well become overwhelming before it became enlightening.
Meaning explanations for the influence effects we see in the PFN study are possible and maybe more satisfying. It could go something like this: part of the meaning of 'politician' is someone whose actions often have negative consequences; thus a predicted negative event, when said to be caused by politicians becomes more likely to occur, hence more concerning. Such a meaning-based explanation could be tested by measuring the views and opinions of an LLM, like political scientists do with human participants.
Parrot explanations for PFN effects are viable too. They would require traces of the positive influence effect of anti-elitist framing, and the negative influence effect of anti-immigrant framing, to exist in the training corpus of the LLM. Manipulating the data present in that corpus and showing that altered the influence of the framing could confirm such an explanation.
For PFN we also need to explain why some human effects were _not_ reproduced by the LLM. In particular, why did conditioning the LLM-simulated participants to answer as if having a particular level of relative deprivation _not_ modulate the mobilizing effectiveness of populist framing? The problem was not that the relative deprivation scores provided as conditioning were ignored by the LLM - Table 7, rows without hypotheses labels, disproves that possibility - just that there was no interaction of those scores with the type of framing used.
The failure could be down to shortcomings of the simulation of the human experiment, as described earlier: the failure to simulate any correlation between relative deprivation scores and demographic traits. Perhaps relative deprivation has its effect via the intermediary of a correlated political affiliation trait for example.
A parrot-type explanation for the failure to reproduce the effect would be to show that the training corpus carried no trace of this particular relationship (though how this analysis could be done other than by training a transformer-based network is not clear). Absence of a trace could be due to the geographical-tilt of the training corpus being mismatched to the EU citizens who were the human participants, or a more general problem in written language failing to reflect all facts about all parts of life.
A _mechanistic_ explanation might be that the network architecture is not sufficiently expressive to reproduce this effect. Consider that the task requires taking account of the _interaction_ of many parts of the prompt - the relative deprivation probe statements, the ratings given to those statements as conditioning, the framing of the news, the news content itself, and the statement probing the participant's mobilization - when generating the rating of the probe.
The failure to model also invites types of explanation different from those for successful modelling. The failure could be due to LLMs being an impoverished model of human psychology, possibly doing one element well while entirely missing other elements such as emotional response which might be relevant to this particular influence effect.
## 7 Concluding Remarks
Our results support our hypothesis that an LLM can model influence in human participants, not perfectly, but well enough to be useful. This is remarkable given that such modelling is far from the task for which the LLM was constructed, and the LLM was not adapted in any way - so improved models are a reasonable possibility. We consider the implications of such models being widely available, and simple to work with, as they already are.
### Psychology
Our results add to the broader agenda of establishing the quality and limits of LLMs as a model of human psychology more generally. They add a positive result with caveats and suggest that dynamic aspects of psychology are modellable as well as static.
The advantages of an LLM model, compared to human participants, include speed of experimental set-up and execution, reduced cost, increased scale, and bypass of some ethical considerations. All LLM experiments in this paper were developed, implemented and run in a couple of months for a cost of a few hundred dollars; LLM prompt engineering will become more familiar, and costs will likely come down. In comparison the human ITE experiments cost a few thousand dollars, while the human PFN experiments probably cost a few tens of thousands.
Beyond the convenience of LLM models, they also offer opportunities for further investigations that are comparatively difficult with human participants. LLMs can be opened up and their architecture, weights and internal activity studied, providing analogues of neuroscientific studies [40]. We can also countenance investigating how the texts which the LLM has been trained on lead to its responses. This could be a Golden Age for parts of Psychological Science, except of course LLMs are not humans [41], and LLM models will have limits of applicability and only be an approximation within those limits: but nor are mice men [42], yet even so they have advanced biomedicine hugely [43].
### Applications
It could well be that LLM-based investigation of influence leads to better understanding of existing methods, but development of only incrementally better new methods. We consider the risks and benefits should this not be the case, and substantially better methods are discovered. The applications would range from the clearly malign to the arguably beneficial.
At the malign end there is manipulation of individuals for fraud, and manipulation of populations for international cold conflict [44]. Consider the fertile ground that improved methods for malign influence will fall on. Generative AI can already produce realistic text [45], speech [46, 47] and images [48]; and narrative [49], video [50] and conversation are imminent [51]. Meanwhile the data available to characterize a target gets ever richer [52]. LLMs, acting as model humans targeted by influence attacks, could be coupled together with generative AI producing candidate attacks. The creation, testing and refinement of attacks could then operate at silicon speed, or even in real-time by persuasive bots.
While the use of LLM models could facilitate such unnerving possibilities, which regulating legislation and treaties would struggle to keep up with, LLM models might also help countermeasures to be developed. _Detection_ may be possible, if not at the level of a single message then at least in aggregated communications; perhaps by use of LLMs as 'weather vanes'. As well as standard methods of _defence_ against influence - fact-checking, education as immunization, etc. - LLM models could offer a compromising temptation to fight fire with fire, which seems unwise.
Moving away from the malign end of the application spectrum, we reach possibilities such as improved advertising and more effective political and corporate messaging. Depending on how effective the improvements are, legislation may be needed to place limits on what is permissible, just as subliminal advertising is prohibited in many countries [53].
Going further we reach applications which are arguably beneficial, such as encouragement of healthy behaviours (e.g. smoking cessation) or de-radicalization programmes. Scientific approaches are already used to optimize these, so it could be argued that deploying science gained from studying LLMs is just more of the same so unproblematic, but we feel that benevolent ends do not justify all possible means - methods could be developed which compromise the autonomy that a society expects its citizens should be allowed.
Looking across this spectrum of possible applications it seems to the authors that the negatives outweigh the positives. So, should influence of LLMs not be investigated further to prevent harms arising? But what then about detecting and defending against others not so constrained? And what about ensuring that future AIs with the capacity for real-world actions are immune to unwanted influence? Further research and discussion is needed.
2301.07513 | A Bayesian Nonparametric Stochastic Block Model for Directed Acyclic
Graphs | Directed acyclic graphs (DAGs) are commonly used in statistics as models,
such as Bayesian networks. In this article, we propose a stochastic block model
for data that are DAGs. Two main features of this model are the incorporation
of the topological ordering of nodes as a parameter, and the use of the
Pitman-Yor process as the prior for the allocation vector. In the resultant
Markov chain Monte Carlo sampler, not only are the topological ordering and the
number of groups inferred, but a model selection step is also included to
select between the two regimes of the Pitman-Yor process. The model and the
sampler are applied to two citation networks. | Clement Lee, Marco Battiston | 2023-01-18T13:32:02Z | http://arxiv.org/abs/2301.07513v1 | # A Bayesian Nonparametric Stochastic Block Model for Directed Acyclic Graphs
###### Abstract
Directed acyclic graphs (DAGs) are commonly used in statistics as _models_, such as Bayesian networks. In this article, we propose a stochastic block model for _data_ that are DAGs. Two main features of this model are the incorporation of the topological ordering of nodes as a parameter, and the use of the Pitman-Yor process as the prior for the allocation vector. In the resultant Markov chain Monte Carlo sampler, not only are the topological ordering and the number of groups inferred, but a model selection step is also included to select between the two regimes of the Pitman-Yor process. The model and the sampler are applied to two citation networks.
_key words_: Markov chain Monte Carlo, Pitman-Yor process, topological ordering
## 1 Introduction
Stochastic block models (SBMs) are a prominent class of statistical models in social network analysis. By modelling a network by a SBM, the nodes of the network are clustered into
different groups, and within each group, nodes display similar connectivity patterns. This phenomenon is termed _stochastic equivalence_ (Holland et al., 1983). Upon fitting a SBM and carrying out statistical inference, not only are the originally unknown group memberships uncovered, but community detection or assortativeness is also often achieved. This means that the nodes within the same group are stochastically equivalent and tightly connected, while connections are mostly sparse between nodes of different groups. SBMs have a long history in the literature, from the early works by Holland et al. (1983) and Wang and Wong (1987), to the formalisation as latent models by Snijders and Nowicki (1997) and Nowicki and Snijders (2001). More recent breakthroughs include the mixed membership SBM by Airoldi et al. (2008), the degree-corrected SBM by Karrer and Newman (2011), the minimum description length (MDL) approach and the nested SBM by Peixoto (2014\(b\), 2017, 2019). Recent comprehensive reviews are provided by Abbe (2018) and Lee and Wilkinson (2019), which focus on theoretical results and modelling approaches, respectively.
SBMs can be applied to different kinds of datasets in bibliometrics, in which meta-analysis of a body of literature is carried out, usually quantitatively. Two types of networks arise naturally within bibliometrics, namely collaboration (or coauthorship) networks, in which the nodes are the _authors_, and citation networks, in which the nodes are the _articles_, to both of which SBMs can be applied. In graph theory, the former can be represented as undirected graphs, the latter as directed graphs. While more interest has been placed on the collaboration networks, such as Newman (2001\(a\),_b_, 2004), Newman and Girvan (2004), and Ji and Jin (2016), there have also been influential analyses on the citation networks, on which we shall focus in this article. One prominent example is Price (1976), who investigated the (in-)degree of articles in citation networks and proposed the idea of "cumulative advantage process", which became the preferential attachment model (Barabasi and Albert, 1999). Other kinds of networks also exist, such as the citation exchange between statistics _journals_ investigated by Varin et al. (2016), with a focus on aiding the comparisons of journal rankings.
One reason that we focus on citation networks is that they are (almost) always directed
_acyclic_ graphs, due to the nature of academic referencing. To illustrate this, assume that article \(A\) cites article \(B\), and such citation is represented by a directed edge from \(A\) to \(B\) in the underlying graph. (In the terminology of Bayesian networks or DAGs in general, perhaps confusingly, \(A\) is a _parent_ of \(B\).) Article \(A\) in reality usually appears later than article \(B\), and therefore \(B\) would not have cited \(A\). The main exceptions are articles that have common authors and/or appear in proximity temporally. As this kind of cyclic referencing takes up a very small proportion of the data in a citation network, it is straightforward to identify the edges that ought to be removed. After such necessary data cleaning, if we start from article \(A\) and go along the direction of the edges, it is not possible to reach \(A\) again. Consequently, the graph representing a citation network is not only directed but also acyclic, hence a DAG.
To the best of our knowledge, the acyclic property has not been fully explored in analyses of citation networks. Furthermore, DAGs have been studied less as _data_ than as _models_, in particular Bayesian networks. Not only can Bayesian networks neatly model the causal relationships, which are the edges of the DAG, between the random variables, which are the nodes of the DAG, but they also bring about efficient Bayesian inference algorithms by exploiting the conditional independence between the random variables. As Bayesian networks are not the focus here, we refer the reader to, for example, Scutari and Denis (2015).
When analysing a network that is a DAG, we can incorporate its _topological ordering_, which is an ordering of the nodes such that, for any directed edge from node \(A\) to node \(B\), \(A\) comes before \(B\) in the ordering. A graph is a DAG if and only if there exists a topological ordering for the nodes of the graph. Also, if the rows and columns of the adjacency matrix are rearranged according to a topological ordering, it will be upper triangular. This is illustrated in Figure 1 for two citation networks, with \(n=135\) and \(n=2248\), respectively, where \(n\) is the number of nodes (i.e. papers). Their corresponding network diagrams are in Figure 2.
For a given DAG, the topological ordering is not necessarily unique, meaning that there can be multiple orderings that are "topological", as long as each of them satisfies the definition above. The arrangement shown in Figure 1 is according to just one such topological ordering, obtained
by applying the sorting algorithm by Kahn (1962) to an arbitrary initial (non-topological) ordering. Therefore, in our proposed model, the topological ordering will be included as a parameter, and inferred jointly with the group memberships and other parameters. In the subsequent Markov chain Monte Carlo (MCMC) sampler, the updating of the topological order is performed by the algorithm.
Figure 1: The adjacency matrix, rearranged according to an arbitrary topological ordering, of the citation network analysed by Lee and Wilkinson (2018) (left) and Ji and Jin (2016) (right). The red dashed line is the major diagonal.
Figure 2: The network diagrams of the citation network analysed by Lee and Wilkinson (2018) (left) and Ji and Jin (2016) (right).
ordering will be similar to that for Mallow's model (Vitelli et al., 2018).
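For concreteness, the following is a minimal sketch of Kahn's algorithm applied to a dense adjacency matrix, together with a check that permuting rows and columns by the resulting ordering yields an upper triangular matrix. Representing the graph as a 0/1 numpy array is an assumption made for illustration.

```python
import numpy as np
from collections import deque

def kahn_topological_order(Y):
    """One topological ordering of a DAG via Kahn's (1962) algorithm.
    Y is a dense 0/1 adjacency matrix with Y[p, q] = 1 for an edge p -> q."""
    n = Y.shape[0]
    indegree = Y.sum(axis=0).astype(int)          # incoming edges per node
    queue = deque(np.flatnonzero(indegree == 0))  # nodes with no remaining parents
    order = []
    while queue:
        p = queue.popleft()
        order.append(p)
        for q in np.flatnonzero(Y[p]):
            indegree[q] -= 1
            if indegree[q] == 0:
                queue.append(q)
    if len(order) != n:
        raise ValueError("graph contains a cycle, so no topological ordering exists")
    return np.array(order)

# permuting rows and columns by the ordering makes the adjacency matrix upper triangular
Y = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
order = kahn_topological_order(Y)
assert np.allclose(np.tril(Y[np.ix_(order, order)], k=-1), 0)
```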
One common issue with fitting SBMs and clustering in general is how the number of groups, denoted by \(K_{n}\), is dealt with, in conjunction with the inference approach. One main approach is to fit the model and compute a criterion over a range of fixed values of \(K_{n}\), followed by selecting the value with the optimal criterion. This approach is more popular in works that employ variational Bayes as the inference approach. We use another main approach, in which \(K_{n}\) is modelled and inferred, with the help of Bayesian nonparametric methods. Specifically, a Pitman-Yor (PY) process (Pitman and Yor, 1997) is used as the prior for the allocation vector. Not only does the use of the PY process incorporate \(K_{n}\) directly, it also leads naturally to an MCMC sampler that is similar to that for Dirichlet mixture models (Neal, 2000). While other works such as Geng et al. (2019) have used a similar Bayesian nonparametric formulation, they usually stayed within one of the two regimes of the PY process. These two regimes imply different asymptotic behaviour of \(K_{n}\) as \(n\) grows, and the model might be misspecified if only one regime was assumed before fitting to the data. To circumvent this, the choice of regime becomes part of the model, with a model selection step based on Gibbs variable selection (Carlin and Chib, 1995) embedded in the MCMC sampler.
The rest of this article is organised as follows: Section 2 provides some background on SBMs and the PY process, while the SBM for DAGs is proposed in Section 3. The MCMC sampler is outlined in Section 4, and the results of application to the two data sets shown in Figure 1 are presented in Section 5. Section 6 concludes the article.
## 2 Background
In this section, we introduce the terminology and present some basic results of SBM and the PY process, in preparation of the proposed model in Section 3.
### Stochastic block model
Consider a directed network represented as a directed graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\), where \(\mathcal{N}\) is the node set and \(\mathcal{E}\) is the edge list. The size of \(\mathcal{N}\), denoted by \(n:=|\mathcal{N}|\), is the order of the graph \(\mathcal{G}\). The \(n\times n\) adjacency matrix of the graph is denoted by \(\mathbf{Y}=(Y_{pq})_{1\leq p,q\leq n}\). If there is a directed edge from node \(p\) to node \(q\), i.e. \((p,q)\) is in \(\mathcal{E}\), then \(Y_{pq}=1\), otherwise \(Y_{pq}=0\). We also define \(\mathbf{Z}_{n}=(Z_{1},\ldots,Z_{n})\) to be the allocation vector of length \(n\), where \(Z_{p}\) is a label associated with node \(p\). Two nodes, \(p\) and \(q\), belong to the same group if and only if \(Z_{p}=Z_{q}\). Essentially, \(\mathbf{Z}_{n}\) represents the group memberships of the nodes. We assume there are \(K_{n}>1\) unique labels displayed in \(\mathbf{Z}_{n}\), denoted \((Z_{1}^{*},\ldots,Z_{K_{n}}^{*})\), hence \(K_{n}\) groups of nodes. For notational ease, from now on we will often write \(\mathbb{I}(Z_{p}=i)\) in place of \(\mathbb{I}(Z_{p}=Z_{i}^{*})\) and, similarly, given a matrix \(C\) and the event \(Z_{p}=Z_{k}^{*}\), \(C_{Z_{p}j}\) will denote element \(C_{kj}\).
In the simplest version of SBM, it is assumed that \(Y_{pq}\) arises from a Bernoulli distribution, with the probability of an edge from node \(p\) to node \(q\) being independent of that of any other dyad, _conditional on their group memberships_, \(Z_{p}\) and \(Z_{q}\). Mathematically, \(Y_{pq}|\mathbf{Z}_{n},\mathbf{C}\ \sim\ \text{Bernoulli}(C_{Z_{p}Z_{q}})\), where \(\mathbf{C}=(C_{ij})\in[0,1]^{K_{n}\times K_{n}}\) is the block matrix. The use of the Bernoulli distribution is due to \(Y_{pq}\) being usually a binary variable, as it is in our case of citation networks. However, the Poisson distribution has been used more commonly recently, meaning that \(Y_{pq}|\mathbf{Z}_{n},\mathbf{C}\ \sim\ \text{Poisson}(C_{Z_{p}Z_{q}})\), where \(\mathbf{C}=(C_{ij})\in\mathbb{R}_{+}^{K_{n}\times K_{n}}\). This is due to a few reasons, namely the asymptotic equivalence between the edge probability and the expected number of edges for large sparse graphs (Karrer and Newman, 2011), the natural extension to valued graphs where \(\mathbf{Y}\) can take non-negative integer values, and also the computational simplicity that will be illustrated in our model.
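As an illustration of the Poisson parametrisation, the sketch below simulates a directed adjacency matrix from given group labels and a block matrix; the group sizes and rate values are invented for the example.

```python
import numpy as np

def simulate_sbm(Z, C, rng=None):
    """Simulate a directed adjacency matrix from the Poisson SBM:
    Y[p, q] | Z, C ~ Poisson(C[Z[p], Z[q]]), independently over dyads.
    Z is a length-n vector of group labels 0,...,K-1 and C a K x K block matrix."""
    rng = np.random.default_rng(rng)
    rates = C[np.ix_(Z, Z)]            # n x n matrix of dyad-specific rates
    Y = rng.poisson(rates)
    np.fill_diagonal(Y, 0)             # no self-loops
    return Y

Z = np.repeat([0, 1, 2], 20)                           # 60 nodes in 3 groups
C = np.full((3, 3), 0.02) + np.diag([0.3, 0.3, 0.3])   # assortative block matrix
Y = simulate_sbm(Z, C, rng=0)
```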
The assumption so far that the probability distribution of the edge depends only on the memberships of the two nodes concerned is based on the concept of _stochastic equivalence_(Holland et al., 1983, Nowicki and Snijders, 2001). The consequence is that the nodes in each group will have the same degree distribution and expected degree, which is not quite a realistic assumption for real-life data. One major development in the literature in
the past decade is the _degree-corrected SBM_(Karrer and Newman, 2011) that takes into account the degree heterogeneity of the nodes within the same group. The model equation is modified to be \(Y_{pq}|\mathbf{Z}_{n},\mathbf{C},\boldsymbol{\xi}\ \sim\ \text{Poisson}(\xi_{p}\xi_{q}C_{Z_{p}Z_{q}})\), where \(\xi_{p}>0\) is the \(p\)-th element in \(\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{n})\) and the degree correction factor for node \(p\). Such node-specific factors also allow the introduction of covariate information into the model. In addition, the degree-corrected version is better at achieving clustering purposes, while the original version usually captures a different underlying structure of the network. Model comparisons of the two versions have been made by, for example, Yan et al. (2014), Yan (2016), Wang and Bickel (2017), and Hu et al. (2020).
Inference for \(\mathbf{Z}_{n}\), \(\mathbf{C}\), \(\boldsymbol{\xi}\) and other model parameters is usually carried out in a Bayesian way. There are two main types of Bayesian inference algorithms for SBMs, namely variational methods and MCMC methods. In the former, the joint posterior density is approximated by a variational distribution, denoted by \(Q(\mathbf{Z}_{n},\mathbf{C},\boldsymbol{\xi})\). After specifying a parametric form of \(Q\), its parameters are chosen by finding the values that minimize the Kullback-Leibler distance between \(Q\) and the joint posterior. In MCMC methods, a numerical approximation of the joint posterior density is provided by drawing samples from it through an iterative algorithm. In the algorithm, typically, each latent variable or parameter takes its turn to have a value drawn from its conditional posterior distribution given all other latent variables and parameters. Examples of the MCMC approach for SBMs include Nowicki and Snijders (2001), Tallberg (2005), McDaid et al. (2013), Peixoto (2014_a_), Li et al. (2016), Newman and Reinert (2016), Gerlach et al. (2018), Lu and Szymanski (2019), and Passino and Heard (2020). We shall use MCMC for the proposed SBM for DAGs for two reasons, one being that it is difficult to assign a variational distribution \(Q\) when the topological ordering is incorporated, and the other being that MCMC algorithms have theoretical guarantees of approximating the joint posterior when the number of iterations in the algorithm is large.
### Pitman-Yor process
The two parameter Poisson-Dirichlet process, also known as the _Pitman-Yor (PY) process_, was introduced in Pitman and Yor (1997) as a generalization of the Dirichlet process by Ferguson (1973). Like the Dirichlet process, the PY process is also a probability measure on the space of distribution functions or probability measures on a given sample space, which assigns probability one to the set of discrete distributions. It is parametrized by three hyperparameters \((\alpha,\theta,P_{0})\), where \(P_{0}\), called base distribution, is a distribution on the sample space, and \(\alpha\) and \(\theta\) are two scalars satisfying either: 1) \(0\leq\alpha<1\) and \(\theta\geq-\alpha\); 2) \(\alpha<0\) and \(\theta=k|\alpha|\) for \(k\in\mathbb{N}\). The Dirichlet process corresponds to the special case \(\alpha=0\).
The PY process admits different constructions and representations, which can be useful to develop computational algorithms or prove theoretical properties. Among them, the _Stick Breaking (SB)_ representation is probably the most intuitive description of a sample from the PY process. Specifically, if \(P\) is a random probability measure distributed according to \(\text{PY}(\alpha,\theta,P_{0})\), then \(P\stackrel{{ d}}{{=}}\sum_{i\geq 1}p_{i}\delta_{Z_{i}^{**}}\), where \((Z_{i}^{**})_{i\geq 1}\) are independent and identically distributed (i.i.d.) random variables with distribution \(P_{0}\), the sequence \((p_{i})_{i\geq 1}\) are constructed through a stick breaking process \(p_{i}=V_{i}\prod_{1\leq k\leq i-1}(1-V_{k})\), with \(V_{i}\sim\text{beta}(1-\alpha,\theta+i\alpha)\) for all \(i\geq 1\), and \(\delta_{Z}\) denotes the Dirac measure at \(Z\).
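The stick-breaking representation translates directly into code; the sketch below draws a truncated sequence of PY weights, with the truncation level and hyperparameter values chosen only for illustration.

```python
import numpy as np

def py_stick_breaking_weights(alpha, theta, n_atoms, rng):
    """Truncated stick-breaking weights of a PY(alpha, theta) process:
    p_i = V_i * prod_{k < i} (1 - V_k), with V_i ~ Beta(1 - alpha, theta + i * alpha)."""
    i = np.arange(1, n_atoms + 1)
    V = rng.beta(1.0 - alpha, theta + i * alpha)
    log_remaining = np.concatenate(([0.0], np.cumsum(np.log1p(-V))[:-1]))
    return V * np.exp(log_remaining)

rng = np.random.default_rng(1)
p = py_stick_breaking_weights(alpha=0.5, theta=1.0, n_atoms=1000, rng=rng)
print(p[:5], p.sum())    # the sum approaches 1 as the truncation level grows
```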
A second property is the so-called Chinese Restaurant Representation. Let \((Z_{p})_{p\in\mathbb{N}}\) be an exchangeable sequence driven by a Pitman-Yor process, i.e. let \(Z_{p}|P\stackrel{{ iid}}{{\sim}}P\) for all \(p\in\mathbb{N}\) and \(P\sim\text{PY}(\alpha,\theta,P_{0})\). Then, by integrating out the unknown \(P\), the PY process admits the following _Chinese Restaurant Process (CRP)_ representation: \(Z_{1}|P_{0}\sim P_{0}\), and for all \(n\in\mathbb{N}\),
\[\mathbb{P}(Z_{n+1}\in\cdot|\mathbf{Z}_{n},\alpha,\theta,P_{0})=\sum_{j=1}^{K_ {n}}\frac{n_{j}-\alpha}{\theta+n}\delta_{Z_{j}^{*}}(\cdot)+\frac{\theta+\alpha K _{n}}{\theta+n}P_{0}(\cdot) \tag{1}\]
where \(\mathbf{Z}_{n}=(Z_{1},\ldots,Z_{n})\) displays \(K_{n}\) distinct values \((Z_{1}^{*},\ldots,Z_{K_{n}}^{*})\) with frequencies \((n_{1},\ldots,n_{K_{n}})\), i.e. \(n_{j}=\sum_{p=1}^{n}\mathbb{I}(Z_{p}=Z_{j}^{*})\) for \(j=1,\ldots,K_{n}\), where \(\mathbb{I}(A)\) is the indicator function for event \(A\).
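For intuition, the predictive rule (1) can be used to simulate the partition induced by a PY-distributed sample sequentially, as in the sketch below; only the induced partition is tracked (draws of the labels from \(P_{0}\) are omitted) and the hyperparameter values are illustrative.

```python
import numpy as np

def sample_py_partition(n, alpha, theta, rng):
    """Sequentially assign n items to blocks using the Chinese restaurant
    representation of a PY(alpha, theta) process (Equation 1)."""
    Z = np.zeros(n, dtype=int)
    counts = [1]                                   # the first item opens block 0
    for p in range(1, n):
        K = len(counts)
        weights = np.array([c - alpha for c in counts] + [theta + alpha * K])
        probs = weights / weights.sum()            # the normalising constant is theta + p
        j = rng.choice(K + 1, p=probs)
        if j == K:
            counts.append(1)                       # open a new block
        else:
            counts[j] += 1
        Z[p] = j
    return Z, counts

rng = np.random.default_rng(2)
Z, counts = sample_py_partition(200, alpha=0.3, theta=2.0, rng=rng)
print(len(counts), counts)
```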
Using the CRP representation, it is possible to compute the _Marginal Likelihood_ (when
\(P\) is marginalized out) of a sample \(\mathbf{Z}_{n}\) driven from the PY process,
\[\mathbb{P}(\mathbf{Z}_{n}=\mathbf{z}_{n}|\alpha,\theta,P_{0}) =\mathbb{P}(Z_{1}=z_{1}|P_{0})\prod_{p=2}^{n}\mathbb{P}(Z_{p}=z_{p }|\mathbf{Z}_{p-1},\alpha,\theta,P_{0})\] \[=\frac{\prod_{j=1}^{K_{n}-1}(\theta+j\alpha)}{(\theta+1)_{n-1 \uparrow}}\prod_{j=1}^{K_{n}}(1-\alpha)_{n_{j}-1\uparrow}\prod_{j=1}^{K_{n}}P _{0}(dZ_{j}^{*})\] \[=\pi(n_{1},\ldots,n_{K_{n}}|\theta,\alpha)\times\prod_{j=1}^{K_{n }}P_{0}(dZ_{j}^{*}) \tag{2}\]
where \((x)_{n\uparrow}=x(x+1)\ldots(x+n-1)\) denotes the rising factorial, with the convention that \((x)_{0\uparrow}=1\). The last line factorizes the marginal likelihood into two parts: the prior of the induced partition of \(\mathbf{Z}_{n}\), which is a symmetric function of the block frequencies, \((n_{1},\ldots,n_{K_{n}})\), called Exchangeable Partition Probability Function, (Pitman, 1996); the prior of the labels \(Z_{j}^{*}\) of each of the \(K_{n}\) blocks, which is simply the product of the base measure \(P_{0}\).
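The partition factor of the marginal likelihood (2) is easy to evaluate on the log scale via rising factorials, as in the sketch below (SciPy's log-gamma function is used and the block sizes are illustrative).

```python
import numpy as np
from scipy.special import gammaln

def log_rising_factorial(x, m):
    """log of the rising factorial (x)_{m} = Gamma(x + m) / Gamma(x), with (x)_{0} = 1."""
    return gammaln(x + m) - gammaln(x)

def log_py_eppf(block_sizes, alpha, theta):
    """Log exchangeable partition probability function pi(n_1,...,n_K | theta, alpha)
    appearing in Equation 2, for the regime 0 <= alpha < 1."""
    block_sizes = np.asarray(block_sizes)
    n, K = block_sizes.sum(), len(block_sizes)
    log_num = np.sum(np.log(theta + alpha * np.arange(1, K)))
    log_den = log_rising_factorial(theta + 1.0, n - 1)
    log_blocks = np.sum(log_rising_factorial(1.0 - alpha, block_sizes - 1))
    return log_num - log_den + log_blocks

print(log_py_eppf([5, 3, 2], alpha=0.3, theta=2.0))
```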
The PY process is defined for all values of the hyperparameters \((\alpha,\theta)\) such that either 1) \(0\leq\alpha<1\) and \(\theta\geq-\alpha\); or 2) \(\alpha<0\) and \(\theta=k|\alpha|\) for \(k\in\mathbb{N}\). However, the properties of the prior are very different in these two regimes. On the one hand, when \(0\leq\alpha<1\), the number of atoms of \(P\sim\text{PY}(\alpha,\theta,P_{0})\) is infinite, and the number of observed blocks \(K_{n}\) in a sample \(\mathbf{Z}_{n}\) of size \(n\) will grow unboundedly. On the other hand, when \(\alpha<0\) and \(\theta=k|\alpha|\) for \(k\in\mathbb{N}\), the number of atoms of \(P\) is finite and equal to \(k\). Under this regime, \(P\stackrel{{ d}}{{=}}\sum_{i=1}^{k}p_{i}\delta_{Z_{i}^{**}}\) with \((p_{1},\ldots,p_{k})\sim\text{Dir}(k;|\alpha|,\ldots,|\alpha|)\), where Dir denotes the finite Dirichlet distribution. Within this regime, the number of blocks \(K_{n}\) displayed by a sample \(\mathbf{Z}_{n}\) is bounded above by \(k\), and converges a.s. to \(k\) as \(n\rightarrow\infty\).
Geng et al. (2019) considered an SBM with the \(\alpha<0\) regime PY process as a prior for the latent blocks assignment, corresponding to the finite symmetric Dirichlet prior for the block probabilities \((p_{1},\ldots,p_{k})\), and assigned a hyperprior to \(k\) such that this latter hyperparameter can be marginalized out. Here, we allow the hyperparameters of the PY process prior to range over all the parameter space, hence allowing both finite and infinite regimes. To learn the right regime of these hyperparameters from the data, we reformulate their inference as a
Bayesian model selection problem in Section 4.2. In the following sections, when considering the finite regime \(\alpha<0\), we apply the reparametrization \((\alpha,\theta)\rightarrow(\gamma,k)\), with \(\gamma:=|\alpha|>0\) and \(k\in\mathbb{N}\), and assign a prior to \((\gamma,k)\in\mathbb{R}_{+}\times\mathbb{N}\).
## 3 Model
In this section, we introduce the SBM for DAGs, and derive its likelihood. Upon specifying the priors of the allocation vector \(\mathbf{Z}_{n}\) and the model parameters, we will also derive their joint posterior, up to a normalizing constant.
The notation used is the same as in Section 2. Here, however, we only consider a graph \(\mathcal{G}\) that is a DAG. As the graph is directed, \(Y_{pq}\) is not necessarily the same as \(Y_{qp}\). However, due to the acyclic nature of \(\mathcal{G}\), the combinations of the possible values of \((Y_{pq},Y_{qp})\) are restricted to be \((0,0)\), \((0,1)\) or \((1,0)\). Also, \(Y_{pp}=0\) for all \(p=1,2,\ldots,n\), or equivalently the major diagonal contains all zeros.
To utilise a unique feature of DAGs, as discussed in Section 1, we define \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{n})\) as the \(n\)-vector random variable that represents the ordering of \(\mathcal{G}\), with the collection of all permutations of \(\{1,2,\ldots,n\}\) as the sample space. A value of \(\boldsymbol{\sigma}\) is deemed topological with respect to \(\mathcal{G}\) if it satisfies the definition of topological ordering, i.e. if node \(p\) topologically precedes node \(q\), then there cannot be edges from node \(q\) to node \(p\). We define several quantities implied from \(\boldsymbol{\sigma}\). First, \(\boldsymbol{\phi}=(\phi_{1},\ldots,\phi_{n})\) is the "inverse" of \(\boldsymbol{\sigma}\), which means that if node \(p\) is the \(r\)-th node in the topological ordering, we have \(\boldsymbol{\sigma}_{r}=p\) and \(\phi_{p}=r\). Essentially, \(\boldsymbol{\phi}\) contains the position of each node in \(\boldsymbol{\sigma}\). For convenience, without confusion, we write \(\boldsymbol{\sigma}^{-1}=\boldsymbol{\phi}\) and \(\boldsymbol{\phi}^{-1}=\boldsymbol{\sigma}\). Second, \(\mathbf{Z}_{n}^{\boldsymbol{\sigma}}=(Z_{1}^{\boldsymbol{\sigma}},Z_{2}^{ \boldsymbol{\sigma}},\ldots,Z_{n}^{\boldsymbol{\sigma}})\) is the _reordered_ allocation vector, where \(Z_{p}^{\boldsymbol{\sigma}}=Z_{\sigma_{p}}\), where \(Z_{\sigma_{p}}\) comes from the allocation vector \(\mathbf{Z}_{n}\). In the same way, if we define \(\boldsymbol{\xi}=(\xi_{1},\ldots,\xi_{n})\) as the vector of degree correction parameters, \(\boldsymbol{\xi}^{\boldsymbol{\sigma}}=(\xi_{1}^{\boldsymbol{\sigma}},\xi_{2 }^{\boldsymbol{\sigma}},\ldots,\xi_{n}^{\boldsymbol{\sigma}})\) is its reordered version. Lastly, \(\mathbf{Y}^{\boldsymbol{\sigma}}\) is the adjacency matrix reordered by \(\boldsymbol{\sigma}\) for the columns and rows of \(\mathbf{Y}\) simultaneously, such that \(Y_{pq}^{\boldsymbol{\sigma}}=Y_{\sigma_{p}\sigma_{q}}\).
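As an illustration of this notation, the sketch below computes one topological ordering \(\boldsymbol{\sigma}\) of a small DAG via Kahn's algorithm, its inverse \(\boldsymbol{\phi}\) (using 0-indexed positions), and the reordered adjacency matrix \(\mathbf{Y}^{\boldsymbol{\sigma}}\); the three-node graph is illustrative and the function assumes its input is acyclic.

```python
import numpy as np

def topological_sigma(Y):
    """Return a topological ordering sigma of the DAG with adjacency matrix Y
    (Kahn's algorithm) and phi = sigma^{-1}, the vector of node positions."""
    n = Y.shape[0]
    in_deg = Y.sum(axis=0).astype(int)             # column sums = incoming edges
    available = [p for p in range(n) if in_deg[p] == 0]
    sigma = []
    while available:
        p = available.pop()
        sigma.append(p)
        for q in np.flatnonzero(Y[p]):             # out-neighbours of p
            in_deg[q] -= 1
            if in_deg[q] == 0:
                available.append(q)
    sigma = np.array(sigma)
    phi = np.empty(n, dtype=int)
    phi[sigma] = np.arange(n)
    return sigma, phi

Y = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])                          # edges 1->0, 2->0, 2->1
sigma, phi = topological_sigma(Y)
Y_sigma = Y[np.ix_(sigma, sigma)]                  # Y^sigma_{pq} = Y_{sigma_p sigma_q}
print(sigma, phi)
print(Y_sigma)                                     # upper triangular
```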
The central component of the SBM is the distributional assumption about each dyad of \(\mathcal{G}\), essentially each element of \(\mathbf{Y}\) or equivalently \(\mathbf{Y}^{\boldsymbol{\sigma}}\). As \(\mathbf{Y}^{\boldsymbol{\sigma}}\) is upper triangular for a \(\boldsymbol{\sigma}\) that is topological, we require that, for all dyads \((p,q)\) where \(1\leq p<q\leq n\),
\[Y^{\boldsymbol{\sigma}}_{qp}=0,\qquad Y^{\boldsymbol{\sigma}}_{pq}|\mathbf{C}, \mathbf{Z}_{n},\boldsymbol{\xi}\sim\text{Pois}\left(\xi^{\boldsymbol{\sigma}}_ {p}\xi^{\boldsymbol{\sigma}}_{q}C_{Z^{\boldsymbol{\sigma}}_{p}Z^{\boldsymbol{ \sigma}}_{q}}\right), \tag{3}\]
where \(\mathbf{C}=(C_{ij})\in\mathbb{R}^{K_{n}\times K_{n}}_{+}\) is the block matrix.
As \(\mathbf{Z}_{n}\) is unknown prior to fitting the model, it will be treated as a vector of latent variables and assigned a prior, the inference of which is our interest. Specifically, we assume that \(\mathbf{Z}_{n}\) are the first of \(n\) elements of an exchangeable sequence driven by a PY process prior. The parametrisation of this prior will be detailed in Section 3.2. Given this prior choice, the vector \(\mathbf{Z}_{n}\) will display \(K_{n}\) distinct values \((Z^{*}_{1},\ldots,Z^{*}_{K_{n}})\), which appear in \(\mathbf{Z}_{n}\) with frequencies \(\mathbf{N}=(N_{1},\ldots,N_{K_{n}})\). The \(i\)-th element \(N_{i}=\sum_{p=1}^{n}\mathbb{I}(Z_{p}=Z^{*}_{i})\) is the number of nodes in group \(i\), displaying label \(Z^{*}_{i}\). As previously mentioned, we write \(\mathbb{I}(Z_{p}=i)\), in place of \(\mathbb{I}(Z_{p}=Z^{*}_{i})\). Also, we derive the \(K_{n}\times K_{n}\) edge matrix between groups from \(\mathbf{Z}_{n}\) and \(\mathbf{Y}\), denoted by \(\mathbf{E}\), where \(E_{ij}=\sum_{p=1}^{n}\sum_{q=1}^{n}Y_{pq}\mathbb{I}(Z_{p}=i,Z_{q}=j)\).
Next, we define a \(K_{n}\times K_{n}\) matrix \(\mathbf{M}\), where \(M_{ij}\) is the degree-correction-weighted count of dyads \((p,q)\) such that node \(p\) belongs to group \(i\), node \(q\) belongs to group \(j\), and node \(p\) is topologically in front of node \(q\). Mathematically,
\[M_{ij}=\sum_{p=1}^{n-1}\sum_{q=p+1}^{n}\xi^{\boldsymbol{\sigma}}_{p}\xi^{ \boldsymbol{\sigma}}_{q}\mathbb{I}\left(Z_{\sigma_{p}}=i,Z_{\sigma_{q}}=j \right)=\sum_{p=1}^{n-1}\sum_{q=p+1}^{n}\xi^{\boldsymbol{\sigma}}_{p}\xi^{ \boldsymbol{\sigma}}_{q}\mathbb{I}\left(Z^{\boldsymbol{\sigma}}_{p}=i,Z^{ \boldsymbol{\sigma}}_{q}=j\right).\]
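Both count matrices can be computed with a few matrix products, as in the following sketch; the toy inputs are illustrative and the group labels are taken to be \(0,\ldots,K_{n}-1\).

```python
import numpy as np

def block_count_matrices(Y, Z, sigma, xi, K):
    """Edge-count matrix E and weighted dyad-count matrix M of Section 3."""
    # E_ij = number of edges from group i to group j
    Zi = np.eye(K)[Z]                              # n x K one-hot membership matrix
    E = Zi.T @ Y @ Zi
    # M_ij = sum over ordered positions p < q of xi^sigma_p xi^sigma_q I(Z^sigma_p=i, Z^sigma_q=j)
    Z_s, xi_s = Z[sigma], xi[sigma]
    W = np.triu(np.outer(xi_s, xi_s), k=1)         # weights for dyads with p before q
    Zi_s = np.eye(K)[Z_s]
    M = Zi_s.T @ W @ Zi_s
    return E, M

Y = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0]])
Z = np.array([0, 1, 1]); sigma = np.array([2, 1, 0]); xi = np.ones(3)
E, M = block_count_matrices(Y, Z, sigma, xi, K=2)
print(E)
print(M)
```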
### Likelihood
With all required quantities defined, we will derive the likelihood with observed \(\mathbf{Y}\) given \(\mathbf{Z}_{n}\) and \(\boldsymbol{\sigma}\). We first check that \(\boldsymbol{\sigma}\) is topological, or equivalently \(\mathbf{Y}^{\boldsymbol{\sigma}}\) is upper triangular i.e. \(Y^{\boldsymbol{\sigma}}_{qp}=0\) for all dyads \((p,q)\) where \(1\leq p<q\leq n\), otherwise the likelihood is 0. Once \(\boldsymbol{\sigma}\) is
checked to be topological, using Equation 3, the observed data likelihood is
\[\mathbb{P}(\mathbf{Y}|\mathbf{C},\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi})=\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.})\times\mathbb{P}(\mathbf{Y}^{\boldsymbol{\sigma}}|\mathbf{C},\mathbf{Z}_{n}, \boldsymbol{\xi})\] \[=\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.}) \prod_{p=1}^{n-1}\prod_{q=p+1}^{n}\left(\frac{1}{Y_{pq}^{\boldsymbol{\sigma}}!}\exp\left(-\xi_{p}^{\boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma}}C_{Z_{p }^{\boldsymbol{\sigma}}Z_{q}^{\boldsymbol{\sigma}}}\right)\left(\xi_{p}^{ \boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma}}C_{Z_{p}^{\boldsymbol{\sigma} }Z_{q}^{\boldsymbol{\sigma}}}\right)^{Y_{pq}^{\boldsymbol{\sigma}}}\right)\] \[=\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.}) \left(\prod_{p=1}^{n-1}\prod_{q=p+1}^{n}\frac{1}{Y_{pq}^{\boldsymbol{\sigma}}!}\right)\left(\prod_{p=1}^{n}\prod_{q=1}^{n}\left(\xi_{p}^{\boldsymbol{\sigma }}\xi_{q}^{\boldsymbol{\sigma}}\right)^{Y_{pq}^{\boldsymbol{\sigma}}}\right)\] \[\quad\times\prod_{i=1}^{K_{n}}\prod_{j=1}^{K_{n}}\exp\left(-C_{ij} \sum_{p=1}^{n-1}\sum_{q=p+1}^{n}\xi_{p}^{\boldsymbol{\sigma}}\,\xi_{q}^{ \boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i,Z_{q}^{ \boldsymbol{\sigma}}=j)\right)C_{ij}^{\sum_{p=1}^{n-1}\sum_{q=p+1}^{n}Y_{pq}^{ \boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i,Z_{q}^{ \boldsymbol{\sigma}}=j)}\] \[=\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.}) \times\bar{Y}\times\prod_{p=1}^{n}\prod_{q=1}^{n}\left(\xi_{p}\xi_{q}\right)^{Y _{pq}}\times\prod_{i=1}^{K_{n}}\prod_{j=1}^{K_{n}}e^{-C_{ij}M_{ij}}C_{ij}^{E_{ ij}}, \tag{4}\]
where \(\bar{Y}=\prod_{p=1}^{n-1}\!\!\!\prod_{q=p+1}^{n}\left(Y_{pq}^{\boldsymbol{ \sigma}}!\right)^{-1}\), \(M_{ij}=\sum_{p=1}^{n-1}\sum_{q=p+1}^{n}\xi_{p}^{\boldsymbol{\sigma}}\,\xi_{q} ^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i,Z_{q}^{ \boldsymbol{\sigma}}=j)\), and \(E_{ij}=\sum_{p=1}^{n-1}\sum_{q=p+1}^{n}Y_{pq}^{\boldsymbol{\sigma}} \mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i,Z_{q}^{\boldsymbol{\sigma}}=j)\). The following is implicitly assumed along the calculations:
\[\prod_{p=1}^{n-1}\prod_{q=p+1}^{n}\left(\xi_{p}^{\boldsymbol{\sigma}}\xi_{q}^{ \boldsymbol{\sigma}}\right)^{Y_{pq}^{\boldsymbol{\sigma}}}=\prod_{p=1}^{n} \prod_{q=1}^{n}\left(\xi_{p}^{\boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma }}\right)^{Y_{pq}^{\boldsymbol{\sigma}}}=\prod_{p=1}^{n}\prod_{q=1}^{n}\left( \xi_{p}\xi_{q}\right)^{Y_{pq}},\]
of which the first equality is due to the fact that the lower triangular part (including the main diagonal) of \(\mathbf{Y}^{\boldsymbol{\sigma}}\) is 0, while the second equality implies that this component does not depend on \(\boldsymbol{\sigma}\). The likelihood (4) is influenced by \(\mathbf{Z}_{n}\) and \(\boldsymbol{\sigma}\) through the two matrices \(\mathbf{E}\) and \(\mathbf{M}\).
### Priors and posterior density
We shall assign independent priors one by one to \(\mathbf{C}\), \(\mathbf{Z}_{n}\), \(\boldsymbol{\sigma}\) and \(\boldsymbol{\xi}\), in order to carry out inference within the Bayesian framework. In the subsequent calculations, some additional parameters of the priors used will be included in the notation.
For \(\mathbf{C}\), we assume each \(C_{ij}\) is _a priori_ independent and identically distributed according
to the Gamma\((a,b)\) distribution, where \(a\) and \(b\) are the positive shape and rate parameters, respectively. This enables \(\mathbf{C}\) to be integrated out, to obtain
\[\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\xi,a,b)= \int\mathbb{P}(\mathbf{Y}|\mathbf{C},\mathbf{Z}_{n},\boldsymbol{\sigma}, \boldsymbol{\xi})\mathbb{P}(\mathbf{C}|a,b)d\mathbf{C}\] \[=\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.}) \times\bar{Y}\times\prod_{p=1}^{n}\prod_{q=1}^{n}\left(\xi_{p}\xi_{q}\right)^{Y _{pq}}\times\left(\frac{b^{a}}{\Gamma(a)}\right)^{K_{n}^{2}}\prod_{i=1}^{K_{n} }\prod_{j=1}^{K_{n}}\frac{\Gamma(E_{ij}+a)}{(M_{ij}+b)^{E_{ij}+a}} \tag{5}\]
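On the log scale, the part of (5) that varies with the block structure is a simple function of \(\mathbf{E}\) and \(\mathbf{M}\); the sketch below evaluates it, omitting the indicator of a topological ordering and the remaining factors, which do not depend on \(\mathbf{Z}_{n}\) (SciPy's log-gamma function is used and the inputs are illustrative).

```python
import numpy as np
from scipy.special import gammaln

def collapsed_loglik(E, M, a, b):
    """Log of the E- and M-dependent part of the collapsed likelihood (Equation 5):
    K^2 (a log b - log Gamma(a)) + sum_ij [ log Gamma(E_ij + a) - (E_ij + a) log(M_ij + b) ]."""
    K = E.shape[0]
    return (K * K * (a * np.log(b) - gammaln(a))
            + np.sum(gammaln(E + a) - (E + a) * np.log(M + b)))

E = np.array([[3.0, 1.0], [0.0, 2.0]])
M = np.array([[2.5, 1.0], [1.0, 3.0]])
print(collapsed_loglik(E, M, a=1.0, b=0.01))
```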
Independent and relatively uninformative gamma prior distributions are assigned to the parameters \(a\) and \(b\), as well as the components of \(\boldsymbol{\xi}\).
For \(\boldsymbol{\sigma}\), we assign a uniform prior to all permutations of \(\{1,2,\ldots,n\}\) i.e. \(\pi(\boldsymbol{\sigma})=(n!)^{-1}\). There is no issue with an ordering that is not topological having a positive prior probability, as such an ordering will result in \(\mathbb{I}(\mathbf{Y}^{\boldsymbol{\sigma}}\text{ upper tri.})\) and the likelihood (5) being equal to 0.
For \(\mathbf{Z}_{n}\), we use the Pitman-Yor process prior introduced in Section 2.2, denoted by \(\mathbb{P}(\mathbf{Z}_{n}|\boldsymbol{\eta}_{r})\), where \(\boldsymbol{\eta}_{r}\) is a parameter vector of length 2 and dependent on \(r\in\{0,1\}\), the choice of regime. This in turn requires the specification of the prior of \(\boldsymbol{\eta}_{r}\) under both regimes. Under the infinite regime, \(r=0\) and \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{0}=(\alpha,\theta)\), and we assume that \(\alpha\sim\text{Uniform}[0,1]\) and \(\theta+\alpha\) follows a Gamma distribution. Under the finite regime, \(r=1\) and \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{1}=(\gamma,k)\), and we assume that \(\gamma\) and \(k\) are independent _a priori_, \(\gamma\) follows a Gamma distribution, and \(k\) follows a truncated negative binomial distribution, with parameters \(a_{k}\) and \(b_{k}\), and density
\[\mathbb{P}(k=k^{\prime}|a_{k},b_{k})=(1-b_{k}^{a_{k}})^{-1}\times\frac{\Gamma (k^{\prime}+a_{k})}{\Gamma(a_{k})k^{\prime}!}\ b_{k}^{a_{k}}(1-b_{k})^{k^{ \prime}},\qquad k^{\prime}=1,2,\ldots\]
where \(\Gamma(\cdot)\) is the gamma function. The factor \((1-b_{k}^{a_{k}})^{-1}\) is due to the truncation of 0 from the original support of the negative binomial distribution.
The regime-dependent parameters and their priors are introduced this way because, ultimately, we want to enable model selection of the regime, which in turn requires the prior of \(r\), denoted by \(\mathbb{P}(r)\). The boundary cases \(\mathbb{P}(r=0)=1\) and \(\mathbb{P}(r=1)=1\) represent staying within the infinite and finite regimes, respectively, while the incorporation of the model
selection in the MCMC sampler is described in Section 4.2.
As all the required priors have been specified, the joint posterior of \(\mathbf{Z}_{n}\), \(\boldsymbol{\sigma}\), \(\boldsymbol{\xi}\), \(\boldsymbol{\eta}_{r}\), \(a\) and \(b\) (and \(r\)), up to a proportionality constant, is readily available:
\[\mathbb{P}(\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},\boldsymbol{\eta}_{r},a,b,r|\mathbf{Y})\propto\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},a,b)\mathbb{P}(\mathbf{Z}_{n}|\boldsymbol{\eta}_{r})\mathbb{P}(\boldsymbol{\sigma})\mathbb{P}(\boldsymbol{\xi})\mathbb{P}(\boldsymbol{\eta}_{r}|r)\mathbb{P}(a)\mathbb{P}(b)\mathbb{P}(r). \tag{6}\]
We shall carry out Bayesian inference by sampling from this joint posterior using MCMC.
## 4 Posterior inference
In this Section, we present how to perform inference for the model described in the previous section. Specifically, in Section 4.1, we present the MCMC sampler that targets the joint posterior distribution (6) and describe how to update each of its elements. The block assignment vector \(\mathbf{Z}_{n}\) is updated incrementally, one element at a time, using the collapsed likelihood (5). We also describe an alternative way of updating \(\mathbf{Z}_{n}\) using split-and-merge moves (Jain and Neal, 2004), in which blocks can be merged or split, hence updating jointly the allocation of multiple nodes in one step. Combining incremental and split-and-merge updates for \(\mathbf{Z}_{n}\) can improve the mixing of the algorithm. In Section 4.2, we present how to handle the choice of the PY process hyperparameters by reformulating it as a model selection problem. Additional details on the samplers are available in the Online Supplementary Material, where a sampler using the uncollapsed likelihood (4) is presented.
In all samplers, we will always use the superscript notation \({}^{-w}\) to denote that that specific quantity is computed without including node \(w\). For example, \(K_{n}^{-w}\) denotes the number of distinct values in \(\mathbf{Z}_{n}\) after removing \(Z_{w}\), or \(E_{kj}^{-w}\) is the number of edges between block \(k\) and block \(j\) computed but without including edges containing node \(w\).
### MCMC sampler
The MCMC algorithm to sample from (6) iteratively does the following:
1. **Update Z\({}_{n}\) incrementally**: For \(w=1,\ldots,n\), given that we are conditioning on \(\boldsymbol{\sigma}\), we can equivalently consider updating \(Z_{w}^{\boldsymbol{\sigma}}\) instead of \(Z_{w}\). Suppose that currently \(Z_{w}^{\boldsymbol{\sigma}}=k\) and the count matrices are \(\mathbf{M}\) and \(\mathbf{E}\). To update \(Z_{w}^{\boldsymbol{\sigma}}\): 1. First, for all \(1\leq i,j\leq K_{n}^{-w}\), compute \(\mathbf{M}^{-w}\) and \(\mathbf{E}^{-w}\) as follows \[M_{kj}^{-w}=M_{kj}-\sum_{w<q\leq n}\xi_{w}^{\boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\quad E_{kj}^{-w}=E_{kj}-\sum_{w<q\leq n}Y_{wq}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\] \[M_{ik}^{-w}=M_{ik}-\sum_{1\leq p<w}\xi_{p}^{\boldsymbol{\sigma}}\xi_{w}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i),\quad E_{ik}^{-w}=E_{ik}-\sum_{1\leq p<w}Y_{pw}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i).\] If \(M_{kj}^{-w}\) and \(M_{ik}^{-w}\) are all zeros, remove the \(k\)-th column and row from \(\mathbf{M}\) and \(\mathbf{E}\). 2. Second, for \(k^{\prime}\in\{1,\ldots,K_{n}^{-w}\}\), we compute, for all \(1\leq i,j\leq K_{n}^{-w}\), \[M_{k^{\prime}j}=M_{k^{\prime}j}^{-w}+\sum_{w<q\leq n}\xi_{w}^{\boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\quad E_{k^{\prime}j}=E_{k^{\prime}j}^{-w}+\sum_{w<q\leq n}Y_{wq}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\] \[M_{ik^{\prime}}=M_{ik^{\prime}}^{-w}+\sum_{1\leq p<w}\xi_{p}^{\boldsymbol{\sigma}}\xi_{w}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i),\quad E_{ik^{\prime}}=E_{ik^{\prime}}^{-w}+\sum_{1\leq p<w}Y_{pw}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i)\] and \[p_{k^{\prime}}=(n_{k^{\prime}}^{-w}-\alpha)\prod_{1\leq i\leq K_{n}^{-w}}\left(\frac{\Gamma(E_{ik^{\prime}}+a)}{(M_{ik^{\prime}}+b)^{E_{ik^{\prime}}+a}}\frac{(M_{ik^{\prime}}^{-w}+b)^{E_{ik^{\prime}}^{-w}+a}}{\Gamma(E_{ik^{\prime}}^{-w}+a)}\right)\times\prod_{1\leq j\leq K_{n}^{-w}}\left(\frac{\Gamma(E_{k^{\prime}j}+a)}{(M_{k^{\prime}j}+b)^{E_{k^{\prime}j}+a}}\frac{(M_{k^{\prime}j}^{-w}+b)^{E_{k^{\prime}j}^{-w}+a}}{\Gamma(E_{k^{\prime}j}^{-w}+a)}\right).\] For \(k^{\prime}=K_{n}^{-w}+1\), compute, for all \(1\leq i,j\leq K_{n}^{-w}\), \[M_{k^{\prime}j}=\sum_{w<q\leq n}\xi_{w}^{\boldsymbol{\sigma}}\xi_{q}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\quad E_{k^{\prime}j}=\sum_{w<q\leq n}Y_{wq}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{q}^{\boldsymbol{\sigma}}=j),\] \[M_{ik^{\prime}}=\sum_{1\leq p<w}\xi_{p}^{\boldsymbol{\sigma}}\xi_{w}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i),\quad E_{ik^{\prime}}=\sum_{1\leq p<w}Y_{pw}^{\boldsymbol{\sigma}}\mathbb{I}(Z_{p}^{\boldsymbol{\sigma}}=i),\] and \[p_{k^{\prime}}=(\theta+\alpha(k^{\prime}-1))\prod_{1\leq i\leq(k^{\prime}-1)}\left(\frac{\Gamma(E_{ik^{\prime}}+a)}{(M_{ik^{\prime}}+b)^{E_{ik^{\prime}}+a}}\frac{b^{a}}{\Gamma(a)}\right)\times\prod_{1\leq j\leq(k^{\prime}-1)}\left(\frac{\Gamma(E_{k^{\prime}j}+a)}{(M_{k^{\prime}j}+b)^{E_{k^{\prime}j}+a}}\frac{b^{a}}{\Gamma(a)}\right).\]
3. Finally, sample the new value of \(Z_{w}^{\boldsymbol{\sigma}}\) from \(\mathbb{P}(Z_{w}^{\boldsymbol{\sigma}}=k)=\dfrac{p_{k}}{\sum_{k^{\prime}=1}^{K_{n}^{-w}+1}p_{k^{\prime}}}\) and update \(\mathbf{M}\) and \(\mathbf{E}\) correspondingly.
2. **Update \(\mathbf{Z}_{n}\) using split and merge updates**: Given that in \(\mathbf{Z}_{n}\) there are currently \(K_{n}\) distinct blocks, a split-and-merge move allows either to merge two of these blocks into one or to split one block into two, hence updating at the same time all allocation variables \(Z_{w}\) of the involved nodes. Including such a move in the sampler can improve the mixing. The required steps and the relevant acceptance probabilities are detailed in the Online Supplementary material, and follow closely that of Jain and Neal (2004).
3. **Update \(\boldsymbol{\sigma}\)**: To update \(\boldsymbol{\sigma}\), we use a variation of the Leap-and-Shift proposal used for the ranking in the Mallows model by Vitelli et al. (2018). The variation described here allows the shift step to be a shift modulo \(n\). This produces a symmetric proposal in which the acceptance probability can be computed in a faster way. The Leap-and-Shift modulo \(n\) update of \(\boldsymbol{\sigma}\) works as follows. Given the current value of \(\boldsymbol{\sigma}=(\sigma_{1},\ldots,\sigma_{n})\) and the corresponding \(\boldsymbol{\phi}=\boldsymbol{\sigma}^{-1}\), for \(p=1,\ldots,n\): 1. Sample \(m\sim\text{Unif}(\{-L,-(L-1),\ldots,L\}/\{0\})\), where \(L\) is a tuning parameter. 2. If \(m>0\) and \(\phi_{p}+m>n\), set \(m\gets m-n\); else if \(m<0\) and \(\phi_{p}+m<0\), set \(m\gets m+n\). 3. If \(0<m(<n)\), set \(\boldsymbol{\phi}^{{}^{\prime}}\) where \(\phi_{p}^{{}^{\prime}}=\phi_{p}+m\) and \(\phi_{q}^{{}^{\prime}}=\phi_{q}-1\) for \(q=\sigma_{\phi_{p}+1},\sigma_{\phi_{p}+2},\ldots,\sigma_{\phi_{p}+m}\); else if \(0>m(>-n)\), set \(\boldsymbol{\phi}^{{}^{\prime}}\) where \(\phi_{p}^{{}^{\prime}}=\phi_{p}+m\) and \(\phi_{q}^{{}^{\prime}}=\phi_{q}+1\) for \(q=\sigma_{\phi_{p}+m},\sigma_{\phi_{p}+(m-1)},\ldots,\sigma_{\phi_{p}-1}\). 4. Set \(\boldsymbol{\sigma}^{{}^{\prime}}=\boldsymbol{\phi}^{{}^{\prime}-1}\) and compute \(\alpha(\boldsymbol{\sigma}^{\prime},\boldsymbol{\sigma})=\min\bigg{(}1,\dfrac {\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma}^{{}^{\prime}}, \boldsymbol{\xi},a,b)}{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{ \sigma},\boldsymbol{\xi},a,b)}\bigg{)}\). 5. Set \(\boldsymbol{\sigma}=\boldsymbol{\sigma}^{\prime}\) with probability \(\alpha(\boldsymbol{\sigma}^{\prime},\boldsymbol{\sigma})\).
4. **Update \(\boldsymbol{\xi}\)**: For \(p=1,\ldots,n\), propose \(\xi_{p}^{{}^{\prime}}\) from \(\text{N}(\xi_{p},s_{p})\), where \(s_{p}\) is the \(p\)-th element of \((s_{1},s_{2},\ldots,s_{n})\), the vector of proposal standard deviations for \(\boldsymbol{\xi}\). Write \(\boldsymbol{\xi}^{{}^{\prime}}=(\xi_{1},\ldots,\xi_{p-1},\xi_{p}^{{}^{\prime}},\xi_{p+1},\ldots,\xi_{n})\), and set \(\xi_{p}=\xi_{p}^{{}^{\prime}}\) with probability \(\min\left(1,\dfrac{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi}^{{}^{\prime}},a,b)\mathbb{P}(\xi_{p}^{{}^{\prime}})}{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},a,b)\mathbb{P}(\xi_{p})}\right)\). That \(\xi_{p}^{{}^{\prime}}\) has to be positive for a positive acceptance probability is implied by the Gamma prior \(\mathbb{P}(\xi_{p}^{{}^{\prime}})\).
5. **Update \(a\)**: Propose \(a^{{}^{\prime}}\) from \(\mathrm{N}(a,s_{a})\), where \(s_{a}\) is the proposal standard deviation. Set \(a=a^{{}^{\prime}}\) with probability \(\min\left(1,\dfrac{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma}, \boldsymbol{\xi},a^{{}^{\prime}},b)\mathbb{P}(a^{{}^{\prime}})}{\mathbb{P}( \mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},a,b)\mathbb{P}( a)}\right)\).
6. **Update \(b\)**: Propose \(b^{{}^{\prime}}\) from \(\mathrm{N}(b,s_{b})\), where \(s_{b}\) is the proposal standard deviation. Set \(b=b^{{}^{\prime}}\) with probability \(\min\left(1,\dfrac{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},a,b^{{}^{\prime}})\mathbb{P}(b^{{}^{\prime}})}{\mathbb{P}(\mathbf{Y}|\mathbf{Z}_{n},\boldsymbol{\sigma},\boldsymbol{\xi},a,b)\mathbb{P}(b)}\right)\).
7. **Update \(\boldsymbol{\eta}_{r}\) (infinite regime)**: We update the components of \(\boldsymbol{\eta}_{r}=(\theta,\alpha)\) individually: 1. Propose \(\alpha^{{}^{\prime}}\) from \(\mathrm{N}(\alpha,s_{\alpha})\), where \(s_{\alpha}\) is the proposal standard deviation, and write \(\boldsymbol{\eta}_{r}^{{}^{\prime}}=(\theta,\alpha^{{}^{\prime}})\). Set \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{r}^{{}^{\prime}}\) with probability \(f\left(\mathbf{Z}_{n},\boldsymbol{\eta}_{r}^{{}^{\prime}},\boldsymbol{\eta}_{ r},r\right):=\min\left(1,\dfrac{\mathbb{P}(\mathbf{Z}_{n}|\boldsymbol{\eta}_{r}^{{}^{ \prime}})\mathbb{P}(\boldsymbol{\eta}_{r}^{{}^{\prime}}|r)}{\mathbb{P}( \mathbf{Z}_{n}|\boldsymbol{\eta}_{r})\mathbb{P}(\boldsymbol{\eta}_{r}|r)}\right)\). 2. Propose \(\theta^{{}^{\prime}}\) from \(\mathrm{N}(\theta,s_{\theta})\), where \(s_{\theta}\) is the proposal standard deviation, and write \(\boldsymbol{\eta}_{r}^{{}^{\prime}}=(\theta^{{}^{\prime}},\alpha)\). Set \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{r}^{{}^{\prime}}\) with probability \(f\left(\mathbf{Z}_{n},\boldsymbol{\eta}_{r}^{{}^{\prime}},\boldsymbol{\eta}_{ r},r\right)\).
8. **Update \(\boldsymbol{\eta}_{r}\) (finite regime)**: We update the components of \(\boldsymbol{\eta}_{r}=(\gamma,k)\) individually: 1. Propose \(\gamma^{{}^{\prime}}\) from \(\mathrm{Lognormal}(\gamma,s_{\gamma})\), where \(\gamma\) and \(s_{\gamma}\) are location and scale parameters, respectively, and write \(\boldsymbol{\eta}_{r}^{{}^{\prime}}=(\gamma^{{}^{\prime}},k)\). Set \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{r}^{{}^{\prime}}\) with probability \(f\left(\mathbf{Z}_{n},\boldsymbol{\eta}_{r}^{{}^{\prime}},\boldsymbol{\eta}_{r},r\right)\times\gamma^{{}^{\prime}}/\gamma\). 2. Draw \(d\) from \(\mathrm{Geometric}(p_{k})\), where \(p_{k}\in(0,1]\) is a pre-specified tuning parameter. Propose \(k^{{}^{\prime}}\) to be \(k+d\) or \(k-d\) with equal probability, and write \(\boldsymbol{\eta}_{r}^{{}^{\prime}}=(\gamma,k^{{}^{\prime}})\). Set \(\boldsymbol{\eta}_{r}=\boldsymbol{\eta}_{r}^{{}^{\prime}}\) with probability \(f\left(\mathbf{Z}_{n},\boldsymbol{\eta}_{r}^{{}^{\prime}},\boldsymbol{\eta}_{r},r\right)\).
Step 7 and step 8 are exclusive to the infinite and finite regimes, respectively, if the choice of regime is fixed. It is, however, possible to carry out model selection for the choice of regime. The details of this step, assumed to take place after the steps above within each iteration, are outlined in Section 4.2. The updating of \(\boldsymbol{\eta}_{r}\) is still as described above when
model selection is embedded, meaning that step 7 (8) will be used when the current choice of regime is infinite (finite).
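Returning to step 3, the Leap-and-Shift modulo \(n\) move for a single node amounts to removing that node from its current position and re-inserting it at a position shifted by \(m\), wrapped around modulo \(n\); the shifting of the intermediate nodes then happens automatically. The sketch below (0-indexed positions) implements only the proposal; the accept/reject decision uses the likelihood ratio alone since the proposal is symmetric.

```python
import numpy as np

def leap_and_shift_modulo_n(sigma, p, L, rng):
    """Propose a new ordering by moving node p to a position shifted by m,
    with m drawn uniformly from {-L,...,-1, 1,...,L} and wrapped modulo n."""
    n = len(sigma)
    sigma = list(sigma)
    pos = sigma.index(p)
    m = int(rng.integers(1, L + 1)) * int(rng.choice([-1, 1]))
    new_pos = (pos + m) % n                        # shift modulo n
    sigma.pop(pos)
    sigma.insert(new_pos, p)
    return np.array(sigma)

rng = np.random.default_rng(3)
sigma = np.arange(6)
print(leap_and_shift_modulo_n(sigma, p=2, L=2, rng=rng))
```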
### Model selection of regime
We introduced the prior of the regime, \(\mathbb{P}(r)\), in Section 3.2. When \(\mathbb{P}(r=1)\in(0,1)\), the quantity of interest is the marginal posterior probability \(\mathbb{P}(r|\mathbf{Y})\), which tells us from the data how much one regime is preferred to the other. This probability can be computed from the output of the MCMC sampler, which embeds the model selection step described below. The boundary cases, when \(\mathbb{P}(r=0)=1\) and \(\mathbb{P}(r=1)=1\), essentially assume we stay within the infinite and finite regime, respectively, and therefore do not require this additional step.
We incorporate a Gibbs step for model selection introduced by Carlin and Chib (1995). The principle is that we first simulate the parameters not in the current regime, denoted by \(\boldsymbol{\eta}_{1-r}\), from a pre-specified _pseudoprior_, denoted by \(\mathbb{P}(\boldsymbol{\eta}_{1-r}|r)\). Next, we compute the respective weights for staying in the current regime (using the values sampled from the posterior) and for moving to the other regime (using the values simulated from the pseudoprior). The normalised weights are then the probabilities with which we select between the regimes. Specifically, we carry out the following steps:
1. Sample \(\mathbf{Z}_{n}\), \(\boldsymbol{\sigma}\) and \(\boldsymbol{\eta}_{r}\) from the posterior using the steps described in Section 4.1.
2. Sample \(\boldsymbol{\eta}_{1-r}\) from the pseudoprior \(\mathbb{P}(\boldsymbol{\eta}_{1-r}|r)\).
3. Compute \(A_{0}=\mathbb{P}(\mathbf{Z}_{n}|\boldsymbol{\eta}_{0})\mathbb{P}(\boldsymbol{ \eta}_{0}|r=0)\mathbb{P}(\boldsymbol{\eta}_{1}|r=0)\mathbb{P}(r=0)\) and also \(A_{1}=\mathbb{P}(\mathbf{Z}_{n}|\boldsymbol{\eta}_{1})\mathbb{P}(\boldsymbol{ \eta}_{1}|r=1)\mathbb{P}(\boldsymbol{\eta}_{0}|r=1)\mathbb{P}(r=1)\), where \(\mathbb{P}(\boldsymbol{\eta}_{1}|r=0)\) and \(\mathbb{P}(\boldsymbol{\eta}_{0}|r=1)\) are the pseudopriors.
4. Set \(r=0\) and \(r=1\) with probabilities \(\dfrac{A_{0}}{A_{0}+A_{1}}\) and \(\dfrac{A_{1}}{A_{0}+A_{1}}\), respectively.
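A minimal sketch of this regime-switching step is given below. The callables for the marginal prior of \(\mathbf{Z}_{n}\), the parameter priors and the pseudopriors are placeholders to be supplied by the user, so the sketch only illustrates the bookkeeping of steps 2-4 on the log scale.

```python
import numpy as np

def regime_switch_step(Zn_summary, eta0, eta1, log_pZ, log_prior,
                       log_pseudoprior, sample_pseudoprior, prior_r1, r, rng):
    """One Gibbs model-selection step (Carlin and Chib, 1995) between the
    infinite (r = 0) and finite (r = 1) regimes of the PY process prior.
    All log_* arguments and sample_pseudoprior are user-supplied callables."""
    # Step 2: refresh the parameters of the regime we are not in from the pseudoprior
    if r == 0:
        eta1 = sample_pseudoprior(1, rng)          # draw eta_1 from P(eta_1 | r = 0)
    else:
        eta0 = sample_pseudoprior(0, rng)          # draw eta_0 from P(eta_0 | r = 1)
    # Step 3: unnormalised weights A_0 and A_1 on the log scale
    logA0 = (log_pZ(Zn_summary, eta0, 0) + log_prior(eta0, 0)
             + log_pseudoprior(eta1, 0) + np.log1p(-prior_r1))
    logA1 = (log_pZ(Zn_summary, eta1, 1) + log_prior(eta1, 1)
             + log_pseudoprior(eta0, 1) + np.log(prior_r1))
    # Step 4: draw the new regime indicator
    p1 = 1.0 / (1.0 + np.exp(logA0 - logA1))
    r_new = int(rng.random() < p1)
    return r_new, eta0, eta1
```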
The choices of pseudopriors in principle do not affect the target we sample from, but only the efficiency of the sampler, which is optimal when \(\mathbb{P}(\boldsymbol{\eta}_{1-r}|r)\) is equal to the posterior obtained
when staying within regime \(1-r\), i.e. \(\mathbb{P}(\boldsymbol{\eta}_{1-r}|\mathbf{Y})\). Therefore, to facilitate the pre-specification of the pseudopriors, we first run the MCMC sampler for the finite regime (without model selection) and set \(\mathbb{P}(\boldsymbol{\eta}_{1}|r=0)\) to be close to \(\mathbb{P}(\boldsymbol{\eta}_{1}|\mathbf{Y})\), the joint posterior of \(\gamma\) and \(k\). Similarly, we run the sampler for the infinite regime and set \(\mathbb{P}(\boldsymbol{\eta}_{0}|r=1)\) to be close to \(\mathbb{P}(\boldsymbol{\eta}_{0}|\mathbf{Y})\), the posterior of \(\alpha\) and \(\theta\).
The regime prior \(\mathbb{P}(r)\) is chosen, usually with some trial and error, in such a way that the sampler spends comparable numbers of iterations in either regime, instead of actually representing the prior belief in the regime. To perform model selection after inference, we will compute the Bayes factor, which is the ratio of the posterior odds to the prior odds:
\[B_{10}=\frac{\mathbb{P}(r=1|\mathbf{Y})}{\mathbb{P}(r=0|\mathbf{Y})}\bigg{/} \frac{\mathbb{P}(r=1)}{\mathbb{P}(r=0)} \tag{7}\]
The numerator and denominator of the posterior odds will be approximated by the proportions of \(r=1\) and \(r=0\), respectively, in the MCMC output.
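Given the sampled regime indicators, the Bayes factor (7) is a one-line computation; the sketch below reproduces, with illustrative rounded proportions, the value reported for the SNA citation network in Section 5.1.

```python
import numpy as np

def bayes_factor_B10(r_samples, prior_r1):
    """Estimate B_10 in Equation 7: posterior odds of r=1 over r=0 divided by the prior odds."""
    post_r1 = np.mean(r_samples)
    post_odds = post_r1 / (1.0 - post_r1)
    prior_odds = prior_r1 / (1.0 - prior_r1)
    return post_odds / prior_odds

# e.g. with P(r = 1) = 0.2 and 56.02% of the retained iterations in the finite regime
r_samples = np.concatenate([np.ones(5602), np.zeros(4398)])
print(bayes_factor_B10(r_samples, prior_r1=0.2))   # approximately 5.09
```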
## 5 Application
In this section, we report the results of applying the SBM for DAGs and the MCMC sampler to two citation networks introduced in Section 1.
### Social network analysis citation network
We first look at the citation network analysed by Lee and Wilkinson (2018), of which the adjacency matrix and network diagram are plotted on the left of Figures 1 and 2, respectively. It contains 1118 citations (edges) between 135 articles (nodes) which are related to social network analysis (SNA). We shall call this data the SNA citation network hereafter.
The MCMC sampler outlined in Section 4.1 was applied, with 20000 iterations obtained after a burn-in period and thinning of 2000, with \(a\sim\text{Gamma}(1,0.01)\), \(b\sim\text{Gamma}(1,0.01)\). Such settings were used three times, once assuming the infinite regime (\(r=0\)) with
\(\theta+\alpha\sim\mathrm{Gamma}(1,0.01)\), once assuming the finite regime (\(r=1\)) with \(\gamma\sim\mathrm{Gamma}(1,0.01)\) and \(k\sim\) truncated negative binomial\((1,0.01)\), and once for model selection with \(\mathbb{P}(r=1)=0.2\), i.e. \(\mathbb{P}(r=0)=0.8\). All three runs were performed on a Linux machine with an Intel Core i7-7700 processor (3.6GHz), and took 0.005, 0.004, and 0.0045 seconds per iteration, respectively.
Figures 3 to 8 show some key inference results, except for the trace plots of the parameters, which are in the Online Supplementary Material. Of more importance is the posterior density (or mass function in the case of \(K_{n}\) and \(k\)) in Figures 3 and 4. On one hand, in the panels for \(k\) and \(\gamma\), the infinite regime is naturally missing, while the results by the finite regime and model selection coincide as expected. Similarly, in the panels for \(\theta\) and \(\alpha\), the finite regime is naturally missing, while the results by the infinite regime and model selection coincide as expected. On the other hand, the panels for \(K_{n}\), \(a\) and \(b\) illustrate that the posterior density from model selection is essentially a weighted average of that of the two regimes.
The departure of the posterior densities of \(\alpha\) and \(\gamma\) from 0 suggests that either the infinite regime or the finite regime is preferred to their shared boundary i.e. the Dirichlet process when \(\alpha=\gamma=0\). Between the two regimes, with \(\mathbb{P}(r=1)=0.2\) resulting in \(\mathbb{P}(r=1|\mathbf{Y})=0.5602\), Equation 7 gives the Bayes factor \(B_{10}=5.094\), suggesting a slight preference for the finite regime, for this social network analysis citation network.
The large amount of thinning is mainly needed because the Markov chain for the finite regime occasionally gets stuck at \(K_{n}=4\), which is in turn due to the slow mixing of \(k\) and \(\gamma\), and
Figure 3: Posterior mass function of \(K_{n}\) and \(k\), for the infinite (green) and finite (red) regime, and model selection (blue), for the SNA citation network.
the skewness of their joint posterior, which is plotted on log scale in Figure 5. As the opaqueness of the points increases with \(k\), the concentration towards the bottom indicates high posterior density around small values of \(k\), thus limiting \(K_{n}\) to grow. However, such issue of getting stuck at small values of \(k\) and \(K_{n}\) does not exist for either the infinite regime or the application to the larger data set in the next subsection.
Next, we look at the posterior of \(\mathbf{\sigma}\), or equivalently the positions of the nodes in the
Figure 4: Posterior density of the parameters for the infinite (green) and finite (red) regime, and model selection (blue), for the SNA citation network.
Figure 5: Joint posterior of \(\log\gamma\) and \(\log k\) for the finite regime for the SNA citation network. The opaqueness of the points increases with \(k\).
topological ordering, \(\mathbf{\phi}\). The mixing of the MCMC is sufficiently good that the trace plots are not shown here. The posterior density of each component of \(\mathbf{\phi}\) is plotted as a row in Figure 6, with the rows themselves in an arbitrary topological ordering. As the citations i.e.
Figure 6: The posterior density of the positions of the nodes in \(\mathbf{\sigma}\) for the infinite (left) and finite (right) regime, for the SNA citation network. Each row of colours is the posterior density of a component of \(\mathbf{\phi}\). The black dots are the mean positions.
Figure 7: The similarity matrix (red spectrum) and the adjacency matrix (black dots) for the infinite (left) and finite (right) regime, for the SNA citation network. The nodes are clustered (blue dashed lines) according to the point estimate \(\hat{\mathbf{Z}}_{n}\).
the edges in the DAG in general go from more recent works to older ones, the top (bottom) rows are the topologically earlier (later) articles within the network, or approximately the more recent (older) articles. The coloured part of each row represents the support of the posterior of the corresponding article, and in general is narrower towards the bottom. An interpretation is that chronologically earlier works are likely to cite between themselves and get cited by later ones, thus limiting their positions in \(\boldsymbol{\sigma}\).
To provide a posterior point estimate of \(\mathbf{Z}_{n}\), we follow the clustering approach introduced in Meila (2007) and further discussed in Wade and Ghahramani (2018). Specifically, the point estimate, denoted by \(\hat{\mathbf{Z}}_{n}\), is obtained using a decision theoretic approach, by minimising with respect to the posterior distribution a loss function on the space of allocation vectors,
\[\hat{\mathbf{Z}}_{n}=\underset{\tilde{\mathbf{Z}}_{n}}{\text{argmin }}\mathbb{E}\left[L(\mathbf{Z}_{n},\tilde{\mathbf{Z}}_{n})|\mathbf{Y}\right]=\underset{\tilde{\mathbf{Z}}_{n}}{\text{argmin }}\sum_{\mathbf{Z}_{n}}L(\mathbf{Z}_{n},\tilde{\mathbf{Z}}_{n})\mathbb{P}(\mathbf{Z}_{n}|\mathbf{Y}). \tag{8}\]
For the loss function \(L(\mathbf{Z}_{n},\tilde{\mathbf{Z}}_{n})\), Meila (2007) chose the _Variation of Information_ (VI), defined as
\[\text{VI}(\mathbf{Z}_{n},\tilde{\mathbf{Z}}_{n})=\sum_{i=1}^{K_{n}}\frac{n_{ i+}}{n}\log\left(\frac{n_{i+}}{n}\right)+\sum_{j=1}^{\bar{K}_{n}}\frac{n_{+j}}{n} \log\left(\frac{n_{+j}}{n}\right)-2\sum_{i=1}^{K_{n}}\sum_{j=1}^{\bar{K}_{n}} \frac{n_{ij}}{n}\log\left(\frac{n_{ij}}{n}\right)\]
where the count variables are defined as \(n_{ij}=\sum_{p=1}^{n}\mathbb{I}(Z_{p}=i,\tilde{Z}_{p}=j)\), \(n_{i+}=\sum_{j=1}^{\bar{K}_{n}}n_{ij}\), and \(n_{+j}=\sum_{i=1}^{K_{n}}n_{ij}\). The loss function \(L(\mathbf{Z}_{n},\tilde{\mathbf{Z}}_{n})\) can be seen as a distance between \(\mathbf{Z}_{n}\) and \(\tilde{\mathbf{Z}}_{n}\), which can be computed even if \(K_{n}\neq\bar{K}_{n}\), i.e. the numbers of groups implied by \(\mathbf{Z}_{n}\) and \(\tilde{\mathbf{Z}}_{n}\) are different. Minimising the posterior expectation of this loss function then leads to a "mean" allocation vector. In order to implement the minimization, we apply the SALSO algorithm of Dahl et al. (2021).
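The VI distance itself is easily computed from the contingency table of two allocation vectors, as in the sketch below (natural logarithms; the two toy partitions are illustrative).

```python
import numpy as np

def variation_of_information(Z, Z_tilde):
    """Variation of Information between two allocation vectors, as defined above."""
    Z, Z_tilde = np.asarray(Z), np.asarray(Z_tilde)
    n = len(Z)
    _, zi = np.unique(Z, return_inverse=True)
    _, zj = np.unique(Z_tilde, return_inverse=True)
    counts = np.zeros((zi.max() + 1, zj.max() + 1))
    np.add.at(counts, (zi, zj), 1)
    p_ij = counts / n
    p_i, p_j = p_ij.sum(axis=1), p_ij.sum(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        h_joint = -np.nansum(p_ij * np.log(p_ij))  # 0 log 0 is treated as 0
    h_i = -np.sum(p_i * np.log(p_i))
    h_j = -np.sum(p_j * np.log(p_j))
    return 2.0 * h_joint - h_i - h_j

print(variation_of_information([0, 0, 1, 1], [0, 0, 1, 2]))
```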
This point estimate \(\hat{\mathbf{Z}}_{n}\) is obtained for both regimes, and used to cluster the nodes when plotting the adjacency matrix in Figure 7. As the ordering used here is according to \(\hat{Z}_{n}\) and not topological, the (asymmetric) adjacency matrix, that contains the same information as
that in Figure 1, is not upper triangular. The red colour spectrum underlying the black dots is the similarity matrix, which is calculated using the MCMC output for \(\mathbf{Z}_{n}\). The fact that a clustering of the nodes is achieved is echoed by the network diagrams in Figure 8, which are in the same layout as Figure 2. The nodes are coloured according to the point estimate \(\hat{\mathbf{Z}}_{n}\) for each regime, and in general nodes in the same group are close to each other.
### Statistics citation network
We apply the model to the second citation network, which corresponds to the plots on the right of Figures 1 and 2. The original data analysed by Ji and Jin (2016) contains 5722 citations between 3248 articles in the top statistics journals. Here, we only consider the largest connected component, and remove one edge from each pair of reciprocal citations, which would otherwise create cycles (there were only 9 such pairs). After this data cleaning, we arrive at a citation network that is a DAG and contains 5563 citations between 2248 articles. We refer to this data as the statistics citation network hereafter.
The MCMC sampler was applied with \(1.5\times 10^{4}\) iterations obtained after a burn-in period
Figure 8: Network diagram with colours according to the point estimate for the infinite (left) and finite (right) regime, for the SNA citation network.
and thinning of 10, with the same priors for \(a\), \(b\), \(\alpha\), \(\theta\), \(k\) and \(\gamma\) as those for the application to the SNA citation network. For model selection, even with \(\mathbb{P}(r=0)=0.99\), the whole chain stays in the finite regime i.e. \(\mathbb{P}(r=1|\mathbf{Y})=1\). Therefore, we shall report the results under the finite regime only, as it is heavily preferred to the infinite counterpart for this network. The trace plots and posterior densities are provided in the Online Supplementary Material.
Similar to Figure 7, the adjacency matrix for the statistics citation network is plotted in Figure 9, with the nodes clustered according to the point estimate. The similarity matrix is not overlaid here as the image size would otherwise be too large. The concentration of the black dots along the major block diagonal suggests that most groups are closely knit. On the other hand, there are some concentrated blocks which are off-diagonal and asymmetric, indicating a high number of one-way citations from one group to another.
Figure 9: The adjacency matrix (black dots) for the finite regime, for the statistics citation network. The nodes are clustered (blue dashed lines) according to the point estimate \(\hat{\mathbf{Z}}_{n}\).
## 6 Discussion
In this article, we proposed a Bayesian nonparametric SBM for DAGs. Specifically, by conditioning on a latent topological ordering, the likelihood of the data (which is composed of directed edges) becomes that of an undirected graph, i.e. that of an upper triangular adjacency matrix. The topological ordering is treated as an unknown parameter, endowed with a prior and inferred a posteriori within the MCMC sampler, using a modified Leap-and-Shift proposal. Moreover, the use of the PY process prior for the allocation vector \(\mathbf{Z}_{n}\) allows the model to infer the number of groups \(K_{n}\) from the data. In addition, a model selection step for the two regimes of the PY process can be included within the MCMC sampler. The model and sampler are applied successfully to two citation networks.
The model can be generalized in different ways. For example, the model can be extended by introducing covariate information, such as the topic of each document or its publication year. This could be achieved by modelling the degree correction factors with a covariate-dependent distribution. Also, in terms of parametrisation, the two regimes of the PY process could be unified so that \(\gamma\) and \(\alpha\) become one parameter that can take a value between \(-\infty\) and \(1\). Its posterior density will directly imply which regime is preferred. The main obstacle to overcome here would be the sampling from the non-standard joint parameter space of \(\theta\) and \(\alpha\) across the two regimes.
Another issue to be resolved is the inference of \(k\), which is naturally highly correlated with \(K_{n}\), under the finite regime. This is apparent in the parameter trace plots (in Online Supplementary Material) for the statistics citation network, while for the SNA citation network the model selection improves the mixing of \(k\) in the MCMC. Ideally, \(k\) would be integrated out, but the computations required mean that this is feasible only in certain special cases. This issue with \(k\) remains to be resolved.
There are potential extensions regarding the inference procedure and results. Similar to how \(\hat{\mathbf{Z}}_{n}\) is computed for \(\mathbf{Z}_{n}\), a point estimate could be provided for \(\boldsymbol{\sigma}\), but the distance function for the ordering has to be carefully considered. Relatedly, a Mallows model prior could be used
for \(\mathbf{\sigma}\), as opposed to the uniform prior used here, to potentially provide more information to facilitate the inference. Lastly, the derivation of an efficient variational Bayes algorithm would possibly allow the proposed model to be applied to much larger datasets.
|
2301.01162 | Language Models are Drummers: Drum Composition with Natural Language
Pre-Training | Automatic music generation with artificial intelligence typically requires a
large amount of data which is hard to obtain for many less common genres and
musical instruments. To tackle this issue, we present ongoing work and
preliminary findings on the possibility for deep models to transfer knowledge
from language to music, by finetuning large language models pre-trained on a
massive text corpus on only hundreds of MIDI files of drum performances. We
show that by doing so, one of the largest, state-of-the-art models (GPT3) is
capable of generating reasonable drum grooves, while models that are not
pre-trained (Transformer) shows no such ability beyond naive repetition.
Evaluating generated music is a challenging task, more so is evaluating drum
grooves with little precedence in literature. Hence, we propose a tailored
structural evaluation method and analyze drum grooves produced by GPT3 compared
to those played by human professionals, exposing the strengths and weaknesses
of such generation by language-to-music transfer. Our findings suggest that
language-to-music transfer learning with large language models is viable and
promising. | Li Zhang, Chris Callison-Burch | 2023-01-03T15:47:53Z | http://arxiv.org/abs/2301.01162v1 | # Language Models are Drummers:
###### Abstract
Automatic music generation with artificial intelligence typically requires a large amount of data which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest, state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, for which there is little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.12
Footnote 1: Accepted to the 1st workshop on Creative AI across Modalities in AAAI 2023.
Footnote 2: Data and code can be found at [https://github.com/zharry29/drums-with-llm](https://github.com/zharry29/drums-with-llm). The title is a parody of the viral trend of titling papers as “Language Models are...” in NLP venues.
## Introduction
Music understanding and generation using artificial intelligence has a long history [1] and has gained steady interest in recent years [14]. One strand of work focuses on symbolic music rather than audio, where music is represented as sequential data such as MIDI. While the analogy between music and language has long been studied [15], the symbolic representation of music exhibits an even clearer similarity to language in its surface form. For example, music has notes, measures, and sections, while language has tokens, sentences, and paragraphs. It is thus intuitive that some work has applied natural language processing (NLP) techniques to music. Specifically, most have attempted to learn an embedding space of music [11] similar to that of texts.
Recent work has leveraged Transformers [10] for symbolic music processing [12, 13]. Transformers, when pre-trained on a massive text corpus, are known as large language models (LLMs), the current state-of-the-art approach to many NLP tasks [1, 15]. Notably, Zeng et al. (2021) is one of the first and only works to pre-train a Transformer on a large symbolic music corpus containing more than 1 million songs, using a similar approach to pre-training on a text corpus, achieving state-of-the-art performance in various music understanding tasks. However, one of the most significant limitations of this and most other work in data-driven music processing is the large amount of symbolic music data required for training, which is extremely challenging to obtain for less-mainstream genres, particular styles, less-prevalent instruments, or out-of-the-ordinary specifications, severely limiting the versatility of such a method. Hence, low-resource symbolic music processing remains highly challenging. For texts, on the other hand, few-shot learning has been greatly empowered by LLMs due to the extremely large size of their pre-training textual data, orders of magnitude more than what is available for music.
In this work, we take a first step to explore such **text
Figure 1: The sheet music of an example of our generated drum track, demonstrating our model’s ability to somewhat musically follow the motif and make variations. The first two measures are provided while the rest are generated.
to-music transfer learning potential in LLMs**. In other words, we pose the hypothesis that present-day state-of-the-art LLMs, pre-trained with a massive amount of textual data, are capable of **generating symbolic music with little music data** to some nontrivial extent. We specifically focus on one instrument, the drum set, for multiple reasons. First, the drum set is one of the most common and important instruments in many genres of music such as jazz, funk, blues, gospel, latin, pop, rock, metal, etc. Second, the symbolic representation of the drum set is simpler than that of most pitched instruments, as each note does not have a pitch but corresponds to a hit on one drum. As the number of drums is usually much smaller than that of possible pitches, the resulting sequence is much shorter, and thus easier for models to process. Third, the performance of a drum set typically is endowed with more degrees of freedom with regard to the audience's aesthetics than many other instruments, making it an appropriate entry point for studying music generation with LLMs, which is presumed to be highly challenging.
We focus on the task of drum generation or composition, which has a small body of published work in the literature. While most if not all of the existing work has treated drums as an accompaniment, we instead focus on drum solo generation, with a convenient analogy to story generation, at which LLMs are known to excel.
We finetune a state-of-the-art LLM, GPT3 model [1] on the Groove dataset [13] of about 400 drum groove performances recorded as MIDI. To leverage the textual pre-training of GPT3, we propose a textual representation of a drum performance. We present the following core findings:
1. The largest and smallest GPT3 models both can generate nontrivial drum grooves after being finetuned.
2. A similar-sized model that is not pre-trained on language data, however, cannot.
We claim that the existing automatic evaluation of music generation is insufficient for our task. Hence, we propose an evaluation methodology specifically for drum grooves to both qualitatively and quantitatively evaluate the strengths and weaknesses of machine-generated drum grooves compared to those performed by humans. Finally, we provide some preliminary listening test results, with a plan to conduct scaled and rigorous tests in future work.
## The Drum Set
The drum set, also known as the drum kit, or colloquially the drums, is a compound musical instrument consisting of many sub-instruments, including drums and cymbals, both generally referred to as drums in this paper. Here, we assume a simple, stereotypical drum set with a hi-hat, a crash cymbal, a ride cymbal, a bass drum, a snare drum, and a tom (see Figure 2, but without the hi-tom and the floor-tom3).
Footnote 3: [https://r.redd.it/cen72phishb81.png](https://r.redd.it/cen72phishb81.png)
The performance on a drum set can be notated as sheet music, a human-readable symbolic representation, or as MIDI, which records the when each drum is hit at what velocity, a computer-readable symbolic representation.
## Dataset
Among just a few datasets of drum performances, Google's Groove MIDI Dataset4 is the largest and highest-quality to date, containing 1,150 MIDI files and over 22,000 measures of drumming by 10 professional drummers. In this dataset, the drum performances are either grooves, long sequences of rhythmic ideas, or fills, short bursts of free-flowing expression. As we focus on drum generation or composition throughout a long sequence, we only consider the grooves in the dataset. Each MIDI is marked with the style (e.g., rock, funk, gospel, etc.), the tempo (in beats per minute, BPM), and the time signature. For simplicity, we only consider those in the time signature of 4/4. We follow the train-development-test splits in the dataset. The statistics of the filtered Groove dataset are shown in Table 1.
Footnote 4: [https://magenta.tensorflow.org/datasets/groove](https://magenta.tensorflow.org/datasets/groove)
The Groove dataset was originally proposed to study microtiming and expressive performance, and therefore the drum MIDI files encapsulate human imperfection. However, we re-purpose the dataset to study drum composition, leading to the following simplifications. First, we quantize all notes to a 16th-note grid. In other words, all note events in the MIDI are re-timed to the closest of the 16 equidistant timestamps in a measure. An implication of such quantization is that deliberate off-grid playing such as triplets or swing feels is lost. Second, we discard the velocity information (i.e., how hard a drum is hit), which can usually be inferred post-hoc to a coarse-grained extent. Third, while the drum set can be played with many expressions and articulations (e.g., hitting different parts of a drum using different parts of different tools), we reduce them to simply the basic
| | train | dev | test |
| --- | --- | --- | --- |
| Total num. MIDI | 373 | 47 | 35 |
| - rock | 169 | 16 | 15 |
| - jazz | 41 | 6 | 4 |
| - latin | 37 | 10 | 3 |
| - funk | 31 | 6 | 4 |
| - hiphop | 26 | 1 | 3 |
| - others | 69 | 8 | 6 |
Table 1: The number of MIDI files in each style in the filtered Groove dataset used in this work.
Figure 2: The anatomy of a basic drum set, and how each drum may appear on sheet music.
articulation (head hit) of the hi-hat, crash cymbal, ride cymbal, bass drum, snare drum, and floor tom. In other words, each note only has 6 possible values. Fourth, we truncate each MIDI file to the first 16 measures; at 128 BPM, for example, this equates to 30 seconds. Finally, we remove empty leading measures whose first quarter note is a rest, and ignore grooves with fewer than 8 measures.
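To make the preprocessing concrete, the sketch below illustrates the quantization and truncation steps described above; it assumes note events are given as (time-in-beats, drum) pairs in 4/4, and the helper name is ours rather than the authors' code.

```python
# Illustrative sketch of the preprocessing above (hypothetical helper, not the authors' code).
# A measure spans 4 beats in 4/4, so the 16th-note grid has 16 slots per measure (0.25 beats apart).

def quantize_and_truncate(notes, max_measures=16):
    quantized = []
    for time_in_beats, drum in notes:
        slot = round(time_in_beats / 0.25)        # snap to the nearest 16th note
        measure, step = divmod(slot, 16)
        if measure < max_measures:                # keep only the first 16 measures
            quantized.append((measure, step, drum))
    return sorted(set(quantized))                 # drop duplicates created by snapping

# A hit slightly off the grid at beat 1.02 lands on step 4 of measure 0;
# a hit at beat 65.0 falls past measure 16 and is dropped.
print(quantize_and_truncate([(1.02, "kick"), (65.0, "snare")]))  # [(0, 4, 'kick')]
```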
## Experiments
### Representation
As discussed before, LLMs have demonstrated extremely strong few-shot transfer learning ability from one textual task to others. To possibly exploit this in music, it is necessary to come up with a textual representation of the drum grooves. We propose a pianoroll-like representation Brunner et al. (2018), referred to as a _drumroll_, that is essentially a multi-line string where each line corresponds to a 16th note in the time sequence, and each character in a line indicates whether a particular drum is hit. Specifically, each character is 'o' if the particular drum is hit at the particular 16th note, and '-' otherwise. See Figure 3 for an example. To help LLMs identify the boundary between measures, we add a line of "SEP" between every 16 lines (a measure) and a line of "END" after the final line.
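A minimal sketch of this encoding is shown below; the column order of the six drums is an assumption made for illustration, and the input is the quantized (measure, step, drum) triples from the preprocessing step.

```python
# Sketch of the drumroll encoding described above (assumed drum column order).
DRUMS = ["hi-hat", "crash", "ride", "kick", "snare", "floor-tom"]

def to_drumroll(quantized_notes, num_measures):
    """quantized_notes: iterable of (measure, step, drum) on a 16th-note grid."""
    hits = {(m, s, DRUMS.index(d)) for m, s, d in quantized_notes}
    measures = []
    for m in range(num_measures):
        rows = ["".join("o" if (m, s, c) in hits else "-" for c in range(6))
                for s in range(16)]               # one line per 16th note
        measures.append("\n".join(rows))
    # "SEP" marks measure boundaries; "END" marks the end of the groove
    return "\nSEP\n".join(measures) + "\nEND"

print(to_drumroll([(0, 0, "kick"), (0, 4, "snare")], num_measures=1).splitlines()[:5])
# ['---o--', '------', '------', '------', '----o-']
```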
### Task
We focus on an instance of drum generation referred to as drum completion, where the model is given the first 2 measures and must complete the remaining 14 measures of the groove. The model may terminate at any point. This is analogous to conditioned story generation in NLP.
### Model
First, we consider two naive baselines: **randomly** choosing whether to play each note, and **repeating** the second given measure. We then finetune a state-of-the-art LLM, OpenAI's **GPT3 Davinci** with 175 billion parameters, on the training set. For each drum groove file, the input (prompt) is the first 2 measures and the output (completion) is the remaining 14 measures. The temperature is set to 0.85 to encourage creativity. Finetuning the model on the training set costs $38.33 and takes around 30 minutes using OpenAI's API.
To ascertain the role model size plays in drum generation, we further consider a smaller **GPT3 Ada** model with 350 million parameters, which has been pre-trained on the same corpus; we use settings identical to those of the larger Davinci model. Finetuning this model on the training set costs $0.51 and takes around 5 minutes using OpenAI's API. Both GPT3 models are later found to be able to generate nontrivial drum grooves.
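For reference, the sketch below shows one way the finetuning data could be laid out in the legacy prompt/completion JSONL format expected by the GPT3 finetuning API; the split point (2 measures) follows the task definition above, while the helper names and file name are illustrative.

```python
import json

def split_measures(drumroll, first_n=2):
    """Split a drumroll string into (first `first_n` measures, the rest) at SEP markers."""
    measures = drumroll.split("\nSEP\n")
    prompt = "\nSEP\n".join(measures[:first_n]) + "\nSEP\n"
    completion = "\nSEP\n".join(measures[first_n:])
    return prompt, completion

def write_finetune_file(grooves, path="drum_grooves_train.jsonl"):
    # One {"prompt", "completion"} pair per line, as in OpenAI's legacy finetuning format.
    with open(path, "w") as f:
        for groove in grooves:                     # each groove is a drumroll string
            prompt, completion = split_measures(groove, first_n=2)
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```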
To ascertain the role language pre-training plays in drum generation, we set the control to be an un-pre-trained GPT3, namely a Transformer Vaswani et al. (2017) of the same size. Because our computing resources cannot accommodate a model as big as Davinci, and also because merely 373 text files each with 256 lines in the training set are likely insufficient for the training loss to converge, we instead train a smaller, un-pre-trained Transformer with 85 million parameters, the same magnitude as GPT3 Ada. While the training loss does converge, the model predicts the same fixed sequence regardless of which 2 measures are provided, performing no better than the random baseline. This suggests that language pre-training is a necessary condition for effective drum generation.
## Evaluation
How good are the drum grooves generated by GPT3? We follow the convention of music generation and consider both objective/automatic and subjective/human evaluation. We report our findings based on the test set.
### Objective Evaluation
We deem the established methods for automatic evaluation of symbolic music generation unsuitable for our task. To name a few: perplexity, for example, has long been shown to exhibit low correlation with human perception in NLP Kuribayashi et al. (2021); structural similarity, while a reasonable metric, is often approximated with simple similarity measures Yu et al. (2022) and discourages creative outputs that stray from the reference, which by no means should be treated as the ground truth.
In contrast, we consider what constitutes a good drum groove and propose a structural evaluation called the **pattern and fill analysis**. We assume that often a good drum groove minimally satisfies the following criteria:
1. There exists one or more consistent **patterns** of some rhythmic idea and occasional change-ups known as **fills**.
2. The measures in a pattern are sufficiently similar, but ideally not identical.
3. The measures in a fill are sufficiently different from those in adjacent patterns.
Later, we verify that the human-performed grooves mostly satisfy these criteria.
Figure 3: The drumroll representation of an example drum sheet music. Each measure corresponds to 16 lines of text, where each line corresponds to a 16th note. Each line contains 6 characters corresponding to 6 drums, where ‘o’ and ‘-’ denote whether each drum is hit.
In a drum performance represented as a drumroll, each measure is represented by a string of 16 lines (Figure 3). To classify each measure as either a pattern or a fill, we take a sliding window of size 3 centered at some measure \(m_{i}\) and calculate the edit distances between this measure and its two neighbors. The minimum of these two distances is referred to as the _variation_ of the central measure:
\[\textit{variation}(m_{i})=\min(dist(m_{i},m_{i-1}),dist(m_{i},m_{i+1}))\]
Intuitively, the variation of a measure in a pattern would be small, while that in a fill would be large. Therefore, in a good drum groove, the variation of all measures (except the first and the last) can be expected to be clearly separated.
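A small sketch of this computation is given below; the paper does not specify the granularity of the edit distance, so we compute it here over the 16 text lines of each measure.

```python
def edit_distance(a, b):
    # standard Levenshtein distance over two sequences (here: the 16 lines of a measure)
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def variation(measures, i):
    """Minimum edit distance of measure i to its two neighbors (first/last measures excluded)."""
    return min(edit_distance(measures[i], measures[i - 1]),
               edit_distance(measures[i], measures[i + 1]))
```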
Next, we plot the variation of each measure sequentially for each drum groove in the development set and observe consistent patterns. Eight randomly chosen grooves of different styles are shown in Figure 4. Intuitively, for human performances, the variation of a measure in a pattern should be small, while that in a fill should be large. This is indeed the case for most examples, where human-played patterns are consistent but with some variation (plateaus), while fills are largely different (spikes). As expected, the random baseline results in high variation across all measures, while the repeat baseline results in no variation at all - both are undesirable.
Upon qualitative examination, the two GPT3 models of different sizes can clearly generate nontrivial drum grooves, with many plateaus and occasional spikes, though fewer spikes than those performed by humans. Quantitatively, we calculate the average variation of all measures in all grooves. As shown in Table 2, drum grooves generated by GPT3 models tend to vary less than human ones. To see whether the generated patterns have less variation while the fills have much more, we perform K-means to separate the measures in a drum groove into two clusters by their variation. We then calculate the average distance between the two centroids (intra-centroids), and the average distance between each measure and the centroid it is assigned to (inter-centroids). As shown in Table 2, the intra-centroid distance shows that the grooves performed by humans have a much clearer-cut pattern-versus-fill separation than those generated by GPT3, which in turn are clearer than the random baseline. The inter-centroid distance shows that the spread of variation within the pattern or fill class is more pronounced in human-performed grooves than in GPT3-generated ones.
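The sketch below reproduces this clustering step under our reading of the procedure: a two-cluster K-means on the per-measure variation values, reporting the centroid gap and the within-cluster spread under the paper's "intra-" and "inter-centroids" naming.

```python
import numpy as np
from sklearn.cluster import KMeans

def pattern_fill_stats(variations):
    """variations: per-measure variation values of one groove (first/last measure excluded)."""
    v = np.asarray(variations, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(v)
    centers = km.cluster_centers_.ravel()
    centroid_gap = abs(centers[0] - centers[1])              # "intra-centroids" in Table 2
    spread = np.abs(v.ravel() - centers[km.labels_]).mean()  # "inter-centroids" in Table 2
    return v.mean(), centroid_gap, spread
```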
### Subjective Evaluation
Our objective evaluation is clearly informative but insufficient. Hence, we conduct a listening test and perform an error analysis.
| | human | random | repeat | Davinci | Ada |
| --- | --- | --- | --- | --- | --- |
| avg. variation | 5.1 | 40.4 | 0 | 3.0 | 3.4 |
| avg. intra-centroids | 10.1 | 5.7 | 0 | 6.9 | 7.2 |
| avg. inter-centroids | 1.4 | 1.2 | 0 | 0.6 | 0.8 |
Table 2: Quantitative statistics of drum grooves produced by different means. The distance between cluster centroids suggests how dissimilar the patterns and the fills are. The distance between a cluster centroid and a measure suggests how much the measures within one class vary.
Figure 4: The variation (y-axis) of each measure (x-axis) in 8 randomly sampled drum grooves from each style.
| | human | random | repeat | Davinci | Ada |
| --- | --- | --- | --- | --- | --- |
| Repetitive | 0 | 0 | 35 | 3 | 7 |
| Consistent | 32 | 0 | 0 | 29 | 15 |
| Chaotic | 3 | 35 | 0 | 3 | 13 |
| Has fill | 30 | 0 | 0 | 13 | 10 |
| Avg. length | 13.3 | 16 | 16 | 13.7 | 12.5 |
Table 3: The number of drum grooves produced by each model in the test set that are judged to satisfy each criterion.
The error analysis is based on the following criteria:
* Is the groove repetitive, meaning there is little or no variation among measures?
* Is the groove consistent, meaning there is some variation among measures but a steady rhythmic idea (specifically, the back-beat placement) can be followed?
* Is the groove chaotic, meaning there is either too much variation, or a lack of a clear rhythmic idea?
* Does the groove contain any reasonable drum fill?
While scaling up this analysis rigorously with carefully chosen subjects is left for future work, our own judgements are shown in Table 3 as preliminary findings. Concretely, all drum grooves produced via the different means are shuffled and randomly presented to one of the authors, who has had years of training in drumming. Naturally, all randomly generated grooves are judged as chaotic without any consistent motif, while all repeated ones are by nature repetitive. Of the grooves performed by humans, most are judged as consistent and most include at least one fill that is sufficiently different from the rest of the rhythmic patterns. In comparison, grooves generated by GPT3 Davinci are only slightly less consistent, with desirable variation among measures, but significantly fewer of them contain any fills, rendering the grooves less interesting and more predictable overall. For the smaller GPT3 Ada model, the observation holds to a larger extent, with more inconsistent grooves and fewer fills.
### Case Study and Takeaways
We claim that the LLMs demonstrate an impressive ability to write good drum grooves given only hundreds of training examples, without any knowledge of which features to pay attention to. From our objective and subjective evaluations above, we have observed that both LLMs and humans are able to compose drum grooves with structure and some variation. Why, however, are the models' drum compositions still worse than humans'? We postulate several factors.
We observe that a common source of the lack of musicality in machine-generated drum grooves stems from the misplacement of back-beats, which are steadily accented beats in a measure, usually the 2nd and the 4th in 4/4 in many genres of music. Human drummers, when playing variations, tend to respect the back-beat placements, while models tend to disregard this concept. Another source of the lack of musicality is the lack of drum fills. While human drummers often inherently think about the cycle between patterns and fills to give a captivating performance, models have yet to realize the importance of those occasional deviations.
To qualitatively examine these claims, we showcase two sets of drum grooves composed by GPT3, one showcasing its strengths and one its weaknesses, along with the human-performed grooves sharing the same first 2 measures.
A positive example is shown in Figure 5. Clearly, the grooves produced by human and GPT3 alike contain consistent patterns interspersed with fills, a prototypical structure that is expected and desired by most audiences. Closely examining the pattern groupings, both grooves vary the placement
Figure 5: The sheet music of a satisfactory drum groove generated by GPT3 (above), juxtaposed with the groove with the same first 2 measures played by a human from the dataset (below). We annotate each measure as either a pattern or a fill. For example, pattern\(x.a\) and pattern\(x.b\) are adjacent, have the same accented back-beats, but are not identical.
of the bass drum and the snare drum, known as syncopation, while keeping the back-beats at the 2nd and 4th quarter notes intact. As in a typical drum groove, both grooves consistently place a snare drum hit on the back-beats in all patterns (with two exceptions in the human's pattern 3.1 and 3.2). Such composition is highly desirable, as the steady back-beat placement gives the listener a solid rhythmic foundation, while the variations keep the groove interesting and natural. Besides the ability to conform to human drummers' standard practice, this example also showcases LLMs' ability to be creative. While the human-performed groove closely adheres to the template of 3 measures of patterns followed by 1 measure of fill, the GPT3-generated groove does not. Instead, it employs multi-measure fills and, more interestingly, groupings of an odd number of patterns followed by fills, which is uncommon in the genre.
Two negative examples are shown in Figure 6, representing two common problems of GPT3-generated drum grooves. The first case is referred to as _displaced measures_. When human drummers play a pattern, they often fit each measure with some rhythmic idea. In other words, the boundaries between two measures are often clear. In our drumroll representation, each measure, 16 lines of text, is followed by a line of 'SEP' to denote such a boundary. However, GPT3 Ada sometimes fails to respect these boundaries and displaces the measures, thus producing more "chaotic" patterns, as reported in Table 3. In comparison, GPT3 Davinci makes far fewer such mistakes. The second case demonstrates LLMs' known propensity to repeat generations. In the given example, GPT3 Davinci generates 14 identical measures, while a human drummer would "sneak in" minor variations while maintaining the motif. It is worth noting that the repetition of drum grooves is accepted in some music genres such as pop, dance, or rock, but frowned upon in most others, as repeated grooves are often thought to be mechanical. For drum grooves generated by LLMs, repetition is thus a trait to be avoided.
As of now, we have taken a model-free approach where the LLMs are only trained on drum groove data without any additional information or priors. To reinforce the strengths and alleviate the weaknesses discussed above, we suggest that future work take a modular approach instead of tackling the task in an end-to-end fashion. For example, the patterns and fills can be generated from different distributions or by different models; repetition can be explicitly discouraged by over-generating each measure and performing some voting or selection. Furthermore, it is possible to control the variation within a drum groove by injecting additional labels into the training data to condition the LLM on.
### Improvisation or Recitation?
So far, we have hinted at LLMs' ability to be creative with regard to drum composition. Here, we pose one additional question: are the LLMs really _creating_ drum grooves that they have never seen during finetuning, or are they simply _regurgitating_ what they have already seen?
Figure 6: The sheet music of two problematic drum grooves generated by GPT3 (the 1st and 3rd), juxtaposed with those performed by human (the 2nd and 4th). In the first case, GPT3 generates an extra note at the beginning of the fourth measure, displacing all grooves afterwards. In the second case, GPT3 generates strictly repeated measures; the rest are omitted.
We answer this question by calculating how often each measure generated on the test set appears in the training set. As shown in Figure 7, only a small portion of generated measures are duplicates of measures seen during finetuning, suggesting LLMs' ability to compose novel and unseen drum grooves.
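This novelty check can be computed as in the short sketch below, which counts how often each generated measure occurs verbatim in the training grooves (helper names are ours).

```python
from collections import Counter

def novelty_histogram(generated_grooves, training_grooves):
    """Histogram of how often each generated measure appears verbatim in the training set."""
    train_measures = Counter(m for g in training_grooves for m in g.split("\nSEP\n"))
    counts = [train_measures[m] for g in generated_grooves for m in g.split("\nSEP\n")]
    return Counter(counts)   # key 0 = never seen during finetuning, i.e. a novel measure
```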
## Related Work
**Automatic music generation** has a long history, and recent work has focused on using artificial intelligence Kaliakatsos-Papakostas et al. (2020), and specifically deep neural networks. Such efforts are driven not only by the prospect of having AI aid or replace human composers and arrangers for a variety of music, but also by the pursuit of probing the artistic creativity of state-of-the-art data-driven models. Most modern work in automatic music generation leverages model architectures that have been shown to be effective in computer vision or NLP, such as LSTMs Lyu et al. (2015) and Transformers Huang et al. (2018); Zeng et al. (2021); Yu et al. (2022), or custom architectures that learn an embedding space Liang et al. (2020). However, the majority of the work on music generation with AI has happened in a supervised setting, which greatly limits its application to lesser-known genres or specific instruments, such as drum generation. While there is yet to be a pre-trained music generation model as versatile as its NLP counterparts such as GPT3, we believe the exploration of language-to-music transfer is necessary but remains uncharted.
**Automatic drum generation** has a small body of existing work. Similar to music generation in general, the drum generation task can be posed in various settings. In the simplest setting, only one measure of drum pattern (also known as a "beat") is generated and is supposed to repeat throughout a song Vogl and Knees (2017); Bruford et al. (2020); Tikhonov and Yamshchikov (2021), while we focus on the more involved setting of non-repetitive drum composition. Alternatively, some work considers only the general rhythm Lattner and Grachten (2019), but not the orchestration of different drums that we emphasize. In more practical settings, a long sequence of drum composition is generated conditioned on musical signals such as basslines Makris et al. (2017, 2019). While this line of work is most similar to ours, which so far only deals with drum solo performance, all the work above has used drum composition data of limited size and variation as well as models (such as LSTMs) that are relatively outdated in the AI community. Nevertheless, a direct comparison would be beneficial in future work. Less related is another line of work that focuses on the microtiming and humanization of drum performance Gillick et al. (2019); Burloiu and Unatc (2020); Burloiu (2020), striving to mimic humans' expressive imperfections.
A small body of work has focused on rhythm games Donahue et al. (2017); Liang et al. (2019). While rhythm games and drumming have certain similarities, such as the focus on note placement with regard to the rhythm, their core difference lies in that the choreography of rhythm games is optimized for difficulty and playability, while the composition of drums is optimized for musicality. Due to this fundamental difference in motivation, we consider this line of work to be mostly irrelevant to ours.
**Large language models** (LLMs) are deep neural models, such as Transformers, pre-trained on a massive text corpus. For example, GPT3 is pre-trained on a compilation of Common Crawl, containing large amounts of text from the world wide web, publicly available books, Wikipedia, and so on. From BERT Devlin et al. (2019) to GPT3 Brown et al. (2020), LLMs have been dominant in most tasks and applications in NLP. While much about how LLMs work is unknown, and LLMs are thus notoriously known as black boxes, there is a general consensus that LLMs' power can be attributed to the large size of both the pre-training data and the model, which gives rise to LLMs' ability to effectively adapt to low-resource domains via transfer learning by being finetuned on a small amount of data. Interestingly, a few recent works have found that some of these abilities include transfer from pre-trained language to non-language tasks, such as chess Stockl (2021). While non-language textual representations such as chess moves or music charts are superficially similar to natural language, each manifests vastly different structures, and so we claim that transfer learning between them is well worth studying.
## Conclusion and Future Work
Our preliminary findings show that pre-trained large language models (LLMs) finetuned with merely hundreds of symbolic music files, such as drum grooves, can learn to generate music non-trivially. We also provide evidence that such ability can be attributed to both the model size and the presence of language pre-training. We hope this observation inspires research efforts not only in low-resource music generation, but also in exploring the extraordinary potential of LLMs. We also attempt to pioneer the automatic evaluation of drum grooves, which we hope will facilitate future work on drum generation with AI.
We also briefly discuss our plans for future work. While drum generation work is scarce in the literature and challenging to reproduce, we will still strive to compare against some existing specialized models. While our current drumroll representation ignores velocity, such information can easily be encoded by replacing the marker 'o' with the velocity value; however, the effect of doing so remains to be explored. Our methodology might be ported to other instruments such as piano, which we plan to explore, whereas it would be more involved to tackle multi-instrument conditioned generation.
Figure 7: The count (y-axis) of occurrences (x-axis) of each generated measure appearing in the training set. |
2310.17041 | On Surgical Fine-tuning for Language Encoders | Fine-tuning all the layers of a pre-trained neural language encoder (either
using all the parameters or using parameter-efficient methods) is often the
de-facto way of adapting it to a new task. We show evidence that for different
downstream language tasks, fine-tuning only a subset of layers is sufficient to
obtain performance that is close to and often better than fine-tuning all the
layers in the language encoder. We propose an efficient metric based on the
diagonal of the Fisher information matrix (FIM score), to select the candidate
layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE
tasks and across distinct language encoders, that this metric can effectively
select layers leading to a strong downstream performance. Our work highlights
that task-specific information corresponding to a given downstream task is
often localized within a few layers, and tuning only those is sufficient for
strong performance. Additionally, we demonstrate the robustness of the FIM
score to rank layers in a manner that remains constant during the optimization
process. | Abhilasha Lodha, Gayatri Belapurkar, Saloni Chalkapurkar, Yuanming Tao, Reshmi Ghosh, Samyadeep Basu, Dmitrii Petrov, Soundararajan Srinivasan | 2023-10-25T22:42:30Z | http://arxiv.org/abs/2310.17041v1 | # On Surgical Fine-tuning for Language Encoders
###### Abstract
Fine-tuning all the layers of a pre-trained neural language encoder (either using all the parameters or using parameter-efficient methods) is often the de-facto way of adapting it to a new task. We show evidence that for different downstream language tasks, fine-tuning only a subset of layers is sufficient to obtain performance that is close to and often better than fine-tuning all the layers in the language encoder. We propose an efficient metric based on the diagonal of the Fisher information matrix (FIM score), to select the candidate layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE tasks and across distinct language encoders, that this metric can effectively select layers leading to a strong downstream performance. Our work highlights that task-specific information corresponding to a given downstream task is often localized within a few layers, and tuning only those is sufficient for strong performance1. Additionally, we demonstrate the robustness of the FIM score to rank layers in a manner that remains constant during the optimization process.
Footnote 1: Our code is publicly available at: Github
## 1 Introduction
Fine-tuning of language encoders is a crucial step towards applying natural language processing solutions to real-world challenges. It allows a model generally trained on source data to adapt its knowledge to a target distribution, but requires the curation of an adequately sized labelled dataset to gain 'new' knowledge while retaining information obtained during the pre-training phase.
Although adapting to the target distribution by tuning the entire model can yield impressive results, doing so can be expensive and may increase data volume requirements. Additionally, fine-tuning all layers arbitrarily might risk overfitting or adversely affect the generalization ability of the model during the transfer learning process. While the focus has recently shifted to the development of parameter-efficient approaches for fine-tuning large language models and language encoders Liu et al. (2022); Lialin et al. (2023); Han et al. (2021), these techniques still require the development of an 'adapter' architecture relative to a target dataset. We therefore focus on developing a data-driven criterion to automatically identify and tune a smaller subset of layers using only \(\approx 100\) target data samples.
In this paper, we propose a simple strategy to select layers for fine-tuning on real-world NLP tasks, leveraging the Fisher Information Matrix (FIM) score, which quantifies the impact of parameter changes on a model's predictions. We demonstrate the effectiveness of the FIM score for language encoder models on practical NLP tasks from the GLUE and SuperGLUE benchmarks, by identifying the subset of layers that is most informative for adapting to the target data distribution. We find that fine-tuning only the parameters in the subset of layers identified by the FIM score outperforms full-model fine-tuning on some tasks, and yields performance comparable to full fine-tuning on almost all GLUE and SuperGLUE tasks. In niche scenarios where the FIM score selects layers that lead to sub-optimal performance compared with full-model fine-tuning, we investigate the nuanced characteristics of the GLUE and SuperGLUE tasks through the lens of the linguistic features learned during transfer learning and the categories of target-data distribution shift that could influence performance under surgical fine-tuning.
Interestingly, we find that GLUE and SuperGLUE tasks that depend on a simpler understanding of linguistic features, such as syntax and semantics as well as discourse understanding, can be surgically fine-tuned using our proposed
FIM score criteria. However, we find that for tasks that rely on learning more complex knowledge of both high- and low-level linguistic features, such as textual entailment, common sense and world knowledge, the FIM score criteria underperform in selecting the relevant layers. On investigating the categories of target distribution shift that could surface in various GLUE/SuperGLUE tasks, we also find that the FIM score enables efficient tuning of parameters for the group of tasks that align closely with the concepts of domain, environmental, and demographic shifts, but fails to perform optimally on tasks that require learning temporal drifts in language.
## 2 Related Work
Surgical fine-tuning has been widely explored in various computer vision applications to identify definitive distribution shifts in target datasets. Lee et al. (2023) explains why surgical fine-tuning could match or outperform full fine-tuning on distribution shifts and proposes methods to efficiently select layers for fine-tuning. However, in natural language applications, it is challenging to define such delineations due to the rapid changes in language based on context, domain, and individuals.
Several attempts have been made to conduct Parameter-Efficient Fine-Tuning (PEFT) in natural language. Lialin et al. (2023) and Han et al. (2021) explore the landscape of utilizing adapter layers and soft prompts, including additive methods, selective methods, and reparameterization-based techniques, whereas He et al. (2022) applies network pruning to develop a pruned (sparse) adapter. In another body of work, Sung et al. (2021) constructed a FISH (Fisher-Induced Sparse uncHanging) mask to choose the parameters with the largest Fisher information. Additionally, Hu et al. (2021) attempted efficient fine-tuning by proposing a low-rank adapter that reduces trainable parameters by freezing the weights of the pre-trained model and injecting trainable rank-decomposition matrices into each layer of the architecture.
Some fine tuning techniques involve fine-tuning a small subset of the model parameters. Sung et al. (2022) propose a way of reducing memory requirement by introducing Ladder Side Tuning (LST). A small 'ladder side' network connected to each of the layers of the pre-trained model is trained to make predictions by taking intermediate activations as input from the layers via shortcut connections called ladders. Liu et al. (2022) demonstrated the advantages of few-shot parameter-efficient fine-tuning over in-context learning in terms of effectiveness and cost-efficiency. Additionally, techniques like prompt tuning are also considered as parameter-efficient fine-tuning methods.
A handful of studies have investigated the knowledge gained during the fine-tuning process for language encoders, particularly BERT. Merchant et al. (2020) and Hessel and Schofield (2021) investigated the impact of shuffling the order of input tokens on the performance of the BERT model for several language understanding tasks. Sinha et al. (2021) further investigates the effectiveness of masked language modeling (MLM) pre-training and suggests that MLMs achieve high accuracy on downstream tasks primarily due to their ability to model distributional information.
However, our approach of efficient fine-tuning using the proposed FIM score criteria (able to capture signals from \(\approx 100\) target data samples) differs from all existing methods, as it focuses on helping NLP practitioners with small target datasets to efficiently rank and select important layers for optimizing the fine-tuning process.
## 3 Proposed Method
### Fisher Information Matrix score
The significance of a parameter can be assessed by examining how modifications to the parameter affect the model's output. We denote the output distribution over \(y\) generated by a model with a parameter vector \(\theta\in\mathbb{R}^{|\theta|}\) given input \(x\) as \(p_{\theta}(y|x)\). To quantify the impact of parameter changes on a model's prediction, one approach is to compute the Fisher Information Matrix (FIM), which is represented by equation 1:
\[F_{\theta}=\mathbb{E}_{x\sim p(x)}\left[\mathbb{E}_{y\sim p_{\theta}(y\mid x)}\left[\nabla_{\theta}\log p_{\theta}(y\mid x)\,\nabla_{\theta}\log p_{\theta}(y\mid x)^{\mathrm{T}}\right]\right] \tag{1}\]
where \(F_{\theta}\) is the FIM for the model with parameter vector \(\theta\), quantifying the impact of parameter changes on the model's prediction; \(\mathbb{E}_{x\sim p(x)}\) is the expectation operator over \(x\) drawn from the distribution \(p(x)\); \(\mathbb{E}_{y\sim p_{\theta}(y\mid x)}\) is the expectation operator over \(y\) drawn from the output distribution \(p_{\theta}(y\mid x)\); \(\nabla_{\theta}\) is the gradient operator with respect to the parameter vector \(\theta\); \(\log p_{\theta}(y\mid x)\) is the logarithm of the conditional probability of \(y\) given \(x\) under the model with parameter vector \(\theta\); and \(\nabla_{\theta}\log p_{\theta}(y\mid x)\nabla_{\theta}\log p_{\theta}(y\mid x)^{\mathrm{T}}\) is the outer product of the gradients, used to compute the FIM.
To analyze the impact of individual layers, we aggregate the diagonal elements of the FIM using the Frobenius norm. In our experimental setup, we randomly select a small sample (100 examples) from the validation set for each task. For fine-tuning, we specifically choose the top 5 layers with the highest FIM scores. The FIM score measures the amount of information provided by an observable random variable about an unknown parameter in its distribution. It reflects the sensitivity of the likelihood function to changes in parameter values. A higher Fisher information score indicates that more information can be gleaned from the data regarding the parameter, leading to a more precise estimate. In essence, a higher score suggests that the likelihood function is more responsive to changes in parameter values, improving the precision of parameter estimation.
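A minimal PyTorch sketch of this scoring procedure is given below. It assumes a Hugging Face-style classification model whose forward pass returns a `.loss`, and it approximates the inner expectation with the observed labels, a common empirical simplification; the function name and layer-name parsing are ours, not the paper's released code.

```python
import torch

def layer_fim_scores(model, dataloader, device="cpu"):
    """Accumulate squared gradients of the log-likelihood (the FIM diagonal) per parameter,
    then aggregate per encoder layer with the Frobenius norm and rank the layers."""
    model.to(device).eval()
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    n_batches = 0
    for batch in dataloader:                # ~100 target samples suffice in this setup
        batch = {k: v.to(device) for k, v in batch.items()}
        model.zero_grad()
        loss = model(**batch).loss          # -log p_theta(y|x) for the observed labels
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2
        n_batches += 1
    scores = {}
    for n, g in fim.items():
        if ".layer." in n:                  # e.g. "bert.encoder.layer.7.attention..."
            layer = int(n.split(".layer.")[1].split(".")[0])
            scores[layer] = scores.get(layer, 0.0) + torch.linalg.norm(g / n_batches).item()
    return sorted(scores, key=scores.get, reverse=True)   # layer indices, highest FIM first
```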
## 4 Layer-wise Fisher Information Score Does Not Change During Fine-tuning
In Fig. 2, we compute the rank of distinct layers using the Fisher Information score across the fine-tuning process of BERT at different epochs. Across tasks including WSC and WiC, we find that the ranks of the different layers remain more or less consistent across the entire optimization trajectory during fine-tuning. This shows that the layers which are important for a given task can indeed be selected even before fine-tuning starts, once pre-training is done. Using this observation, in the next section we show the effectiveness of fine-tuning _only_ the layers selected using the Fisher score at the start of the fine-tuning step.
## 5 Experiments and Results
### Experimental Setup
We applied the FIM score criteria to identify the 'layer importance rankings' for BERT3 across real-world NLP tasks from GLUE and SuperGLUE (more details on the experimental setup and hyperparameters in Appendix A.1). Based on these identified layer rankings, we performed surgical fine-tuning and iteratively tuned the parameters of the top 1 to 5 most important layers, in the ranked order determined by the FIM score, to compare and contrast the performance against full-model fine-tuning.
Footnote 3: We also validated the effectiveness of FIM with RoBERTa on some tasks to understand the effectiveness of FIM scores across language encoders; results are in Table 1 and Table 11.
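As an illustration of the surgical step itself, the sketch below freezes every parameter except the encoder layers selected by the FIM ranking (and the task head); the parameter-name patterns follow Hugging Face BERT checkpoints and are an assumption, not part of the paper.

```python
def surgical_freeze(model, layers_to_tune):
    """Enable gradients only for the selected encoder layers and the classification head."""
    keep = tuple(f"encoder.layer.{i}." for i in layers_to_tune)
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in keep) or name.startswith("classifier")

# e.g., tune only the top-5 layers by FIM score:
# surgical_freeze(model, layer_fim_scores(model, small_target_loader)[:5])
```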
Furthermore, to comprehensively understand scenarios where the FIM score leads to sub-optimal identification of layer ranks, we investigate the sensitivity of GLUE/SuperGLUE tasks (representing sentiment analysis, paraphrase detection, NLI, question answering, linguistic acceptability, commonsense reasoning, etc.) with respect to four possible categories of data distribution shift, namely:
**Domain shift:** Comparable shift from source data in target data distribution due to differences in fields or areas of knowledge.
**Environmental shift:** Changes from source data in target data distribution due to differences in contexts.
**Temporal drift:** Changes in use of certain language entities over time.
**Demographic shift:** Changes in data distribution across different demographic groups.
Additionally, we also qualitatively investigate influence of six primary linguistic features that are possibly influenced during the fine-tuning process depending on the task, namely Semantic Understanding, Discourse Understanding, Syntactic Understanding, Co-reference Resolution and Pronoun Disambiguation, Commonsense and World Knowledge, and Textual Entailment and Contradiction (for more details, refer to Appendix A.2).
### Does surgical fine-tuning work across NLP tasks?
Our objective was to empirically analyze the performance of the surgical fine-tuning approach leveraging FIM on real-world NLP tasks against full-model fine-tuning. Results in Figure 1 (synthesized from Table 4 and Table 5) show that for GLUE and SuperGLUE tasks, surgical fine-tuning of the identified most important layers results in performance comparable to, and sometimes better than, tuning all parameters in all layers of the BERT-base-cased model on the target data distribution. Furthermore, we find that by selectively fine-tuning the most relevant layer(s), as identified by FIM, the resulting performance on (_almost all_) GLUE and SuperGLUE tasks is within \(\pm 5\%\) of the full fine-tuning performance.
We also discover that the layer importance ranking identified through FIM differs across settings, depending on the nature of the task from the GLUE and SuperGLUE benchmarks.
### Sensitivity of localized knowledge gain
For some tasks (RTE, STSB, CB, and COPA), the layers selected by the FIM score under-performed in the surgical fine-tuning approach. Thus, we investigate the overall effectiveness of FIM for real-world NLP tasks through the lens of differences in the learned linguistic features and possible distributional shifts in the target data.
#### 5.3.1 Effect of linguistic features
Across the GLUE and SuperGLUE benchmarks, we observed that tasks requiring localized linguistic knowledge, such as discourse understanding (MNLI, MRPC, WNLI, WSC, and MultiRC), syntactic understanding (CoLA), semantic understanding, and commonsense/world knowledge (SST-2, QNLI), can be effectively fine-tuned with fewer localized parameters identified through ranked layer importance from the FIM scores.
However, for tasks that involve textual entailment (RTE) and require a strong grasp of common sense/world knowledge (COPA), as well as tasks focusing on understanding propositions (CB), surgically fine-tuning the model using FIM rankings resulted in sub-optimal performance. These tasks rely heavily on semantic understanding, logical reasoning, and the ability to integrate contextual information. Fine-tuning only a subset of layers based on FIM rankings may not adequately capture the necessary information and intricate relationships between linguistic elements, leading to decreased performance on these complex tasks.
We hypothesize that complex tasks such as RTE, COPA and CB, require a holistic understanding of language and reasoning abilities that span across multiple layers in the model. Consequently, selectively fine-tuning based solely on localized knowledge gain identified by FIM scores may not be sufficient to achieve optimal performance.
#### 5.3.2 Effect of target data distributional shifts
We also investigate the effectiveness of FIM in suggesting the appropriate layer importance ranks that maximize the localization of knowledge while adapting to proposed categories of distributional shifts in target GLUE/SuperGLUE tasks.
When categorizing tasks based on their sensitivity to distribution shifts, it becomes evident that the MNLI and MRPC tasks primarily revolve around the comprehension of semantic relationships within sentence pairs. These tasks exhibit a high sensitivity to shifts in the domain of discourse, as opposed to temporal or environmental variations. Conversely, tasks such as SST-2, CoLA, and QNLI heavily rely on contextual information for sentiment analysis, linguistic acceptability, and question answering, respectively. Consequently, these tasks are inclined to be influenced by environmental shifts relative to the training data (for BERT-base) originating from Wikipedia and BookCorpus. Furthermore, the STSB and RTE tasks exhibit notable changes in the target data distribution over time, since language use and references can change.
When comparing surgical fine-tuning with full fine-tuning in Figure 1, we observe that BoolQ and MRPC outperform the full model fine-tuning, while tasks such as QNLI, CoLA, MNLI, WSC, WiC, and MultiRC yield comparable performance. In contrast, RTE and STSB underperform in the surgical fine-tuning process. This indicates that our proposed approach of utilizing FIM to identify layer importance ranks works well in cases of domain and environmental shifts but fails to adapt to temporal drifts.
Figure 1: Plot of relative performance, i.e., the percentage-point difference between the performance of surgical fine-tuning and full-model fine-tuning, across GLUE and SuperGLUE tasks in two runs. Fine-tuning the parameters in the ranked important layer(s) can outperform full fine-tuning, which is of significant importance, and on almost all GLUE and SuperGLUE tasks it results in relative performance within \(\pm 5\%\). Only for RTE, CB, and COPA (which showed no change) do the layers selected using FIM scores lead to sub-optimal results.
### Ranking layers using FIM score vs. optimization trajectory
Upon investigating the efficacy of our proposed approach even further, we observed that the ranking of layers for surgical fine-tuning determined through FIM scores for SuperGLUE (Figure 2) and GLUE (Figure 3) tasks remains constant across various checkpoints of the optimization trajectory.
In particular, we investigate the rankings at epochs 0, 2, 5, 8, and 10 and observe that for SuperGLUE and GLUE tasks, the deviation in rankings is almost negligible (deviation plots in Appendix A.3), and in some tasks like CB, WNLI, and RTE, the trajectory is identical. Thus, the arrangement of layers selected by the FIM score remains unchanged for the task at hand as the fine-tuning process progresses.
## 6 Conclusion
This paper contributes to the growing body of work demonstrating that selective fine-tuning of language models is not only efficient but also effective on many downstream tasks. Summarizing our contributions, we show that selecting layers for fine-tuning based on their FIM-score ranking gives strong results on a majority of GLUE and SuperGLUE tasks, and could thus help NLP practitioners with small datasets efficiently select a subset of relevant layers for optimized fine-tuning on many real-world natural language tasks. In future work, we plan to investigate the linguistic correlates of different layers in large language models (LLMs) and the value of FIM in surfacing them.
## Limitations and Future Work
The FIM score criteria proposed in this paper shows promising results on several GLUE and SuperGLUE tasks with language encoders. However, additional experiments are needed on some of the recent very large parameter models that perform well in zero-shot settings.
In addition, we plan to extend our evaluations and compare our method with existing solutions, such as Low-Rank Adaption (LoRA), to quantify the benefits of our approach.
## Acknowledgements
This work was supported by GPU resources from the University of Massachusetts, Amherst, and Microsoft Corporation, New England Research and Development Center.
|
2308.02160 | Speaker Diarization of Scripted Audiovisual Content | The media localization industry usually requires a verbatim script of the
final film or TV production in order to create subtitles or dubbing scripts in
a foreign language. In particular, the verbatim script (i.e. as-broadcast
script) must be structured into a sequence of dialogue lines each including
time codes, speaker name and transcript. Current speech recognition technology
alleviates the transcription step. However, state-of-the-art speaker
diarization models still fall short on TV shows for two main reasons: (i) their
inability to track a large number of speakers, (ii) their low accuracy in
detecting frequent speaker changes. To mitigate this problem, we present a
novel approach to leverage production scripts used during the shooting process,
to extract pseudo-labeled data for the speaker diarization task. We propose a
novel semi-supervised approach and demonstrate improvements of 51.7% relative
to two unsupervised baseline models on our metrics on a 66 show test set. | Yogesh Virkar, Brian Thompson, Rohit Paturi, Sundararajan Srinivasan, Marcello Federico | 2023-08-04T06:37:34Z | http://arxiv.org/abs/2308.02160v1 | # Speaker diarization of scripted audiovisual content
###### Abstract
The media localization industry usually requires a verbatim script of the final film or TV production in order to create sub-titles or dubbing scripts in a foreign language. In particular, the verbatim script (i.e. as-broadcast script) must be structured into a sequence of dialogue lines each including time codes, speaker name and transcript. Current speech recognition technology alleviates the transcription step. However, state-of-the-art speaker diarization models still fall short on TV shows for two main reasons: (i) their inability to track a large number of speakers, (ii) their low accuracy in detecting frequent speaker changes. To mitigate this problem, we present a novel approach to leverage production scripts used during the shooting process, to extract pseudo-labeled data for the speaker diarization task. We propose a novel semi-supervised approach and demonstrate improvements of 51.7% relative to two unsupervised baseline models on our metrics on a 66 show test set.
Yogesh Virkar, Brian Thompson, Rohit Paturi, Sundararajan Srinivasan, Marcello Federico AWS AI Labs
[email protected]
**Index Terms**: speaker diarization, spectral clustering, constrained k-means, media localization
## 1 Introduction & related work
Media localization is the process of adapting audiovisual content across languages and cultures to reach international audiences. It is a very labor-intensive process [1, 2], whose complexity and cost depend, among other factors, mainly on the chosen localization modality (subtitling, voiceover or dubbing) and the required quality. Among the initial localization steps is the creation of a so-called as-broadcast script, which mainly consists of a verbatim transcript of the audio, structured into dialogue lines in paragraph form, each annotated with character (speaker)1 name and timing information. In the media and entertainment industry, a significant amount of localization deals with scripted content, such as movies, television series, and documentaries. Recent progress in speech technology has contributed to reducing the labor costs of the transcription process by providing drafts that can be post-edited much faster than transcribing from scratch. However, there is much room to improve on segmenting and labeling the transcript with speaker and timing information. This process falls under the scope of speaker diarization technology, which addresses the question of "who spoke when" inside a given audio file or stream.
Footnote 1: To conform with naming conventions in the speech community, we will henceforth refer to speaker and character interchangeably, although there is a clear distinction between the two concepts.
Despite the significant progress on speaker diarization using end-to-end neural diarization models [3, 4, 5, 6], clustering-based approaches based on speaker embeddings are still the most popular for handling long audios with more than 4 speakers [7]. However, as we show, conventional clustering-based techniques, even using state-of-the-art speaker embeddings such as ECAPA-TDNN [8] and ResNet [9] architectures, still fall short of delivering useful speaker diarization performance for media localization. This is for two main reasons: the high number of speakers that need to be tracked inside a movie or TV show, and the required precision in detecting speaker changes.
In this work, we investigate methods for improving speaker diarization of audiovisual content for which we assume a production script is available. Briefly, a production script is a version of the screenplay used during the production of the show. It guides the director and actors while performing or shooting, but can be subject to many changes during production: dialogue lines can be deleted, changed or moved to a different position. The closest related literature is target-speaker voice activity detection (TS-VAD) [10], where we are given (or infer) voice samples for each speaker and would like to detect whenever these speakers are present in the audio. However, our work is different in that the speaker labels from the production scripts are noisy and may not cover all speakers in the show, and we want to leverage the power of speaker embeddings instead of training a TS-VAD model from scratch.
In the context of creating an as-broadcast script for the post-production of a movie or a TV show, we frame speaker diarization
Figure 1: Illustrative example showing the difference between as-broadcast and production scripts.
as the process of obtaining character names with corresponding timing information for each dialogue line. Inputs of this process [11] include the final-cut video, the audio stem file (a clean version of the final audio mix with no background music or sound effects), and the production script. As-broadcast scripts are verbatim transcripts of the final edited version of the production and are often used to produce subtitles and translations for dubbing. The as-broadcast script can be seen as a revised and enriched version of the production script. Figure 1 shows an example of a production script and the corresponding as-broadcast script. As shown, the as-broadcast script also includes time codes and annotations of sound effects and on-screen graphics. The core part, in common with the production script, is the arrangement of the transcript into dialogue lines with the corresponding speaker. However, on our data, we empirically measured the average exact match of dialogue lines between as-broadcast and production scripts to be only around 10%.
Constrained spectral clustering has been used in the context of speaker diarization [12, 13], but these constraints relate to speaker _turns_, that is, pairs of frames that can or cannot come from the same speaker. In contrast, in our work we locate sub-segments where we can determine the speaker a priori with high confidence, and our constraint is the actual speaker label.
Prior works have also considered using a two-step clustering method of TV shows [14] to first cluster speakers within each scene and then combine speakers from multiple scenes. While it can help to perform multi-modal diarization using lexical [15, 16], visual [17, 18, 19, 20, 21, 22, 23, 24] or longitudinal [25, 26] information, we leave this for future work.
First, we automatically extract overlapping dialogue lines between the production script and the ASR transcript and use them as pseudo labels to inform speaker diarization. Then, we introduce a novel semi-supervised version of the spectral clustering method. We report results on a proprietary test set of 66 English TV shows. Our approach improves on our metrics on average by 51.7% (relative) over strong unsupervised baselines.
## 2 Method
### Extraction of Pseudo-labeled Data
To utilize the production scripts, we detect sections of the script that match the final audio with high confidence. To do this, we first perform ASR using an in-house system to extract a noisy transcript with word-level timestamps. Note that we do not use the segmentation from our ASR model, as we do not want to bias our pseudo labels with potentially erroneous segmentation coming from ASR. Instead, we align dialogue lines in the production script to _words_ in the ASR transcript using the Vecalign2 [27, 28] sentence alignment toolkit.
Footnote 2: [https://github.com/thompson/vecalign](https://github.com/thompson/vecalign)
We search for alignments of sizes {1,1}, {1,2},...{1,50}; that is, we allow each dialogue line to align with up to 50 words in the ASR transcript. We allow for deletions of both dialogue lines and words from the transcript, to account for dialogue lines that are missing/added to the production script, respectively. For Vecalign, we empirically set a deletion percentile fraction of 0.015 in order to find good-quality alignments.
For each dialogue line that is aligned to one or more words in the ASR transcript, we take the start time of the first ASR word and the end time of the last ASR word to get a time range where the character associated with the given dialogue line is speaking. These character names and time ranges are used as pseudo labels in the following sections.
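The conversion from aligned dialogue lines to pseudo labels can be sketched as below; the exact data structures (alignment pairs, word timestamps, character field) are assumptions made for illustration.

```python
def pseudo_labels_from_alignment(alignments, script_lines, asr_words):
    """alignments: (script_line_idx, [asr_word_indices]) pairs from the sentence aligner;
    script_lines[i].character is the speaker of dialogue line i;
    asr_words[j].start / .end are word-level timestamps."""
    labels = []
    for line_idx, word_idxs in alignments:
        if not word_idxs:                         # dialogue line deleted from the final cut
            continue
        start = asr_words[word_idxs[0]].start     # first aligned ASR word
        end = asr_words[word_idxs[-1]].end        # last aligned ASR word
        labels.append((script_lines[line_idx].character, start, end))
    return labels
```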
### Speaker Diarization
Similar to prior works on speaker diarization3[12, 13, 30], our speaker diarization models are based on first extracting embeddings for speech segments. We first run an in-house Voice Activity Detector (VAD) to obtain speech segments from audio. These are further subdivided into uniform sub-segments of 1s duration. Each sub-segment is transformed into 512 dimensional embeddings using a speaker embedding model. The speaker embedding model follows a ResNet34 architecture [9] and is trained with a combination of classification and metric loss [31] with 12k speakers and 5k hours of data.
Footnote 3: We refer the reader to [29] for a comprehensive and up to date survey on this topic.
#### 2.2.1 Unsupervised Speaker Diarization Method (Baseline)
Let \(X=[x_{1},x_{2},\ldots,x_{n}]\) denote the speaker embeddings for the \(n\) sub-segments extracted from the given input audio. These embeddings are used to construct an affinity matrix \(A\) such that \(A_{ij}\) denotes the cosine similarity between the embeddings \(x_{i}\) and \(x_{j}\). We perform a series of refinements on the affinity matrix to both smooth and denoise the data, followed by spectral clustering as outlined in [12]. The number of speaker clusters is set to \(k=\tilde{k}\), where \(\tilde{k}\) is automatically determined by the maximum eigen-gap of the eigenvalues in the spectral clustering step.
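A bare-bones sketch of this baseline is given below; the affinity-matrix refinement steps of [12] are omitted for brevity, so this only outlines the cosine affinity, the eigendecomposition, and the eigen-gap estimate.

```python
import numpy as np

def spectral_embed(embeddings, max_speakers=30):
    """Cosine-affinity spectral embedding with eigen-gap estimation of the speaker count."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    A = X @ X.T                                       # affinity matrix of cosine similarities
    eigvals, eigvecs = np.linalg.eigh(A)              # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    gaps = eigvals[:max_speakers] - eigvals[1:max_speakers + 1]
    k_hat = int(np.argmax(gaps)) + 1                  # eigen-gap estimate of the speaker count
    return eigvecs[:, :k_hat], k_hat                  # spectral embeddings e_i, one row per sub-segment
```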
#### 2.2.2 Semi-supervised Speaker Diarization Method
Following the spectral clustering step of the unsupervised model, let \(v_{1},v_{2},\ldots,v_{k}\) be the eigenvectors corresponding to the \(k\) largest eigenvalues. For the \(i^{th}\) sub-segment, we obtain the corresponding spectral embedding \(e_{i}=[v_{1i},v_{2i},\ldots,v_{ki}]\). In order to cluster the embeddings, we replace the unsupervised K-means algorithm with a semi-supervised version that can utilize the prior information, as outlined in Algorithm 1.
We consider as inputs the spectral embeddings \([e_{1},e_{2},\ldots,e_{n}]\) and pseudo labels \([l^{\prime}_{1},l^{\prime}_{2},\ldots,l^{\prime}_{n}]\) for the \(n\) input audio sub-segments. If the \(i^{th}\) sub-segment does not have a pseudo label, we assign \(l^{\prime}_{i}=0\). Additional inputs are \(\tilde{k}\), i.e., the number of speakers estimated using the eigen-gap method, and \(k^{\prime}\), the number of unique pseudo labels or known speakers. Since \(\tilde{k}\) is often underestimated (see Figure 2), in step 1 we set the number of speakers \(k\) to the maximum of \(\tilde{k}\) and \(k^{\prime}\). For the constrained K-means algorithm, we first initialize the cluster centroids using the prior information from pseudo labels in lines 2-4. Let the cluster centroids be denoted as \([\mu_{1},\mu_{2},\ldots,\mu_{k}]\). For all the \(k^{\prime}\) known speakers, we compute the centroids by averaging the corresponding embeddings. In steps 5-6, the remaining \(k-k^{\prime}\) speaker centroids are initialized using the standard K-means++ algorithm [32]. In the E-step of K-means, we compute the label assignments in lines 8-12. For pseudo-labeled sub-segments we do not change the label assignments, and we label only the unknown sub-segments. The idea is to keep the label assignments of known speakers fixed in order to bias the clustering towards the prior information. Finally, lines 13-14 denote the M-step of K-means, which recomputes the speaker centroids. We repeat the E and M steps until convergence.
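The following is a compact sketch of this constrained clustering, under our reading of Algorithm 1; the remaining centroids are initialized randomly here for brevity, whereas the paper uses K-means++.

```python
import numpy as np

def constrained_kmeans(E, pseudo, k_tilde, n_iter=100, seed=0):
    """E: (n, d) spectral embeddings; pseudo: length-n labels in {0 (unknown), 1..k'};
    k_tilde: eigen-gap estimate of the number of speakers."""
    E, pseudo = np.asarray(E, dtype=float), np.asarray(pseudo)
    rng = np.random.default_rng(seed)
    known = sorted(set(pseudo.tolist()) - {0})
    k = max(k_tilde, len(known))                          # step 1
    centroids = np.zeros((k, E.shape[1]))
    for j, lab in enumerate(known):                       # steps 2-4: known-speaker centroids
        centroids[j] = E[pseudo == lab].mean(axis=0)
    for j in range(len(known), k):                        # steps 5-6 (random init here)
        centroids[j] = E[rng.integers(len(E))]
    assign = np.full(len(E), -1)
    for j, lab in enumerate(known):
        assign[pseudo == lab] = j                         # pseudo-labeled segments stay fixed
    free = pseudo == 0
    for _ in range(n_iter):
        d = np.linalg.norm(E[:, None, :] - centroids[None, :, :], axis=-1)
        new_assign = assign.copy()
        new_assign[free] = d[free].argmin(axis=1)         # E-step: only unlabeled segments move
        if (new_assign == assign).all():
            break
        assign = new_assign
        for j in range(k):                                # M-step: recompute centroids
            if (assign == j).any():
                centroids[j] = E[assign == j].mean(axis=0)
    return assign
```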
## 3 Evaluation Data & Metrics
For evaluation, we collected a test set of 66 episodes from 21 shows from a major studio such that for each episode we have a production script, an audio speech stem and the ground-truth as-broadcast script. This amounts to 36.3 hours of episode runtime with 880 distinct speakers and 36,429 total dialogue lines, with shows spanning diverse genres such as drama, comedy, suspense, and kids' content. Note that the distribution of speech time across the 880 speakers is highly skewed, making it a challenging test set for this task.
In order to obtain the ground-truth data for speaker diarization, we first segment the audio stem at the dialogue level using the timing information available in the as-broadcast script. For each dialogue segment, we additionally run an in-house voice activity detector in order to correctly identify speech segments. All speech segments corresponding to the same dialogue are annotated with the same speaker label.
We use the following automatic metrics for evaluating speaker diarization:
1. Diarization Error Rate (DER) is the standard metric for comparing speaker diarization systems and consists of three components: false alarm, missed detection and speaker error.
2. Speaker Change Detection F1 (SCD), which is the F1 score for correctly identifying the time boundary between speaker turns under some tolerance [33]. This is particularly important since not identifying the correct speaker changes can result in a more time-consuming and expensive post-editing process in order to obtain quality as-broadcast scripts. A minimal sketch of how such metrics can be computed is given below the list.
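As a reference point, DER can be computed with the pyannote.metrics package (a hedged sketch with toy annotations; not necessarily the exact tooling used in this work):

```python
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

reference, hypothesis = Annotation(), Annotation()
reference[Segment(0.0, 4.2)] = "ALICE"      # toy ground-truth dialogue segments
reference[Segment(4.2, 7.0)] = "BOB"
hypothesis[Segment(0.0, 4.0)] = "spk_1"     # toy system output
hypothesis[Segment(4.0, 7.0)] = "spk_2"

der = DiarizationErrorRate()
print(der(reference, hypothesis))           # combines false alarm, miss and speaker error
```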
## 4 Experiments & Results
In this section, we conduct multiple experiments to evaluate the performance of our models described in SS 2.2. Due to the small test set of 66 episodes, we do not perform tuning using our in-domain test set. Instead, for all the models, the hyperparameters for spectral clustering, namely the thresholding factor and thresholding percentile, were tuned to minimize DER on the validation sets of Dihard [34] and ICSI [35]. Our unsupervised model achieves a DER of 3.5% on the test split of AMI [36]. As an additional baseline, we compare against the publicly available Pyannote speaker diarization [37] pipeline4.
Footnote 4: [https://huggingface.co/pyannote/speaker-diarization](https://huggingface.co/pyannote/speaker-diarization)
We use the unsupervised model described in SS 2.2.1 and the Pyannote speaker diarization pipeline as two baseline systems. To make the results comparable to these unsupervised baseline models, for the semi-supervised model we convert the predicted speaker names to speaker labels and report results using the Hungarian assignment [38] for all models.
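For reference, this label mapping can be performed with SciPy's Hungarian solver (a generic sketch, not the exact evaluation code used here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_speakers(ref_labels, hyp_labels):
    """Map hypothesis speaker labels to reference labels by maximizing frame overlap."""
    refs, hyps = np.unique(ref_labels), np.unique(hyp_labels)
    overlap = np.array([[np.sum((ref_labels == r) & (hyp_labels == h)) for h in hyps]
                        for r in refs])
    rows, cols = linear_sum_assignment(-overlap)      # Hungarian assignment (maximize overlap)
    return {hyps[c]: refs[r] for r, c in zip(rows, cols)}
```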
First, we test the performance of the pseudo-labeled data extraction process as described in SS 2.1. In particular, we find that, on average, we are able to label 10.9% of dialogue lines with high confidence over the entire test set. Further, by comparing with the ground-truth as-broadcast script, the pseudo labels are found to be 74.5% accurate.
Next, we conduct two sets of experiments. In the first set, we compare the baseline approaches with the semi-supervised model (SS 2.2.2) utilizing the entire available pseudo-labeled data. In the second set of experiments, we vary the amount of pseudo-labeled data in order to assess the impact on the proposed semi-supervised method. We report the Diarization Error Rate (DER) and Speaker Change Detection F1 (SCD). For SCD, unlike a more relaxed tolerance of 200ms as considered in [33], we use a more conservative value of 100ms since correcting speaker changes is quite time-consuming and expensive.
### Speaker Diarization Results
As shown in Table 1, our unsupervised baseline system obtains a poor DER of 48.99% and a poor SCD of 32.53%. This is primarily due to the large number of speakers typically found in TV shows and the lack of prior knowledge to handle them. Additionally, we find that the spectral clustering algorithm highly underestimates the number of clusters (or speakers), thereby worsening speaker confusion and consequently the DER. This is illustrated in Figure 2, which shows the scatter plot of true (x-axis) vs predicted (y-axis) number of speakers for the unsupervised and semi-supervised models. The dashed line shows the y=x line. As shown, the semi-supervised approach gets closer to the true number of speakers than the unsupervised approach. This underestimation also results in a low recall in identifying speaker changes, thereby resulting in a poor SCD.
\begin{table}
\begin{tabular}{l l l} \hline \hline Model & DER \(\downarrow\) & SCD \(\uparrow\) \\ \hline Unsupervised & 48.99 & 32.53 \\ Unsupervised (\(k=k^{{}^{\prime}}\)) & 53.55 & 38.14 \\ Pyannote & 40.65 & 34.36 \\ Semi-supervised [this work] & \(\mathbf{27.04^{++}}\) & \(\mathbf{54.86^{++}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model performance on diarization error rate (DER) and speaker change detection F1 (SCD). Significance testing is done at level \(p<0.01\) against Unsupervised (\(*\)) and Pyannote (\(+\)) models.
The high number of predicted speakers, i.e., \(\geq 60\), for the semi-supervised approach comes from production scripts overestimating \(k^{{}^{\prime}}\). On inspecting a handful of episodes, we find that these errors are mostly due to large changes in the script during the shooting process (see SS 1) as well as noise in production scripts, such as character names not being normalized.
Next, we correct the predicted number of speakers for the unsupervised approach by fixing \(k=k^{{}^{\prime}}\). However, as shown in Table 1, while this improves SCD by a relative +17.2%, it worsens DER by a relative 9.3%. Thus the lack of prior knowledge is a limiting factor for performance.
The publicly available Pyannote speaker diarization pipeline based on Bayesian HMM clustering of x-vectors (VBx) [37] improves over our baseline approach on both DER and SCD by a relative +17.0% and +5.6%, respectively.
On the other hand, using prior information, i.e., using the entire pseudo-labeled data, helps the semi-supervised method outperform our unsupervised baseline with statistically significant5 relative improvements of +44.8% and +68.6% on DER and SCD, respectively. The semi-supervised approach also improves over the Pyannote pipeline by a relative +33.5% and +59.7% on the same metrics.
Footnote 5: Significance testing is done at level \(p<0.01\) using the two-sample Student’s t-test.
### Ablation Study
The amount of pseudo-labeled data depends heavily on how similar the final recording is to the production script. In Figure 3, we vary the amount of pseudo-labeled data and plot DER (left panel) and SCD (right panel) for the proposed semi-supervised method. This simulates what would happen if the shows were more dissimilar to the production script. The performance of our unsupervised model and the Pyannote diarization pipeline is shown for reference.
We find that even using a small amount of pseudo labels helps the semi-supervised approach beat both baseline models. Using 3% pseudo labels improves performance over our unsupervised baseline and the Pyannote pipeline by a relative 20.3% and 3.9%, respectively. For SCD, the improvements are stronger: using only 1% of pseudo labels yields relative improvements of 24.0% and 17.4%. Finally, adding more pseudo labels helps the model achieve consistent performance improvements on both metrics.
## 5 Conclusions
In this paper, we focus on the problem of speaker diarization for the media localization industry, which requires a verbatim script of the final film in order to localize content into particular foreign languages. While the current state-of-the-art speech recognition technology works reasonably well for transcription, it is unable to cope with the large number of speakers for the problem of speaker diarization. We propose a novel approach to extract pseudo-labels from drafts of final scripts, also called production scripts. We then present a novel semi-supervised speaker diarization method based on constrained clustering that is able to utilize these pseudo labels in order to vastly improve performance over a strong unsupervised baseline model and the publicly available Pyannote diarization pipeline. Our proposed approach shows strong performance improvements on average over both baselines and considered metrics by a relative +51.7%.
|
2310.18372 | The fundamental unit of quantum conductance and quantum diffusion for a
gas of massive particles | By analogy with the fundamental quantum units of electrical conductance
$G_0^e=\frac{2 e^2}{h}$ and thermal conductance $K_0^t=\frac{2 K_B^2 T}{h}$ we
define a fundamental quantum unit of conductance, $G_0^m$, and diffusion of a
massive gas of atomic particles, respectively given by $$ G_0^m=\frac{m^2}{h} \
, \ D_0=\frac{h}{m}$$ with $h$ the Planck constant, $K_B$ the Boltzmann
constant, $T$ the absolute temperature, $e$ the unit charge and $m$ the mass of
the atomic gas particles that move ballistically in a one dimensional medium of
length $L$. The effect of scattering can be accounted for by introducing an
appropriate transmission probability in analogy with the quantum electrical
conductance model introduced by Landauer in 1957. For an electron gas
$G_0^m=1.25 \times 10^{-27} \ Kg^2/(J s)$ and $D_0 = 7.3 \times 10^{-4} \
m^2/s$, and we found a quantum expression for the generalized Einstein relation
that writes $$G_0^e = \frac{2e^2m}{h^2} D_0 $$ | Lino Reggiani, Eleonora Alfinito, Federico Intini | 2023-10-26T13:14:11Z | http://arxiv.org/abs/2310.18372v1 | # The fundamental unit of quantum conductance and quantum diffusion for a gas of massive particles
###### Abstract
By analogy with the fundamental quantum units of electrical conductance \(G_{0}^{e}=\frac{2e^{2}}{h}\) and thermal conductance \(K_{0}^{t}=\frac{2K_{B}^{2}T}{h}\) we define a fundamental quantum unit of conductance, \(G_{0}^{m}\), and diffusion of a massive gas of atomic particles, respectively given by
\[G_{0}^{m}=\frac{m^{2}}{h}\,\ D_{0}=\frac{h}{m}\]
with \(h\) the Planck constant, \(K_{B}\) the Boltzmann constant, \(T\) the absolute temperature, \(e\) the unit charge and \(m\) the mass of the atomic gas particles that move ballistically in a one dimensional medium of length \(L\). The effect of scattering can be accounted for by introducing an appropriate transmission probability in analogy with the quantum electrical conductance model introduced by Landauer in 1957. For an electron gas \(G_{0}^{m}=1.25\times 10^{-27}\ Kg^{2}/(Js)\) and \(D_{0}=7.3\times 10^{-4}\ m^{2}/s\), and we found a quantum expression for the generalized Einstein relation that writes
\[G_{0}^{e}=\frac{2e^{2}m}{h^{2}}D_{0}\]
pacs: 73.23.Ad, 72.10.Bg

This letter presents the definition of the universal quantum unit of conductance and diffusion coefficient for a massive gas, thus completing the scheme of mesoscopic quantum transport coefficients already existing in the literature [1].
By analogy with the electrical conductance, the classical definition of the conductance for a classical gas of particles reads:
\[G^{m}=\frac{mN\tau}{L^{2}}=\frac{m^{2}N}{m\sqrt{\overline{v_{x}^{\prime 2}}}\,L}\Gamma \tag{1}\]
where \(m\) is the particle mass, \(N\) the particle number, \(\tau\) the scattering time, \(\overline{v_{x}^{\prime 2}}\) the mean squared differential (with respect to carrier number) velocity component along the \(x\) direction [2], and \(L\) the sample length.
The second expression of Eq. (1) refers to a one dimensional sample of length \(L\) with
\[l=\sqrt{\overline{v_{x}^{\prime 2}}}\,\tau\]
the associated mean free path, and
\[\Gamma=\frac{l}{L}\]
a transmission probability, that describes the collisions.
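Explicitly, substituting \(\tau=\Gamma L/\sqrt{\overline{v_{x}^{\prime 2}}}\), which follows from these two definitions, into the first form of Eq. (1) recovers its second form:

\[G^{m}=\frac{mN\tau}{L^{2}}=\frac{mN}{L^{2}}\,\frac{\Gamma L}{\sqrt{\overline{v_{x}^{\prime 2}}}}=\frac{m^{2}N}{m\sqrt{\overline{v_{x}^{\prime 2}}}\,L}\,\Gamma.\]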
The second form of Eq. (1) leads to a quantized expression under the quantum condition
\[h=m\sqrt{\overline{v_{x}^{\prime 2}}}L \tag{2}\]
for \(h\geq m\sqrt{\overline{v_{x}^{\prime 2}}}L\), with \(h\) the Planck constant.
Indeed, by analogy with Landauer quantum-conductance model [3; 4; 5], Eqs. (1, 2) define a quantum conductance for a massive gas satisfying a Landauer paradigm: conductance is transmission:
\[G^{m}=\frac{Nm^{2}}{h}\Gamma \tag{3}\]
that, for the ballistic condition \(\Gamma=1\) and \(N=1\), gives the fundamental unit of conductance for an atomic massive gas with elementary mass \(m\).
\[G_{0}^{m}=\frac{m^{2}}{h} \tag{4}\]
For an electron gas \(G_{0}^{m}=1.25\times 10^{-27}\ Kg^{2}/(Js)\).
By going to diffusion, the classical definition of the longitudinal diffusion coefficient reads [2; 6; 7]:
\[D_{x}=\overline{v_{x}^{\prime 2}}\tau=\sqrt{\overline{v_{x}^{\prime 2}}}L\Gamma \tag{5}\]
The second form of Eq. (5) leads to a quantized expression under the analog quantum condition in Eq. (2).
Even for diffusion, by analogy with Landauer quantum-conductance model [3; 4; 5], Eqs. (5, 2) define the fundamental unit of quantum diffusion satisfying a Landauer paradigm: diffusion is transmission:
\[D=\frac{h}{m}\Gamma \tag{6}\]
that, for the ballistic condition \(\Gamma=1\), gives the fundamental unit of diffusion for an atomic massive gas with elementary mass \(m\).
\[D_{0}=\frac{h}{m} \tag{7}\]
For an electron gas \(D_{0}=7.3\times 10^{-4}\ m^{2}/s\), which should be compared with the experimental values in Si at 77 \(K\) of \(6.0\times 10^{-3}\ m^{2}/s\) and of \(1.6\times 10^{-2}\ m^{2}/s\) for electrons and holes, respectively. We remark that the two values of diffusion give a ratio of 2.7, in close agreement with the value of the corresponding effective mass ratio \(m_{h}/m_{e}=0.53/0.19=2.8\)[8]. The sample dimensions and the temperature in the experiments were not consistent with the conditions posed by quantization, which explains the significantly higher values of the experiments when compared with the theoretical quantum expectations.
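As a quick numerical check (a minimal sketch using SciPy's tabulated physical constants, not part of the original letter), both fundamental units for a free-electron gas can be evaluated directly:

```python
from scipy.constants import h, m_e  # Planck constant [J s], electron mass [kg]

G0_m = m_e**2 / h   # fundamental quantum unit of conductance, ~1.25e-27 kg^2/(J s)
D0 = h / m_e        # fundamental quantum unit of diffusion,   ~7.3e-4 m^2/s

print(f"G_0^m = {G0_m:.3e} kg^2/(J s)")
print(f"D_0   = {D0:.3e} m^2/s")
```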
Interestingly enough, the single particle mass satisfies the kinetic definition
\[m=G_{d}\times D_{0} \tag{8}\]
that is valid in both classical and quantum cases.
In addition, the classical expression of the generalized Einstein relation [9]
\[G_{e}=\frac{e^{2}m\overline{N}}{(Lm)^{2}\overline{v_{x}^{\prime 2}}}D \tag{9}\]
with \(G_{e}\) the electrical conductance, \(e\) the electrical charge unit, and \(\overline{N}\) the average total number of charged particles inside the sample, for a one dimensional geometry takes the quantum form:
\[G_{e}=\frac{2e^{2}m}{h^{2}}D_{0} \tag{10}\]
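The passage from Eq. (9) to Eq. (10) follows by inserting the quantum condition of Eq. (2), \(h=m\sqrt{\overline{v_{x}^{\prime 2}}}L\), into the denominator of Eq. (9):

\[G_{e}=\frac{e^{2}m\overline{N}}{\left(m\sqrt{\overline{v_{x}^{\prime 2}}}\,L\right)^{2}}D=\frac{e^{2}m\overline{N}}{h^{2}}D\;\rightarrow\;\frac{2e^{2}m}{h^{2}}D_{0}\qquad(\overline{N}\rightarrow 2,\ D\rightarrow D_{0}).\]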
We remark that, going from classical to quantum, the average number of charge carriers is substituted by the number 2, relating to the number of transverse modes and accounting for spin degeneracy. We also notice that, by considering an alternative Einstein relation of the form \(G_{e}\times D_{0}\), we obtain:
\[G_{e}\times D_{0}=\frac{2e^{2}}{m} \tag{11}\]
which is fully compatible with the classical expression; in other words, quantum effects are washed out.
The above results call for some comments concerning the one dimensional and ballistic conditions necessary to obtain the fundamental quantum units of the kinetic coefficients considered here, as for the other coefficients available in the literature [1]. These conditions give the macroscopic values in the form of global quantities, which cannot be factorized into a geometrical and a local contribution (for example, local conductivities) as in the classical 3D case. The quantum results contain two basic quantities: the Planck constant, as the signature of quantum mechanics, and a physical quantity that is the signature of the physical magnitude of interest, in the present case the mass of the particles. For the electrical and thermal conductances these were the electrical charge (or eventually a multiple of the unit charge in the case of multiply-charged particles) and the Boltzmann constant coupled with the temperature in the case of the thermal conductance. In a global representation, there is no connection between the external perturbation and the response, as is typical of a kinetic approach. In particular, the kinetic approach relates the kinetic coefficient basically to local scattering mechanisms, parametrized by a relaxation time \(\tau\) in the simplest case. In the present ballistic approach the kinetic coefficients are controlled by the contacts and the external reservoirs. For example, in our case ballistic diffusion implies a closed system. Accordingly, the number of particles inside the sample is constant in time. Thus, the only scatterings are internal specular reflections of particles at the opposite contacts. This reflection process is then responsible for the smoothing of any initial concentration gradient towards a final homogeneous condition in the long time limit, as expected under thermal equilibrium conditions. By contrast, ballistic conductance implies an open system, so that the particle number inside the sample, being controlled by a chemical potential, is not constant in time. Thus, the contacts are perfectly transparent for particles going into and out of the sample, and the stochastic mechanism comes from the fluctuations of the total number of particles inside the sample, which are responsible for the incoherent mechanism of entry and exit of particles at the opposite contacts. Thus, conductance can be determined by making use of the fluctuation-dissipation theorem already under thermal equilibrium conditions, that is, in the absence of external forces [6]. By considering a transmission probability that can be less than unity, the presence of local scattering can be accounted for, as originally suggested by Landauer in 1957. However, we want to stress that local scatterings in the present model are not necessary to define the diffusion and conductance, and their inclusion has the net effect of decreasing the value of the fundamental unit down to zero for a vanishing value of the transmission probability.
In conclusion, by introducing a quantum unit of conductance and diffusion under 1D quantum ballistic conditions we complete the quantum definition of four fundamental kinetic coefficients of linear response [1], i.e. conductance, diffusion, electrical and thermal conductance. An alternative form of the generalized Einstein relation evidences an intriguing property of being compatible with the classical result. We further notice that the classical definition of diffusion can be extended to relativistic particles, that is, a photon gas [7], as:
\[D_{0}^{rel}=cL \tag{12}\]
with \(c\) the light velocity in vacuum. The present findings concerning quantum conductance and diffusion are open to further experimental validation.
###### Acknowledgements.
Prof. Tilmann Kuhn from Munster University is warmly thanked for the very valuable comments he provided on the subject.
|
2305.03282 | Enhanced sensitivity via non-Hermitian topology | Sensors are indispensable tools of modern life that are ubiquitously used in
diverse settings ranging from smartphones and autonomous vehicles to the
healthcare industry and space technology. By interfacing multiple sensors that
collectively interact with the signal to be measured, one can achieve
signal-to-noise ratios (SNR) beyond those attainable by the individual
constituting elements. Such distributed sensing techniques have also been
implemented in the quantum regime, where a linear increase in the SNR has been
achieved via using entangled states. Along similar lines, coupled non-
Hermitian systems have provided yet additional degrees of freedom to obtain
better sensors via higher-order exceptional points. Quite recently, a new class
of non-Hermitian systems, known as non-Hermitian topological sensors (NTOS) has
been theoretically proposed. Remarkably, the synergistic interplay between
non-Hermiticity and topology is expected to bestow such sensors with an
enhanced sensitivity that grows exponentially with the size of the sensor
network. Here, we experimentally demonstrate NTOS using a network of photonic
time-multiplexed resonators in the synthetic dimension represented by optical
pulses. By judiciously programming the delay lines in such a network, we
realize the archetypical Hatano-Nelson model for our non-Hermitian topological
sensing scheme. Our experimentally measured sensitivities for different lattice
sizes confirm the characteristic exponential enhancement of NTOS. We show that
this peculiar response arises due to the combined synergy between
non-Hermiticity and topology, something that is absent in Hermitian topological
lattices. Our demonstration of NTOS paves the way for realizing sensors with
unprecedented sensitivities. | Midya Parto, Christian Leefmans, James Williams, Alireza Marandi | 2023-05-05T04:56:22Z | http://arxiv.org/abs/2305.03282v1 | # Enhanced sensitivity via non-Hermitian topology
###### Abstract
Sensors are indispensable tools of modern life that are ubiquitously used in diverse settings ranging from smartphones and autonomous vehicles to the healthcare industry and space technology [1; 2; 3]. By interfacing multiple sensors that collectively interact with the signal to be measured, one can achieve signal-to-noise ratios (SNR) beyond those attainable by the individual constituting elements. Such distributed sensing techniques have also been implemented in the quantum regime, where a linear increase in the SNR has been achieved by using entangled states [4]. Along similar lines, coupled non-Hermitian systems [5; 6] have provided yet additional degrees of freedom to obtain better sensors via higher-order exceptional points [7; 8]. Quite recently, a new class of non-Hermitian systems, known as non-Hermitian topological sensors (NTOS), has been theoretically proposed [9; 10]. Remarkably, the synergistic interplay between non-Hermiticity and topology is expected to bestow such sensors with an enhanced sensitivity that grows exponentially with the size of the sensor network. Here, we experimentally demonstrate NTOS using
a network of photonic time-multiplexed resonators in the synthetic dimension represented by optical pulses. By judiciously programming the delay lines in such a network, we realize the archetypal Hatano-Nelson model [11] for our non-Hermitian topological sensing scheme. Our experimentally measured sensitivities for different lattice sizes confirm the characteristic exponential enhancement of NTOS. We show that this peculiar response arises due to the combined synergy between non-Hermiticity and topology, something that is absent in Hermitian topological lattices. Our demonstration of NTOS paves the way for realizing sensors with unprecedented sensitivities.**
The ability to accurately and reliably measure physical quantities is at the heart of modern sensors with applications ranging from molecular sensing in chemistry [12] and biology [13] to light detection and ranging (LiDAR) [14] and observing gravitational waves [15]. Significant efforts have been made towards enhancing the response of individual sensing elements, for instance by using high-quality resonators [16] or exploiting quantum effects [17]. A different, more generic route to achieving higher sensitivities is to employ a multitude of modes that collectively contribute to a coherent signal that encapsulates information about the quantity to be measured. This has led to distributed classical and quantum sensing networks, which allow for enhancements of \(\sqrt{N}\) and \(N\) [18], respectively, in the sensitivity figure as compared to a single sensing element.
An alternative path to achieving higher sensitivities is to employ concepts from non-Hermitian physics [5; 6; 19]. For instance, the eigenvalues associated with a non-Hermitian system can respond to perturbations in a remarkably stronger manner than those of its Hermitian counterparts. This realization is the foundation of a class of sensors that operate in the vicinity of non-Hermitian degeneracies known as exceptional points (EPs), where the corresponding response scales as the \(N\)-th root of the perturbation, with \(N\) the order of the EP [7; 8; 20].
In addition, the introduction of non-Hermiticity to topologically non-trivial lattices is known to result in an eigenspace that behaves very differently from that associated with Hermitian topological systems [21, 22]. Recent studies have observed the manifestation of this distinct behavior in the form of a new type of bulk-boundary correspondence and the non-Hermitian skin effect [23, 24, 25, 26, 27].
Quite recently, a new class of sensors based on the synergy between non-Hermiticity and topology has been proposed [9, 10]. Dubbed non-Hermitian topological sensors (NTOS), such devices can exhibit a sensitivity that grows exponentially with respect to the number of lattice sites. Remarkably, unlike typical non-Hermitian sensing schemes, this boosted response does not require fine tuning of the system parameters. Despite intense activity in the field of non-Hermitian topology, an experimental observation of this enhanced sensitivity has so far remained elusive. Here, we experimentally demonstrate this peculiar behavior in a network of photonic time-multiplexed resonators. By using different numbers of optical pulses, we realize non-Hermitian topological lattices with up to \(N=23\) lattice sites. Based on measurement results from these structures, we experimentally demonstrate the characteristic exponential growth of the sensitivity associated with NTOS. It will be shown that this extraordinary response arises exclusively due to the cooperative interplay between non-Hermiticity and topology, something that is absent in other Hermitian topological settings.
For our realization of NTOS, we consider the Hatano-Nelson [11] model as described by the Hamiltonian:
\[\hat{H}_{HN}=\sum_{n}t_{\mathrm{R}}\hat{a}_{n+1}^{\dagger}\hat{a}_{n}+t_{ \mathrm{L}}\hat{a}_{n}^{\dagger}\hat{a}_{n+1}, \tag{1}\]
where \(\hat{a}_{n}^{(\dagger)}\) is the annihilation (creation) operator associated with site \(n\), while \(t_{\mathrm{R}}\), \(t_{\mathrm{L}}\) represent the nonreciprocal right and left nearest-neighbor couplings within the lattice. When truncated to a finite lattice, the Hamiltonian of Eq. 1 can exhibit a multitude of spectral
Figure 1: **Non-Hermitian topological sensors (NTOS).** Schematic diagram of the NTOS demonstrated here based on the Hatano-Nelson model which features nonreciprocal couplings between the adjacent elements of the array. Depending on the boundary conditions, this lattice exhibits different eigenvalue spectra, as shown in the top part of the figure. This can be represented by the strength \(\Gamma\) of the coupling between the first and last resonators in the system. When \(\Gamma\) is equal to the other couplings in the array (the rightmost part of the scale), the structure follows periodic boundary conditions (PBC), where the eigenvalues form an ellipse around the origin in the complex plane. In this case, a nonzero winding number \(\mathcal{W}\) can be defined. On the other hand, when \(\Gamma=0\), i.e. under open boundary conditions (OBC), all the eigenvalues reside on the real axis, with one eigenvalue exactly equal to zero \(E=0\) (for odd values of \(N\)). This eigenvalue tends to shift from its original value by \(\Delta E\) which is proportional to the strength of the boundary coupling \(\Gamma\), as long as the coupling is sufficiently small. This mechanism can be effectively harnessed for sensing any perturbation that modifies \(\Gamma\).
behaviors, depending on the associated boundary conditions (Fig. 1). In particular, when the lattice is arranged in a uniform fashion with periodic boundary conditions (PBC), the set of eigenvalues forms a closed loop in the complex plane with a nonzero winding around the origin (Fig. 1), associated with uniformly distributed bulk eigenstates across the array. We would like to emphasize that here, since the coupling mechanism between lattice elements is dissipative [28], the real part of the system eigenvalues represents dissipation while the imaginary part corresponds to phase/frequency shift. On the other hand, when the structure is terminated with open boundary conditions (OBC), the resulting spectrum is entirely real (Fig. 1). This corresponds to the case where all the eigenstates become localized on one edge of the system, known as the non-Hermitian skin effect [29]. Furthermore, under such OBC conditions, provided that the number of elements in the lattice is odd, \(N=2k+1\), the Hamiltonian \(\hat{H}_{HN}\) always possesses an eigenstate \(\left|\psi_{0}\right\rangle_{R}\) with an eigenvalue equal to zero.
To experimentally demonstrate NTOS, we use a time-multiplexed photonic resonator network depicted schematically in Fig. 2. The network consists of a main fiber loop which supports \(N\) resonant pulses separated by a repetition period, \(T_{\rm R}\). Here, each individual pulse represents a single resonator associated with the annihilation (creation) operators \(\hat{a}_{j}^{(\dagger)}\) in Eq. 1. To realize the non-reciprocal couplings \(t_{\rm R}\) and \(t_{\rm L}\), we use delay lines to dissipatively couple nearest-neighbor pulses. Each delay line is equipped with intensity modulators that control the strengths of such couplings (see Fig. 2). To induce the perturbation signal, we consider a change in the lattice of the form \(\Delta\hat{H}=\Gamma\hat{a}_{N}^{\dagger}\hat{a}_{1}\), which represents a small deviation from the OBC configuration. In response to this, the unperturbed eigenstate \(\left|\psi_{0}\right\rangle_{R}\) will change to \(\left|\psi(\Gamma)\right\rangle_{R}\), associated with a new eigenvalue that shifts from the zero point by \(\Delta E\). In addition, to implement the perturbation \(\Delta\hat{H}\) we use a third delay line which couples the first pulse to the last one in a non-reciprocal fashion. The strength of this coupling is then modulated accordingly to provide different values of the perturbation strength \(\Gamma\).
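The expected shift can be reproduced with a minimal numerical sketch of this model (an illustration with placeholder coupling values rather than the experimental ones, not the authors' code): build the \(N\times N\) Hatano-Nelson matrix, add the boundary term \(\Gamma\hat{a}_{N}^{\dagger}\hat{a}_{1}\), and track the eigenvalue closest to zero.

```python
import numpy as np

def hatano_nelson(N, t_R, t_L, gamma=0.0):
    """N-site Hatano-Nelson matrix with a boundary coupling gamma (site 1 -> site N)."""
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        H[n + 1, n] = t_R        # hop to the right, t_R a_{n+1}^dag a_n
        H[n, n + 1] = t_L        # hop to the left,  t_L a_n^dag a_{n+1}
    H[N - 1, 0] = gamma          # perturbation Gamma a_N^dag a_1
    return H

N, t_R, t_L = 13, 1.0, 0.5       # placeholder values; N odd so a zero mode exists under OBC
for gamma in [0.0, 1e-6, 1e-5, 1e-4]:
    evals = np.linalg.eigvals(hatano_nelson(N, t_R, t_L, gamma))
    E0 = evals[np.argmin(np.abs(evals))]        # eigenvalue closest to zero
    print(f"gamma = {gamma:.0e}   |dE| = {abs(E0):.3e}")
```

Repeating the loop for several lattice sizes \(N\) reproduces the exponential growth of \(|\Delta E|/\Gamma\) discussed below.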
In our experiments, we first initialize the system by shaping the amplitudes and phases of the input pulses to represent the zero eigenstate \(\left|\psi_{0}\right\rangle_{R}\) associated with the Hamiltonian in Eq. 1. Figure 3 shows an example of such pulses in the experiments concerning \(N=23\) time-multiplexed resonators (the green inset depicts the zero eigenstate \(\left|\psi_{0}\right\rangle_{R}\)). In order to increase the power of the pulses in the measurements, we repeatedly inject this pulse pattern into the closed cavity (with closed delay lines) which results in building up power inside the cavity (Fig. 3 for times \(<10.5\mu s\)). After this initialization, the input to the cavity is blocked so that the pulses start to circulate through the cavity and the delay lines according to the discrete-time evolution corresponding to the Hatano-Nelson model. Subsequently, at each time step that is defined by the integer multiples of the cavity round-trip time, we project the state of the network on the left eigenstate of the unperturbed Hamiltonian \(\left|\psi_{0}\right\rangle_{L}\). The perturbed eigenvalue can now be estimated from the decay rate of this projection per cavity round-trip. Using this, we then measure \(\Delta E\) by calculating the difference between this new eigenvalue and the unperturbed one (see Methods).
Figure 4 displays experimentally measured values obtained from different lattices with various numbers of elements, together with simulated results. For perturbations well below a critical value \(\Gamma\ll\Gamma_{C}\), the NTOS exhibits a linear response with respect to the input parameter. However, for larger inputs, the perturbed eigenvalue associated with \(\left|\psi(\Gamma)\right\rangle_{R}\) is no longer real, signaling a crossover to the PBC where the sensor response is no longer linear [23]. By increasing the perturbation further, the non-Hermitian skin effect breaks down and the eigenstates are no longer exponentially localized at the edge of the structure. Since the performance of the NTOS as a sensor is contingent upon this localization, it is crucial to avoid this non-Hermitian phase transition. Although in the thermodynamic limit \(\Gamma_{C}\) tends to vanish, our analytical results show that for finite lattices its value remains nonzero and scales exponentially with \(N\). In order to fully characterize our NTOS, we applied perturbations in a wide range of strengths spanning
Figure 2: **Schematic of the network of time-multiplexed resonators used to demonstrate NTOS.** Synthetic resonators are defined by femtosecond pulses emitted by a mode-locked laser with a repetition rate of \(T_{R}\) passing through an electro-optic modulator (EOM) before injection into the optical fiber-based cavity (yellow fibers). An Erbium-doped fiber amplifier (EDFA) is used in the main cavity to compensate for the losses and increase the number of measurement roundtrips. Two delay lines with smaller and larger lengths than the main cavity (corresponding to delays of \(-T_{R}\) and \(+T_{R}\), respectively) are utilized to provide nonreciprocal couplings between the nearest-neighbor resonators, necessary to implement the non-Hermitian topological model of Eq. 1. In addition, a third delay line with a length that corresponds to an optical delay of \(+(N-1)T_{R}\) associated with the perturbation \(\Delta\hat{H}\) is also included. The strength of such a perturbation, i.e. \(\Gamma\), can be accurately adjusted via a controlled misalignment in the free space section depicted in the figure.
both below and above the aforementioned critical coupling. As shown in Fig. 4, the experimentally measured results exhibit a linear system response to small \(\Gamma\). For larger inputs, the sensor response eventually becomes nonlinear, hence setting the dynamic range of our demonstrated NTOS.
To evaluate the performance of NTOS, we calculated the sensitivity defined as \(S\equiv\partial E/\partial\Gamma\) using our measurement data in the small parameter regime \(\Gamma\ll\Gamma_{C}\). Figure 5 shows theoretically expected values along with experimental results for different lattice sizes \(N\). From here, it is evident that the sensitivity of the NTOS grows exponentially with the size of this non-Hermitian topological system. Remarkably, the exponential enhancement of the sensitivity is known to arise in scenarios where the non-Hermitian topological winding number \(\mathcal{W}\) defined as
\[\mathcal{W}=\frac{1}{2\pi i}\int_{-\pi}^{\pi}dk\frac{\partial}{ \partial k}\log\{\det[H(k)]\}, \tag{2}\]
is nonzero [9]. Here, \(H(k)\) denotes the Bloch Hamiltonian associated with the implemented lattice under PBC conditions. To corroborate this, we simulated the behavior of other types of lattices when subjected to the same perturbation \(\Gamma\) in their boundary conditions as the NTOS studied here. We first consider the limiting case of the Hamiltonian in Eq. 1 where the nearest-neighbor couplings become reciprocal \(t_{\rm R}=t_{\rm L}\), resulting in a trivial system \(\mathcal{W}=0\). As shown in Fig. 5, the sensitivity of a sensor implemented using a uniform lattice tends to deteriorate as \(1/N\) with respect to the number of array elements. As a second example, we choose a Hermitian, but topologically non-trivial lattice, namely that associated with the Su-Schrieffer-Heeger (SSH) model [30]. When properly terminated, such a lattice also supports a pair of topological edge states that are localized in the open ends of the structure, in a way similar to the NTOS constructed in our experiments. However, unlike NTOS, the SSH Hamiltonian exhibits a trivial non-Hermitian winding number according to Eq. 2. For this system, it can be
Figure 3: **Measurement procedure for the time-multiplexed NTOS.** Experimental time trace showing the pulse patterns at the output of the time-multiplexed resonator network for \(N=23\). At the beginning (\(t<10.5\mu s\)) optical pulses representing the zero eigenstate \(\left|\psi_{0}\right\rangle_{R}\) of the unperturbed Hamiltonian in Eq. 1 (bottom green inset) are repeatedly injected into the closed cavity (power build-up regime). After this, the input path to the cavity is blocked while the delay lines are opened, allowing for the pulses to circulate inside the cavity and the delay lines. This results in a temporal decay of the input eigenstate for \(t>10.5\mu s\). By measuring these pulses and projecting them onto the left eigenstate of the unperturbed Hamiltonian \(\left|\psi_{0}\right\rangle_{L}\), we experimentally estimate the shift in the zero eigenvalue \(\Delta E\) associated with the Hatano-Nelson model resulting from the nonzero perturbation in the system.
Figure 4: **Experimental demonstration of NTOS.** Experimentally measured shifts in the eigenvalue \(\Delta E\) as the boundary coupling strength \(\Gamma\) is perturbed from zero value (OBC conditions), for different lattice sizes \(N=7,13,17\) and \(23\). As evident in the figure, as long as \(\Gamma\) is small enough, our NTOS responds linearly to the induced perturbations. However, as \(\Gamma\) passes a threshold which depends on the size of the non-Hermitian topological lattice \(N\), the change in the eigenvalue is no longer linear. The transition to this nonlinear regime is marked for each case in the figure by vertical dashed lines. Theoretically expected values are shown as solid curves. Here, \(T_{RT}\) represents the round-trip time of the optical cavity.
shown that the sensitivity of the eigenvalues associated with such Hermitian edge states is in fact exponentially _insensitive_ to changes in the boundaries of the array as \(N\) grows (Fig. 5). These results hence confirm that the unusual enhancement in the sensing response observed in our experiments arises uniquely due to the synergy between non-Hermiticity and topology.
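The winding number of Eq. 2 can itself be checked numerically for the single-band Hatano-Nelson lattice, for which \(\det[H(k)]\) reduces to a scalar Bloch Hamiltonian assumed here to take the standard form \(H(k)=t_{\mathrm{R}}e^{-ik}+t_{\mathrm{L}}e^{ik}\) (a small sketch, not the authors' code):

```python
import numpy as np

def winding_number(t_R, t_L, n_k=4096):
    """Winding of det[H(k)] = t_R exp(-ik) + t_L exp(ik) around the origin."""
    k = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    det_Hk = t_R * np.exp(-1j * k) + t_L * np.exp(1j * k)
    phase = np.unwrap(np.angle(det_Hk))
    return int(np.round((phase[-1] - phase[0]) / (2 * np.pi)))

print(winding_number(1.0, 0.5))   # nonreciprocal couplings: |W| = 1
print(winding_number(1.0, 1.0))   # reciprocal (Hermitian) limit: W = 0
```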
In summary, we have experimentally demonstrated enhanced sensitivity by non-Hermitian topological amplification based on the non-reciprocal Hatano-Nelson model. For various lattices with different numbers of elements, we characterized the response of the system as the shift in one of its eigenvalues as the boundary conditions change. While this response tends to saturate for perturbations larger than a critical limit, it is linear over a smaller range of values. The sensitivity parameter calculated using experimental data clearly exhibits an exponential growth with the lattice size \(N\), in agreement with theoretical predictions. Using examples of other types of lattices, we showed that this peculiar enhancement arises due to the collaborative effect of non-Hermiticity and topology, something that does not occur for instance in Hermitian topological systems.
## Methods
### Experimental Procedure
To realize non-Hermitian topological sensors (NTOS), we construct the fiber-based time-multiplexed resonator network shown in Fig. 2. This network consists of a main cavity (yellow fiber) and three optical delay lines (blue fiber). We populate this network with optical pulses separated by a repetition period \(T_{\rm R}\approx 4{\rm ns}\), and we choose the lengths of the delay lines to introduce couplings between these pulses. The \(\pm 1T_{\rm R}\) delay lines produce couplings between nearest-neighbor pulses in the cavity, while the \(+(N-1)T_{\rm R}\) delay line, which is where we introduce perturbations, couples the first pulse in our synthetic lattice to the final pulse. While the main cavity can support up to \(74\) pulses, we use the \(+(N-1)T_{\rm R}\) delay line to set the size of the
Figure 5: **Exponential enhancement in the sensitivity of the NTOS.** Experimentally obtained sensitivities \(S\) of the NTOS for different lattice sizes \(N\) are shown as green circles on the left plot. The corresponding theoretically predicted values are also depicted as orange squares. The data shows an exponential enhancement in the sensitivity \(S\) as the NTOS lattice size grows (green dashed line). For comparison, we performed a similar analysis for other types of lattices, including a trivial lattice with uniform couplings as well as the Hermitian topological lattice represented by the SSH Hamiltonian (depicted on the right side of the figure). As shown in the plot, in sharp contrast to NTOS, such lattices tend to become less sensitive to their boundary conditions as the structure grows. The plot also displays the enhancement in the sensitivity resulting from distributed sensing (DS) techniques with \(N\) sensing elements.
lattice under study, and we do not excite the unused time slots in the main cavity.
Prior to an experiment, we calibrate the electro-optic modulators (EOMs) in the network using the calibration procedure described in Supplementary Information Sec. 1b. We calibrate the EOM between the laser and the main cavity to carve the zero-mode of the unperturbed Hatano-Nelson lattice from the pulse train of the laser, while we calibrate the modulators in the \(\pm 1T_{\mathrm{R}}\) delay lines to implement the Hatano-Nelson model's asymmetric couplings. The \(+(N-1)T_{\mathrm{R}}\) delay line also contains two EOMs (not shown in Fig. 2), which control the strength of the perturbation between the first and final sites of the HN lattice. We calibrate the throughput of these modulators to set the perturbation strength for any given experiment.
After completing our calibration, we begin an experiment by injecting the Hatano-Nelson zero-mode into the network for 10 roundtrips, which allows the power in the zero-mode to resonantly build up within the cavity. During this time, we leave the IMs in the \(\pm 1T_{\mathrm{R}}\) delay lines biased to minimum throughput so that we do not couple neighboring pulses through these delay lines. After 10 roundtrips, we stop injecting the zero-mode and we turn on the couplings in the \(\pm 1T_{\mathrm{R}}\) delay lines. We save a trace of the cavity ring-down, and we repeat this measurement on the order of 50 times to generate statistics for our data analysis.
In addition to injecting the zero-mode into our network, we also inject a single pulse into one of the unused time slots of the main cavity. We leave this single pulse uncoupled to the surrounding time slots so that this pulse decays at the intrinsic decay rate of just the main cavity. In the absence of the perturbation, this is the same decay rate that we would expect for the zero-mode of the Hatano-Nelson model. Therefore, this auxiliary pulse acts as a reference from which we can extract the change in the decay rate of the zero-mode due to the perturbation.
## Acknowledgments
The authors acknowledge support from ARO Grant W911NF-23-1-0048 and NSF Grants No. 1846273 and 1918549. The authors wish to thank NTT Research for their financial and technical support.
## Author Contributions
All authors contributed to the writing of this manuscript.
## Competing Interests
The authors declare no competing interests with regards to the publication of this work.
## Data Availability
The data used to generate the plots and results in this paper is available from the corresponding author upon reasonable request.
## Code Availability
The code used to analyze the data and generate the plots for this paper is available from the corresponding author upon reasonable request.
|
2310.08161 | Phase offset method of ptychographic contrast reversal correction | The contrast transfer function of direct ptychography methods such as the
single side band (SSB) method are single signed, yet these methods still
sometimes exhibit contrast reversals, most often where the projected potentials
are strong. In thicker samples central focusing often provides the best
ptychographic contrast as this leads to defocus variations within the sample
canceling out. However focusing away from the entrance surface is often
undesirable as this degrades the annular dark field (ADF) signal. Here we
discuss how phase wrap asymptotes in the frequency response of SSB ptychography
give rise to contrast reversals, without the need for dynamical scattering, and
how these can be counteracted by manipulating the phases such that the
asymptotes are either shifted to higher frequencies or damped via amplitude
modulation. This is what enables post collection defocus correction of contrast
reversals. However, the phase offset method of counteracting contrast reversals
we introduce here is generally found to be superior to post collection
application of defocus, with greater reliability and generally stronger
contrast. Importantly, the phase offset method also works for thin and thick
samples where central focusing does not. Finally, the independence of the
method from focus is useful for optical sectioning involving ptychography,
improving interpretability by better disentangling the effects of strong
potentials and focus. | Christoph Hofer, Chuang Gao, Tamazouzt Chennit, Biao Yuan, Timothy J. Pennycook | 2023-10-12T09:30:09Z | http://arxiv.org/abs/2310.08161v3 | # Phase offset method of ptychographic contrast reversal correction
###### Abstract
The contrast transfer function of direct ptychography methods such as the single side band (SSB) method are single signed, yet these methods still sometimes exhibit contrast reversals, most often where the projected potentials are strong. In thicker samples central focusing often provides the best ptychographic contrast as this leads to defocus variations within the sample canceling out. However focusing away from the entrance surface is often undesirable as this degrades the annular dark field (ADF) signal. Here we discuss how phase wrap asymptotes in the frequency response of SSB ptychography give rise to contrast reversals, without the need for dynamical scattering, and how these can be counteracted by manipulating the phases such that the asymptotes are either shifted to higher frequencies or damped via amplitude modulation. This is what enables post collection defocus correction of contrast reversals. However, the phase offset method of counteracting contrast reversals we introduce here is generally found to be superior to post collection application of defocus, with greater reliability and generally stronger contrast. Importantly, the phase offset method also works for thin and thick samples where central focusing does not.
keywords: Electron ptychography, Phase wrap, 4D STEM +
Footnote †: journal: Ultramicroscopy
## 1 Introduction
Electron ptychography offers very high dose efficiency [1; 2; 3], the ability to reveal the locations of light elements neighboring heavy atoms [4; 5], post collection aberration correction and superresolution [6; 7]. These advantages make the method a very attractive complement to Z-contrast annular dark field (ADF) workflows [8], with the phase images providing stronger images of the structure and the Z-contrast stronger sensitivity to composition. With advances in cameras having greatly reduced or completely removed the problem of slow scans with 4D STEM [9; 10], there is now relatively little drawback to collecting data for ptychography.
Compared to phase contrast imaging with conventional high resolution transmission electron microscopy (HRTEM), the contrast transfer function (CTF) of direct focused probe methods such as single side band (SSB) [11; 12; 13] and Wigner distribution deconvolution [14] is very simple, requiring no aberrations to form contrast and exhibiting no zero crossings. This makes these ptychographic methods much easier to interpret than HRTEM, at least for potentials that are not overly strong. The phase is related to the strength of the potential encountered by the beam electrons, and thus, as the strength of the potential increases, the phase can eventually exceed the limit imposed by the \(2\pi\) range of values available to phase and wrap around. This means that as the phase increases it can suddenly go from being maximally positive at \(\pi\) to maximally negative at -\(\pi\), causing a very large change in contrast from only a small change in the sample. Therefore, even though the SSB ptychography CTF, derived using the same weak phase approximation as HRTEM CTFs, shows all frequencies being passed with the same sign, contrast reversals can occur because of wrap around.
One of the most commonly observed contrast reversal behaviours in atomic resolution imaging is a dip in the phase at the center of the atomic columns [15; 5; 14]. These often appear as donut shapes in the images, and like a volcano with a caldera in line profiles, and represent a reversal from the centrally peaked, probe shaped atoms one observes when the potential is weaker. This makes some intuitive sense as the center of an atomic column is the location of the strongest potential, and thus it is natural to expect that this will be where wrap around will occur
first as the potential increases. This intuitive expectation is also in accord with the fact that it is the heavier atomic columns that exhibit donuts first as thickness increases [5]. Such contrast reversals have also been observed in iCoM and iterative ptychography [15; 16]. This again makes intuitive sense as all these methods are attempting to retrieve the same phase shift induced on the beam electrons by the sample. Perhaps less intuitive is the fact that the range of phase values in the final images generally remains much less than \(2\pi\) in atomic resolution imaging, at least in single slice ptychography, but it is not just the phases in the final image that can phase wrap - the individual frequency components can also phase wrap.
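The wrap around itself is simply the branch cut of the complex argument; a short numerical illustration (a sketch, not taken from the original analysis) makes the sign flip explicit:

```python
import numpy as np

phases = np.array([0.5, 2.0, 3.0, 3.5, 4.5])    # accumulated phase shifts in radians
wrapped = np.angle(np.exp(1j * phases))         # maps onto (-pi, pi]
print(wrapped)  # values above pi (~3.1416) reappear as strongly negative phases
```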
If the goal is to locate the light elements hidden in the ADF signal by strong scattering of nearby heavy elements, the appearance of donut contrast on the heavier columns is often not a significant impediment. However, as the thickness increases, the contrast can become more complex [5]. Furthermore, it is often preferable for images of atomic structure to appear as close as possible to the relatively simple probe shaped spikes in intensity occurring in ADF imaging, even if simply for ease of interpretation. However, the contrast reversals can also degrade overall contrast and reduce the visibility of the structure at lower doses [17], as well as complicate quantification.
For quite a range of thicknesses, central focusing of the beam offers phase images free from contrast reversals and with the strongest contrast overall [5; 15; 17]. However, this conflicts with the optimal focus for ADF imaging, the entrance surface. Thus optimizing the probe focus for the phase images during acquisition can significantly degrade the quality of simultaneously acquired ADF images, especially as the sample thickness increases and the distance between the entrance surface and optimal focus for ptychography widens. Fortunately, post collection adjustments can be applied. The ability of ptychography to adjust aberrations post collection can be leveraged to apply a post collection defocus which can often remove the contrast reversals [5]. However, the application of post collection defocus also often reduces the overall contrast, even if the atoms all appear "atom like" after the contrast reversal correction. Furthermore, in some cases, post collection defocus does not remove contrast reversals with satisfying results, and indeed in other cases neither does physically focusing the probe during data acquisition [15; 17]. Another interesting approach to overcome contrast reversals is multislice ptychography [18]. Here, the specimen is divided into multiple slices and the phase is solved in each slice separately. Crucially, each slice is thin enough so that phase wraps are avoided within the slices.
Here we delve deeper into the phase wrapping process causing the contrast reversals, and demonstrate a better way to counteract them for SSB ptychography when using focus during acquisition is either not an option or does not satisfactorily remove the reversals. As the potential of a single atom is increased, the phases of the spatial frequencies change nonlinearly, with the higher frequencies changing faster than the lower frequencies. This means that as the potential is increased, the higher spatial frequencies eventually hit the limit imposed by the \(2\pi\) range of phases available and wrap around. Once a frequency wraps around, its phase contrasts very strongly with the frequencies that have not yet wrapped around. Thus these asymptotes in the phase response produce contrast reversals, and they can do so without any dynamical scattering. Applying defocus can roll the phases back around or reduce the amplitude of the wrapped around frequencies sufficiently that the contrast reversals can be removed. However, we find directly adjusting the phases with an offset applied to the zero frequency (DC) phase is generally a better method. We show that this method can robustly counteract contrast reversals regardless of the thickness or initial focus of the probe. Although central focusing remains preferable for the absolute best phase contrast in many cases, the phase offset method provides significantly improved contrast compared to defocus correction when a post collection solution is required. Furthermore, the phase offset method can correct contrast reversals in cases where physical defocus cannot satisfactorily do so. The ability of the phase offset to retain contrast is especially important when the sample is fragile and one has a low dose budget. We demonstrate this experimentally at a dose of 50 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\) with a thin, highly beam sensitive methylammonium (MA)-PbI\({}_{3}\) perovskite solar cell material [19; 20] which exhibits contrast reversals that cannot be corrected by defocus at all, whether applied during data collection or after.
## 2 Results and Discussion
Figure 1 displays single atom SSB images and phase vs frequency plots, starting with a U atom and incrementally multiplying the strength of the potential by factors of 5 up to 20. For the SSB reconstruction the package pyPtychoSTEM [21] is used, and for the simulation of the 4D data abTEM [22] is used. The convergence angle is set to 30 mrad. Obviously, the multiplied potential then corresponds to an atom heavier than any known, but this super heavy atom allows us to probe the effect of the potential without dynamical scattering
complicating matters. With the already very heavy U atom, the phase curves significantly upwards as a function of the magnitude of the spatial frequency, but remains entirely within the 2\(\pi\) phase range without any phase wrap. Multiplying the U potential by a factor of 5, the phase increases more rapidly with respect to frequency, and a phase wrap occurs as the phase extends beyond \(\pi\). This results in a significant proportion of the higher spatial frequencies switching from being strongly positive to strongly negative, creating a strong contrast between frequencies lower than and higher than the asymptote, and resulting in a contrast reversal in the form of a small donut hole appearing in the center of the image of the atom. As the potential is increased to U \(\times\)10, the phase vs frequency curvature increases. The first wrap around asymptote shifts to lower frequency, a second wrap around point appears and the donut hole increases in size. As the potential is further increased the curvature further increases, further altering the balance between positive and negative phase frequencies, and overall the donut hole increases in width.
Figure 2 shows how the different ranges of frequencies influence the phase image for the U\(\times\)20 single atom potential using masking in probe reciprocal space. For this potential, the strongest single atom potential we have used, there are three phase wraps occurring within the 2\(\alpha\) range of frequencies passed using our 30 mrad convergence angle (\(\alpha\)), with the last wrap occurring almost at the 2\(\alpha\) limit of the SSB contrast transfer function [12; 13]. These three distinct ranges have frequencies
Figure 1: Single atom SSB simulations using potentials ranging from a U atom potential to 20 times that potential, showing how the potential strength itself causes contrast reversals, which manifest here as donut shaped atom contrast. The 2nd row shows the phase vs frequency in 2D, and the 3rd row line profiles of the rotationally symmetric phase response. As the potential increases the curvature of the phase increases, the phase hits the top limit and wraps around resulting in contrast reversal at the center of the atom. As the wrap around shifts to lower frequencies the donut hole expands. Scale bar is 2 Å.
Figure 2: Illustration of the effects of the different frequency ranges on the simulated SSB image of the U\(\times\)20 potential using masking in Fourier space. With the full range of frequencies out to 2\(\alpha\) included (a), the atom appears as a thin ring with a slight peak in its center. As we progressively mask out the frequencies after each phase wrap (b–d) around asymptote the positive ring of phase progressively fills more of the central region of the atom, until after removing the contributions from all frequencies above the first wrap around it becomes atom like again, with a peak at the center of the atom. Of course, by limiting the contribution to lower spatial frequencies the image is also limited in resolution. Scale bar is 1 Å for the phase image and 20 mrad for the phase and amplitude vs. frequency plot, respectively.
with strongly contrasting phases. Starting with all spatial frequencies included (Fig. 2a), the single atom appears as a thin ring of strongly positive phase with a lower phase region in the center. The central region has a small peak of slightly more positive phase with this potential. As we mask out more of the higher frequencies, such that frequencies higher than the second phase wrap occurring in the middle of the 2\(\alpha\) range of frequencies are incrementally excluded, the ring of positive phase in the image fills inwards (Fig. 2b). The small positive peak in phase at the center of the atom disappears and instead becomes a minimum.
As we mask more of the higher frequencies, between the first asymptote and second asymptote, the donut shape fills in more, with the central "hole" becoming more positive overall but still dipping significantly compared to the phase further from the center of the atom (Fig. 2c). In this step we have reduced the number of strongly negatively phased frequencies above the first phase wrap asymptote that contrast with the strongly positively phased frequencies below the asymptote. When we remove all the frequencies above the first wrap point, we remove these strongly contrasting negative phase frequencies and the atom becomes "atom" shaped again (Fig. 2d).
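For readers who wish to reproduce this kind of frequency masking numerically, a minimal sketch is given below. It assumes the reconstructed SSB phase is held in a NumPy array; the function and parameter names are ours and not taken from any published code.

```python
import numpy as np

def low_pass_phase_image(phase_image, k_max, pixel_size):
    """Keep only spatial frequencies with magnitude below k_max (in 1/Angstrom),
    mimicking the progressive Fourier-space masking illustrated in Fig. 2."""
    ky = np.fft.fftfreq(phase_image.shape[0], d=pixel_size)
    kx = np.fft.fftfreq(phase_image.shape[1], d=pixel_size)
    k = np.hypot(*np.meshgrid(ky, kx, indexing="ij"))    # |k| for every pixel
    masked_fft = np.where(k <= k_max, np.fft.fft2(phase_image), 0.0)
    return np.real(np.fft.ifft2(masked_fft))              # band-limited phase image
```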
As has been shown previously, defocus can often be used to counteract the contrast reversals [5; 15; 17]. For thicker samples, physically placing the probe focus in the center of the sample is often found to be optimal. With a single atom, however, there is no difference between the entrance surface and the center of the sample. Thus there is no significant variation of defocus through the sample, which is otherwise the reason central focusing is optimal in many cases [17]. However, we can still counteract the contrast reversals of a single atom with defocus, as shown in figure 3 using a 5 nm defocus with the U\(\times\)20 potential.
While physical defocus is most commonly found best for the ptychographic contrast, particularly with intermediate thickness samples in which central focusing is optimal, the option to counteract the reversals with the probe focused elsewhere can be a significant benefit. This can often be achieved with the ability of ptychography to alter aberrations post collection [5]. Figure 3 illustrates this for the U\(\times\)20 potential using a 3 nm defocus applied post collection.
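As a rough illustration of such a post-collection adjustment, the sketch below applies the standard defocus aberration phase \(\chi(k)=\pi\lambda\Delta f\,k^{2}\) to the complex frequency components before the image is formed. This is a simplified stand-in for the full aberration handling of an SSB reconstruction, and the sign convention is only indicative.

```python
import numpy as np

def post_collection_defocus(components, k, wavelength, defocus):
    """Rotate each SSB Fourier component by the defocus aberration phase
    chi(k) = pi * wavelength * defocus * k**2, applied after data collection."""
    chi = np.pi * wavelength * defocus * k ** 2
    return components * np.exp(-1j * chi)
```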
Physical and post collection defocus are generally found to behave somewhat differently, as is also the case here despite this being a single atom with presumably insignificant dynamical effects. Although both physical and post collection defocus remove the donut shape in the SSB image, the behavior further from the atom is different, with a ring of higher phase appearing in the
Figure 3: Illustration of the influence of post-collection and physical defocus on SSB ptychography with simulations using the U\(\times\)20 potential. The SSB images are displayed on the same intensity scale, with the contrast ratio (C) of the maximum phase to background phase indicated. To ease interpretation, the profiles of the frequency response include just one of the two frequency curves that arise with a single atom. The full frequency response is displayed in the 2D phase vs frequency plots, with the alternating checkerboard pattern being the result of the tandem phase vs frequency curves associated with the single atom, which hit the \(\pi\) upper limit at different frequencies and separately wrap around, as is visible in the line profiles in figure 1. From the line profiles it is apparent that the physical defocus brings the amplitudes close to zero after the first wrap around, whereas the post collection defocus pushes the first wrap point out to higher spatial frequencies, resulting in stronger contrast in this case.
post collection defocus results closer to its center than in the physically defocused case. The 2D plot of phase vs frequency is more complex in the physical defocus case here, with the post collection appearing to nonlinearly push the phase wraps out to higher frequency. There are still three phase wraps in the post collection case, but they are concentrated closer to the 2\(\alpha\) transfer limit, leaving a broader range of unwrapped lower frequencies. In the physically defocused case, it appears more that it is the suppression of the amplitudes of the phases after the first wrap that results in the contrast reversal removal.
The reduction of contrast that can result from post collection defocus motivated us to search for an alternative strategy to counteract contrast reversals. We present here what we call the phase offset method. Given that the DC term provides the baseline phase which all other frequencies interfere with when transforming from probe reciprocal space into real space to form an image, by altering the relative phase of the DC term and all the other phases with a rigid offset, we can manipulate the phase wrap point and move it to higher frequencies without otherwise altering the overall shape of the phase vs frequency plot. In practice one can simply shift the DC term itself, although for the purposes of illustration here we instead shift the phases of the other frequencies while keeping the DC term phase constant in our plots of phase vs frequency as this better shows the effect on the phase wraps.
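A minimal sketch of this procedure is given below, assuming the SSB Fourier components are stored in a complex array with the DC term at index [0, 0]; the names are illustrative rather than taken from our processing code.

```python
import numpy as np

def apply_phase_offset(ssb_components, offset_rad):
    """Rigidly rotate the phase of all non-DC components by offset_rad while
    keeping the DC (baseline) term fixed, then inverse transform. The shifted
    wrap points follow automatically from the complex representation."""
    shifted = ssb_components * np.exp(1j * offset_rad)
    shifted[0, 0] = ssb_components[0, 0]      # leave the DC term untouched
    return np.fft.ifft2(shifted)              # offset-corrected image
```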
Figure 4 illustrates the use of a phase offset with the U\(\times\)10 potential. Without any correction the phase vs frequency curve displays a sharp jump from the DC term to the first non zero frequency. Applying an offset of -1.8 radians to the non-zero frequency components rigidly shifts the phase curve down such that there is no discontinuity moving from the DC term to the higher spatial frequencies until the positive \(\pi\) upper limit is hit and the phase wrap occurs as shown in Figure 4e. Importantly, with this offset the phase wrap occurs at a significantly higher spatial frequency than without the offset, and as can be seen from the figure the resulting image, Figure 4c, is donut free and much more closely resembles the shape of a lighter single atom that does not cause wrap around. There is a negative halo, but this is normal for a single atom in ptychographic images [23]. Using other values for the offset either does not fully correct the contrast reversal as in Figure 4b or inverts the donut image as in 4d.
While a single atom is a rather simple system, it turns out that this phase offset method is quite robust with crystals. As a first example, Fig. 5 examines the use of the phase offset with 16 nm thick SrTiO\({}_{3}\) (STO). In panel a the probe is focused to the entrance surface, which we emphasize is the best condition for ADF imaging. However, this leads to contrast reversals of the heavy Sr and Ti sites in the SSB image. Physically focusing to the middle of the specimen, as in Fig. 5b, removes the contrast reversals as a result of the defocus phase compensation of different layers [17]. Correcting the contrast reversals using post collection defocus, applied to data taken with the probe physically focused to the entrance surface during acquisition, requires in this case a significantly larger defocus which, as we will show, results in a significant contrast reduction of the phase image. The post collection defocus adjustment often leads to sufficiently large contrast reduction that the atoms are not visible at low doses such as the 500 e\({}^{-}\)/Å\({}^{2}\) used in the bottom row of Fig. 5. As seen in the figure, the image in which the contrast reversals have been corrected using physical defocus remains quite clear at this dose. Compared to post collection defocus, the phase offset does not reduce the contrast nearly as much, providing an image in which all the locations of the atoms are easily identified also at the lower dose.
We note that as the ptychographic contrast is not as high here with the focus set at the entrance surface even with the phase offset method, compared to physically focusing to the center of the sample, one must choose to prioritize either optimal ptychographic contrast at the expense of the ADF or having a better ADF contrast by focusing to the entrance surface and compensating the ptychography with a phase offset. Many materials science samples can handle many orders of magnitude higher doses than 500 e\({}^{-}\)/Å\({}^{2}\), and for these one may wish to optimize the ADF by focusing to the entrance surface while still obtaining a high quality contrast reversal free ptychographic image via the phase
Figure 4: Illustration of the use of a phase offset on strong phase objects. (a–d) SSB images simulated with the U\(\times\)10 potential using phase offsets of 0.0, -0.9, -1.8 and -2.7 rad on the non-zero spatial frequencies with the DC values set to zero. (e) Line profiles of the phases with no offset and the optimal -1.8 rad offset, again with only the upper phase vs frequency curve shown for simplicity.
offset method. On the other hand, if the dose budget for a given sample is very low, one might accept that the far less dose efficient ADF signal will not yield useful information even with the focus at the entrance surface, and choose to physically focus to the center of the sample. Of course, optimizing the focus under low dose conditions can also be very difficult and thus there likely remains benefit to optimizing via post collection adjustments such as the offset method at low doses as well, even if a central focus was the aim.
Since ptychographic contrast is quite sensitive to the sample thickness, we now demonstrate that the offset correction can be successfully applied to a large variety of thicknesses. Fig. 6 shows STO SSB images, noise free and with a dose of 500 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\), with thicknesses of 16 nm, 20 nm, 24 nm and 50 nm. This covers a range of specimen thicknesses typical of atomic resolution electron microscopy in materials science. Focusing on the entrance surface leads to contrast reversals as seen in the first and third rows of Fig. 6. The phase offset correction leads to a reasonable contrast with all thicknesses. In the 24 nm case, the oxygen columns have a much weaker contrast in both the uncorrected and the corrected phase images. This can be improved by physically focusing to the center of the sample thickness, as we showed previously [17]; however, this can be difficult in practice without live processing. For 50 nm of STO, the contrast reversals are sufficiently complex in the uncorrected image that it is practically uninterpretable at 500 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\). This is a very low dose for STO, but it is
Figure 5: Comparison of SSB images simulated for 16 nm thick STO with the probe focused to the entrance surface (df = 0), the central slice (df = 8 nm) without further correction, and focused to the entrance surface with a post-collection defocus of 10 nm and using the phase offset method. The top row of images is noise free, while the bottom row of images use a dose of 500 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\). Here the central focusing is optimal, correcting the reversal with strong contrast. The post-collection defocus correction is very noisy in the low dose simulation, but the phase offset method retains sufficient contrast to locate all the columns at low dose while retaining the optimal probe focus for the ADF. Scale bar is 3 Å.
Figure 6: Simulated SSB imaging of STO as a function of thickness comparing the uncorrected and phase offset results with the probe focused to the entrance surface. The top half of the figure is with infinite dose, and the bottom with a dose of 500 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\). These show the robustness of the phase offset method across a wide range of thickness. Scale bar is 3 Å.
Figure 7: Comparison of physical defocus and the phase offset method for contrast reversal removal with 50 nm thick STO simulations. Central focusing (25 nm df) does not remove the contrast reversals in this case. Instead, close to the exit surface, using 44 nm of defocus, was found to be optimal using a focal series. However, physically focusing just 6 nm from the exit surface results in additional atom like features appearing where there are no atoms. This does not occur using the phase offset on data taken with the probe focused to the entrance surface. Furthermore the O columns are more visible at the low 500 \(\mathrm{e}^{-}/\mathrm{\AA}^{2}\) dose using the phase offset method than physical focusing. Scale bar is 3 Å.
nevertheless informative regarding the contrast generally, as well as for samples that can handle only very low doses.
50 nm is also an interesting case because the center of the sample is not the focal plane exhibiting optimal contrast, as we found previously by performing a simulated focal series [5]. Figure 7 shows that the central focal plane exhibits quite strong contrast complexity that is not "atom-like". Instead, it was found that physically focusing to near the exit surface provides much better correction of contrast reversals. An example of this is shown in figure 7 using a 44 nm defocus from the entrance surface, just 6 nm from the exit surface of the sample. Here the contrast is much better, and is much more interpretable, including at 500 e\({}^{-}\)/Å\({}^{2}\). However, some artifacts remain in the form of atom-like spots in between the actual atoms, as is seen in the noise free image. These are not present in the phase offset images, which not only show no contrast reversals or artifacts but actually show more visible contrast on the O sites at 500 e\({}^{-}\)/Å\({}^{2}\). Given the difficulty of optimizing focus during low dose work, the performance of the phase offset here is encouraging.
Furthermore, focus adjustment of the beam cannot always remove contrast reversals. While one might expect that contrast reversals arise only in relatively thick materials, surprisingly thin materials can also encounter contrast reversals, and these can be impossible to remove with defocus. Clark et al. showed this for very thin GaN [15]. We have explored this for 5 nm thick STO [17], which we find also exhibits contrast reversals which cannot be counteracted with central focusing. This is perhaps intuitive given the small range of defocus that exists within the sample. However, the reversals also cannot be corrected with any focal point within the sample, or even within a useful range beyond it, as shown in Fig. 8. While perhaps the contrast reversal begins to be counteracted far beyond the exit surface, the contrast has already been significantly reduced to the point where the O sites are hardly visible. In contrast, applying a phase offset to the entrance surface focused data completely removes the contrast reversals. This shows that the phase offset method can be more robust than defocus adjustment generally.
We now demonstrate the phase offset method with experimental data. 4D STEM data of a methylammonium (MA) PbI\({}_{3}\) perovskite was acquired using our Timepix3 event driven camera to easily achieve very low doses and avoid drift [9]. Due to the extreme beam sensitivity of the material, we use a dose of just 50 e\({}^{-}\)/Å\({}^{2}\). We note that this is in the dose regime used in cryo electron microscopy of proteins. Although the event driven camera makes such low dose scans easy to achieve, the very low dose still makes it very difficult to find the best focus for the ptychography during the experiment, especially as one wishes to spend all the dose budget on imaging the regions of interest on the sample, and not on adjusting the focus on that area. In practice, at present focusing is performed by optimizing the ADF image, which again is usually not the best defocus for the simultaneously acquired ptychography data, at least without further correction.
An SSB image of the MAPbI\({}_{3}\) is shown without correction in Fig. 9a. The heavy columns, which include Pb, tend to be donut shaped, despite the very low thickness of approximately 4 nm as indicated from EELS measurements. This thickness falls within the regime where the contrast reversals cannot be corrected via defocusing, assuming the STO results are representative as we expect. Indeed a post acquisition defocus series does not remove the contrast reversals. The best result we could achieve with post collection defocus is with 8 nm of defocus as shown in Fig. 9c. Applying a phase offset, however, removes the contrast reversals without any obvious compromise of the contrast, as shown in Fig. 9b.
Further increasing the post collection defocus reduces the contrast to the point of losing the lattice contrast completely. This is in agreement with the earlier discussion regarding the thin STO and the defocus series shown in Fig. 8, where a high defocus only corrects the reversals to a small extent. For low dose data, such as that of the MAPbI\({}_{3}\), such high defocus values lead to a complete loss of the signal as a result of the contrast reduction associated with defocusing. In this case, the phase offset is the only method that can practically be used to obtain an easily interpretable image without contrast reversals.
Since contrast reversals have also been observed in iterative ptychography reconstructions [15], it is interesting to see if the phase offset can also be used for these methods as one would expect. For this reason, we simulated a 4D data set of MAPbI\({}_{3}\) with an
Figure 8: Simulated focal series for 5 nm thin STO showing that no defocus value can counteract the contrast reversals within a range that does not overly distort the images. Importantly, the phase offset method counteracts the contrast reversals with the probe focused to the entrance surface and retains good contrast. Scale bar is 2 Å.
80 nm defocus and processed it with the gradient descent iterative ptychography algorithm as implemented in py4DSTEM [24]. Donuts appear on the Pb, similar to the SSB case, as shown in Fig. 10a. Applying the phase offset to the gradient descent result indeed removes the contrast reversals as shown in panel b. A 50 e\({}^{-}\)/Å\({}^{2}\) dose version of the phase offset corrected ePIE reconstruction is shown in Fig. 10c.
In conclusion, the phase offset method offers a significant boost to our ability to counteract contrast reversals in electron ptychography. Optimizing the focus used with the data acquisition often provides the best ptychographic contrast, such as central focusing with intermediate thicknesses. However there are many situations where using a focus optimal for the ptychographic contrast is not practical, and indeed cases where defocus cannot be used to counteract contrast reversals at all. Often one may prefer to optimize defocus for the ADF, as this cannot be corrected post collection. At doses sufficiently high that the less efficient ADF signal shows good contrast, the contrast reduction from using the offset method vs a physical defocus optimized for the ptychography will often not be so significant as to matter for locating atoms. At very low doses where the ADF provides very poor contrast, one may choose to abandon the ADF and prioritize focusing for the ptychography. However, the ADF can provide very useful information at surprisingly low doses, even if exceedingly noisy, and in practice accurate focusing is particularly challenging at extremely low doses. Furthermore, defocus cannot always correct contrast reversals, as is the case for the few nm thick samples we discussed. Thus, as we see with the experimental example with MAPbI\({}_{3}\) here, the phase offset can be an important tool to remove contrast reversals, even at the extremely low doses used in cryo electron microscopy of proteins. Overall, we find the phase offset method reliably overcomes contrast reversals with minimal contrast reduction, or even improved contrast, compared to a defocus optimized for ptychography at the time of acquisition, and thus we expect it to become a standard tool in the use of direct electron ptychography.
## 3 Acknowledgement
We acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme via Grant Agreement No. 802123-HDEM (C.H., C.G., T.C., B.Y. and T.J.P.) and FWO Project G013122N "Advancing 4D STEM for atomic scale structure property correlation in 2D materials" (C.H.).
|
2301.11417 | Are Labels Needed for Incremental Instance Learning? | In this paper, we learn to classify visual object instances, incrementally
and via self-supervision (self-incremental). Our learner observes a single
instance at a time, which is then discarded from the dataset. Incremental
instance learning is challenging, since longer learning sessions exacerbate
forgetfulness, and labeling instances is cumbersome. We overcome these
challenges via three contributions: i. We propose VINIL, a self-incremental
learner that can learn object instances sequentially, ii. We equip VINIL with
self-supervision to by-pass the need for instance labelling, iii. We compare
VINIL to label-supervised variants on two large-scale benchmarks, and show that
VINIL significantly improves accuracy while reducing forgetfulness. | Mert Kilickaya, Joaquin Vanschoren | 2023-01-26T21:07:12Z | http://arxiv.org/abs/2301.11417v4 | # Are Labels Needed for Incremental Instance Learning?
###### Abstract
In this paper, we learn to classify visual object instances, incrementally and via self-supervision (self-incremental). Our learner observes a single instance at a time, which is then discarded from the dataset. Incremental instance learning is challenging, since longer learning sessions exacerbate forgetfulness, and labeling instances is cumbersome. We overcome these challenges via three contributions: i). We propose VINIL, a self-incremental learner that can learn object instances sequentially, ii). We equip VINIL with self-supervision to by-pass the need for instance labelling, iii). We compare VINIL to label-supervised variants on two large-scale benchmarks [35, 6], and show that VINIL significantly improves accuracy while reducing forgetfulness.
## 1 Introduction
This paper strives for incrementally learning to recognize visual object instances. Visual instance recognition aims to retrieve different views of an input object instance image. It can be seen as fine-grained object recognition, where the goal is to distinguish different instantiations of the same object, such as cup 1 from cup 2. Instance recognition finds applications in many domains, such as in visual search [42], tracking [5, 51, 52] and localization [63].
Distinguishing between different object instances is a challenging task as they often differ only by small nuances. Metric learning [55] is a commonly used approach to learn visual object instances by comparing two views of the same object using a deep convolutional network, such as ResNet [22]. The network is trained to bring representations of the same object close together and separate representations of different objects in a large batch of images.
However, this approach requires iterating over potentially million-scale datasets multiple times to refine the metric space, which can be impractical for privacy reasons (some data may have to be deleted) or scale (when dealing with billions of images). Additionally, using the trained deep net to query a large database of images by comparing the feature representation of the input image to the database representations is time-consuming and computationally expensive.
This paper builds upon incremental learning to mitigate privacy and scale issues. In incremental learning, the learner observes images from a certain class for a number of iterations. Then, the data of the previous class is discarded, and the learner receives examples from a novel category. Such an approach is called class-incremental learning, and has received an increasing amount of attention recently [27, 29, 38, 39, 60].
Existing class-incremental learners are ill-suited for instance-incremental learning for two reasons. First, class-incremental learners rely on full label supervision. Collecting such annotation at the instance level is very expensive. Second, despite years of efforts, class-incremental learners are forgetful, since they lose performance on previously observed categories.
This paper proposes **V**isual Self-**I**ncremental **I**nstance **L**earning, VINIL, to perform instance-incremental learning, consider Figure 1. VINIL observes multiple views of a single instance at a time, which is then discarded from the dataset. Such examples can be easily captured via turntable cameras [31, 6, 18, 40] or via hand-interactions [15, 36, 53]. Then, VINIL extracts its own supervision via self-supervision [59], therefore self-incremental. Self-incremental learning not only is label-efficient, it also consistently outperforms competitive label-supervised variants, as we will show. In summary, this paper makes three contributions:
1. In this paper, we study the challenging task of incremental visual instance learning,
2. We propose VINIL, an incremental instance learner solely guided by self-supervision, by-passing the need for heavy supervision,
3. Through large-scale experiments on [35, 6], we show that VINIL is more accurate and much less forgetful with respect to competitive label-supervised variants, hence unlocking the potential of large-scale incremental learning for free.
## 2 Related Work
**Visual Instance Recognition.** Visual instance recognition has been extensively researched in recent years and has been applied to various computer vision problems, including product retrieval [23, 34, 42, 55], object tracking [51, 52, 5], and geo-localization [56, 49, 54, 62, 33]. The most common approach for these tasks is to induce a discriminative embedding space, often using metric learning techniques [23, 14]. These methods require access to the entire dataset and fine-grained similarity labels. In contrast, this paper presents a novel method for incremental and label-free visual instance recognition, in a similar vein to [43].
**Class-Incremental Learning.** Class-incremental learning involves expanding a deep classifier with novel objects, with the goal of maintaining performance on previous categories and avoiding forgetting [38]. Popular techniques to prevent forgetting include regularization, which limits abrupt changes in network weights [28, 30, 32, 46], and memory replay of previous data [48, 50, 4, 24]. Our approach differs from conventional class-incremental learning in two ways. First, while class-incremental learning focuses on object categories, our approach operates at the instance level, presenting new challenges. Second, class-incremental learning requires fully labeled datasets, which is often not possible in instance learning. To overcome these limitations, we use self-supervision and adapt relevant evaluation techniques. Specifically, we use Elastic Weight Consolidation (EwC) as a regularization method [30] and Replay as a memory technique [48] due to their adaptability for label-free learning.
**Self-Supervised Learning.** Self-supervision involves creating pretext tasks to learn deep representations without using labels. Early methods predicted rotations [19] or patches [41], but contrastive learning has become dominant in recent years [21, 11, 8, 12]. In our work, we utilize self-supervision to extract learning signals in place of instance labels. We experimented with both BarlowTwins [59] and SimSiam [13] due to their high performance and adaptation in incremental learning tasks [37, 16]. We found that BarlowTwins [59] performs better than SimSiam for our incremental learning setup. We believe this is due to its ability to reduce redundancy across different views of the input. Reducing visual redundancy is especially important for different instances of the same object, as visual object instances may only differ in small details.
**Incremental Self-Supervised Learning.** Recently, there has been a surge of interest in the use of self-supervision to replace label supervision for incremental learning. We identify three main directions. _i) Pre-training:_ Researchers use self-supervised learning either for pre-training prior to the incremental learning stage [7, 26, 17] or as an auxiliary loss function to improve feature discrimination [61]. However, these papers still require labels during the incremental learning stage. _ii) Replay:_ A second line of techniques proposes replay-based methods [37, 45, 10] to supplement self-supervised learners with stored data within the memory. _iii) Regularization:_ A third line of work proposes to regularize self-learned representations [16, 37, 20].
In this work, we focus on replay and regularization-based self-incremental learning. More specifically, we closely follow UCL [37] and ask ourselves: What is the contribution of self-supervision for instance incremental learning? Our main observation is that self-supervision consistently yields less forgetful, more accurate and transferable representations, as will be shown via large-scale experiments.
Figure 1: **Top:** Label-incremental learning demands instance-level annotations and trains a new weight per instance. This approach is not suitable for handling large numbers of visual instances, and it is also prone to forgetting previously learned instances. **Bottom:** In this paper, we introduce VINIL, a self-incremental instance learning method. VINIL focuses solely on learning a discriminative embedding and uses Self-Supervised Learning (SSL) to extract supervision from different views of the same instance. As a result, VINIL is label-free, more scalable, and significantly less prone to forgetting compared to label-incremental learning.
## 3 VINIL
We present an overview of VINIL in Table 1. The goal of VINIL is to train an embedding network \(f_{\theta_{t}}(\cdot)\) parameterized by \(\theta_{t}\). The network maps an input image \(x\) to a \(D\)-dimensional discriminative embedding, \(h=f_{\theta_{t}}(x)\) which will then be used to query the database to retrieve different views of the input query for instance recognition. Here, \(t\) denotes the incremental learning step, where the tasks are arriving sequentially: \(\mathbf{T}=(\mathbf{T}_{1},\mathbf{T}_{2},...,\mathbf{T}_{t})\). We train VINIL via minimizing the following objective:
\[\mathcal{L}=w_{c}\cdot L_{inst}+(1-w_{c})\cdot L_{incr} \tag{1}\]
where \(w_{c}\) controls the contribution of instance classification loss \(L_{inst}\) and incremental learning loss \(L_{incr}\). Incremental learning loss either corresponds to memory replay [48] or weight regularization [30] whereas instance classification loss \(L_{inst}\) is either cross-entropy with labels or a self-supervision objective.
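A schematic training step implementing this objective is sketched below; the callables passed in stand for the concrete choices of \(L_{inst}\) and \(L_{incr}\) listed in Table 1, and the function signature is illustrative rather than taken from our released code.

```python
def vinil_step(model, views, inst_loss_fn, incr_loss_fn, w_c=0.7):
    """One VINIL training step under Eq. (1) with a self-supervised instance
    loss; the label-supervised variants replace inst_loss_fn with cross-entropy
    on classifier logits, and incr_loss_fn returns the EwC penalty or replay
    loss (zero for plain fine-tuning)."""
    x, x_aug = views                        # original and augmented views
    z, z_aug = model(x), model(x_aug)
    l_inst = inst_loss_fn(z, z_aug)
    l_incr = incr_loss_fn(model)
    return w_c * l_inst + (1.0 - w_c) * l_incr
```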
### Incremental Learning
**Fine-Tuning (FT).** A vanilla way to perform incremental instance learning is to apply simple fine-tuning via SGD [47]. In fine-tuning, no incremental learning loss is applied (_i.e_. \(w_{c}=1.0\)) and the sole objective is classification.
In case of label-supervision, a task is defined by a dataset \(D_{t}^{label}=\{(x_{i,t},y_{i,t})_{i=1}^{k_{t}}\}\) where \(k_{t}\) is the data size at time \(t\). Then, fine-tuning corresponds to instance discrimination via cross-entropy \(L_{inst}=CE(y_{i,t},y_{i,t}^{\prime})\). Here, instance category prediction for the instance \(i\) at time step \(t\) is obtained with a simple MLP classifier. Notice that this classifier will expand in size linearly with the number of instance categories.
In case of VINIL, a task is defined by a dataset \(D_{t}^{self}=\{(x_{i,t})_{i=1}^{k_{t}}\}\) (_i.e_. no labels). Then, fine-tuning corresponds to minimizing the self-supervision objective \(L_{inst}=BT(x_{i,t},x_{i,t}^{\prime})\) where \(BT(\cdot)\) is the BarlowTwins objective [59].
**EwC [30].** EwC penalizes big changes in network weights via comparing the weights in the current and the previous incremental learning step. Originally, EwC re-weights the contribution of each weight to the loss function as a function of instance classification logits (_i.e_. label-supervision). In VINIL, in the absence of labels, we omit this re-weighting and simply use the identity matrix.
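The corresponding penalty then reduces to a plain squared-distance term, as sketched below; the parameter names are illustrative and the snippet is not taken from the released code.

```python
def ewc_penalty(model, prev_params, importance=None):
    """Label-free EwC term: penalise the squared deviation of every weight from
    its value after the previous incremental step. Without labels, the
    per-weight importance defaults to uniform (identity) weighting instead of
    Fisher information derived from classification logits."""
    penalty = 0.0
    for name, p in model.named_parameters():
        w = 1.0 if importance is None else importance[name]
        penalty = penalty + (w * (p - prev_params[name]) ** 2).sum()
    return penalty
```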
**Replay [48].** Replay replays a portion of the past data from previous incremental steps to mitigate forgetting. In case of label-supervision, this corresponds to replaying both the input data and their labels via cross-entropy: \(CE(y_{i,t}^{m},y_{i,t}^{m^{\prime}})\) where \(y_{i,t}^{m^{\prime}}\) is the instance categories for the memory instance \(i\) at time \(t\). For VINIL, we simply replay the input memory data and its augmented view via self-supervision of BarlowTwins as \(BT(x_{i,t}^{m},x_{i,t}^{m^{\prime}})\).
### Self-Supervised Learning
In BarlowTwins, the features are extracted from the original and the augmented view of the input image with a siamese deep network, at time step \(t\), as: \((z_{i,t},z_{i,t}^{\prime})=(f_{\theta_{t}}(x_{i,t}),f_{\theta_{t}}(x_{i,t}^{\prime}))\), where \(x_{i,t}^{\prime}=aug(x_{i,t})\) is the augmented view of the input. BarlowTwins minimizes the redundancy across views while maximizing the representational information. This is achieved by operating on the cross-correlation matrix via:
\[BT=\sum_{i}(1-C_{ii})^{2}+w_{b}\cdot\sum_{i}\sum_{j\neq i}(C_{ij})^{2} \tag{2}\]
where:
\[C_{ij}=\frac{\sum_{\beta}z_{\beta,i}z_{\beta,j}^{\prime}}{\sqrt{\sum_{\beta}(z_{\beta,i})^{2}}\cdot\sqrt{\sum_{\beta}(z_{\beta,j}^{\prime})^{2}}} \tag{3}\]
is the cross-correlation matrix. Here, \(w_{b}\) controls the invariance-redundancy reduction trade-off, and \(i\) and \(j\) correspond to the network's output dimensions.
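A direct transcription of Eqs. (2)-(3) into PyTorch is sketched below for reference; reference implementations typically batch-normalise the embeddings first, which we omit here for brevity.

```python
import torch

def barlow_twins_loss(z, z_prime, w_b=0.03, eps=1e-12):
    """Eqs. (2)-(3): normalised cross-correlation between the two views'
    (batch, dim) embeddings, with an invariance term on the diagonal and a
    redundancy reduction term on the off-diagonal entries (w_b = 0.03 as in
    the implementation details)."""
    num = z.T @ z_prime                                   # sum over the batch index
    denom = z.norm(dim=0).unsqueeze(1) * z_prime.norm(dim=0).unsqueeze(0)
    c = num / (denom + eps)                               # Eq. (3)
    diag = torch.diagonal(c)
    on_diag = ((1.0 - diag) ** 2).sum()                   # invariance term
    off_diag = (c ** 2).sum() - (diag ** 2).sum()         # redundancy reduction term
    return on_diag + w_b * off_diag                       # Eq. (2)
```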
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Supervision & Input & Memory & Loss \\ \hline \hline Fine-Tuning & Label-supervised & \((x,y)\) & ✗ & \(CE(y,y^{\prime})\) \\ Fine-Tuning & Self-supervised & \((x)\) & ✗ & \(BT(x,x^{\prime})\) \\ \hline EwC & Label-supervised & \((x,y)\) & ✗ & \(CE(y,y^{\prime})+Reg(\Theta,y^{\prime})\) \\ EwC & Self-supervised & \((x)\) & ✗ & \(BT(x,x^{\prime})+Reg(\Theta)\) \\ \hline Replay & Label-supervised & \((x,y)\) & \((x^{m},y^{m})\) & \(CE(y,y^{\prime})+CE(y^{m},y^{m^{\prime}})\) \\ Replay & Self-supervised & \((x)\) & \((x^{m})\) & \(BT(x,x^{\prime})+BT(x^{m},x^{m^{\prime}})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: VINIL performs incremental instance learning via self-supervision, and is compared with label-supervision. We use memory replay [48] and weight regularization [30] as well as simple fine-tuning. _Fine-Tuning_[47] relies on Cross-Entropy **(CE)** or BarlowTwins **(BT)**[59] to perform incremental learning. _EwC_[30] penalizes abrupt changes in network weights via regularization (\(Reg(\cdot)\)). _Replay_[48] replays a part of previous data in the form of input-labels (label-supervised) or input-only (self-supervised).
## 4 Experimental Setup
**Implementation.** All the networks are implemented in PyTorch [44]. We use ResNet-\(18\)[22] as the backbone \(f(\cdot)\), and a single-layer MLP for the instance classifier. We train for \(200\) epochs for each incremental step with a learning rate of \(0.001\) decayed via cosine annealing. We use the SGD optimizer with momentum \(0.9\) and batch-size \(256\). We use random cropping and scaling for augmentation.
We follow the original implementation of BarlowTwins [1]. \(10\%\) of the data is stored within the memory for replay [48]. We set \(w_{c}=0.7\) and \(w_{b}=0.03\).
**Datasets.** We evaluate VINIL on iLab-\(20\)M [6] and Core-\(50\)[35], since they are large-scale, sufficiently different, and widely adopted in incremental learning.
_iLab-\(20\)M_ is a turntable dataset of vehicles. It consists of \(10\) objects (_i.e_. bus, car, plane) with varying (\([25,160]\)) number of instances per category. Objects are captured by varying the background and the camera angle, leading to \(14\) examples per-instance. We use the public splits provided in [3] with \(125\)k training and \(31\)k gallery images.
_Core-\(50\)_ is a hand-held object dataset used in benchmarking incremental learning algorithms. The dataset includes \(10\) objects (_i.e_. phones, adaptors, scissors) with \(50\) instances per-category. Each instance is captured for \(300\) frames, across \(11\) different backgrounds. We use \(120\)k training and \(45\)k gallery images [2].
**Protocol.** We first divide each dataset into \(5\) tasks, with \(2\) object categories per-task. Then, each task is subdivided into \(N\) object instance tasks depending on the dataset. We discard the classifier of label-supervised variants after training, and evaluate all models with instance retrieval performance via k-NN with \(k=100\) neighbors on the gallery set, as is the standard in SSL [8, 11, 12, 13, 21].
We use the mean-pooled activations of layer4 of ResNet to represent images. All exemplars in the gallery set are used as query.
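The retrieval metric can be computed as sketched below; the exact voting scheme is an assumption on our side, since only k-NN with \(k=100\) is specified.

```python
import torch
import torch.nn.functional as F

def retrieval_accuracy(feats, instance_ids, k=100):
    """k-NN instance retrieval: each exemplar queries the remaining gallery by
    cosine similarity on its (mean-pooled layer4) feature, and the majority
    instance identity among the k neighbours is compared with the query's."""
    f = F.normalize(feats, dim=1)
    sims = f @ f.T
    sims.fill_diagonal_(-float("inf"))        # a query never retrieves itself
    neighbours = sims.topk(k, dim=1).indices  # (num_images, k)
    predicted = instance_ids[neighbours].mode(dim=1).values
    return (predicted == instance_ids).float().mean().item()
```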
**Metrics.** We rely on two well established metrics to evaluate the performance of the models, namely accuracy and forgetting.
_i). Accuracy_ (Acc) measures whether we can retrieve different views of the same instance from the gallery set given a query. We measure accuracy for each incremental learning step, which is then averaged across all sessions.
_ii). Forgetting_ (For) measures the discrepancy of accuracy across different sessions. Concretely, it compares the maximum accuracy across all sessions with the accuracy in the last step.
## 5 Experiments
Our experiments address the following research questions: **Q1**: Can VINIL improve performance and reduce forgetting in comparison to label-supervision? **Q2**: Does VINIL learn incrementally generalizable representations across datasets? **Q3**: What makes VINIL effective against label-supervision?
### How Does VINIL Compare to Label-Supervision?
First, we compare VINIL's performance to label-supervision. The results are presented in Table 2.
**VINIL Yields Competitive Accuracy.** We first compare the accuracies obtained by VINIL vs. label-supervision. We observe that VINIL yields competitive accuracy against label-supervision: in \(4\) out of \(6\) settings, VINIL outperforms the label-supervised variants.
**VINIL Mitigates Forgetting.** Secondly, we compare the forget rates of VINIL vs. label-supervision (lower is better). We observe that VINIL consistently leads to much lower forget rates in comparison to label-supervision. On the iLab-\(20\)M dataset, VINIL results in _no_ forgetting. On the more challenging Core-\(50\) dataset, the difference across forget rates is even more pronounced: label-supervision suffers from a \(22\%\) forget rate whereas VINIL suffers only \(4\%\), a relative drop of \(80\%\) with fine-tuning.
**Label-supervision Leverages Memory.** Our last observation is that memory improves the accuracy and reduces forgetfulness of label-supervision. In contrast, the use of memory disrupts self-supervised representations. This indicates that replaying both inputs and labels (\((x_{i},y_{i})\)) as opposed to input-only (\((x_{i})\), as in self-supervision) may lead to imbalanced training due to limited memory size [9, 25, 57].
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Core-\(50\)} & \multicolumn{2}{c}{iLab-\(20\)M} \\ \cline{2-5} Method & \(Acc\) (\(\uparrow\)) & \(For\) (\(\downarrow\)) & \(Acc\) (\(\uparrow\)) & \(For\) (\(\downarrow\)) \\ \hline FT (Label) & 71.450 & 22.436 & 89.340 & 6.500 \\ FT (VINIL) & **74.914** & **4.802** & **90.398** & **0.000** \\ \hline Replay (Label) & **88.180** & **6.741** & 84.464 & 5.696 \\ Replay (VINIL) & 67.677 & 10.095 & **90.543** & **0.000** \\ \hline EwC (Label) & **75.117** & 18.268 & 87.690 & 4.535 \\ EwC (VINIL) & 73.011 & **2.167** & **90.655** & **0.000** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Visual Incremental Instance Learning on Core-\(50\)[35] and iLab-\(20\)M [6]. VINIL outperforms label-supervised variants for \(4\) out of \(6\) settings, while significantly reducing forgetfulness on both datasets. This indicates self-incremental learning is a strong, label-free alternative to label-supervision.
In summary, we conclude that VINIL is an efficient, label-free alternative to label-supervised incremental instance learning. VINIL improves accuracy while reducing forget rate. We also observe that label-supervision closes the gap when an additional memory of past data is present. This motivates further research for improving self-incremental instance learners with memory.
### Can VINIL Generalize Across Datasets?
After confirming the efficacy of VINIL within the same dataset, we now move on to a more complicated setting: Cross-dataset generalization. In cross-dataset generalization, we first perform incremental training on Core-\(50\), and then evaluate on iLab-\(20\)M. Then, we perform incremental training on iLab-\(20\)M and then evaluate on Core-\(50\).
Cross-dataset generalization between Core-\(50\) and iLab-\(20\)M is challenging due to the following reasons: _i). Camera:_ Core-\(50\) is captured with a hand-held camera whereas iLab-\(20\)M is captured on a platform with a turntable camera, _ii). Object Categories:_ Object categories are disjoint, as no common objects are present in each dataset, _iii). Object Types:_ iLab-\(20\)M exhibits toy objects of vehicles whereas Core-\(50\) exhibits hand-interacted daily-life objects.
The results are presented in Table 3. We present iLab-\(20\)M to Core-\(50\), and Core-\(50\) to iLab-\(20\)M results, along with the relative drop w.r.t training and testing on the same dataset (see Table 2).
**VINIL Yields Generalizable Representations.** We first observe that VINIL consistently yields higher accuracy and lower drop rate across all \(6\) settings in both datasets. This indicates that self-supervision extracts more generalizable visual representations from the dataset.
**Label-supervision Overfits with Memory.** Secondly, we observe that label-supervised variants with memory generalize poorly because they overfit on the training dataset. Replay with label-supervision leads to the biggest drop rate of \(36\%\) on Core-\(50\), when trained with iLab-\(20\)M. This implies that the use of memory drastically reduces the generality of visual representations. A potential explanation is that, since replay utilizes the same set of examples within the limited memory repeatedly throughout learning, this forces the network to over-fit to those examples.
We conclude that VINIL extracts generalizable visual representations from the training source to perform instance incremental training. We also conclude that the astounding performance of label-supervision equipped with memory comes at the cost of overfitting, leading to a drastic drop in case of visual discrepancies across datasets.
### What Factors Affect VINIL's Performance?
**VINIL Mitigates Bias Towards Recent Task.** We present the heatmaps of the performance for all \(5\) main tasks, when each task is introduced sequentially, for label-supervision in Figure 2 and for VINIL in Figure 3 on iLab-\(20\)M [6]. Each row presents the accuracy for each task, as the tasks are introduced sequentially. For example, the entry \((0,2)\) denotes the performance on Task-\(0\) when the Task-\(2\) is introduced.
Considering Figure 2 for label-supervision, observe how the tasks achieve their peak performance when they are being introduced to the model, hence the higher numbers along the diagonal. Then, the performance degrades drastically as more and more tasks are introduced. This indicates label-supervision fails to leverage more data. We call such a phenomenon "recency bias", as the model is biased towards the most recently introduced task.
In contrast, in Figure 3 for VINIL, the performance on each task improves sequentially with the incoming stream of new tasks. This indicates self-supervised representations are less biased towards the recent task, and can leverage data to improve performance. This renders them a viable option for incremental learning over longer learning sessions, such as in incremental instance learning.
**VINIL Focuses on the Object Instance.** We present the activations of the last layer of ResNet, at different incremental time steps, in Figure 4.
Observe how VINIL learns to segment out the target object from the background. This allows the model to accurately distinguish across different instances of the same object sharing identical backgrounds. In contrast, the label-supervised variant progressively confuses the object with
\begin{table}
\begin{tabular}{l c c} \hline \hline Train on \(\Longrightarrow\) & iLab-\(20\)M & Core-\(50\) \\ \cline{2-3} Test on \(\Longrightarrow\) & Core-\(50\) & iLab-\(20\)M \\ \cline{2-3} Method & \(Acc_{(\%\Delta(\downarrow))}\) & \(Acc_{(\%\Delta(\downarrow))}\) \\ \hline FT (Label) & \(59.850_{\%16}\) & \(67.249_{\%24}\) \\ FT (VINIL) & \(\mathbf{66.704_{\%10}}\) & \(\mathbf{76.302_{\%15}}\) \\ \hline Replay (Label) & \(55.692_{\%36}\) & \(69.412_{\%17}\) \\ Replay (VINIL) & \(\mathbf{61.857_{\%8}}\) & \(\mathbf{76.125_{\%15}}\) \\ \hline EwC (Label) & \(59.030_{\%21}\) & \(70.087_{\%20}\) \\ EwC (VINIL) & \(\mathbf{70.648_{\%3}}\) & \(\mathbf{75.793_{\%16}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Cross-Dataset Generalization on Core-\(50\) and iLab-\(20\)M. We present: _i)_ Train on iLab-\(20\)M and test on Core-\(50\), _ii)_ Train on Core-\(50\) and test on iLab-\(20\)M. In addition to accuracy, we also present the relative drop w.r.t training and testing on the same dataset (see Table 2). We observe that VINIL is consistently more robust in cross-dataset generalization when compared with label-supervision. The results indicate that self-supervision is able to extract more domain-agnostic representations, which improves the generality of visual representations, for instance-incremental setup.
the background. We call such a phenomenon "attentional deficiency" of label-supervised representations.
**VINIL Stores Instance-level Information.** We present nearest neighbors for three queries in Figure 5. We use the average-pooled activations of the last ResNet layer on Core-\(50\) trained with fine-tuning.
Observe how VINIL retrieves the same instance in different viewpoints, such as for the light bulb and can. In contrast, label-supervision is distracted by the background context, as it retrieves irrelevant objects with identical backgrounds. This indicates self-supervision generalizes via storing instance-level information. We present a failure case in the last row, as both models fail to represent an object with holes and an unfamiliar rotation.
We conclude that VINIL can improve its performance with incoming stream of data, and generalizes via focusing on the target object and storing instance-level details to perform instance-incremental learning.
## 6 Discussion
This paper presented VINIL, a self-incremental visual instance learner. VINIL sequentially learns visual object instances, with no label supervision, via only self-supervision of BarlowTwins [59]. Below, we summarize our main discussion points:
Figure 4: Activations of the last layer of ResNet [22], throughout the incremental learning steps. We compare label-supervision with VINIL (Fine-tuning). Notice how the attention of the label-supervised variant is disrupted after a few learning tasks. Instead, VINIL learns to segment out the target object, successfully suppressing the background context, such as the hand or the background.
Figure 3: Task-level performance of VINIL (Fine-tuning). VINIL improves its performance with incoming data, and is less biased towards recent task.
Figure 2: Task-level performance of Label-supervision (Fine-tuning). Label-supervision is biased towards recent task.
**Self _vs._ Label-supervision?** We demonstrate that self-supervision not only omits the need for labels, but it is also more accurate and less forgetful.
**W/ or W/o Memory?** Our results show that the use of memory boosts label-supervised instance incremental learning; however, the improvement comes with the cost of over-fitting on the training source.
**Fine-tuning [47]**_vs._ **Replay [48]**_vs._ **EwC [30]?** We demonstrate that with the use of self-supervision, VINIL closes the gap between simple fine-tuning via SGD and more complicated, compute-intensive techniques like memory replay or regularization via EwC.
**What Makes VINIL Effective?** VINIL retains representations across tasks, and is able to store and focus on instance-level information, which are crucial for instance-incremental learning.
**Limitation.** VINIL is executed with regularization [30] and memory [48]. One can also consider dynamic networks [58] whose architectures are updated with incoming task data. VINIL is a scalable alternative to dynamic incremental network training due to abundant unlabeled data.
Figure 5: Five nearest neighbors for three object instance queries on Core-\(50\)[35] with fine-tuning. Green is a success, red is a failure. Observe how VINIL retrieves object instances in different views. The last column showcases a failure case, where both models fail to represent an object with holes (scissor).
## 7 Acknowledgements
Mert Kilickaya's research is fully funded by ASM Pacific Technology (ASMPT).
|
2301.09258 | EDEFuzz: A Web API Fuzzer for Excessive Data Exposures | APIs often transmit far more data to client applications than they need, and
in the context of web applications, often do so over public channels. This
issue, termed Excessive Data Exposure (EDE), was OWASP's third most significant
API vulnerability of 2019. However, there are few automated tools -- either in
research or industry -- to effectively find and remediate such issues. This is
unsurprising as the problem lacks an explicit test oracle: the vulnerability
does not manifest through explicit abnormal behaviours (e.g., program crashes
or memory access violations). In this work, we develop a metamorphic relation
to tackle that challenge and build the first fuzzing tool -- that we call
EDEFuzz -- to systematically detect EDEs. EDEFuzz can significantly reduce
false negatives that occur during manual inspection and ad-hoc text-matching
techniques, the current most-used approaches. We tested EDEFuzz against the
sixty-nine applicable targets from the Alexa Top-200 and found 33,365 potential
leaks -- illustrating our tool's broad applicability and scalability. In a
more-tightly controlled experiment of eight popular websites in Australia,
EDEFuzz achieved a high true positive rate of 98.65% with minimal
configuration, illustrating our tool's accuracy and efficiency. | Lianglu Pan, Shaanan Cohney, Toby Murray, Van-Thuan Pham | 2023-01-23T04:05:08Z | http://arxiv.org/abs/2301.09258v2 | # Detecting Excessive Data Exposures in Web Server Responses with Metamorphic Fuzzing
###### Abstract.
APIs often transmit far more data to client applications than they need, and in the context of web applications, often do so over public channels. This issue, termed _Excessive Data Exposure_ (EDE), was OWASP's third most significant API vulnerability of 2019. However, there are few automated tools--either in research or industry--to effectively find and remediate such issues. This is unsurprising as the problem lacks an explicit test oracle: the vulnerability does not manifest through explicit abnormal behaviours (e.g., program crashes or memory access violations).
In this work, we develop a metamorphic relation to tackle that challenge and build the first fuzzing tool--that we call EDEFuzz--to systematically detect EDEs. EDEFuzz can significantly reduce false negatives that occur during manual inspection and ad-hoc text-matching techniques, the current most-used approaches.
We tested EDEFuzz against the sixty-nine applicable targets from the Alexa Top-200 and found 33,365 potential leaks--illustrating our tool's broad applicability and scalability. In a more-tightly controlled experiment of eight popular websites in Australia, EDEFuzz achieved a high true positive rate of 98.65% with minimal configuration, illustrating our tool's accuracy and efficiency.
web api testing, excessive data exposure, metamorphic testing
Footnote †: _"Automatic tools usually can't detect this type of vulnerability because it's hard to differentiate between legitimate data returned from the API, and sensitive data that should not be returned without a deep understanding of the application."_ -- The Open Web Application Security Project (OWASP)
## 1. Introduction
Every week, another leak! Server-side APIs of web applications frequently transmit more data than is needed for their corresponding clients. This may not have been an issue, were it not for the fact that these APIs are often publicly accessible. API vulnerabilities of this type are known as _Excessive Data Exposures_ (EDEs). Despite ranking as OWASP's #3 most significant API vulnerability for 2019 (Bartos et al., 2019), technology to detect these vulnerabilities remains underdeveloped.
This motivates us to develop the first automated and systematic fuzzing tool--that we call EDEFuzz--to detect EDEs. As the "gold standard for finding and removing costly, exploitable security flaws", fuzzing is a key tool for cost-effectively detecting and remediating such issues (Bartos et al., 2019).
We posit that the lack of automated tools to detect EDEs is due to their _semantic_ nature. Specifically, EDEs do not manifest through explicit, abnormal behaviours (e.g., program crashes or memory access violations). Detecting them thus requires a model of what constitutes an EDE.
We start with a definition: an API is vulnerable to EDE if it exposes meaningfully more data than what the client legitimately needs (Bartos et al., 2019).
Consider a simple example of an online storefront. When a user views the page for a specific product, an API call may be made to fetch stock levels, informing the user whether the item is in stock. The API returns the stock level, but may also return extraneous data (such as the profit margin on the item) that is not displayed to the user but is nonetheless transmitted. The transmission of the extra data constitutes an "excessive data exposure". This leads to our motivating question:
_How can one automatically detect if a web API exposes more data than it should?_
The question is related to the famous test oracle problem: how can a tester or an automated testing technique distinguish desired, correct behaviour from undesired or incorrect behaviour (Bartos et al., 2019)? The common wisdom in industry (see Figure 1) is that the test oracle problem renders EDE detection beyond current testing approaches.
We address this challenge with the following key insight:
Figure 1. Industry views on the EDEs. These indicate the prevalence of EDEs and limitations of existing detection tools
Data returned from an API endpoint is more likely excessive if it has no impact on the content displayed to a user.
Specifically, we develop the following novel metamorphic relation1 to side-step this problem. Through the relation, automated testing approaches can check if a data field in an API response is excessive by checking for a difference between what a client displays when the field is present in the API response, versus when the field is deleted.
Footnote 1: A metamorphic relation is one that holds between two different program inputs and their corresponding outputs [4]
Formally, assume we have an API response under analysis \(\mathcal{R}_{\text{origin}}\) comprising a set of data fields. A web client (e.g., a web browser) uses \(\mathcal{R}_{\text{origin}}\) to render a page that can be represented by a Document Object Model (DOM) tree \(\mathcal{D}_{\text{origin}}\). A data field \(d\in\mathcal{R}_{\text{origin}}\) is considered non-excessive if the following inequality holds:
\[\text{diff}_{\text{DOM}}(\mathcal{D}_{\text{origin}},\mathcal{D}_{\text{mutated}})\neq 0, \tag{1}\]
where \(\text{diff}_{\text{DOM}}\) calculates the difference between two DOM trees \(\mathcal{D}_{\text{origin}}\) and \(\mathcal{D}_{\text{mutated}}\). \(\mathcal{D}_{\text{mutated}}\) is constructed from \(\mathcal{R}_{\text{mutated}}\) which we obtain by removing the in-question data field \(d\) from \(\mathcal{R}_{\text{origin}}\). If a data field violates Equation (1), it is deemed excessive.
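A minimal sketch of how this check can be realised for a JSON response is shown below; render_dom() stands in for the client-side rendering step (e.g., a headless browser) and, like the other names here, is an illustrative assumption rather than part of EDEFuzz's published interface.

```python
import copy

def field_is_flagged_excessive(render_dom, response, field_path):
    """Delete one field from the recorded API response, re-render the page, and
    flag the field as potentially excessive when the resulting DOM tree is
    identical to the original one (the check implied by Equation (1))."""
    mutated = copy.deepcopy(response)
    parent = mutated
    for key in field_path[:-1]:               # walk to the parent of the field
        parent = parent[key]
    del parent[field_path[-1]]                # remove the field under test

    return render_dom(response) == render_dom(mutated)   # no difference => flagged
```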
This relation enables us to build a system that significantly reduces the potential for false negatives that can otherwise occur with competing approaches--manual inspection and ad-hoc text-matching. Notably, keyword matching techniques often use a list of terms (such as "key", "token", "password" etc) in order to flag exposures [5]. Therefore under keyword-matching, when an excessive data field does not match any known keywords, it is erroneously ignored.
In contrast to these approaches, our tool EDEFuzz leverages the metamorphic relation to detect EDEs. It does so by mutating and replaying API responses into the client side of a web application and compares the generated DOM tree with the original tree in each fuzzing iteration.
Building the tool required us to surmount two main technical challenges.
First, we needed to build an API fuzzer with _response determinism_. Existing mutation algorithms used in Web API testing/fuzzing [6] focus on mutating API _requests_, which introduces random and untargeted changes in server _responses_. However, our metamorphic relation requires that the responses differ only in a single field.
Second, like other fuzzing tools, the usefulness of our tool depends on its ability to achieve reasonable throughput (measured in tests per second). This challenge is particularly acute in the context of web fuzzing, as tools are rate-limited by both bandwidth and server load. For public sites, the challenge is further compounded by server-side rate-limiting and the need to minimize disruption. These factors hinder the timely progress of a fuzzing tool.
To address these two challenges, we adopt a "record-replay" model [7]. We combine a web proxy and a custom-built simulated server to minimize interactions with sites under-test. Prior to beginning the fuzzing process, our tool initiates a "record" phase: a web proxy captures all client requests and server responses, including the request sent to the targeted API and the corresponding response. Note that in each fuzzing campaign EDEFuzz targets only one API. Following the record phase, fuzzing begins (i.e., the "replay" phase).
In the replay phase, _no communication with the actual remote server is necessary_. Our lightweight simulated server handles all requests. If a request is sent to the targeted API, the simulated server transmits a mutated version of the original server response. Otherwise, the simulated server merely replays the recorded transmissions.
This architecture yields several benefits. First, test executions (i.e., sending requests and getting responses) are performed locally--leading to much lower latency. Second, changes to the remote server do not impact test results, making them highly deterministic. Maintaining deterministic results is a critical requirement for fuzzing in general because it helps reduce false positives. However, when detecting EDEs, this also helps reduce false negatives. Absent this determinism, an application change that yields a different web page may cause EDEFuzz to incorrectly flag a field as non-excessive, attributing the DOM change to the mutated server response rather than to the application change itself. Third, the architecture permits running tests in parallel, which minimizes the burden of scaling the tool.
We evaluate the tool in two different settings. First, motivated by a recent massive Web API leak in Australia [8], we run our tool against several comparable web properties in that country. We perform a detailed comparison of the tool's results against a corresponding manual effort to assess the severity and accuracy of the findings. Second, we run our tool against a broader set of sixty-nine web applications--the complete set of applicable targets present in the Alexa Top-200. We use this evaluation to assess the scalability of our tool as well as its applicability to a representative set of global web applications.
Our overall contributions are as follows:
* We identified a novel metamorphic relation that helps to address the test oracle problem in the context of detecting excessive data exposure.
* We designed and developed the first systematic and automated fuzzing tool for detecting excessive data exposure vulnerabilities. To the best of our knowledge, our tool EDEFuzz, is the first of its kind.
* We empirically evaluated the accuracy of our approach, its applicability to popular websites, and its efficiency (both in terms of computational time and human effort). Our results demonstrate EDEFuzz's effectiveness at discovering previously unknown sensitive data leakage via EDE, whose prevalence we also investigated. We found that our approach is
* highly accurate: 98.65% of the fields flagged by the tool in a controlled study were true excessive data exposures.
* widely applicable to popular websites, requiring modest computational costs and human effort to employ.
* able to discover zero-day EDE vulnerabilities. Specifically, it found five zero-day EDE vulnerabilities serious enough to merit immediate disclosure.
We structure the remainder of the paper as follows: In Section 2, we provide the necessary preliminaries on Web APIs, Excessive Data Exposures and Metamorphic Fuzzing. In Section 3, we diverge from typical paper structure by motivating our work with several real-world vulnerabilities _discovered by our tool_. In Section 4, we present our automated approach to detect EDEs and our implementation. In Section 5, we report our experimental results and answer our research questions. A survey of related work in Section 6 is followed by a brief discussion in Section 7.
### Research Ethics
We considered both the propriety of our scanning and fuzzing techniques and engaged in vulnerability disclosure.
We discussed our research in a series of conversations with our research ethics office, who ultimately deemed it exempt from a full review process. Our research involves scraping and scanning commensurate with ordinary activity by both search engines and the research community. Our methodology minimizes interaction with remote servers by performing all fuzzing offline on a simulated replica of the target server. Given the low impact of capturing the outcome of a limited number of HTTP requests and the potential benefits of our research, it was determined that our work adheres to the principle of beneficence that is the hallmark of research ethics.
As our work notes, EDEFuzz flags fields for further human analysis (rather than indicating vulnerabilities with certainty). As a result, our assessment of whether a flagged field rises to a reportable level requires human judgement about whether an EDE leaks sensitive information and the potential harms from that leak. In the five instances where we discovered sensitive data leakage we contacted the affected entities. By the time of submission, two of the five entities had remediated the issues, while the other three claimed the exposure would not pose any security issue for their products.
## 2. Background
We recall elements of web-application design and fuzzing techniques as relevant to excessive data exposure.
### Web APIs & Excessive Data Exposure
Web applications often expose API endpoints to the public internet. Exposing the endpoint allows the application to separate front- and back-end logic. While the front-end components focus on rendering visual elements and their associated interactive components, back-end logic is more closely tied to long-term data storage. The API allows the front-end to query the back-end and in many cases serves a response in either JSON or XML. While an API ought to narrowly tailor the data served in a response to the request, this practice is often ignored. OWASP terms this excessive data exposure (EDE) (Owasp, 2018).
One cause of EDEs is that API developers over-rely on API clients to perform data filtering. This eases the cognitive burden on back-end developers, who can avoid determining the specific needs of the client _a priori_.
When present, EDEs are often trivial to exploit. To obtain the excess (or even sensitive data), it is often sufficient to simply examine response traffic from the target API.
Since technology to scan for and detect EDEs remains underdeveloped, OWASP only provides general advice (Owasp, 2018) on how to prevent them, such as "Never rely on the client to filter data!", "Review all API responses and adapt them to match what the API consumers really need", and "Enforce response checks to prevent accidental leaks of data or exceptions". However, the prevalence of EDEs implies that it is challenging for API developers to strictly follow this advice without effective automation!
### Fuzzing
Fuzzing is a process of repeatedly generating (random) inputs and feeding them to the system under test (SUT) to cover more code and discover bugs. In its traditional use, a fuzzer detects issues through aberrant program behaviour, such as program crashes. This indicates a potential security bug in the SUT. In response, the fuzzer will preserve the bug-triggering input for further analyses (e.g., manual debugging).
While we preserve the input generation phase above, our work notably deviates in that we detect potential errors through _the lack of change_ in program output, rather than a spec violation or crash.
However, our work is not the first to address API testing. Web API fuzzing has recently garnered increased interest from both industry and academia (Owasp, 2018; Owasp, 2018); RESTler (Owasp, 2018) is the current state-of-the-art approach. RESTler is a stateful black-box REpresentational State Transfer (REST) API fuzzing tool. For a target with an OpenAPI/Swagger specification (Fuzzing, 2018), RESTler analyzes its entire specification, and then generates and executes tests through its REST APIs.
Researchers typically classify fuzzers based on the level of integration between the SUT and the fuzzer. The most common classification is based on the fuzzer's awareness of the internal structure of the SUT (Fuzzing, 2018). Specifically, a black-box fuzzer knows nothing about the internal structure of the SUT. In contrast, a white-box fuzzer knows everything about the SUT, such as its source code, control flows and data flows. Grey-box fuzzing is in between; it leverages partial knowledge of the SUT, typically via lightweight program instrumentation. Our tool can be classified as a black-box fuzzer because it requires no internal instrumentation.
### Metamorphic Testing/Fuzzing
We focus on fuzzing as our approach to detecting EDEs because of its demonstrated success in discovering security flaws.
Highlighting the effectiveness of fuzzing, as of May 2022, Google's fuzzing infrastructure had detected over 25,000 bugs (Fuzzing, 2018). However, these bugs were detected using explicit test oracles. Bugs with a test oracle either lead to program crashes (e.g., segmentation faults) or are caught by instrumentation-based checkers (e.g., Address Sanitizer, Undefined Sanitizer). In contrast, semantic bugs like EDEs do not manifest through explicit abnormal behaviours, and cannot be reliably detected by observing _single_ program executions.
How do we build tools to detect semantic bugs? Metamorphic testing and fuzzing, which leverage metamorphic relations, are a promising approach. At its core metamorphic fuzzing involves _comparing_ multiple executions of the SUT under different inputs and observing whether some relation (called the _metamorphic relation_) holds between their corresponding outputs.
Consider the following toy example of metamorphic bug finding: we can test a function that reverses a list by testing, for an arbitrary input list \(x\), whether reversing the reverse of \(x\) yields \(x\) itself; or for a function that calculates the distance between a pair of points, whether the distance from point \(a\) to \(c\) is always smaller than or equal to the sum of the distances from \(a\) to \(b\) and from \(b\) to \(c\), etc.
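A minimal sketch of these two toy relations as executable checks (illustrative only; the functions under test and the random input generation are our own):

```python
import math
import random

def reverse(xs):
    """Function under test: reverse a list."""
    return xs[::-1]

def dist(a, b):
    """Function under test: Euclidean distance between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    # Relation 1: reversing the reverse of x must yield x itself.
    assert reverse(reverse(xs)) == xs
    a, b, c = [(random.random(), random.random()) for _ in range(3)]
    # Relation 2: the triangle inequality must hold (small epsilon for floats).
    assert dist(a, c) <= dist(a, b) + dist(b, c) + 1e-9
```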
Metamorphic relations are properties that must necessarily hold with respect to the correct functioning of the SUT. In metamorphic testing the violation of a metamorphic relation indicates a potential bug (Bowden, 2018). Past work has successfully identified and used metamorphic relations to find bugs in a variety of systems (Fuzzing, 2018), (Fuzzing, 2018). In one notable example, He, Meister, and Su (Fuzzing, 2018) successfully applied metamorphic testing to test Machine Translation software. Rigger and Su (Fuzzing, 2018)
identified a novel metamorphic relation and leveraged it to identify 121 unique bugs in popular DBMSs.
## 3. Motivating Examples
In this section, we present a selection of real-world EDEs detected by our tool to demonstrate their prevalence, their implications, and the challenges in detecting them using prior approaches. We use the examples to motivate our tool, while providing a comprehensive evaluation in Section 5.
_Vulnerability 1 - Locations and Contact Details._ In this example, we describe a vulnerability EDEFuzz discovered while testing the live delivery tracking service offered by Company-I, an Australian last-mile delivery service. The vulnerability has been reported and fixed. As shown in Figure 2, a customer receives a unique link on the day that an item is on board for delivery. The link opens a web page whose contents include the name and a photo of the delivery driver, an estimated time of arrival of the delivery, and the position of the item in the delivery driver's queue. The page sends an API request to the server regularly and updates the contents on the page based on the API response. Part of this API response is shown in Listing 1, with potentially sensitive information removed.
The client-side logic allows the webapp to display the accurate geographic location of the delivery driver _only_ when the item to be delivered is at the front of the queue. This suggests that while the driver is delivering an item to a customer, another customer should not be able to ascertain the location of the driver--which would leak the location of other deliveries. However, EDEFuzz detected that the API response always contains rich information about the delivery driver, including accurate latitude and longitude (the location field), direction of facing (the bearing field), and speed of travelling (the speed field). EDEFuzz also identified the driver's manager's information in the API response.
Knowing the timestamped location of the delivery driver, a customer may recover the route that the delivery driver is travelling, or even be able to identify the addresses of other customers who receive parcels from the same delivery driver. The leaked information may also be used in conjunction with other vulnerabilities to perform further attacks, such as a social engineering attack (Krishna et al., 2017).
_Vulnerability 2 - Warehouse Stock Levels._ EDEFuzz detected EDE vulnerabilities in several retailer websites in Australia, including Company-C, Company-D, Company-J, and Company-E. While the previous vulnerability in Company-I's service exposes obviously sensitive information (e.g., email addresses, contact numbers and timestamped locations), this vulnerability exposed detailed stock availability, which might be considered sensitive or not depending on the organization's data classification model.
Many retailers allow potential customers to check their stock availability online before visiting their shops. Some retailers reveal precise stock values on their websites, while other retailers decided to only display categorised values such as "in stock", "low stock" and "out of stock". Interestingly, we noticed a few such online services in which the server transmits the precise stock values but the web application only displays categorised values based on thresholds.
**Listing 2: An API response to a query for stock availability. The authors have redacted potentially sensitive information.**
Figure 2. A sample API flow for a package delivery service. A web application requests tracking information, that is returned in a JSON object.
For instance, Company-C is an Australian pharmacy retail giant with hundreds of stores. Its website allows a customer to search for stock availability of an item in nearby shops, displaying results as "available" or "unavailable" for each shop. Upon investigating excessive data fields contained in their API response, we noticed that the exact stock level is exposed. Part of an API response is shown in Listing 2, with certain fields removed.
Here we are interested in the data field named products. We hypothesise that the value of available reflects the actual stock of an item remaining in the given shop: when available is present and equal to zero, "unavailable" is displayed; otherwise the item is displayed as "available". Listing 2 shows that one Melbourne-located store has 85 units in stock of an item with ID 2632206 (a bottle of Vitamin D tablets).
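For illustration only, a hypothetical response fragment in the spirit of the (redacted) Listing 2 might look as follows; apart from the products and available fields and the values quoted above, all names and values are invented.

```python
# Hypothetical illustration -- not the redacted response from Listing 2.
stock_response = {
    "stores": [
        {
            "storeId": "VIC-0042",   # invented identifier
            "suburb": "Melbourne",
            "products": [
                # The page only shows "available"/"unavailable", yet the
                # exact count is transmitted in the 'available' field.
                {"productId": 2632206, "available": 85},
            ],
        }
    ]
}
```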
A similar design was also observed at other retail companies such as Company-D and Company-E. Company-D claimed that their accurate stock level is non-sensitive, whereas Company-E removed the stock level from their API response before we tried to contact them.
Regarding attack scenarios, if a retailer knows the stock details of their competitors in different locations, they could adjust their logistics plan accordingly to increase their sales and gain more profit. Individual suppliers might also take advantage of this vulnerability to increase their prices for specific retailers when they know those retailers are low on stock. Having access to that kind of information, a third-party company (e.g., a shopping suggestion service) could also develop an app and give more precise suggestions to its customers, leading to some monetary benefit.
_Vulnerability 3 - Network Bandwidth._ Transmitting excessive data fields reduces performance and consequently degrades the user experience of the web application. Further, it imposes increased bandwidth requirements, which can pose a significant accessibility issue. EDEs imply a lack of consensus between an application's back- and front-ends. Like other "code smells", EDEs are indicative of poor development practices that may cause other vulnerabilities. The behaviour of the Company-C service examined in Vulnerability 2 is instructive. EDEFuzz identified that the targeted API response contains 104 store objects. Each store object is a dictionary with 26 data fields. However, we observed that the stock information updated on the web page was fully determined by no more than 120 of these fields: those from the eight (closest) stores. The front-end consumed only 4.4% of the data fields transmitted by the server.
_The Bugs Escaped Detection._ There are a variety of ways in which these flaws may have escaped notice. Developers, while aware of security flaws, are heavily reliant on tooling to detect bugs. As discussed in Section 2, tools to discover EDEs are of low effectiveness. Both keyword-matching tools (Cheng et al., 2015) and manual approaches are prone to false negatives. Consider Vulnerability 2: detecting it using keyword matching would require adding the keyword "available" and/or "stock" to the tooling--requiring the same developer focus that might have avoided the bug in the first place, which obviates the usefulness of the keyword approach.
A tool-free approach also requires back-end developers to carefully ascertain whether a given field is needed. While this is in line with development best practices, the realities of software development mean it is not always feasible.
As discussed in the case of Vulnerability 2, while Company-D claimed that their accurate stock level is non-sensitive, Company-E removed that information from their API response following our disclosure. This further indicates the need for a principles-based approach that reduces the burden on developers to determine the balance between business need and sensitivity.
In this work, we tackle each of these problems to a different extent, with particular focus on the first (effective tooling to detect EDEs). While our tool EDEFuzz is designed to reduce false negatives, it may still flag fields that a business does not consider sensitive (cf. the divergence between Company-D and Company-E above). The divergence in the severity of a given leak requires us to make reasonable (but principled) judgements in our evaluation of EDEFuzz. Is a given leak a business or ethical problem? Through publication, we advocate for a community-driven process to establish clearer guidelines around what data is most sensitive.
## 4. Our Approach
In this section, we first provide an overview of EDEFuzz and then detail the design of its components. Figure 3 depicts the main workflow of EDEFuzz when testing a website (given its URL and a targeted API endpoint).
### Overview
**Identifying API Endpoints.** Testers who wish to identify API endpoints can do so in several ways. In the case of in-house testing, the developers identify what API(s) they want to test. A penetration tester can leverage tools like crawlers to automatically explore the website and detect exposed APIs. However, this automated approach could be destructive and potentially illegal without
permission from the website owners. In our experiments (as detailed in Section 5) we identify target APIs by examining browser behavior.
As discussed in Section 1, EDEFuzz follows the "record-replay" model to gain high testing efficiency. Its workflow consists of four main steps, which are divided into two phases: a recording/preparation phase (Step 0) and a replaying/fuzzing phase (Steps 1-3).
**Recording/Preparation Phase.** In this semi-automated phase, the goal is to generate a configuration file denoted as \(C\) that brings the client under test to a baseline state. This file will be used in the subsequent replaying/fuzzing phase. To that end, we use a Web Proxy to capture the traffic between a client app, which is a web browser in our experiments, and the targeted web server. Specifically, we start the client app and capture its initial state \(S_{0}\). After that, we let the client open the given URL and wait for the web page to be fully loaded. The tester then interacts with the page (e.g., filling in text boxes, clicking buttons) to trigger a request to the target API. We denote \(S_{1}\) as the client state at which the request has just been completely sent. We develop a lightweight browser plugin to capture the interaction steps required to traverse from state \(S_{0}\) to \(S_{1}\) and save them into the configuration file so that they can be played back in subsequent steps.
The most common standard for responses to web API requests is JSON. Under AJAX or similar paradigms, when the API response is received, a client uses the JSON response to update the web page (e.g., showing more information); the update typically alters only some parts of the web page, rather than changing the entire page. We denote \(S_{2}\) as the client state immediately following the update. This state can typically be identified by the existence of certain page elements. The steps required to identify the transition from \(S_{1}\) to \(S_{2}\) (typically of the form "wait until page element \(X\) appears") are also recorded in the configuration file, meaning it now stores all steps required to traverse from state \(S_{0}\) to \(S_{2}\). At \(S_{2}\), a baseline DOM tree (denoted as \(\mathcal{D}_{\text{origin}}\)) of the web page is extracted using the DOM Extractor component. This DOM tree will be compared with other trees to be generated in the fuzzing phase to check for potential excessive data exposures based on the metamorphic relation defined in Equation (1).
In Figure 4, we model the client-server communication using a sequence diagram. As shown, before the targeted request-response pair has been exchanged, the browser and the server might have completed other exchanges for fetching HTML documents and other object files (e.g., images, style-sheets, Javascripts). All the request-response pairs (denoted as \(P\)) and resources are recorded and stored in the local machine for the replaying/fuzzing phase.
**Replaying/Fuzzing Phase.** The input for this phase includes: 1) a configuration file \(C\) that helps EDEFuzz traverse through different client states (i.e., from state \(S_{0}\) to state \(S_{2}\)), 2) the original DOM tree \(\mathcal{D}_{\text{origin}}\), and 3) all request-response pairs (\(P\)) recorded in the recording phase.
```
TARGET /api/v2/stock/get

LOAD https://www.exmplee.com/path/page
INPUT //input[@id="text-postcode"] 3000
CLICK //span[text()="Check availability"]
WAIT_LOCATE //div[@id="stock-info"]/div[2]
FUZZ
```
Listing 3 shows a sample configuration file. The first line specifies the target API under test. The remaining lines define a sequence of user interactions to be performed in order to reach state \(S_{2}\). Specifically, the example interactions include loading a web page, entering the number 3000 into a text box, clicking a button, and then waiting for a specific element to appear on the web page before capturing the state. Apart from these actions, EDEFuzz also supports HOVER, SLEEP, and SCROLL for (i) hovering the mouse over a specific element, (ii) waiting for a specified period of time, and (iii) scrolling up and down, respectively.
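As a rough sketch (not the actual EDEFuzz code), such actions could be replayed with the Selenium web driver along the following lines; the action-tuple format and the helper `replay` are assumptions made for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def replay(driver, actions):
    """Replay recorded interactions to move the client from S0 towards S2."""
    for action in actions:
        if action[0] == "LOAD":
            driver.get(action[1])                                   # open the page
        elif action[0] == "INPUT":
            driver.find_element(By.XPATH, action[1]).send_keys(action[2])
        elif action[0] == "CLICK":
            driver.find_element(By.XPATH, action[1]).click()
        elif action[0] == "WAIT_LOCATE":
            WebDriverWait(driver, 30).until(
                EC.presence_of_element_located((By.XPATH, action[1])))

# Example usage mirroring Listing 3 (values are illustrative):
# replay(webdriver.Chrome(), [
#     ("LOAD", "https://www.exmplee.com/path/page"),
#     ("INPUT", '//input[@id="text-postcode"]', "3000"),
#     ("CLICK", '//span[text()="Check availability"]'),
#     ("WAIT_LOCATE", '//div[@id="stock-info"]/div[2]'),
# ])
```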
In each fuzzing iteration, EDEFuzz goes through three steps (Steps 1-3). In the first step (Step-1), the Web Driver component, which is built on top of the Selenium Web Driver [18], uses the configuration file \(C\) to replay all the steps until the client reaches the state \(S_{1}\). Before that state has been reached, the Simulated Server responds to client
Figure 4. Sequence diagram of the recording phase. A proxy captures the set of server responses necessary to achieve a baseline state for accessing the target API.
Figure 3. The workflow of our approach to identify excessive data exposure vulnerabilities in web APIs.
requests with the corresponding recorded responses stored in \(P\) with no modification. Once state \(S_{1}\) is reached and the Simulated Server receives the request sent to the targeted API, it mutates the originally recorded response by deleting a specific data field and transmits to the client. We describe the mutation algorithm in detail in Section 4.3.
After the baseline state is reached, the client uses the mutated response to update the page accordingly. If this leads to any error, EDEFuzz moves to the next fuzzing iteration. Otherwise, EDEFuzz waits until the page is fully updated (i.e. until state \(S_{2}\) is reached) and uses the DOM Extractor to extract the current DOM tree denoted as \(\mathcal{D}_{\text{mutated}}\) (Step-2). In Step-3, EDEFuzz compares \(\mathcal{D}_{\text{mutated}}\) and \(\mathcal{D}_{\text{origin}}\) using a comparison algorithm described in Section 4.4. If the two DOMs are the same (based on our definition of similarity) then we flag the deleted data field. According to the metamorphic relation, the field is excessive. Once all fuzzing iterations are completed, EDEFuzz reports all the potential excessive data fields to the tester for further analysis and confirmation (see Section 4.5).
_Randomness:_ The explanation so far assumes that web replaying is fully deterministic: given a configuration file, EDEFuzz--without applying any mutations to the server response--produces the same DOM tree across all runs. Though uncommon, we identified a few cases in which this assumption does not hold. For instance, a social media platform may randomly insert advertisements between user posts at runtime. This causes the web page to be visually different in each run, even if all contents transmitted from the server were identical. We discuss these issues in detail and how we address them in Section 4.2 and Section 4.4.
**Implementation Details.** We implemented EDEFuzz in Python3, using Selenium as a Web Driver to control the web browser. We successfully tested our tool on widely used platforms including Ubuntu 18.04, Ubuntu 20.04, macOS Big Sur (version 11) and Windows 10. Currently, EDEFuzz supports two web browsers: Google Chrome and Mozilla Firefox. Our design is modular to support future extensions.
### Simulated Server
Our design decision to build a simulated server brings several benefits to EDE testing. First, the server supplies locally recorded contents with minimal delay--reducing test latency. Second, EDEFuzz does not need to communicate with the targeted remote server during the fuzzing phase, allowing testing in parallel without affecting the targeted server. This allows developers to easily test their servers without impacting production services. The design also ensures consistency within a given run, ensuring test results are not affected by any potential changes to the state of the remote server during the testing process. Lastly, we can use the cached/recorded contents as a snapshot of the server's state, allowing further testing and study of detected EDEs in the future even if the server is modified.
Since the simulated server serves as a snapshot of states on the remote server, there is no need to handle cookies or sessions within EDEFuzz. A response is sent by matching the requested URL against the collection of recorded request-response pairs. This also allows us to test sites in which the user needs to first log in: as long as the remote server sent responses representing the session of a user who was logged in during the recording phase, our simulated server can reproduce them in the testing stage.
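A minimal sketch of this URL-based matching logic, where `recorded` stands for the request-response pairs \(P\) captured by the proxy (illustrative only, not the actual implementation):

```python
def serve(url: str, recorded: dict, target_api: str, mutated_body: bytes) -> bytes:
    """Return the mutated response for the API under test, and the recorded
    response (matched purely by URL) for every other request."""
    if url == target_api:
        return mutated_body      # one data field deleted in this fuzzing iteration
    return recorded[url]         # replay the recorded transmission verbatim
```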
Websites that request resources via randomised URLs pose a challenge under our current design. A typical case is a web page that, when requesting a resource from the server, generates a random token to be included in the request URL. This is a challenge for our simulated server, since it causes a request to be generated for which the simulated server has no recorded response. Our experimental evaluation in Section 5.2 shows that this limitation affects only a fraction of popular websites (8.7% of our evaluated target set). Therefore, we believe the current design represents an appropriate trade-off. It is worth noting that, in an in-house testing setup, developers could ensure determinism.
### Mutation Engine
The original API response (JSON) produced by the server is assumed to be valid both structurally and semantically. We generate test cases by mutating the original API response.
We represent the server-supplied JSON object using a tree. Each leaf node is potentially an excessive data field. For example, Figure 5 shows the (partial) tree representation of the JSON object from Company-I shown in Section 3. In this tree, all leaf nodes are shaded light yellow.
We generate each mutation (test case) by removing a leaf node from the tree. For example, a valid mutation could remove the key-value pair id: 353 from the driver dictionary, or capacity: 33 from the car dictionary.
Unlike other fuzzing approaches which may generate an infinite number of test cases (e.g., by using genetic mutation operators such as bit flips and splicing [19]), our approach produces a fixed number of test cases based on the actual leaf nodes of the tree. We did attempt to study whether we could further reduce that number using approaches like binary tree search and delta debugging [20]. However, doing so is non-trivial because there is no way to determine whether a subtree (or set of fields) contains at least _one_ field that constitutes an EDE (if we delete all of them and notice a change in the DOM, then it might mean that none are excessive, or all-but-one are excessive, or something in-between). Instead, our metamorphic relation can tell us only if _all_ fields in some set are subject to EDE (when we remove them all and no change is detected in the DOM). It is worth noting that this design decision also ensures more predictable test run times.
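A compact sketch of this leaf-deleting mutation strategy, assuming the API response has already been parsed into nested Python dicts and lists (illustrative only, not the actual EDEFuzz code):

```python
import copy

def leaf_paths(node, prefix=()):
    """Enumerate the path (keys and indices) to every leaf of the JSON tree."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaf_paths(value, prefix + (key,))
    elif isinstance(node, list):
        for index, value in enumerate(node):
            yield from leaf_paths(value, prefix + (index,))
    else:
        yield prefix

def mutations(response):
    """Yield one test case per leaf: a deep copy of the response with that leaf removed."""
    for path in list(leaf_paths(response)):
        if not path:                      # scalar root: nothing to delete
            continue
        mutated = copy.deepcopy(response)
        parent = mutated
        for key in path[:-1]:
            parent = parent[key]
        del parent[path[-1]]
        yield path, mutated
```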
Figure 5. Tree representation of the JSON object in Section 3. The mutation engine uses this representation to determine what to mutate. We highlight leaf nodes in yellow.
### Similarity Check for DOM Trees
A web page (an HTML document) has a hierarchical structure that can be represented using a DOM tree. In Section 4.4 and Figure 6 we show a sample HTML document and its DOM tree, respectively. This is a simplified version of one of our testing targets Company-I, which we shared in Section 3.
Recall, a DOM tree has nodes and each tree node represents a tag element in the HTML document. Each tree node has zero or more attributes--which are stored as key-value pairs--representing tag attributes of the corresponding tag element. A tree node may have children. A child could be either a tree node, or simply a string. For instance, in Figure 6 the root node represents the <html> tag and it has two children: a <head> node and a <body> node.
When comparing two web pages, one considers both the DOM tree structure and the content within each tree node. According to our metamorphic relation, if an API response with a particular data field removed still results in a DOM tree identical to the one produced using the original API response, that data field is reported as excessive. Two DOM trees are considered identical if and only if _all_ of the following conditions hold:
* **(C1)** Their root nodes have the same tag name
* **(C2)** Their root nodes have the same number of attributes
* **(C3)** Each corresponding pair of attributes has the same key and value
* **(C4)** Their root nodes have the same number of children
* **(C5)** Each corresponding pair of children representing a tag element is identical, with respect to conditions **C1-4**
* **(C6)** Each corresponding pair of children representing a string is identical
To check all of these conditions, the simplest yet effective approach is to compare the string representations of the corresponding HTML documents. Our experiments showed that this works for 47 out of 54 targets. However, we observed that several web pages contain elements which are not affected by the API response. For example, the value of the class attribute within a <div> tag could be randomly generated at run-time. Another common case is when the web page displays the current date and time. Clearly, these cases could yield false negatives under the straightforward string-based comparison approach.
To address this issue, we relax conditions **C3** and **C6**. That is, we accept differences in string leaf nodes and in attribute values caused by randomness. To that end, EDEFuzz runs a pre-processing step before the fuzzing phase. In this step, EDEFuzz uses the configuration file \(C\) to replay and generate a few DOM trees, all generated from replaying the same server response. After that, it recursively traverses and compares \(\mathcal{D}_{\text{origin}}\) with each of the newly generated DOM trees to look for parts that differ due to randomness, and marks those random elements and attributes of \(\mathcal{D}_{\text{origin}}\) so that they are ignored in the comparison step of the fuzzing phase. It is worth noting that, in this pre-processing step, if EDEFuzz finds that the generated DOM trees are _structurally_ different from \(\mathcal{D}_{\text{origin}}\), i.e. violating conditions **C1**, **C2**, **C4** and **C5**, it terminates the testing process. In our experiment, 9 of 69 targets were flagged at this step, meaning that they produced web pages with different structures even when using the same set of responses.
In the fuzzing phase, EDEFuzz compares the DOM tree \(\mathcal{D}_{\text{mutated}}\) produced from each test case, with the pre-processed \(\mathcal{D}_{\text{origin}}\). Basically, EDEFuzz (i) recursively traverses through every node in \(\mathcal{D}_{\text{origin}}\) and its corresponding node in \(\mathcal{D}_{\text{mutated}}\), and (ii) compares each pair of nodes. EDEFuzz will skip all string nodes and attributes on \(\mathcal{D}_{\text{origin}}\) that have been marked as "ignored" in the pre-processing step. Two nodes are deemed structurally different if any of the conditions **C1**, **C2**, and **C4** is violated. Moreover, they are considered different in terms of content if there is a discrepancy in the values of the nodes (in the case of non-ignored string nodes) or the non-ignored attributes (for other types of nodes).
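A simplified sketch of this recursive comparison over conditions C1-C6, assuming DOM nodes expose tag, attrs and children attributes and that a caller-supplied `is_ignored` reports the attributes and string children marked as random during pre-processing (illustrative only):

```python
def same_dom(a, b, is_ignored) -> bool:
    """Recursively check conditions C1-C6, skipping parts marked as random."""
    if a.tag != b.tag:                                     # C1: same tag name
        return False
    if len(a.attrs) != len(b.attrs):                       # C2: same attribute count
        return False
    for key, value in a.attrs.items():                     # C3 (relaxed): same key/value
        if not is_ignored(a, key) and b.attrs.get(key) != value:
            return False
    if len(a.children) != len(b.children):                 # C4: same number of children
        return False
    for child_a, child_b in zip(a.children, b.children):
        if isinstance(child_a, str):                       # C6 (relaxed): same string child
            if not is_ignored(a, child_a) and child_a != child_b:
                return False
        elif isinstance(child_b, str) or not same_dom(child_a, child_b, is_ignored):
            return False                                   # C5: children identical w.r.t. C1-4
    return True
```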
While comparing the entire DOM tree can identify if a web page is different from another, in many cases a response will affect only specific areas of a web page. Our approach can optionally utilise human knowledge by allowing the user to specify an _area-of-interest_ on the web page. The area-of-interest is a subtree in the DOM tree that contains contents (that the user believes are) affected by the API response. The area-of-interest in the Figure 6 example could be the subtree rooted at the node <div class="container"> (highlighted in yellow in the tree representation). This helps increase efficiency by narrowing down the tree structure to be processed and compared, and it also avoids other components on the web page affecting the comparison (e.g., a web page that displays the current time at the top of the page).
### Result Inspection
The final step of our approach is manually inspecting the results of EDEFuzz. This involves inspecting each of the flagged data fields to determine what kind of data it exposes (e.g. is it sensitive or not) and, therefore, whether the web application should be modified to avoid this exposure.
## 5. Experimental Evaluation
We designed our experimental evaluation to answer the following research questions.
1. **(RQ1) Accuracy.** Of the data fields flagged by EDEFuzz as excessive, what proportion are true excessive data exposures (i.e., are unused by the web page)? This evaluates the usefulness of our metamorphic relation.
2. **(RQ2) Applicability.** To what proportion of widely used web sites can EDEFuzz be applied successfully? This helps to understand limitations of our approach, both inherent and those that arise from EDEFuzz's current implementation.
3. **(RQ3) Efficiency.** How much human effort and computational time is required to apply EDEFuzz? This sheds light on the scalability of our approach.
4. **(RQ4) Prevalence of Sensitive Data Leakage.** Of those fields flagged by EDEFuzz as excessive, what proportion contain sensitive data? This helps us understand how prevalent sensitive data leakage is amongst excessive data exposure issues.
Note the distinction between **RQ1** and **RQ4**. Specifically, given some set of data fields reported by EDEFuzz as excessive, the former measures the proportion of _true positives_ whereas the latter measures the proportion that are _sensitive_. These notions are entirely orthogonal: a flagged data field is a true positive precisely when it is really excessive (i.e. unused by the web page), regardless of whether
the data it contains is sensitive or not. Likewise, a field contains sensitive data precisely when that data should not be revealed by the web application, regardless of whether the field is excessive (i.e. is unused by the web page) or not. Flagged fields that are not true positives are _false positives_. False positives can exist, for instance, when a field contained in a response is used to affect the DOM only after subsequent user interaction with the web page. Importantly, whether a data field is a true or false positive is a property of the behaviour of the web page. That is, distinguishing between true and false positives requires carefully understanding _all_ behaviours of the web application, including by reading its code and interacting with it, to determine whether the data field contained in the response is ever used in future by the page. Whether a field is sensitive or not is simply a property of the data that field contains, and can be relatively quickly ascertained by inspecting just the field itself.
This means that accurately evaluating **RQ1** requires a set of web sites whose behaviour is (or can be) well-understood by the humans inspecting EDEFuzz's results. Carefully understanding the behaviour of an individual site can be very time consuming. Therefore, for **RQ1** we assembled a set (see Table 1) of eight (8) popular websites within a single country (Australia) that were familiar and whose function and the individual behaviours of the web page were therefore able to be well-understood. This purposeful restriction was necessary to ensure that the proportion of true positives could be accurately evaluated.
In contrast, adequately evaluating the remaining research questions requires a data set that comprises widely used web sites. Therefore the remaining RQs were evaluated on a data set drawn from the Alexa Top-200 list of web sites. While not applicable to **RQ1** (since enumerating all of their possible behaviours to accurately determine true positives is infeasible), this set forms a representative best-of sample, suitable for assessing EDEFuzz's performance in general (**RQ2** and **RQ3**), as well as to understand the prevalence of sensitive data leakage via EDE among common web sites (**RQ4**).
### Procedure
We follow the 4-step procedure below to fuzz test a given website using EDEFuzz. The procedure is well-aligned with the workflow of the tool, as discussed in Section 4.1.
**(Step-1) Identifying the target API endpoint.** Given a primary domain, we identify API endpoint(s) for inclusion in the testing regime. Each domain accessed a variety of APIs, some internal to the domain (e.g., a shopping website accessing the site's stock inventory and pricing endpoint) and some external (e.g., an analytics endpoint to retrieve recent visitor counts). Of these, we manually identify the internal API most relevant to the function of the site in question. For instance, Table 1 lists the selected Australian sites and endpoints used in our evaluation.
| Target | Rank | Used when |
| --- | --- | --- |
| Company-A | 37 | Load tracking history of parcel |
| Company-B | – | List members of a school subject |
| Company-C | 121 | Check stock availability of item |
| Company-D | – | Check stock availability of item |
| Company-E | 59 | Check stock availability of item |
| Company-F | 166 | Get flight prices for next 30 days |
| Company-G | 123 | Check stock availability of item |
| Company-H | 2770 | List vehicles available for sale |

Table 1. Australian websites tested. We include short descriptions of the purpose of the targeted APIs on which we performed deeper evaluation. Where available, we provide Alexa rankings within Australia (extracted 29th April 2022).
Figure 6. The tree representation of a simple HTML document. Each rounded node represents a tag element while each square node represents a string. The hierarchical structure allows a human to easily specify a subtree (in yellow) for the tool to evaluate.
**(Step-2) Writing a configuration file.** We compose a configuration file that specifies how to correctly trigger the selected endpoint--this includes the sequence of user interactions that precede the execution of a request to the API.
This process is semi-automated via our custom-built web browser plugin. However the plugin still requires the user to manually perform interactions. After the file is generated, the operator reviews the file to make any necessary changes.
Completely automating this step poses a more significant research challenge that we leave for future work. It represents an existing trade-off in fuzzing: more manual annotation and configuration generally reduces the running time of tools and expands their flexibility, while increasing human labor.
We specified an area-of-interest (see Section 4.4) on all our evaluation targets, at the cost of a few seconds per target.
**(Step-3) Running EDEFuzz.** After devising an appropriate configuration, we then execute EDEFuzz, time its execution, and collate the results for analysis. We chose not to repeat the fuzzing process for each target because, unlike in traditional fuzzing, EDEFuzz's mutation process is deterministic by design.
**(Step-4) Analysing results.** This involves the manual classification step, discussed in Section 4.5.
### Results
#### RQ1. Accuracy
We evaluated the accuracy of our metamorphic relation, as implemented in EDEFuzz, for identifying excessive data exposures against the eight sites listed in Table 1. The TP column of Table 2 summarises the results by recording the true positive rate, namely the proportion of reported excessive fields (Reported) that were actually excessive (Confirmed). This was determined by manually inspecting the web pages and carefully understanding their behaviours, including via manual interaction, to check whether, for each reported data field, they had any behaviours that made use of the data field. If no such behaviours were identified, the field was classified as a true positive; otherwise it constitutes a false positive.
The average true positive rate was 98.65%, confirming the exceptionally high degree of accuracy of our approach. (Later, **RQ4** investigates which of these EDEs actually leaked sensitive data which, as argued in Section 5, is a separate concern.)
#### RQ2. Applicability
We evaluated the applicability of EDEFuzz on a subset of the top 200 sites (as recorded by the Alexa ranking). Of the 200 sites, we excluded (see Figure 7) from analysis those that had no web APIs (and, hence, no possibility for EDE); those requiring payment; those in a language that none of our authors understand; those comprising adult or illegal content; those that were geoblocked; and those that required solving a CAPTCHA. Doing so excluded around 60% of the 200 sites, after deduplication. None of these exclusions represent limitations of our approach or implementation. Of the remaining sites, we additionally excluded 12 sites that used HTTP_POST requests to query their APIs with query parameters included in the request body, since EDEFuzz's simulated server currently relies solely on the request URL to supply the response, though this implementation-level limitation could be resolved with modest future work. This left 69 sites against which we evaluated EDEFuzz.
Of these 69 sites, EDEFuzz was successfully applied to 53 targets (76.8%). Of the 16 unsuccessful targets, all but one failed as a result of nondeterminism, i.e. randomness: nine (13.0%) did not pass the pre-processing step (as explained in Section 4.4), as these websites populated elements of their page non-deterministically; six (8.7%) used requests that included non-deterministic tokens required to load resources. The final unsuccessful target used shadow roots within its web page, preventing EDEFuzz from accessing its complete DOM tree. Comprehensive results for each target are in Appendix A.
We further assessed applicability by performing an additional validation step on those 53 successful targets. This was done to test the implicit assumption of our mutation strategy that, given any two fields of a response, whether one is excessive is independent of the presence or absence of the other, i.e. that each field of a response can be assessed independently of the others. We tested this assumption for each of the 53 successful targets by taking the entire set of fields flagged by EDEFuzz as excessive and removing _all_ of them simultaneously from the recorded response, and replaying that mutated response to see whether the resulting DOM passed the similarity check (Section 4.4) to the original. Surprisingly we found that for three (3) of the sites, removing multiple fields at once caused a difference in the DOM even when removing each of the fields individually did not (behaviour we confirmed via manual testing for each of them). For these sites (indicated with a V in the Reason column of the table in Appendix A) one might reasonably debate whether those flagged fields really are excessive or not, a question we leave for future work.
Overall we find that EDEFuzz is highly applicable; however sites that employ nondeterminism, either in their DOM contents or the requests they generate, need to be made deterministic for EDEFuzz to be successfully applied to them (as already noted in Section 4.2).
#### RQ3. Efficiency
Efficiency measures not only the amount of computational time to employ EDEFuzz, but also the amount of human effort both to configure the tool and to inspect its results to determine which reported excessive data fields contain sensitive data. We evaluated efficiency across both the Australian sites (Table 2) and the Top 200 data set (Appendix A). Experiments for the Australian sites were carried out on a commodity PC with an Intel Core i7-9600K, 32GB of RAM, running Windows 10 21H1. Those for the Top 200 data set were carried out on an AWS VPS with a 16-core Intel CPU and 32 GB of memory, running Ubuntu 20.04.
The time spent on test execution was roughly linear in the number of data fields included in the response from the target API, as expected since our mutation strategy necessarily mutates one field at a time. On average, about 8 test cases were executed per minute, and this figure is consistent between the two data sets. However, computation time does not present a significant bottleneck, especially since our approach is trivially parallelised.
Regarding human effort, it took a maximum of twenty minutes per web site for us to identify an appropriate API endpoint that it made use of and to then compose a configuration file, representing minimal overhead.
While naturally some human effort is required to inspect the flagged fields, to determine which are sensitive, this again took no
longer than 20 minutes per web site, even when EDEFuzz reported many thousands of fields as excessive. This was because many API responses contained large numbers of repeated structures, which allowed us to quickly classify thousands of flagged fields. However, we found certain flagged data fields required lengthier and more comprehensive analysis. We hypothesise that this is a fundamental limitation on automation (for the near future) as deciding whether a field is indeed sensitive is an exercise of human judgement, involving considering the application's function, what data is already publicly available, and privacy expectations, etc.
Overall we conclude therefore that our approach requires only a modest amount of human effort.
**RQ4. Prevalence of Sensitive Data in EDEs**
Finally, our results allow us to draw conclusions about the prevalence of sensitive data leakage via excessive data exposure. Of course such conclusions necessarily _underestimate_ the true extent of sensitive data leakage, even on the sites used in our evaluation, since we applied our tool on each site to only one--highly-visible and, hence, likely to be widely-tested--API.
Among the Australian websites, we find that sensitive data leakage is much more prevalent (present in 3 out of the 8 cases evaluated) than among the Alexa Top 200 sites (where it is present in only one of the 53 cases successfully evaluated). We conjecture that this should be expected, since popular sites are more widely used (and thus tested) by definition. Yet even among very popular sites, EDEFuzz still found sensitive data leakage.
We already discussed sensitive data leakage discovered by EDEFuzz in the Australian websites Company-C, Company-D, and Company-E in Section 3. (We also discussed vulnerabilities it found in Company-I and Company-J during testing, which are not part of our evaluation set.) EDEFuzz also identified sensitive data leakage in Company-B, the learning management system with the largest market share (\(\approx\) 34%). Company-B has a feature to create student groups within a subject. Instructors of a subject can view and assign group members of each group. Our tool flagged the API that lists group members. While the web page only displayed a list of names in a group, the API response contained the full list of subjects each student is enrolled in. We further found that this API is accessible from a student account as well, allowing any student to learn which subjects their classmates have ever enrolled in.
The one instance of sensitive data leakage found by EDEFuzz in the Alexa Top 200 dataset (Appendix A, Rank 91) affected an API called by a web page for downloading device drivers, which inadvertently exposed employee names as excessive data, as well as device hardware IDs.
**Summary.**
We conclude overall that EDEs appear prevalent, but many EDEs are relatively harmless. However, much like memory corruption vulnerabilities (whose severity can range from simple denial-of-service via crashes through to remote code execution), they can also be severe and leak very sensitive information. EDEFuzz is effective at diagnosing such vulnerabilities via its highly accurate metamorphic relation, requiring modest human effort and computational cost, while being widely applicable.
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
Target & Data fields & Reported & Confirmed & TP & Preparation & Execution & Classification & Sensitive & Non-sensitive \\
 & & & & & (min) & (min) & (min) & & \\
\hline
Company-A & 189 & 124 & 124 & 100.00\% & 10 & 11 & 5 & 0 & 124 \\
Company-B & 18 & 16 & 14 & 87.50\% & 20 & 2 & 2 & 2 & 12 \\
Company-C & 2600 & 2580 & 2504 & 97.05\% & 5 & 306 & 3 & 104 & 2400 \\
Company-D & 545 & 506 & 479 & 94.66\% & 15 & 43 & 10 & 9 & 470 \\
Company-E & 4249 & 4147 & 4127 & 99.52\% & 10 & 755 & 15 & 0 & 4127 \\
Company-F & 778 & 749 & 749 & 100.00\% & 15 & 103 & 5 & 0 & 749 \\
Company-G & 120 & 100 & 100 & 100.00\% & 5 & 12 & 3 & 0 & 100 \\
Company-H & 1465 & 1066 & 1066 & 100.00\% & 15 & 79 & 20 & 19 & 1047 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Summary statistics from the Australian sites. Data fields reports the total number of fields contained in the API response of each target; Reported is the number of fields flagged by EDEFuzz as excessive; Confirmed is the number of fields manually confirmed to be excessive, with TP the resulting true-positive rate. The time taken to configure EDEFuzz for each target is reported in Preparation, as is the duration of test execution (Execution) and the human effort required to manually classify the flagged fields as sensitive or not (Classification), all measured in minutes. We also report the number of fields we classified as containing sensitive data after manual inspection (Sensitive), with the remaining confirmed fields listed as Non-sensitive.
Figure 7: Determining applicable websites from Alexa Top-200. Of the top 200, we found 69 websites (34.5%) appropriate for our testing-set. We excluded domains for the following reasons: duplication of a single service across domains, adult and illegal content, geoblocking, payment required for access, lack of an API, foreign language and encoding of parameters in POST requests (discussed in main text).
## 6. Related Work
**Detecting Vulnerabilities in Web Applications** Researchers have made substantial progress in detecting and preventing _certain classes_ of web application vulnerabilities. Much work [21; 22; 23; 24; 25] exists for detecting cross-site scripting [26]. The techniques vary, including black-box [22; 25], grey-box [23] and white-box [21] approaches. Very recently, Trickel et al. [27] proposed a novel grey-box fuzzing approach to detecting SQL and command injections.
However, excessive data exposure has received comparatively little attention despite being one of the most common vulnerabilities [28]. To the best of our knowledge, there is no published research focusing on an automated approach to detect this class of vulnerability in web applications: our tool EDEFuzz is the first of its kind.
Koch, Chaabane, Egele, Robertson, and Kirda [29] studied a white-box and semi-automated mechanism to identify EDE vulnerabilities in Android applications--not in web applications. Unlike EDEFuzz which runs the program on two different outputs from the web server (one with a data field deleted) and looks for the absence of difference in the DOM to detect leakage, their approach requires (decompiled) source code of the applications to do instrumentation and static data-flow analysis. Its static analysis identifies potential EDEs by flagging data received by the app over the network (source) that is then serialised to a Java object but that then never propagates to the user interface (sink). A subsequent dynamic analysis that relies on program instrumentation and manual app interaction is used to confirm potential vulnerabilities, wherein the human analyst must manually generate tests that attempt to trigger the EDE. EDEFuzz also requires manual effort to interact with a web application to trigger the web API under test, and like Koch et al. also to confirm the sensitivity of leaked information. So [29] and our approach are complementary. As demonstrated in fuzzing research [13], combinations of complementary approaches could yield better results and we leave that for future work.
**Metamorphic Testing/Fuzzing** As discussed in Section 2, one of the most important steps of metamorphic testing is to identify metamorphic relation(s). This requires creativity and a good understanding of the system under test. To ease this crucial step, Segura, Parejo, Troya, and Ruiz-Cortés [30] proposed six abstract relations from which concrete relations can be defined. Specifically, the authors identified 60 API-specific metamorphic relations in their work. Their relations specify how related web requests should produce related responses, and so are inapplicable to detecting EDE. The fundamental insight of EDEFuzz that shows how metamorphic fuzzing is applicable to detecting EDE is not to mutate the _request_ (as in all prior web fuzzing work, including those cited below), but to mutate the _response_ instead.
**RESTful Web API Testing** RESTler [6]--the state-of-the-art RESTful API fuzzing approach--relies on server state and response codes to identify server crashes in APIs used by cloud services. Their tool infers dependencies among requested APIs to guide the generation of new test cases. Atlidakis, Godefroid, and Polishchuk [11] suggested an extension to RESTler to report the violation of four rules commonly applied to REST services, in addition to server crashes. [31] improved RESTler's test generation algorithm by representing the payloads in a tree structure on which structure-aware mutation operators can be applied. Pythia [10] augmented RESTler with coverage-guided feedback and implemented a learning-based mutation strategy. Specifically, it utilised a statistical model to learn the frequent orderings of API calls from seed inputs. It used a regular grammar to encode API requests and performed mutation by adding noise to a request. Neither RESTler nor its follow-up works can detect EDE vulnerabilities. Moreover, this line of work focuses on mutating the API requests while EDEFuzz modifies the API responses.
**Record-replay Mechanism in Web Application Testing** Record-replay models are popular in testing web applications and services. We use two types of record-replay in our work. The first bears similarity to WaRR [32], which records the interaction between a user and a web application. It allows the recorded traces to be later replayed to simulate the user interacting with the web application. Another similar work is Timelapse [33], in which researchers log unexpected behaviours in web applications to help developers visualise, demonstrate and better understand bugs. We follow their mechanism, automating interactions with a web application through a headless web-driver. The second form of record-replay tool captures communications between a server and a client, to be replayed at a later time [7]. While existing work focuses on producing an exact replication during the replay stage, EDEFuzz uses a simulated server to instead supply mutated server responses.
**Web Change Detection** The components of a web application may change over time, hindering research and testing that relies on consistency. Researchers have looked into different strategies to compare two pages and identify their differences. One strategy was proposed over twenty years ago and relies on the HTML DOM tree to monitor structural changes on the web page [34]. Other relevant work includes X-Diff [35], which aimed at detecting changes in an XML document, and [36], which improved the efficiency of the Hungarian algorithm in detecting web page changes. Modern web applications have increased in complexity and often break from prior design paradigms. As a result, past approaches for page comparison are less effective than they once were. To address this challenge, Waterfall [37] uses two versions of the same web application to detect locator changes and apply fixes. WebEvo [38] attempts to identify the evolution of a web application by detecting semantic structure changes. Both approaches aim at matching content between two structurally different web pages, while our work focuses on identifying both structural and content differences.
## 7. Discussion and Future Work
Simple ideas are often the best. From our evaluation we conclude that EDEs appear prevalent, and that EDEFuzz is effective at finding them, including sensitive data leakage, with acceptable efficiency and requiring a modest amount of human work. Its metamorphic relation yields precise results in practice (a TP rate of 98.65%). It is also generally applicable, and it can be parallelised easily.
At the same time, our results suggest avenues for improvement. One trivial improvement would be to handle HTTP POST requests, by parsing request bodies, which affected 6% of the evaluation sample from the Top-200.
More interesting avenues for future work include:
**Improving Efficiency.** Even though our Simulated Server helps EDEFuzz achieve a reasonable fuzzing throughput, we can improve it further by leveraging recent advances in snapshot-based fuzzing [39]. As the design of EDEFuzz is modular, the change could be minimal. Specifically, we can take a snapshot of the client at the state \(S_{1}\) when a request to the target API has just departed (see Section 4). We can then restore the snapshot for each fuzzing iteration instead of replaying requests using the Web Driver.
**Mutation Algorithm.** In this work, we only apply data field deletion to mutate the server responses. It would be interesting to study other mutation strategies and to update or improve our metamorphic relation accordingly. For instance, consider a web site that queries the available stock of an item and converts the numeric response into one of two answers it displays to the user: "out-of-stock" and "in stock". Suppose deleting this field causes neither answer to be displayed. While this case did not arise in our evaluation, it would be a false negative for EDEFuzz, even though the response leaks more information than is displayed to the user. A mutation strategy that modifies this response value without deleting it would detect this form of EDE.
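A minimal sketch of such a value-modifying mutation is given below (illustrative only; the type-dependent perturbations are our own assumption about what a reasonable strategy might look like). A field whose perturbation leaves the rendered DOM unchanged would then be flagged even when its deletion changes the page:

```python
import copy

def mutate_value(response, path):
    """Perturb the value at `path` instead of deleting the field, so that a
    field can be flagged when the perturbation is invisible in the DOM,
    i.e. the client displays less information than the field carries."""
    mutant = copy.deepcopy(response)
    parent = mutant
    for step in path[:-1]:
        parent = parent[step]
    value = parent[path[-1]]
    if isinstance(value, bool):          # check bool before int
        parent[path[-1]] = not value
    elif isinstance(value, (int, float)):
        parent[path[-1]] = value + 1
    elif isinstance(value, str):
        parent[path[-1]] = value + "_mutated"
    return mutant
```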
|
2307.02083 | Leveraging multilingual transfer for unsupervised semantic acoustic word
embeddings | Acoustic word embeddings (AWEs) are fixed-dimensional vector representations
of speech segments that encode phonetic content so that different realisations
of the same word have similar embeddings. In this paper we explore semantic AWE
modelling. These AWEs should not only capture phonetics but also the meaning of
a word (similar to textual word embeddings). We consider the scenario where we
only have untranscribed speech in a target language. We introduce a number of
strategies leveraging a pre-trained multilingual AWE model -- a phonetic AWE
model trained on labelled data from multiple languages excluding the target.
Our best semantic AWE approach involves clustering word segments using the
multilingual AWE model, deriving soft pseudo-word labels from the cluster
centroids, and then training a Skipgram-like model on the soft vectors. In an
intrinsic word similarity task measuring semantics, this multilingual transfer
approach outperforms all previous semantic AWE methods. We also show -- for the
first time -- that AWEs can be used for downstream semantic query-by-example
search. | Christiaan Jacobs, Herman Kamper | 2023-07-05T07:46:54Z | http://arxiv.org/abs/2307.02083v1 | # Leveraging Multilingual Transfer for Unsupervised Semantic Acoustic Word Embeddings
###### Abstract
Acoustic word embeddings (AWEs) are fixed-dimensional vector representations of speech segments that encode phonetic content so that different realisations of the same word have similar embeddings. In this paper we explore semantic AWE modelling. These AWEs should not only capture phonetics but also the meaning of a word (similar to textual word embeddings). We consider the scenario where we only have untranscribed speech in a target language. We introduce a number of strategies leveraging a pre-trained multilingual AWE model--a phonetic AWE model trained on labelled data from multiple languages excluding the target. Our best semantic AWE approach involves clustering word segments using the multilingual AWE model, deriving soft pseudo-word labels from the cluster centroids, and then training a Skipgram-like model on the soft vectors. In an intrinsic word similarity task measuring semantics, this multilingual transfer approach outperforms all previous semantic AWE methods. We also show--for the first time--that AWEs can be used for downstream semantic query-by-example search.
Semantic embeddings, acoustic word embeddings, semantic retrieval, query-by-example search.
## I Introduction
Word embedding models such as Word2Vec [1, 2] and GloVe [3] revolutionised natural language processing (NLP) by mapping written words to continuous fixed-dimensional vectors. These models learn from co-occurrence information in large unlabelled text corpora. As a result, words that are related in meaning end up having similar embeddings. This has led to improvements in a wide range of NLP tasks [4, 5, 6, 7]. However, limited efforts have been made to generate such semantic representations for spoken words.
While acoustic word embedding (AWE) models [8, 9] map variable-duration speech segments to fixed-dimensional vectors, these models do not aim to capture meaning. The goal is rather to map different realisations of the same word to similar embeddings, i.e. the embedding space encodes phonetic rather than semantic similarity. Several unsupervised AWE modelling techniques have been explored [10, 11, 12, 13, 14]. Recently, multilingual AWE models have been introduced as an alternative [15, 16, 17, 18]: a single AWE model is trained on labelled data from multiple well-resourced languages and then applied to an unseen target low-resource language.
While phonetic AWEs have proven useful in several downstream applications [19, 20], there are also many cases where semantics would be beneficial. In semantic AWE modelling the goal would be to map speech segments to vector representations that not only capture whether two segments are instances of the same word, but also the semantic relationship between words. E.g., we want an AWE space where different instances of "red" are close to each other, but also close to instances of "blue". And all these embeddings should be far from unrelated words, such as "group". An example is given in Fig. 1, which visualises actual AWEs from our approach.
Learning semantic AWEs from speech is challenging due to channel variability, noise, and speaker-specific information that are not present in written text. Some studies, therefore, use another modality as a grounding signal, e.g. using images [21, 22, 23] or text labels [24] as a weak form of supervision. Only a handful of studies have looked at learning semantic AWEs from unlabelled speech alone [25, 26]. To overcome these challenges, we propose leveraging the recent improvements in multilingual modelling for phonetic AWEs.
We specifically propose using transfer learning from a phonetic multilingual AWE model to obtain a semantic AWE model in a target language where we only have unlabelled speech. Since the multilingual model already captures phonetics, this should simplify the semantic learning problem. We present three approaches. Our best approach involves using a multilingual AWE model to cluster unlabelled word segments from the target language. For each segment, we derive a soft pseudo-word label vector based on the proximity to the cluster centroids. Finally, we get semantic AWEs by training a Skipgram-like model on these soft vectors.
In an intrinsic word similarity task, this approach outperforms previous methods learning from scratch [25, 26] and also our other multilingual transfer methods. We also show that this method can be used downstream in an extrinsic semantic query-by-example search task.
Fig. 1: PCA projection of semantic AWEs (averaged), produced by our Cluster+ Skipgram model on development data. We highlight the five nearest neighbours for “several” (pink), “bike” (green), and “orange” (blue).
## II Phonetic Acoustic Word Embeddings
Most existing AWE methods map speech segments to a vector space where instances of the same word class are located near each other. We call these _phonetic AWEs_, because the space should capture whether input segments are phonetically similar rather than related in meaning. Formally, a speech segment \(X=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T})\) is projected to a vector \(\mathbf{z}\), with each \(\mathbf{x}_{t}\) a speech frame. Two phonetic AWE models have proven to be particularly effective: the correspondence autoencoder RNN (CAE-RNN) [17] and the ContrastiveRNN [14].
**CAE-RNN.** This model uses an encoder RNN to map a word segment \(X\) to a latent embedding \(\mathbf{z}\). The embedding \(\mathbf{z}\) is then given to a decoder RNN to reconstruct a target word segment \(X^{\prime}\), where \(X^{\prime}\) is a different instance of the same class as the input. The model is optimised to minimise the reconstruction loss, \(\sum_{t=1}^{T}\lVert\mathbf{x}_{t}^{\prime}-\mathbf{f}_{t}(X)\rVert^{2}\), where \(\mathbf{f}_{t}(X)\) is the \(t\)th decoder output conditioned on embedding \(\mathbf{z}\). During inference, the encoder generates a unique AWE \(\mathbf{z}\) for every new input segment.
**ContrastiveRNN.** This model explicitly minimises the distance between embeddings from speech segments of the same word class while maximising the distance between words of a different class. Formally, given speech segments \(X_{\text{anc}}\) and \(X_{\text{pos}}\) containing instances of the same word class and multiple negative examples \(X_{\text{neg}_{1}},\ldots,X_{\text{neg}_{N}}\), the ContrastiveRNN produces embeddings \(\mathbf{z}_{\text{anc}},\mathbf{z}_{\text{pos}},\mathbf{z}_{\text{neg}_{1}}, \ldots,\mathbf{z}_{\text{neg}_{N}}\). The loss is then defined as [27]:
\[J=-\log\frac{\exp\left(\text{sim}(\mathbf{z}_{\text{anc}},\mathbf{z}_{\text{pos}})/\tau\right)}{\sum_{j\in\{\text{pos},\text{neg}_{1},\ldots,\text{neg}_{N}\}}\exp\left(\text{sim}(\mathbf{z}_{\text{anc}},\mathbf{z}_{j})/\tau\right)} \tag{1}\]
where \(\text{sim}(\cdot)\) denotes cosine similarity and \(\tau\) is a temperature parameter, tuned on development data.
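For concreteness, a minimal PyTorch-style sketch of the loss in (1) for a single anchor with one positive and \(N\) negatives is given below (an illustrative rendering, not the implementation used in our experiments; the default temperature is a placeholder):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_anc, z_pos, z_negs, tau=0.1):
    """Eq. (1): z_anc and z_pos are (D,) embeddings, z_negs is (N, D)."""
    candidates = torch.cat([z_pos.unsqueeze(0), z_negs], dim=0)        # (N+1, D)
    sims = F.cosine_similarity(z_anc.unsqueeze(0), candidates, dim=1) / tau
    # The positive sits at index 0, so Eq. (1) is cross-entropy against label 0.
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```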
From this point onwards we add the subscript p to the AWEs described in this section, i.e. \(\mathbf{z}_{\text{p}}\) indicates that the embedding preserves phonetic information related to word class only.
Previous studies showed the advantage of using these models in a multilingual transfer setup where a single AWE model is trained on labelled data from multiple well-resourced languages before transferring and applying it to an unseen target language [14, 17]. This allows for AWEs to be obtained even in languages for which we do not have any labelled data.
## III Semantic AWEs (Trained from Scratch)
Two approaches have been proposed that adapt the framework above to obtain semantic embeddings \(\mathbf{z}_{\text{s}}\), where the embeddings not only reflect phonetic similarity but also capture meaning. In both cases [25, 26], the problem is simplified by assuming that we know where words start and end (but the word classes are still unknown), i.e. we have an unlabelled speech corpus \(\{X^{(n)}\}_{n=1}^{N}\) of \(N\) segmented word tokens. We also make this assumption.
**Speech2Vec**[25] is a variant of the CAE-RNN where, instead of using pairs of instances of the same word class, the positive pairs are now context word pairs \((X_{\text{trg}},X_{\text{ctx}})\). \(X_{\text{trg}}\) is a target centre word segment while \(X_{\text{ctx}}\) is a context word appearing somewhere in a window around the centre. These context pairs are constructed without word labels by only considering the relative position of words within an utterance. Speech2Vec was inspired by the Skipgram model for text data [1], where an input word is fed to a log-linear classifier that predicts words within a context window. By using the CAE-RNN reconstruction loss, Speech2Vec similarly tries to reconstruct a word segment that appears near its input. Ideally the resulting embeddings \(\mathbf{z}_{\text{s}}\) should therefore be similar for words that co-occur in the speech data. There have recently been concerns about the original Speech2Vec implementation [28], and we therefore use our own version here (but still refer to it as Speech2Vec).
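To make the pair construction concrete, the following is a small illustrative sketch of building \((X_{\text{trg}},X_{\text{ctx}})\) pairs from the ordered word segments of one utterance, using only relative position and no labels (names are ours):

```python
def context_pairs(utterance_segments, window=3):
    """Build (target, context) pairs from the ordered word segments of one
    utterance, using only relative position (no word labels)."""
    pairs = []
    for i, target in enumerate(utterance_segments):
        lo = max(0, i - window)
        hi = min(len(utterance_segments), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, utterance_segments[j]))
    return pairs

# Example: segments are per-word feature arrays extracted from one utterance.
# pairs = context_pairs(segments, window=3)
```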
By similarly modifying the model presented in Sec. II, a **semantic ContrastiveRNN** can be trained on target (anchor), context (positive), and out-of-context word segments (negatives) to learn semantic embeddings using the loss in (1). This approach is similar to [26], where they include a trainable network to remove speaker information.
In both these methods, a semantic AWE model is trained from scratch, therefore requiring the models to learn to capture phonetic and semantic similarity simultaneously.
## IV Our Approach: Using Multilingual Transfer for Semantic AWEs
Our new proposal is to utilise a pre-trained multilingual AWE model (end of Sec. II) to assist semantic AWE modelling. Three specific strategies are proposed.
**ContrastiveRNN with multilingual initialisation.** Instead of training semantic models from scratch (III), we can warm-start them using the learned weights of a pre-trained multilingual AWE model. In our experiments, we use the learned weights of a multilingual AWE model's encoder to initialise the encoder RNN of the ContrastiveRNN. The model is then updated on context pairs from the target language using (1).
**Projecting multilingual AWEs.** Alternatively, we can project an existing phonetic AWE space to a new semantic AWE space. First, we apply the multilingual model to the unlabelled speech segments \(\{X^{(n)}\}\) to get a set of phonetic AWEs \(\{\mathbf{z}_{\text{p}}^{(n)}\}\). Then we train a projection network that maps the phonetic AWEs to semantic embeddings \(\{\mathbf{z}_{\text{s}}^{(n)}\}\). The projection network is trained using the contrastive loss (1), optimising the distances between the output embeddings \(\mathbf{z}_{\text{s}}\).
**Cluster+Skipgram.** This approach is based on the Skipgram Word2Vec model [1]. Instead of using a fixed dictionary of discrete word class labels to construct input and output vectors to train a Skipgram model on text, we use the phonetic similarities in the original AWE space to derive a soft pseudo-word label for each speech segment. This is illustrated in Fig. 2. In more detail, a multilingual AWE model is applied to the segmented speech corpus \(\{X^{(n)}\}\), producing a set of phonetic AWEs \(\{\mathbf{z}_{\text{p}}^{(n)}\}\). Next we apply \(K\)-means clustering to the phonetic embedding space, producing a set of centroids \(\{\mathbf{c}_{k}\}_{k=1}^{K}\). The idea is that these clusters should resemble distinct word classes. We then calculate a soft vector label of an AWE belonging to each cluster:
\[v_{k}^{(n)}=\frac{\exp\left(-\text{sim}(\mathbf{z}_{\text{p}}^{(n)},\mathbf{c} _{k})/\sigma^{2}\right)}{\sum_{j=1}^{K}\exp\left(-\text{sim}(\mathbf{z}_{\text{ p}}^{(n)},\mathbf{c}_{j})/\sigma^{2}\right)} \tag{2}\]
where sim\((\cdot)\) denotes cosine similarity and \(\sigma\) is a hyperparameter controlling the influence of distant centroids. Each segment is represented by a unique vector \(\mathbf{v}^{(n)}\), with segments from the same word class ideally having similar representations. This is different from Word2Vec, where a single one-hot vector represents a unique word class. Finally, a linear classifier model is trained with these continuous vectors as input and target outputs using the negative log-likelihood loss (as in the original Skipgram). We also experimented with hard clustering, but this gave very poor performance on development data.
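As an illustrative sketch of deriving the soft vectors, the following follows Eq. (2) as printed, using scikit-learn's KMeans as one possible clustering choice (whether this matches our exact pipeline is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def soft_pseudo_labels(phonetic_awes, n_clusters=5000, sigma=0.01):
    """Compute the (N, K) soft pseudo-word label vectors of Eq. (2) from
    an (N, D) array of phonetic multilingual AWEs."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(phonetic_awes)
    z = normalize(phonetic_awes)            # unit-norm embeddings
    c = normalize(kmeans.cluster_centers_)  # unit-norm centroids
    sim = z @ c.T                           # cosine similarities, shape (N, K)
    # Eq. (2) as printed: exp(-sim / sigma^2), row-normalised. (Reading sim as a
    # cosine *distance* instead would concentrate weight on nearby centroids.)
    logits = -sim / sigma**2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    v = np.exp(logits)
    return v / v.sum(axis=1, keepdims=True)
```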
## V Experimental Setup
**Data.** We perform experiments using the Flickr8k Audio Captions Corpus (FACC) [21]. This corpus contains 40k spoken captions in English describing the content of a Flickr image [29]. This is useful for measuring semantics: the images come from a fairly narrow domain, and the semantic concepts, therefore, reoccur in different utterances. We do not use the images during training: the spoken captions are treated as our unlabelled target speech corpus. We use the default train, development, and test splits containing 30k, 5k, and 5k spoken utterances, respectively. Speech audio is parametrised as 13-dimensional static mel-frequency cepstral coefficients (MFCCs). We also perform experiments using self-supervised speech features: we use the 12th transformer layer of the multilingual XLSR model [30] to get 1024-dimensional features. Previous work has shown that self-supervised speech features (obtained in an unsupervised way) can be useful as the frame-level input to AWE models [31, 32]. Utterances are normalised per speaker and segmented using true word boundaries from forced alignments [33]. We use these word segments to construct context word pairs as described in Sec. III. For all the semantic models, we use a context window of three words before and after a centre word.
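For illustration, the MFCC front-end and per-speaker normalisation can be sketched as follows (using librosa; the exact settings of our pipeline are not implied):

```python
import librosa
import numpy as np

def mfcc_frames(wav_path, sr=16000):
    """13-dimensional static MFCCs, frames along the first axis: (T, 13)."""
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def normalise_per_speaker(frames_by_utt, speaker_of_utt):
    """Mean-variance normalise frames over all utterances of each speaker."""
    out = {}
    for spk in set(speaker_of_utt.values()):
        utts = [u for u, s in speaker_of_utt.items() if s == spk]
        stacked = np.concatenate([frames_by_utt[u] for u in utts], axis=0)
        mu, sd = stacked.mean(axis=0), stacked.std(axis=0) + 1e-8
        for u in utts:
            out[u] = (frames_by_utt[u] - mu) / sd
    return out
```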
**Semantic models trained from scratch (III).** The encoder and decoder of our Speech2Vec implementation each consist of three unidirectional RNNs with 400-dimensional hidden vectors and an embedding size of 100. The model is trained on roughly two million word pairs occurring in the same context window in our training data. The semantic ContrastiveRNN uses the same encoder structure. It is also trained on the same context pairs together with out-of-context word segments serving as negatives; for each positive, we sample 20 negatives.
**Semantic models using multilingual transfer (IV).** We train a CAE-RNN multilingual AWE model [14, 17] on five different Common Voice [34] languages: Italian, Dutch, Russian, Czech. We pool the data from all languages and extract 300k training pairs. The CAE-RNN model structure is the same as that of our Speech2Vec model. This multilingual CAE-RNN is used to initialise a ContrastiveRNN; we freeze the weights of the first two encoder layers while training on context pairs. For the projection network, we use a feed-forward network of two linear layers with an inner dimension of 1024 and input and output dimensions of 100. Again we sample 20 negatives for each positive and train the network to optimise the contrastive loss (1). For the Cluster+Skipgram approach, we use the multilingual CAE-RNN to obtain phonetic embeddings. For \(K\)-means clustering, we use \(K=5000\) clusters and set \(\sigma=0.01\) in (2). We use the same linear network as the Skipgram model [1], with a word embedding size of 100 and optimise the network with the negative log-likelihood loss.
**Intrinsic evaluation.** We evaluate the quality of semantic embeddings by measuring similarity scores between isolated word pairs. We compare these scores to word similarity scores of textual word embeddings generated by an off-the-shelf Skipgram model trained on the transcribed utterances. Spearman's \(\rho\) is used to quantify the similarity between the two sets of word-pair similarities [35, 36]. To obtain a single semantic embedding for each word class, we calculate the average of all AWEs from the same class and report \(\rho_{\text{avg}}\). Given that we are particularly interested in obtaining semantic embeddings for individual word segments, single-sample performance is also measured by randomly selecting one instance of each spoken word. This is repeated ten times and averaged to get a single score \(\rho_{\text{single}}\).
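A sketch of this word-similarity evaluation (illustrative; variable names are ours) using SciPy's spearmanr:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def word_similarity_rho(awe_by_word, text_emb_by_word):
    """Spearman's rho between AWE and textual-embedding cosine similarities
    over all word pairs; awe_by_word maps a word class to one (averaged or
    randomly sampled) acoustic embedding."""
    words = sorted(set(awe_by_word) & set(text_emb_by_word))
    cos = lambda a, b: float(np.dot(a, b) /
                             (np.linalg.norm(a) * np.linalg.norm(b)))
    awe_sims, text_sims = [], []
    for w1, w2 in combinations(words, 2):
        awe_sims.append(cos(awe_by_word[w1], awe_by_word[w2]))
        text_sims.append(cos(text_emb_by_word[w1], text_emb_by_word[w2]))
    return spearmanr(awe_sims, text_sims).correlation
```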
**Extrinsic evaluation.** We use the same setup as [23] to evaluate downstream semantic query-by-example (QbE) search performance. Semantic labels for 1000 test utterances from FACC were collected from human annotators, using a set of 67 keyword classes. Specifically, each of the 1000 utterances was labelled by five annotators, indicating whether a particular keyword is semantically relevant to that utterance (regardless of whether the word instance appears verbatim in the utterance). We use the majority decision to assign a hard label for whether a query keyword is relevant to an utterance. Using these hard labels, we calculate semantic \(P@10\), \(P@N\), EER, and Spearman's \(\rho\). Here, \(\rho\) measures the correlation between a system's ranking and the actual number of annotators who deemed a query keyword relevant to an utterance. To simplify the QbE task, we still assume that ground truth word boundaries are known: a query AWE is therefore compared to AWEs for the word segments in an unlabelled search utterance.
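As a reference for how the precision metrics are computed from a ranked search (an illustrative sketch with hypothetical variable names), semantic \(P@k\) is the fraction of the top-\(k\) retrieved utterances labelled relevant; \(P@N\) sets \(k\) to the number of relevant utterances for the query:

```python
def precision_at_k(ranked_utterance_ids, relevant_ids, k=10):
    """Fraction of the top-k retrieved utterances labelled relevant."""
    top = ranked_utterance_ids[:k]
    return sum(1 for u in top if u in relevant_ids) / k
```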
## VI Results
### _Intrinsic Evaluation: Semantic AWEs_
Table I presents the intrinsic scores of embeddings from the semantic AWE models, trained either from scratch (top section) or using multilingual transfer (bottom). The benefit of multilingual transfer is evident in the scores of the projection and Cluster+Skipgram approaches, with the latter outperforming all other models regardless of the input features used or whether single or averaged embeddings are evaluated. The single-sample performance \(\rho_{\text{single}}\) is particularly significant as it shows that individual representations can be compared accurately--a useful property for downstream applications such as semantic QbE (VI-B).

Fig. 2: Our Cluster+Skipgram semantic AWE approach. Speech segments \(X^{(n)}\) are represented by soft pseudo-word label vectors \(\mathbf{v}^{(n)}\) which are then used to train a Skipgram-like model.
The ContrastiveRNN is the one exception that does not show a clear gain from initialising with multilingual weights compared to training from scratch. As a sanity check, we evaluate the phonetic multilingual AWEs before semantic training (i.e. the foundation model used for transfer in the bottom section), obtaining a \(\rho_{\text{single}}=0.59\%\) and \(\rho_{\text{avg}}=-0.13\%\). As expected, this indicates that phonetic multilingual AWEs do not capture semantic information. The table also shows the benefit of using self-supervised speech representations as input to AWEs instead of conventional features, as also found in previous work [31, 32]; we use XLSR features from this point onwards.
Fig. 1 visualises the semantic embedding space of the Cluster+Skipgram model. It is clear that the acoustic realisations of semantically related words end up in similar areas in the space. E.g. the model learned that spoken instances of "orange", "red", "blue", "yellow", and "green" should be close to each other.
### _Extrinsic Evaluation: Semantic QbE_
Table II compares the Cluster+Skipgram (semantic) and multilingual AWE (phonetic) models when used in a downstream QbE system. We evaluate both exact and semantic QbE, where the latter is credited both for retrieving exact query matches and for retrieving utterances labelled as semantically related to the search query. To situate results, we use a random baseline model that assigns a random relevance score to each utterance. (The relatively high scores of the random approach are due to the narrow domain of the evaluation data, Sec. V.)
Looking at the EER and Spearman's \(\rho\) for semantic QbE, we see that the Cluster+Skipgram model achieves the highest score, outperforming the purely phonetic AWEs from the multilingual AWE model. The phonetic multilingual AWE model outperforms the semantic model in \(P@10\) and \(P@N\) because of its proficiency in detecting exact matches (which are also correct semantic matches).
To get a better sense of the ability of a model to retrieve non-verbatim semantic matches, we construct a difficult artificial semantic QbE task where we mask out all exact occurrences of the query word class in the search collection. The results are shown in Table III. Now we see a clear benefit in using the Cluster+Skipgram model, with the phonetic multilingual AWE model becoming close to random search.
Our core goal was semantic QbE, but it is worth briefly touching on the exact QbE performance of the Cluster+Skipgram model in Table II. Although trained for semantics, this model still achieves reasonable exact retrieval performance, with only a drop of between 5% and 10% in scores compared to the multilingual AWE model. It is therefore clear that this semantic model is able to retain phonetic properties while also capturing semantic information related to context.
## VII Conclusion
We presented several semantic AWE modelling strategies. We specifically promoted transferring knowledge from a pre-trained multilingual AWE model trained for word-class discrimination. Our best semantic AWE approach involves a soft clustering on the original multilingual AWEs, serving as input to a Skipgram-like model. Through intrinsic and extrinsic evaluations, we demonstrated the effectiveness of our strategies in learning semantic representations from unlabelled speech data. The main shortcoming of our work (as also in others [25]), is that the word segmentation is assumed to be known. This was reasonable given our goal of comparing different semantic AWE approaches on a sensible benchmark, but future work should look into incorporating unsupervised word segmentation methods [37, 38, 39] in order to do fully unsupervised semantic AWE modelling. |
2303.10170 | Generic axion Maxwell equations: path integral approach | Using the path integral approach, we derive the low energy interactions
between axions and electromagnetic field that arise in models with heavy dyons
charged under a spontaneously broken global axial $U(1)$ symmetry. Hence, we
obtain generic axion-Maxwell equations relevant for experimental searches. We
find that the structure of the axion Maxwell equations could be significantly
different compared to what is normally assumed in the literature, as the
derived equations feature new axion-dependent terms including CP-violating
ones. The new terms can reconcile the Peccei-Quinn solution to the strong CP
problem with astrophysical axion hints, as well as give unique signatures in
light-shining-through-wall and haloscope experiments. Moreover, via the latter
signatures, these experiments can indirectly probe the existence of heavy
dyons. | Anton V. Sokolov, Andreas Ringwald | 2023-03-17T17:59:31Z | http://arxiv.org/abs/2303.10170v2 | # Generic axion Maxwell equations: path integral approach
###### Abstract
Using the path integral approach, we derive the low energy interactions between axions and electromagnetic field that arise in models with heavy dyons charged under a spontaneously broken global axial \(U(1)\) symmetry. Hence, we obtain generic axion-Maxwell equations relevant for experimental searches. We find that the structure of the axion Maxwell equations could be significantly different compared to what is normally assumed in the literature, as the derived equations feature new axion-dependent terms including CP-violating ones. The new terms can reconcile the Peccei-Quinn solution to the strong CP problem with astrophysical axion hints, as well as give unique signatures in light-shining-through-wall and haloscope experiments. Moreover, via the latter signatures, these experiments can indirectly probe the existence of heavy dyons.
**Keywords:** axions, magnetic monopoles, path integral, modified Maxwell equations, dark matter
## 1 Introduction
Axions as hypothetical new particles are well motivated candidates for physics beyond the Standard Model (SM). In particular, axions provide a straightforward solution to the strong CP problem [1; 2; 3; 4] and give a simple explanation for the inferred dark matter abundance and properties [5; 6; 7; 8]. An advantage of the axion hypothesis is that it can be relatively easily probed by various experiments, most of which aim to detect the coupling of axions to the electromagnetic field. Since most experiments focus on the electromagnetic coupling, it is essential to understand how the Maxwell equations change in the presence of axions, i.e. to derive the most general form of the so-called axion Maxwell equations. The widely accepted form of these equations is [9]:
\[\mathbf{\nabla}\!\times\!\mathbf{B}_{a}-\dot{\mathbf{E}}_{a}=-g_{a \gamma\gamma}\left(\mathbf{E}_{0}\!\times\!\mathbf{\nabla}a-\dot{a}\mathbf{B}_{0} \right)\,, \tag{1}\] \[\mathbf{\nabla}\!\times\!\mathbf{E}_{a}+\dot{\mathbf{B}}_{a}=0\,,\] (2) \[\mathbf{\nabla}\!\cdot\!\mathbf{B}_{a}=0\,,\] (3) \[\mathbf{\nabla}\!\cdot\!\mathbf{E}_{a}=-g_{a\gamma\gamma}\,\mathbf{B }_{0}\!\cdot\!\mathbf{\nabla}a\,, \tag{4}\]
where \(\mathbf{E}_{a}\) and \(\mathbf{B}_{a}\) are axion-induced electric and magnetic fields, while \(\mathbf{E}_{0}\) and \(\mathbf{B}_{0}\) are background electric and magnetic fields created in the detector.
It has recently been shown [10] by the authors of this article that the axion Maxwell equations (1)-(4) represent the special case of a more general construction which involves three axion coupling
parameters \(g_{a\text{EE}}\), \(g_{a\text{MM}}\) and \(g_{a\text{EM}}\) instead of the only one \(g_{a\gamma\gamma}\) normally considered:
\[\mathbf{\nabla}\!\times\!\mathbf{B}_{a}-\dot{\mathbf{E}}_{a}=-g_{a \text{EE}}\left(\mathbf{E}_{0}\!\times\!\mathbf{\nabla}\!a-\dot{a}\mathbf{B}_{0} \right)-g_{a\text{EM}}\left(\mathbf{B}_{0}\!\times\!\mathbf{\nabla}\!a+\dot{a} \mathbf{E}_{0}\right)\,, \tag{5}\] \[\mathbf{\nabla}\!\times\!\mathbf{E}_{a}+\dot{\mathbf{B}}_{a}=g_{a \text{MM}}\left(\mathbf{B}_{0}\!\times\!\mathbf{\nabla}\!a+\dot{a}\mathbf{E}_{0} \right)+g_{a\text{EM}}\left(\mathbf{E}_{0}\!\times\!\mathbf{\nabla}\!a-\dot{a} \mathbf{B}_{0}\right)\,,\] (6) \[\mathbf{\nabla}\!\cdot\!\mathbf{B}_{a}=g_{a\text{MM}}\,\mathbf{E}_{0 }\!\cdot\!\mathbf{\nabla}\!a-g_{a\text{EM}}\,\mathbf{B}_{0}\!\cdot\!\mathbf{\nabla}\! a\,,\] (7) \[\mathbf{\nabla}\!\cdot\!\mathbf{E}_{a}=-g_{a\text{EE}}\,\mathbf{B}_{0 }\!\cdot\!\mathbf{\nabla}\!a+g_{a\text{EM}}\,\mathbf{E}_{0}\!\cdot\!\mathbf{\nabla}\! a\,. \tag{8}\]
It was found that such new electromagnetic couplings of axions arise in KSVZ-like models [11; 12] where the new heavy particles charged under the Peccei-Quinn (PQ) \(U(1)_{\text{PQ}}\) group carry magnetic charges [13; 14]. As this construction generalizes the conventional axion models to the case where there exist heavy dyons, we dub these models as dyon-philic. The latter models are well motivated from the theoretical viewpoint as there is no reason to expect a new heavy particle to carry only an electric, but no magnetic charge [15]; moreover, the existence of heavy magnetically charged particles seems to be a necessity stemming from the quantization of the electric charge observed in nature [16; 17; 18].
While in Ref. [10] we focused primarily on explaining the loophole in the previous theoretical works on the axion-photon coupling and investigated the new electromagnetic couplings of axions in the Effective Field Theory (EFT) approach, in this article we would like to give a detailed derivation of the general axion Maxwell equations (5)-(8) in the path integral framework. The path integral approach is very instructive when dealing with the Quantum Field Theory (QFT) that includes both electric and magnetic charges: indeed, the only exhaustive proof of the Lorentz-invariance of such QFT was obtained in the path integral framework [19; 20]. Moreover, in the path integral approach, it is easier to understand how the peculiarities of the corresponding classical theory, such as Dirac vetos, originate and why they do not signal any inconsistencies.
This article is structured as follows: in sec. 2, we review the path integral formulation of the QFT describing the interactions between electric and magnetic charges as well as discuss the general distinctive features of this QFT; in sec. 3, we introduce the dyon-philic axion models in the path integral approach, transform the integral over the heavy dyon field into the integral over the dyon proper time parameter, and calculate the effective low energy Lagrangian that describes the interactions of dyon-philic axions with the electromagnetic field by performing an exact non-perturbative calculation; in sec. 4, we derive the general axion Maxwell equations (5)-(8) and briefly discuss their experimental implications; finally, in sec. 5, we conclude.
## 2 QFT with magnetic charges and its path integral formulation
There exist several equivalent formulations of the QFT with magnetic charges, see e.g. Refs. [21; 22], and the exhaustive review [23]. All these formulations necessarily share a common feature which drastically distinguishes them from the theory of Quantum Electrodynamics (QED). This feature is non-locality: any QFT with both electric (\(e_{i}\)) and magnetic (\(g_{j}\)) charges is actually a theory of the interaction of two-particle irreducible states [24; 25; 26]. Indeed, an asymptotic state of two particles (\(i,j\)) for which \(e_{i}g_{j}-e_{j}g_{i}\neq 0\) constitutes an entangled "pairwise" state. Each such state is characterized by a pairwise helicity variable which corresponds to the asymptotically non-vanishing angular momentum
of the electromagnetic field. Note that the Dirac-Schwinger-Zwanziger (DSZ) quantization condition follows naturally from the quantization of this angular momentum. Asymptotic irreducible two-particle states obviously violate the cluster decomposition principle, which is another way to understand why any QFT with magnetic charges is fundamentally different from QED, for which the cluster decomposition is a well established property (see e.g. Ref. [27], pp. 252-254). While one can introduce magnetic monopoles in QED as external sources, to study the creation or annihilation of real and virtual monopoles one has to work in a substantially different framework.
The QFT of electric and magnetic charges which we choose to work with is Zwanziger theory [22]. Zwanziger's formulation has the advantage of featuring a local Lagrangian and, as a consequence, of being the most extensively studied QFT with magnetic charges. The Lagrangian of the Zwanziger theory is:
\[\mathcal{L}_{Z}\ =\ \frac{1}{2}\left\{[n\!\cdot\!(\partial\wedge B)] \cdot[n\!\cdot\!(\partial\wedge A)^{d}]\ -\ [n\!\cdot\!(\partial\wedge A)]\cdot[n\!\cdot\!(\partial\wedge B)^{d}]\ -\right.\] \[\left.[n\!\cdot\!(\partial\wedge A)]^{2}\ -\ [n\!\cdot\!(\partial\wedge B)]^{2} \right\}\ -\ j_{e}\!\cdot\!A\ -\ j_{m}\!\cdot\!B\;, \tag{1}\]
where \(j_{e}\) and \(j_{m}\) are electric and magnetic currents, respectively; \(A_{\mu}\) and \(B_{\mu}\) are four-potentials; \(n_{\mu}\) is a fixed four-vector (\(n^{2}=1\)). We use the following simplified notations: \(a\cdot b=a_{\mu}b^{\mu}\), \((a\wedge b)^{\mu\nu}=a^{\mu}b^{\nu}-a^{\nu}b^{\mu},\ (a\cdot G)^{\nu}=a_{\mu}G^{\mu\nu}\), and for any tensor \(A_{\mu\nu}\) its Hodge dual is defined as \(A_{\mu\nu}^{d}=\epsilon_{\mu\nu\lambda\rho}A^{\lambda\rho}/2\), where \(\epsilon_{0123}=1\). Note that contrary to the case of QED, the Zwanziger theory features two four-potentials instead of one. Still, the corresponding dynamical system is highly constrained, that is why one has only four phase space degrees of freedom describing the electromagnetic field, similar to QED [22; 28]. The presence of the fixed four-vector \(n_{\mu}\) in the Lagrangian is another very important feature of the Zwanziger theory. This feature is tightly connected to the non-locality property discussed in the previous paragraph, as the components of the field strength tensors \((\partial\wedge A)_{\mu\nu}\) and \((\partial\wedge B)_{\mu\nu}\) corresponding to the two four-potentials differ from the physical electric and magnetic fields by non-local \(n_{\mu}\)-dependent terms:
\[\partial\wedge A =F+(n\!\cdot\!\partial)^{-1}(n\wedge j_{m})^{d}\,, \tag{2}\] \[\partial\wedge B =F^{d}-(n\!\cdot\!\partial)^{-1}(n\wedge j_{e})^{d}\,, \tag{3}\]
where \(F_{\mu\nu}\) is the electromagnetic field strength tensor and \((n\!\cdot\!\partial)^{-1}\) is the integral operator satisfying \(n\cdot\partial\left(n\!\cdot\!\partial\right)^{-1}\!(\vec{x})=\delta(\vec{x})\). The major advantage of the two four-potentials of the Zwanziger theory is their regularity everywhere in space-time, i.e. \(\forall x_{\mu}:[\partial_{\rho},\partial_{\nu}]\,A_{\lambda}(x_{\mu})=[ \partial_{\rho},\partial_{\nu}]\,B_{\lambda}(x_{\mu})=0\). This property is satisfied only due to the field decompositions (2) and (3).
The dependence of the Lagrangian (1) on the fixed four-vector \(n_{\mu}\) means that a special attention should be paid to the Lorentz-invariance of the theory. One can check straight away that the classical equations of motion for the electromagnetic field do not depend on \(n_{\mu}\) and are thus Lorentz-invariant by simply varying the Lagrangian (1) with respect to the four-potentials \(A_{\mu}\) and \(B_{\mu}\) and using Eqs. (2) and (3). One of course obtains the classical Maxwell equations this way. It is a bit more difficult to see that the classical equations of motion for the charged particles do not depend on \(n_{\mu}\). Writing the
currents in terms of point-particle trajectories:
\[j_{e}^{\nu}(x)=\sum_{i}e_{i}\int\delta^{4}(x-x_{i}(\tau_{i}))\,dx_{i }^{\nu}\,, \tag{4}\] \[j_{m}^{\nu}(x)=\sum_{i}g_{i}\int\delta^{4}(x-x_{i}(\tau_{i}))\,dx_ {i}^{\nu}\,, \tag{5}\]
and varying with respect to these trajectories, one obtains:
\[\frac{d}{d\tau_{i}}\bigg{(}\frac{m_{i}u_{i}}{(u_{i}^{2})^{1/2}} \bigg{)}= \Big{(}e_{i}F(x_{i})+g_{i}F^{d}(x_{i})\,\Big{)}\!\cdot\!u_{i}\] \[-\sum_{j}(e_{i}g_{j}-g_{i}e_{j})\,n\!\cdot\!\!\int(n\!\cdot\! \partial)^{-1}(x_{i}-x_{j})\,\left(u_{i}\wedge u_{j}\right)^{d}d\tau_{j}\,. \tag{6}\]
The second term on the right-hand side seems to spoil both the Lorentz-invariance of the classical theory and the agreement with the conventional expression for the Lorentz force. However, it is easy to see that this term does not contribute to the dynamics: the support of the kernel \((n\!\cdot\!\partial)^{-1}(x_{i}-x_{j})\) is given by the condition \(\vec{x}_{i}(\tau)-\vec{x}_{j}(\tau)=\vec{n}s\), which has more equations than free parameters and is thus satisfied only for exceptional trajectories. As such exceptional trajectories form a measure zero subset of all possible trajectories, one can safely omit them in the variational procedure: indeed, the latter procedure originates from calculating the path integral over all the trajectories where no measure zero subset can contribute. One can also rigorously exclude the exceptional trajectories from consideration within the classical theory itself by redefining the action functional as suggested in Ref. [29].
We explained why the classical equations of the Zwanziger theory are \(n_{\mu}\)-independent. It remains to be shown that the full quantum theory shares this property. The respective proof was given in Refs. [19; 20]. We will briefly return to their arguments after presenting the path integral formulation of the theory. Before this, let us note that at each finite order of the formal perturbation theory applied to the Zwanziger theory, the resulting approximation is _not_\(n_{\mu}\)-independent and is therefore ill-defined. This means that the theory is essentially non-perturbative. A very important point is that this non-perturbativity is associated to the \(n_{\mu}\)-dependence and thus to the non-locality feature described earlier, but not necessarily to the presence of a large expansion parameter. Indeed, the effective expansion parameter can be made small for some particular processes [30; 31; 32], however this does not justify applying perturbation theory: the latter is still ill-defined and can give wrong results as explained in Ref. [33].
Zwanziger theory was originally quantized using the canonical formalism, either by adding a special gauge-fixing term [22] or by invoking the full Dirac's method for the quantization of constrained systems [28]. In this work, we are however interested in the path integral approach. A thorough path integral formulation of the Zwanziger theory was given in Ref. [34], developed further in Refs. [19; 20] and refined by using lattice regularization in Ref. [35]. Choosing the gauge-fixing functions to be
\[G_{1}(\alpha)=n\!\cdot\!A-\alpha\,, \tag{7}\] \[G_{2}(\beta)=n\!\cdot\!B-\beta\,, \tag{8}\]
one obtains the following generating functional of the Green's functions of the theory:
\[\mathcal{Z}(\tilde{a}_{\mu},\tilde{b}_{\mu})\;=\;\mathcal{N}\int \prod_{\mu}\mathcal{D}A_{\mu}\mathcal{D}B_{\mu}\prod_{x}\delta(n\!\cdot\!A- \alpha)\,\delta(n\!\cdot\!B-\beta)\exp\bigg{\{}i\int d^{4}x\,\left(\mathcal{L} _{Z}+j_{e}\!\cdot\!\tilde{a}+j_{m}\!\cdot\!\tilde{b}\right)\bigg{\}}\, \tag{9}\]
where \({\cal L}_{Z}\) is the Zwanziger Lagrangian (1); \({\cal N}\) is the normalization factor including the Faddeev-Popov determinant [36] and the determinant associated to the second-class constraints [34], both of which are independent of the fields in this case; \(\tilde{a}_{\mu}\) and \(\tilde{b}_{\mu}\) are arbitrary functions. Note that we omitted the obvious part of the generating functional containing only charged matter fields and the functional integrations over them. Integrating over the parameters \(\alpha\) and \(\beta\) with suitable weights [37], one can as usual trade the gauge-fixing conditions for the gauge-fixing terms in the Lagrangian:
\[{\cal Z}(\tilde{a}_{\mu},\tilde{b}_{\mu})\ =\ {\cal N}\int\prod_{\mu}{\cal D}A_{ \mu}{\cal D}B_{\mu}\exp\left\{i\int d^{4}x\,\left({\cal L}_{Z}+{\cal L}_{G}+j_ {e}\!\cdot\!\tilde{a}+j_{m}\!\cdot\!\tilde{b}\right)\right\}\,, \tag{10}\]
where
\[{\cal L}_{G}\ =\ \frac{1}{2}\left\{\left[\partial\left(n\!\cdot\!A\right) \right]^{2}+\left[\partial\left(n\!\cdot\!B\right)\right]^{2}\right\}\,. \tag{11}\]
The known proof of the Lorentz-invariance of the Zwanziger QFT [19; 20] relies essentially on the path integral representation of the theory. Let us briefly explain the main ideas behind this proof. First of all, to establish the Lorentz-invariance of the theory, it is sufficient to show the Lorentz-invariance of the generating functional (10). Second, it is easy to notice that the Lorentz-invariance in this case is equivalent to \(n_{\mu}\)-independence. Finally and most importantly, one has to remember that the functional integrals over the charged matter fields can be represented as series involving integrals over point-particle trajectories [38; 39; 40; 20]. It then turns out that the only \(n_{\mu}\)-dependence remains in the interactions between particles (real or virtual) of a different electric-magnetic type and that the \(n_{\mu}\)-dependent part basically counts the number of times the trajectory of one particle intersects some oriented \(n_{\mu}\)-dependent three-surface associated to the trajectory of another particle, which is simply an integer but for some exceptional trajectories that form a measure zero subset and can therefore be omitted in the integral over all the trajectories. The part associated to the \(n_{\mu}\)-dependent integer does not contribute to the generating functional after imposing the DSZ quantization condition \(e_{i}g_{j}-e_{j}g_{i}=2\pi m\), \(m\in\mathbb{Z}\) on the charges of all the possible \((i,j)\) pairs of dyons, since in this case the \(n_{\mu}\)-dependent contribution to the action is always equal to \(2\pi k\), \(k\in\mathbb{Z}\) which obviously does not contribute to the path integral (10).
Note that both in the classical case and in the full quantum relativistic case, the \(n_{\mu}\)-independence crucially depends on the point-particle representation of charged matter, as opposed to the usual continuum approximation in field theory where the distribution of charged matter is continuous. In fact, the classical field theory of magnetic charges, i.e. the theory where charges and currents are by definition continuously distributed in space, is always inconsistent, as the Jacobi identity for the gauge covariant derivatives is necessarily violated in this case [20]. While the failure of the continuum approximation seems to be against the usual local-field-theoretic intuition, one has to remember that one of the key features of the QFT with magnetic charges is its non-locality. As \(n_{\mu}\)-vector in the Zwanziger theory is responsible for capturing the non-locality, it is not surprising that the point-particle, as opposed to the continuous, distribution of charge is crucial for \(n_{\mu}\)-independence of the gauge-invariant observables. Finally, let us note that the same feature of non-locality invalidates the conventional decoupling principle applied to the theories with both electric and magnetic charges: a given heavy charged particle cannot be fully integrated out at the energy scales below its mass since it contributes a non-local angular momentum to the electromagnetic field, which is felt by other particles even in the deep infrared (IR), cf. Eqs. (2) and (3). This means that even if all the magnetically charged particles are very heavy, and their effect on the low energy interactions is indirect, the low energy theory describing the
interactions of light electrically charged particles is _not_ given by a QED-like theory, but still by a theory the structure of which is similar to the QFT with magnetic charge. Simply put, in this case, we still need Zwanziger-like (or any equivalent) description of the electromagnetic field even at low energies.
## 3 Electromagnetic interactions of dyon-philic axions
### Outline of the model
Let us now consider the dyon-philic axion models [10; 13; 14] in the path integral framework of the previous section. In these models, similarly to the KSVZ axion model [11; 12], one introduces at least one new heavy vector-like quark \(\psi\) charged under the global \(U(1)_{\rm PQ}\) symmetry, as well as the PQ complex scalar field \(\Phi\) which gives mass to the new quark(s) in the phase where the \(U(1)_{\rm PQ}\) is spontaneously broken due to a non-zero vacuum expectation value \(\langle\Phi\rangle=v_{a}/\sqrt{2}\). A new heavy quark can in general be charged under the electromagnetic \(U(1)_{\rm EM}\) subgroup of the gauge group of the Standard model. Moreover, there is no reason to assume that it carries only an electric, but no magnetic charge, given that the existence of heavy magnetically charged particles is currently regarded as a necessity for obtaining a consistent quantum gravity theory [16; 17; 18]. According to the discussion of the previous section, to describe all the effects of magnetically charged particles, one has to work in the framework of the QFT with magnetic charges. In particular, we chose to work with a particular realization of the QFT with magnetic charges given by the Zwanziger formalism quantized via the path integral methods outlined in the previous section.
The part of the Lagrangian which includes interactions of the new heavy quark \(\psi\) with the electromagnetic field and the PQ field \(\Phi\) is:
\[\mathcal{L}_{\psi,\Phi}\ =\ i\bar{\psi}\gamma^{\mu}D_{\mu}\psi+y\left(\Phi\, \bar{\psi}_{L}\psi_{R}+{\rm h.c.}\,\right)-\lambda_{\Phi}\left(|\Phi|^{2}- \frac{v_{a}^{2}}{2}\right)^{\!\!2}, \tag{10}\]
where \(y\) and \(\lambda_{\Phi}\) are some \(O(1)\) constants and \(D_{\mu}=\partial_{\mu}-ie_{\psi}A_{\mu}-ig_{\psi}B_{\mu}\) with \(e_{\psi}\) and \(g_{\psi}\) being the electric and magnetic charges of \(\psi\), respectively. Let us decompose \(\Phi=(v_{a}+\sigma+ia)/\sqrt{2}\), where \(a\) is a pseudo-Goldstone axion field. In the symmetry broken phase, the field \(\sigma\) gets a mass \(m_{\sigma}\sim v_{a}\) and decouples from the physics of low energy processes, i.e. the processes for which the square of the center-of-mass energy \(s\ll v_{a}^{2}\). Assuming \(v_{a}\) is sufficiently large, which is indicated by experimental results and cosmology, the field \(\sigma\) is then irrelevant for experiments. The light axion field \(a\), on the contrary, can be probed by low energy experiments, and in this work we are interested in its electromagnetic interactions mediated by the field \(\psi\). The relevant part of the Lagrangian in the symmetry broken phase can then be written as follows:
\[\mathcal{L}_{\psi}\ =\ i\bar{\psi}\gamma^{\mu}D_{\mu}\psi+\frac{yv_{a}}{\sqrt{2}} \,\bar{\psi}\psi+\frac{iy}{\sqrt{2}}\,a\bar{\psi}\gamma_{5}\psi\,. \tag{11}\]
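As a quick, purely illustrative cross-check of the statement that the radial mode \(\sigma\) acquires a mass of order \(v_{a}\) while the axion \(a\) stays massless at tree level, one can expand the potential of Eq. (10) around the vacuum; the sketch below uses our own symbol names and is not part of the derivation.

```python
# Illustrative sketch (our own symbol names): tree-level masses obtained from the
# potential lambda_Phi * (|Phi|^2 - v_a^2/2)^2 of Eq. (10), with Phi = (v_a + sigma + i*a)/sqrt(2).
import sympy as sp

sigma, a, v_a, lam = sp.symbols('sigma a v_a lambda_Phi', real=True)

Phi = (v_a + sigma + sp.I * a) / sp.sqrt(2)
V = lam * (sp.expand(Phi * sp.conjugate(Phi)) - v_a**2 / 2)**2

m2_sigma = sp.simplify(sp.diff(V, sigma, 2).subs({sigma: 0, a: 0}))
m2_a = sp.simplify(sp.diff(V, a, 2).subs({sigma: 0, a: 0}))

print(m2_sigma)   # 2*lambda_Phi*v_a**2, i.e. m_sigma ~ v_a
print(m2_a)       # 0: the axion is massless at tree level
```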
Using the equations (10) and (11) from the previous section as well as the Lagrangian for the heavy field \(\psi\) given by Eq. (11), one can now write the generating functional of Green's functions of the theory in the symmetry broken phase:
\[\mathcal{Z}(\tilde{a}_{\mu},\tilde{b}_{\mu})\ =\ \mathcal{N}\int\prod_{\alpha,\mu} \mathcal{D}\psi_{\alpha}\mathcal{D}\bar{\psi}_{\alpha}\mathcal{D}A_{\mu} \mathcal{D}B_{\mu}\exp\left\{i\int d^{4}x\,\left(\mathcal{L}_{Z}+\mathcal{L} _{G}+\mathcal{L}_{\psi}+j_{e}\!\cdot\!\tilde{a}+j_{m}\!\cdot\!\tilde{b}\right) \right\}\,, \tag{12}\]
where the parts associated with the axion field alone, i.e. the kinetic energy and self-interaction, as well as the functional integration over the axion field, are omitted for the sake of brevity; \(j_{e}\) and \(j_{m}\) are the currents of any light charged particles that are used in our axion detectors. We also omitted the functional integration over the corresponding light fermion fields as well as the kinetic terms associated with them. Practically, one has of course \(j_{m}=0\).
### Proper time representation of the heavy fermion path integral
Using the generating functional (17), we would like to derive the low energy (\(s\ll v_{a}^{2}\)) interactions of axions with the electromagnetic field sourced by light charged particles. The calculation of the functional integral over \(\psi\) yields:
\[\mathcal{N}\int\prod_{\alpha}\mathcal{D}\psi_{\alpha}\mathcal{D} \bar{\psi}_{\alpha}\exp\left(i\!\int\!d^{4}x\,\mathcal{L}_{\psi}\right)\;=\; \exp\left\{\mathrm{Tr}\ln\left(i\!\not{D}+m+\frac{iy}{\sqrt{2}}\,a\gamma_{5}+i \epsilon\right)\;-\right.\] \[\left.\mathrm{Tr}\ln\!\left(i\!\not{\partial}+m+i\epsilon\right) \right\}\;=\;\exp\!\left\{\mathrm{Tr}\ln\left(i\!\not{D}+m+\frac{iy}{\sqrt{2} }\,a\gamma_{5}+i\epsilon\right)\;-\mathrm{Tr}\ln\!\left(i\!\not{D}+m+i \epsilon\right)\right\}\times\] \[\exp\!\left\{\mathrm{Tr}\ln\!\left(i\!\not{D}+m+i\epsilon\right) \;-\mathrm{Tr}\ln\!\left(i\!\not{\partial}+m+i\epsilon\right)\right\}. \tag{18}\]
where \(m\equiv yv_{a}/\sqrt{2}\), the traces are over all the possible states, and we normalized the integral by its value for the free fermion. We are interested in the first exponent on the right-hand side of Eq. (18) since it describes interactions involving the axion field. We will transform the expression under this exponent by introducing the Schwinger proper time parameter. We choose the basis of states to be represented by position eigenstates, and sum over the spinor indices. Besides, as we are interested in low energy dynamics of axion, we omit the terms containing \(\partial_{\mu}a\), as these terms are suppressed by \(\omega_{a}/m\) and \(|\mathbf{k}_{a}|/m\) compared to the others, where \(\omega_{a}\) and \(\mathbf{k}_{a}\) are the energy and momentum of the axion field, respectively. After we introduce the integration over the parameter \(y\), the low energy approximation gives:
\[\mathrm{Tr}\ln\left(i\!\not{D}+m+\frac{iy}{\sqrt{2}}\,a\gamma_{5 }+i\epsilon\right)\;-\mathrm{Tr}\ln\!\left(i\!\not{D}+m+i\epsilon\right)\;=\] \[\mathrm{tr}_{\gamma}\int d^{4}x\,\int\limits_{0}^{y/\sqrt{2}}d \tilde{y}\;\left\langle x\,\bigg{|}\,\frac{ia\gamma_{5}}{i\!\not{D}+m+i \tilde{y}\,a\gamma_{5}+i\epsilon}\,\bigg{|}\,x\right\rangle\,, \tag{19}\]
where \(\mathrm{tr}_{\gamma}\) denotes the trace over spinor indices, and the order of the Dirac matrices is unambiguous due to the trace operator. The resulting integral depends on the following dimensionful parameters: the axion field \(a\), the electromagnetic field \([D_{\mu},D_{\nu}]\) and the mass \(m\) of the heavy fermion. Taking into account gauge and Lorentz symmetries, it is clear that any terms describing the interaction of axions with the electromagnetic field are suppressed by some powers of \(m\), and that the dominant terms are linear in the axion field \(a\). This allows us to keep track only of the terms linear in \(a\) in the low energy approximation. Working in the latter approximation, we rationalize the Dirac operator and introduce
the integration over the Schwinger proper time parameter as follows:
\[\mathrm{tr}_{\gamma}\int d^{4}x\,\int\limits_{0}^{y/\sqrt{2}}d\tilde{y}\,\left\langle x\,\bigg{|}\,\frac{ia\gamma_{5}}{i\not{D}+m+i\tilde{y}\,a\gamma_{5}+i\epsilon}\,\bigg{|}\,x\right\rangle = \frac{1}{2}\,\mathrm{tr}_{\gamma}\int d^{4}x\,\int\limits_{0}^{y/\sqrt{2}}d\tilde{y}\int\limits_{0}^{\infty}d\tau\,ia\gamma_{5}\times\] \[\left\langle x\right|(i\not{D}-m)\,e^{-i\tau(\not{D}^{2}+m^{2})/2}|x\rangle = -\mathrm{tr}_{\gamma}\int d^{4}x\,\frac{iym}{2\sqrt{2}}\,a\gamma_{5}\int\limits_{0}^{\infty}d\tau\,\langle x|e^{-i\tau(\not{D}^{2}+m^{2})/2}|x\rangle\,. \tag{11}\]
### Role of the non-local terms
The proper time integral on the right-hand side of Eq. (11) can be calculated for certain electromagnetic field configurations, including a constant homogeneous field, using the Schwinger method [40]. Since we are interested in dynamics at low energies, the scale of the variation of the electromagnetic field is negligible compared to the mass \(m\) of the heavy fermion, so that the field can indeed be considered constant and homogeneous. However, due to the non-local terms in Eqs. (2) and (3), the constant homogeneous electromagnetic field is not automatically synonymous with constant homogeneous \([D_{\mu},D_{\nu}]\). To find out how to deal with the non-local terms, we will rewrite the functional integral Eq. (10) in terms of the integrals over classical particle paths, as first suggested in Refs. [19; 20]. The matrix element in the integrand on the right-hand side of Eq. (11) corresponds to the following integral over trajectories:
\[\left\langle x|e^{-i\tau(\not{D}^{2}+m^{2})/2}|x\right\rangle\;= \;e^{-i\tau m^{2}/2}\int\limits_{z(0)=x}^{z(\tau)=x}\mathcal{D}z(\tau)\,T\exp \Biggl{\{}-i\int\limits_{0}^{\tau}d\tau^{\prime}\left(\frac{\dot{z}^{2}}{2}+e _{\psi}A\!\cdot\!\dot{z}+g_{\psi}B\!\cdot\!\dot{z}+\right.\] \[\left.\frac{1}{4}\,\gamma^{\mu}\gamma^{\nu}\left[D_{\mu},D_{\nu} \right]\right)\Biggr{\}}\;=\;e^{-i\tau m^{2}/2}\int\limits_{z(0)=x}^{z(\tau)= x}\mathcal{D}z(\tau)\,\exp\Biggl{\{}-i\int\limits_{0}^{\tau}d\tau^{\prime} \left(\frac{\dot{z}^{2}}{2}+e_{\psi}A\!\cdot\!\dot{z}+g_{\psi}B\!\cdot\!\dot{z} \right)\Biggr{\}}\times\] \[\int d\Gamma(\tau)\,c(\tau)\otimes c^{*}(0)\exp\Biggl{\{}-\frac{i }{4}\int\limits_{0}^{\tau}d\tau^{\prime}\sigma_{c}^{\mu\nu}\left[D_{\mu},D_{ \nu}\right]\Biggr{\}}\,, \tag{12}\]
where
\[d\Gamma(\tau)\;=\;\prod\limits_{\tau^{\prime}}\left(\frac{dc^{*} dc}{2\pi i}\right)\exp\Biggl{\{}-c^{*}(\tau)\cdot c(\tau)-\int\limits_{0}^{ \tau}d\tau^{\prime}c^{*}(\tau^{\prime})\cdot\dot{c}(\tau^{\prime})\Biggr{\}}\,, \tag{13}\] \[\sigma_{c}^{\mu\nu}=c^{*}\,\frac{1}{2}\left[\gamma^{\mu},\gamma^ {\nu}\right]c\,, \tag{14}\]
\(c_{i}\) and \(c_{i}^{*}\) are spinor variables of integration. In the exponents on the right-hand side of Eq. (12), one recognizes the action for the motion of a charged particle with some electric and magnetic dipole moments in the field \([D_{\mu},D_{\nu}]\):
\[S_{\psi}\;=\;\int\limits_{0}^{\tau}d\tau^{\prime}\left(\frac{\dot{z}^{2}}{2}+e _{\psi}A\!\cdot\!\dot{z}+g_{\psi}B\!\cdot\!\dot{z}-\frac{1}{4}\,\sigma_{c}^{ \mu\nu}\left[D_{\mu},D_{\nu}\right]\right)\,. \tag{15}\]
Let us now show that the non-local terms from Eqs. (2) and (3), which arise whenever one rewrites \([D_{\mu},D_{\nu}]\) in terms of physical electric and magnetic fields, contribute nothing more than an additional \(2\pi N\) (\(N\in\mathbb{Z}\)) term to the action (25). Such an additional term of course does not contribute to the dynamics of the system, since \(\exp{(-2\pi Ni)}=1\) in the expression (13) and thus there is no change to the functional integrals (11) and (12).
We start by considering the second and the third terms in the action (25). As one can see from Eq. (13), \(z(0)=z(\tau)\), which allows us to transform the integral using Stokes' theorem:
\[\oint{(e_{\psi}A+g_{\psi}B)\cdot dz}=\int\limits_{\Sigma_{\psi}}d\Sigma^{\mu \nu}\left(e_{\psi}(\partial\wedge A)_{\mu\nu}+g_{\psi}(\partial\wedge B)_{\mu \nu}\right), \tag{26}\]
where the integral on the right-hand side is taken over any surface \(\Sigma_{\psi}\) enclosed by the loop \(z(\tau)\). Next, we use Eqs. (2) and (3) to single out the non-local terms in the integrand. The integral over these terms is given by the following expression:
\[\int\limits_{\Sigma_{\psi}}d\Sigma^{\mu\nu}\ (n\!\cdot\!\partial)^{-1}\left(e_{ \psi}(n\wedge j_{m})^{d}_{\mu\nu}-g_{\psi}(n\wedge j_{e})^{d}_{\mu\nu}\right), \tag{27}\]
where we took into account that the non-local terms featuring the currents associated to the heavy fermion \(\psi\) itself cancel each other. Let us use an antisymmetric representation for the kernel of the \((n\!\cdot\!\partial)^{-1}\) operator [41; 22]:
\[(n\!\cdot\!\partial)^{-1}(x)=\frac{1}{2}\int\limits_{0}^{\infty}dv\left(\delta ^{4}(x-nv)-\delta^{4}(x+nv)\right). \tag{28}\]
As in this work we are interested in the axion Maxwell equations, which are used to describe the behaviour of the axion in classical electromagnetic fields, we assume that the currents of light charged particles \(j_{e}\) (and hypothetically \(j_{m}\)) creating and probing these fields in the axion detector are given by the classical expressions (4) and (5).2 Using Eqs. (4), (5) and (28), we rewrite the integral Eq. (27) as follows:
Footnote 2: This assumption can in fact be lifted: the results of this section hold for the fully quantum currents as well, since one can always convert the functional integrals over the charged fermion fields into series involving integrals over point-particle trajectories [38; 39; 40; 20], so that the currents in Eq. (27) are represented by their classical counterparts (4) and (5).
\[\int\limits_{\Sigma_{\psi}}d\Sigma^{\mu\nu}\ (n\!\cdot\! \partial)^{-1}\left(e_{\psi}(n\!\wedge\!j_{m})^{d}_{\mu\nu}-g_{\psi}(n\wedge j _{e})^{d}_{\mu\nu}\right)\ =\] \[\sum_{i}\left(e_{\psi}g_{i}-g_{\psi}e_{i}\right)\int\limits_{ \Sigma_{\psi}^{d}}d\Sigma_{\mu\nu}^{d}\int dx_{i}^{\nu}\,n^{\mu}\int\limits_{0 }^{\infty}dv\left(\delta^{4}(x-x_{i}-nv)-\delta^{4}(x-x_{i}+nv)\right)\ =\] \[\sum_{i}2\pi m_{i}\int\limits_{\Sigma_{\psi}^{d}}d\Sigma_{\mu\nu} ^{d}\int dx_{i}^{\nu}\,n^{\mu}\int\limits_{0}^{\infty}dv\left(\delta^{4}(x-x_ {i}-nv)-\delta^{4}(x-x_{i}+nv)\right),\ \ m_{i}\in\mathbb{Z}\,, \tag{29}\]
where in the last step, we used the DSZ quantization condition. The integral on the right-hand side of Eq. (29) counts the number of times the trajectory of the \(i\)th light charged particle intersects the
oriented three-surface \(\Sigma_{\psi}^{d}\!\times\!\pm nv\), \(0\leq v<\infty\). This number always equals some integer \(L_{i}\in\mathbb{Z}\) except for the measure zero subset of trajectories which are locally tangent to the latter three-surface. The measure zero subset does not contribute to the path integral (put another way, the trajectory of a particle can never be known with an infinite accuracy and thus one can always slightly modify the definition of the action functional, using the method outlined in Ref. [29], so that a given trajectory is no longer tangent to \(\Sigma_{\psi}^{d}\!\times\!\pm nv\)). Therefore, we obtain the following result:
\[\int\limits_{\Sigma_{\psi}}d\Sigma^{\mu\nu}\left(e_{\psi}(\partial\!\wedge\!A)_ {\mu\nu}+g_{\psi}(\partial\!\wedge\!B)_{\mu\nu}\right)\;=\;\int\limits_{\Sigma _{\psi}}d\Sigma^{\mu\nu}\left(e_{\psi}F_{\mu\nu}+g_{\psi}F_{\mu\nu}^{d}\right)+ 2\pi N\,,\;\;N\in\mathbb{Z}\,. \tag{3.15}\]
From Eq. (3.15), we see that the non-local parts of the second and third terms of the action (3.10) do not contribute to the path integral (3.7). Let us now show that the same holds for the last term of this action as well. We use
\[[D_{\mu},D_{\nu}]=-i\Big{(}e_{\psi}\left(\partial\!\wedge\!A\right)_{\mu\nu}+ g_{\psi}\left(\partial\!\wedge\!B\right)_{\mu\nu}\Big{)}\,, \tag{3.16}\]
and Eqs. (2.2), (2.3) to single out the non-local contribution to the integrand:
\[-\frac{1}{4}\int\limits_{0}^{\tau}d\tau^{\prime}\,\sigma_{c}^{\mu \nu}\left[D_{\mu},D_{\nu}\right]\;=\;\frac{i}{4}\int\limits_{0}^{\tau}d\tau^{ \prime}\,\sigma_{c}^{\mu\nu}\left(e_{\psi}F_{\mu\nu}+g_{\psi}F_{\mu\nu}^{d} \right)\;+\] \[\qquad\qquad\qquad\qquad\frac{i}{4}\int\limits_{0}^{\tau}d\tau^{ \prime}\,\sigma_{c}^{\mu\nu}\left(n\!\cdot\!\partial\right)^{-1}\Big{(}e_{ \psi}(n\!\wedge\!j_{m})_{\mu\nu}^{d}-g_{\psi}(n\!\wedge\!j_{e})_{\mu\nu}^{d} \Big{)}\,. \tag{3.17}\]
Using Eqs. (2.4), (2.5) and (3.13), we obtain for the non-local term:
\[\frac{i}{4}\int\limits_{0}^{\tau}d\tau^{\prime}\,\sigma_{c}^{\mu \nu}\left(n\!\cdot\!\partial\right)^{-1}\Big{(}e_{\psi}(n\!\wedge\!j_{m})_{\mu \nu}^{d}-g_{\psi}(n\!\wedge\!j_{e})_{\mu\nu}^{d}\Big{)}\;=\] \[\quad\frac{i}{4}\,\sum\limits_{i}\left(e_{\psi}g_{i}-g_{\psi}e_{ i}\right)\int\limits_{0}^{\tau}d\tau^{\prime}\,\sigma_{c\,\mu\nu}^{d}\,n^{\mu} \int dx_{i}^{\nu}\int\limits_{0}^{\infty}dv\left(\delta^{4}\big{(}z(\tau^{ \prime})-x_{i}-nv\big{)}-\delta^{4}\big{(}z(\tau^{\prime})-x_{i}+nv\big{)} \right). \tag{3.18}\]
The latter expression is non-zero only if a given trajectory \(z(\tau)\) hits any of the strings \(x_{i}\!\pm nv\,,\;0\leq v<\infty\), emanating from the charged particles. The set of all such trajectories is of measure zero in the space of all the possible trajectories, over which we integrate in Eq. (3.7). Thus, the non-local term (3.18) does not contribute to the path integral and therefore can be omitted while calculating the matrix element (3.7).
### Integration over the heavy fermion intermediate state
We have just shown that the field \([D_{\mu},D_{\nu}]\) entering Eq. (3.7) can be redefined by continuity, as the non-local string-like terms have zero contribution to the matrix element we are interested in. As discussed before, the low energy approximation (\(s\ll v_{a}^{2}\)) allows us to treat the latter field as constant and homogeneous, in which case the matrix element (3.7) can be calculated exactly using the Schwinger method [40]. For this, we introduce the effective Hamiltonian
\[\mathcal{H}=\not{D}^{2}/2=-(\not{p}-e_{\psi}\not{A}-g_{\psi}\not{B})^{2}/2\,, \tag{3.19}\]
and solve the Heisenberg equations of motion in a constant field \(C_{\mu\nu}\equiv\left([D_{\mu},D_{\nu}]\right)_{c}\), where the subscript \(c\) means that the field \([D_{\mu},D_{\nu}]\) is redefined by continuity. The further calculation closely follows the one performed by Schwinger in Ref. [40], apart from some numerical factors, and we obtain the following result:
\[\langle x|e^{-i\tau p_{c}^{2}/2}|x\rangle\ =\ -\frac{i}{4\pi^{2}}\,\frac{\text{pf} \left(C_{\alpha\beta}/2\right)}{\text{pf}\sinh\left(\tau C_{\alpha\beta}/2 \right)}\cdot\exp\left(-\frac{i\tau}{4}\sigma_{\mu\nu}C^{\mu\nu}\right), \tag{3.20}\]
see also Ref. [13], where we considered a more general case of a non-Abelian monopole. Using the result for the matrix element Eq. (3.20), we can now calculate the trace and the proper time integral in Eq. (3.6):
\[-\text{tr}_{\gamma}\int d^{4}x\,\frac{iym}{2\sqrt{2}}\,a\gamma_{5 }\int\limits_{0}^{\infty}d\tau\,\langle x|e^{-i\tau({\not{D}_{c}}^{2}+m^{2})/ 2}|x\rangle\ =\ -\frac{ym}{8\pi^{2}\sqrt{2}}\int d^{4}x\,a\int\limits_{0}^{\infty}d\tau\,e^ {-i\tau m^{2}/2}\times\\ \frac{\text{pf}\left(C_{\alpha\beta}/2\right)}{\text{pf}\sinh \left(\tau C_{\alpha\beta}/2\right)}\cdot\text{tr}_{\gamma}\,\gamma_{5}\exp \left(-\frac{i\tau}{4}\sigma_{\mu\nu}C^{\mu\nu}\right)\ =\ -\frac{ym}{64\pi^{2}\sqrt{2}}\int d^{4}x\,a\, \epsilon_{\mu\nu\lambda\rho}C^{\mu\nu}C^{\lambda\rho}\int\limits_{0}^{\infty} d\tau\,e^{-i\tau m^{2}/2}\ =\\ \frac{iy}{32\pi^{2}\sqrt{2}m}\int d^{4}x\,a\,\epsilon_{\mu\nu \lambda\rho}C^{\mu\nu}C^{\lambda\rho}\ =\ \frac{i}{16\pi^{2}v_{a}}\int d^{4}x\,a\,C^{\mu\nu}C^{d}_{\mu\nu}\;, \tag{3.21}\]
where we used the following identity which holds for any skew-symmetric four-by-four matrix:
\[\text{tr}_{\gamma}\,\gamma_{5}\exp\left(-\frac{i\tau}{4}\sigma_{\mu\nu}C^{\mu \nu}\right)\ =\ 4\,\text{pf}\sinh\left(\tau C_{\alpha\beta}/2\right), \tag{3.22}\]
as well as the expression for the Pfaffian of such a matrix: \(\text{pf}\,C_{\alpha\beta}=\epsilon_{\mu\nu\lambda\rho}C^{\mu\nu}C^{\lambda \rho}/8\). We then rewrite the result of Eq. (3.21) in terms of the four-potentials:
\[\frac{i}{16\pi^{2}v_{a}}\int d^{4}x\,a\,C^{\mu\nu}C^{d}_{\mu\nu} \ =\ \frac{i}{16\pi^{2}v_{a}}\int d^{4}x\,a\left([D^{\mu},D^{\nu}]\right)_{c} \left([D_{\mu},D_{\nu}]\right)^{d}_{c}\ =\\ -\frac{i}{16\pi^{2}v_{a}}\int d^{4}x\,a\left(e^{2}_{\psi}\left( \partial\wedge A\right)^{\mu\nu}_{c}\left(\partial\wedge A\right)^{d}_{c\,\mu \nu}+g^{2}_{\psi}\left(\partial\wedge B\right)^{\mu\nu}_{c}\left(\partial \wedge B\right)^{d}_{c\,\mu\nu}+2\,e_{\psi}g_{\psi}\left(\partial\wedge A \right)^{\mu\nu}_{c}\left(\partial\wedge B\right)^{d}_{c\,\mu\nu}\right). \tag{3.23}\]
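As a purely numerical sanity check of the algebraic relations used in the last step of Eq. (3.21), namely \(\text{pf}\,C_{\alpha\beta}=\epsilon_{\mu\nu\lambda\rho}C^{\mu\nu}C^{\lambda\rho}/8\) and \(C^{\mu\nu}C^{d}_{\mu\nu}=4\,\text{pf}\,C\), one can run the short sketch below; the random matrix and the code itself are only illustrative, and index placement is handled purely numerically (no metric factors are needed for these two identities).

```python
# Illustrative numerical check of the identities used in the last step of Eq. (3.21):
#   pf C = eps_{mu nu la rho} C^{mu nu} C^{la rho} / 8   and   C^{mu nu} C^d_{mu nu} = 4 pf C,
# for a random antisymmetric 4x4 matrix C.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Levi-Civita symbol in four dimensions
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    sign, p = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign          # flip once per inversion
    eps[perm] = sign

A = rng.normal(size=(4, 4))
C = A - A.T                                  # antisymmetric "field strength"
Cd = 0.5 * np.einsum('mnlr,lr->mn', eps, C)  # dual: C^d_{mu nu} = (1/2) eps C

pf_C = C[0, 1] * C[2, 3] - C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
eCC = np.einsum('mnlr,mn,lr->', eps, C, C)

print(np.isclose(pf_C, eCC / 8))                          # pf C = eps C C / 8
print(np.isclose(np.einsum('mn,mn->', C, Cd), 4 * pf_C))  # C . C^d = 4 pf C = eps C C / 2
```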
Note that in the low energy approximation, the contribution from the second exponent in Eq. (3.4) is an analog of the Euler-Heisenberg Lagrangian, as it describes the influence of the heavy charged particle on the low energy electromagnetic field. This contribution is known to be suppressed by integer powers of the small parameter \(s^{2}/m^{4}\)[42]. Thus, the leading order term in the effective Lagrangian stemming from the integration over the heavy fermion is given by Eq. (3.23). This means that in the low energy approximation, the result for the integral over \(\psi\) is:
\[\mathcal{N}\int\prod_{\alpha}\mathcal{D}\psi_{\alpha}\mathcal{D}\bar{\psi}_{\alpha}\exp\left(i\!\int\!d^{4}x\,\mathcal{L}_{\psi}\right)\ =\\ \exp\left\{\frac{i}{16\pi^{2}v_{a}}\int d^{4}x\,a\,\text{Tr}\!\left(e^{2}_{\psi}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge A\right)^{d}_{c}+g^{2}_{\psi}\left(\partial\wedge B\right)_{c}\!\left(\partial\wedge B\right)^{d}_{c}+2e_{\psi}g_{\psi}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge B\right)^{d}_{c}\right)\right\}\;. \tag{3.24}\]
## 4 Axion Maxwell equations
### Derivation of the axion Maxwell equations
In the previous section, we found the effective Lagrangian describing low energy interactions between axion and electromagnetic field:
\[\mathcal{L}_{\text{aEM}}\;=\;\frac{1}{16\pi^{2}v_{a}}\,a\,\text{Tr}\Big{(}e_{\psi}^{2}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge A\right)_{c}^{d}+g_{\psi}^{2}\left(\partial\wedge B\right)_{c}\!\left(\partial\wedge B\right)_{c}^{d}+2e_{\psi}g_{\psi}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge B\right)_{c}^{d}\Big{)}\,. \tag{4.1}\]
An important point is that in such low energy description, the non-local string-like parts should be excluded from tensors \(\left(\partial\wedge A\right)\) and \(\left(\partial\wedge B\right)\), as we found out previously using the full path integral formulation. In this way, the decoupling principle fails: the presence of a heavy fermion intermediate state is felt in the IR through a continuity prescription for the fields, i.e. the fermion cannot be fully integrated out. Of course, this is not unexpected as the theory is essentially non-local. For instance, the non-local string-like terms in Eqs. (2) and (3) clearly show that, no matter how heavy the charged particle is, it leaves its imprint on the long range electromagnetic field, see also the discussion at the end of sec. 2.
The classical equations of motion are obtained by varying the full Lagrangian \(\mathcal{L}=\mathcal{L}_{Z}+\mathcal{L}_{G}+\mathcal{L}_{\text{aEM}}\) derived from the path integral (11):
\[\frac{n\cdot\partial}{n^{2}}\left(n\cdot\partial A^{\mu}\;-\;\partial^{\mu}n\cdot A\;-\;n^{\mu}\partial\cdot A\;-\;\epsilon^{\mu}_{\nu\rho\sigma}n^{\nu}\partial^{\rho}B^{\sigma}\right)\;+\] \[\frac{e_{\psi}^{2}}{4\pi^{2}v_{a}}\,\partial_{\nu}\Big{\{}a\left(\partial\wedge A\right)_{c}^{d}\Big{\}}^{\nu\mu}\;+\;\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}}\,\partial_{\nu}\Big{\{}a\left(\partial\wedge B\right)_{c}^{d}\Big{\}}^{\nu\mu}\;=\;j_{e}^{\,\mu}\;, \tag{4.2}\] \[\frac{n\cdot\partial}{n^{2}}\left(n\cdot\partial B^{\mu}\;-\;\partial^{\mu}n\cdot B\;-\;n^{\mu}\partial\cdot B\;-\;\epsilon^{\mu}_{\nu\rho\sigma}n^{\nu}\partial^{\rho}A^{\sigma}\right)\;+\] \[\frac{g_{\psi}^{2}}{4\pi^{2}v_{a}}\,\partial_{\nu}\Big{\{}a\left(\partial\wedge B\right)_{c}^{d}\Big{\}}^{\nu\mu}\;+\;\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}}\,\partial_{\nu}\Big{\{}a\left(\partial\wedge A\right)_{c}^{d}\Big{\}}^{\nu\mu}\;=\;j_{m}^{\,\mu}\;, \tag{4.3}\] \[\left(\partial^{2}+m_{a}^{2}\right)a\;=\;\frac{1}{16\pi^{2}v_{a}}\,\text{Tr}\Big{(}e_{\psi}^{2}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge A\right)_{c}^{d}\;+\] \[g_{\psi}^{2}\left(\partial\wedge B\right)_{c}\!\left(\partial\wedge B\right)_{c}^{d}\;+\;2\,e_{\psi}g_{\psi}\left(\partial\wedge A\right)_{c}\!\left(\partial\wedge B\right)_{c}^{d}\Big{)}\,. \tag{4.4}\]
where we also took into account the kinetic and mass terms for the axion which were omitted in the previous equations. The first rows in the Eqs. (4.2) and (4.3) are standard for the Zwanziger theory and simply convert to the well-known expressions \(\partial^{\nu}F_{\nu\mu}\) and \(\partial^{\nu}F_{\nu\mu}^{d}\) respectively, after one takes advantage of the Eqs. (2), (3) and the gauge-fixing conditions. Eqs. (2) and (3) also imply \(\left(\partial\wedge A\right)_{c}^{d}=\left(\partial\wedge B\right)_{c}=F^{d}\) and \(\left(\partial\wedge A\right)_{c}=-\left(\partial\wedge B\right)_{c}^{d}=F\).
It is important that the low energy approximation we used to derive the effective Lagrangian (4.1) works only for sufficiently weak fields \(F\). On the other hand, as we mentioned at the end of sec. 2, the classical field theory featuring magnetic charges is inconsistent for a continuous distribution of charges and currents due to the violation of the Jacobi identity for the gauge covariant derivatives. Therefore, the classical charges must be modeled as point-like, which means that the classical electromagnetic field \(F\) necessarily becomes large in the neighbourhood of each of the charges. Thus, the only way to keep
the classical weak field approximation consistent is to restrict the support of the axion-dependent terms in Eqs. (4.2) and (4.3) so that it excludes the short-distance neighbourhoods of the charges. Note that such a prescription is no more than a mathematical formality: the low energy experiments for which the classical Eqs. (4.2)-(4.4) are written probe only long-distance physics, anyway. Importantly, however, the latter prescription implies \(\partial^{\mu}\left(\partial\wedge A\right)^{d}_{c\,\mu\nu}=\partial^{\mu}\left(\partial\wedge B\right)^{d}_{c\,\mu\nu}=O(\sqrt{s}/v_{a})\) in Eqs. (4.2) and (4.3). As the terms featuring these derivatives are multiplied by another power of \(v_{a}^{-1}\), we can safely omit them in our low energy approximation.
We can now rewrite Eqs. (4.2)-(4.4) in terms of the electromagnetic field strength tensor \(F\):
\[\partial_{\mu}F^{\mu\nu}+\frac{e_{\psi}^{2}}{4\pi^{2}v_{a}}\, \partial_{\mu}a\,F_{r}^{d\,\mu\nu}-\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}}\, \partial_{\mu}a\,F_{r}^{\mu\nu}\;=\;j_{e}^{\,\nu}\,, \tag{4.5}\] \[\partial_{\mu}F^{d\,\mu\nu}-\frac{g_{\psi}^{2}}{4\pi^{2}v_{a}}\, \partial_{\mu}a\,F_{r}^{\mu\nu}+\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}}\, \partial_{\mu}a\,F_{r}^{d\,\mu\nu}\;=\;j_{m}^{\,\nu}\,,\] (4.6) \[\left(\partial^{2}+m_{a}^{2}\right)a\;=\;-\frac{1}{16\pi^{2}v_{a} }\left(\left(e_{\psi}^{2}-g_{\psi}^{2}\right)F_{r}^{\mu\nu}F_{r\,\mu\nu}^{d}\; -\;2\,e_{\psi}g_{\psi}F_{r}^{\mu\nu}F_{r\,\mu\nu}\right). \tag{4.7}\]
where the subscript \(r\) denotes the restriction of the support discussed in the previous paragraph. Note that the axion Maxwell equations (4.5) and (4.6), representing physics to order \(O(\sqrt{s}/v_{a})\), are fully consistent with the conservation of the electric and magnetic currents to this order: \(\partial_{\mu}j_{e}^{\mu}=\partial_{\mu}j_{m}^{\mu}=O(s/v_{a}^{2})\). As the Eqs. (4.5) and (4.6) are linear in \(F\), it is convenient to decompose the electromagnetic field \(F\) into the terms that are zeroth (\(F_{0}\)) and first (\(F_{a}\)) order in the small parameter \(\sqrt{s}/v_{a}\). The zeroth order equations are then simply the usual Maxwell equations with magnetic charges, \(\partial_{\mu}F_{0}^{\mu\nu}=j_{e}^{\nu}\) and \(\partial_{\mu}F_{0}^{d\,\mu\nu}=j_{m}^{\nu}\), while the first order equations describe the interaction of the external field \(F_{0\,r}\) with axions:
\[\partial_{\mu}F_{a}^{\mu\nu}+\frac{e_{\psi}^{2}}{4\pi^{2}v_{a}} \,\partial_{\mu}a\,F_{0\,r}^{d\,\mu\nu}-\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}} \,\partial_{\mu}a\,F_{0\,r}^{\mu\nu}\;=\;0\,, \tag{4.8}\] \[\partial_{\mu}F_{a}^{d\,\mu\nu}-\frac{g_{\psi}^{2}}{4\pi^{2}v_{a} }\,\partial_{\mu}a\,F_{0\,r}^{\mu\nu}+\frac{e_{\psi}g_{\psi}}{4\pi^{2}v_{a}} \,\partial_{\mu}a\,F_{0\,r}^{d\,\mu\nu}\;=\;0\,. \tag{4.9}\]
Note that after we perform a similar decomposition for the axion field: \(a=a_{0}+a_{1}\), where \(a_{1}=O(\sqrt{s}/v_{a})\), Eq. (4.7) decouples from Eqs. (4.8) and (4.9), as the latter two equations depend only on \(a_{0}\), which is a free field, \(\left(\partial^{2}+m_{a}^{2}\right)a_{0}=0\).
If one considers a model with several heavy fermions \(\psi\), it is clear that the corresponding axion Maxwell equations include the sum of the interaction terms analogous to those of the Eqs. (4.5)-(4.7) with coefficients determined by the charges \((e_{\psi},g_{\psi})\) of fermions \(\psi\). If the heavy fermions carry also colour charge, which is required to solve the strong CP problem, then we have to also sum over all colour states, i.e. for each \(\psi\) the coefficient gets multiplied by the dimension of the corresponding colour representation \(d(C_{\psi})\). The general form of the Eqs. (4.5)-(4.7) is thus:
\[\partial_{\mu}F^{\mu\nu}+g_{a\text{\tiny EE}}\,\partial_{\mu}a\,F _{r}^{d\,\mu\nu}-g_{a\text{\tiny EM}}\,\partial_{\mu}a\,F_{r}^{\mu\nu}\;=\;j_{ e}^{\,\nu}\,, \tag{4.10}\] \[\partial_{\mu}F^{d\,\mu\nu}-g_{a\text{\tiny MM}}\,\partial_{\mu} a\,F_{r}^{\mu\nu}+g_{a\text{\tiny EM}}\,\partial_{\mu}a\,F_{r}^{d\,\mu\nu}\;=\;j_{m}^{\, \nu}\,,\] (4.11) \[\left(\partial^{2}+m_{a}^{2}\right)a\;=\;-\frac{1}{4}\Big{(}(g_{a \text{\tiny EE}}-g_{a\text{\tiny MM}})\,F_{r}^{\mu\nu}F_{r\,\mu\nu}^{d}\;-\;2 \,g_{a\text{\tiny EM}}\,F_{r}^{\mu\nu}F_{r\,\mu\nu}\Big{)}\,, \tag{4.12}\]
where
\[g_{\text{\tiny{aEE}}}=\frac{E}{4\pi^{2}v_{a}}\,,\quad E=\sum_{\psi}e _{\psi}^{2}\cdot d(C_{\psi})\, \tag{4.13}\] \[g_{\text{\tiny{aMM}}}=\frac{M}{4\pi^{2}v_{a}}\,,\quad M=\sum_{\psi }g_{\psi}^{2}\cdot d(C_{\psi})\,\] (4.14) \[g_{\text{\tiny{aEM}}}=\frac{D}{4\pi^{2}v_{a}}\,,\quad D=\sum_{ \psi}e_{\psi}g_{\psi}\cdot d(C_{\psi})\, \tag{4.15}\]
and we introduced the anomaly coefficients \(E\), \(M\) and \(D\)[14]. The first order \(O(\sqrt{s}/v_{a})\) equations generalizing the Eqs. (4.8) and (4.9) are:
\[\partial_{\mu}F_{a}^{\mu\nu}+g_{\text{\tiny{aEE}}}\,\partial_{\mu }a\,F_{0\,r}^{d\,\mu\nu}-g_{\text{\tiny{aEM}}}\,\partial_{\mu}a\,F_{0\,r}^{ \mu\nu}\ =\ 0\,, \tag{4.16}\] \[\partial_{\mu}F_{a}^{d\,\mu\nu}-g_{\text{\tiny{aMM}}}\,\partial_ {\mu}a\,F_{0\,r}^{\mu\nu}+g_{\text{\tiny{aEM}}}\,\partial_{\mu}a\,F_{0\,r}^{d \,\mu\nu}\ =\ 0\,. \tag{4.17}\]
Rewritten in terms of the electric and magnetic fields, these equations become:
\[\mathbf{\nabla}\!\times\!\mathbf{B}_{a}-\dot{\mathbf{E}}_{a}=-g_{ \text{\tiny{aEE}}}\left(\mathbf{E}_{0}\!\times\!\mathbf{\nabla}\!a-\dot{a}\mathbf{ B}_{0}\right)-g_{\text{\tiny{aEM}}}\left(\mathbf{B}_{0}\!\times\!\mathbf{\nabla}\!a+ \dot{a}\mathbf{E}_{0}\right)\,, \tag{4.18}\] \[\mathbf{\nabla}\!\times\!\mathbf{E}_{a}+\dot{\mathbf{B}}_{a}=g_{ \text{\tiny{aMM}}}\left(\mathbf{B}_{0}\!\times\!\mathbf{\nabla}\!a+\dot{a}\mathbf{ E}_{0}\right)+g_{\text{\tiny{aEM}}}\left(\mathbf{E}_{0}\!\times\!\mathbf{\nabla}\!a- \dot{a}\mathbf{B}_{0}\right)\,,\] (4.19) \[\mathbf{\nabla}\!\cdot\!\mathbf{B}_{a}=g_{\text{\tiny{aMM}}}\, \mathbf{E}_{0}\!\cdot\!\mathbf{\nabla}\!a-g_{\text{\tiny{aEM}}}\,\mathbf{B}_{0} \!\cdot\!\mathbf{\nabla}\!a\,,\] (4.20) \[\mathbf{\nabla}\!\cdot\!\mathbf{E}_{a}=-g_{\text{\tiny{aEE}}}\, \mathbf{B}_{0}\!\cdot\!\mathbf{\nabla}\!a+g_{\text{\tiny{aEM}}}\,\mathbf{E}_{0} \!\cdot\!\mathbf{\nabla}\!a\,, \tag{4.21}\]
where we omitted the subscript \(r\) in order to conform with the conventional notation for external fields.
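To make the structure of Eqs. (4.18)-(4.21) more concrete, the short sketch below simply evaluates the axion-induced source terms on their right-hand sides for user-supplied external fields and axion gradients; all numerical values (couplings, fields, units) are placeholders and are not taken from the paper.

```python
# Illustrative sketch: direct evaluation of the axion-induced source terms on the
# right-hand sides of Eqs. (4.18)-(4.21) for external fields E0, B0, an axion gradient
# grad_a and time derivative adot.  All numbers are placeholders (arbitrary units).
import numpy as np

def axion_sources(E0, B0, grad_a, adot, g_aEE, g_aMM, g_aEM):
    """Return the effective sources appearing in Eqs. (4.18)-(4.21)."""
    E0, B0, grad_a = map(np.asarray, (E0, B0, grad_a))
    # Eq. (4.18): effective electric current (rhs of curl B_a - dE_a/dt)
    J_eff = (-g_aEE * (np.cross(E0, grad_a) - adot * B0)
             - g_aEM * (np.cross(B0, grad_a) + adot * E0))
    # Eq. (4.19): effective magnetic current (rhs of curl E_a + dB_a/dt)
    K_eff = (g_aMM * (np.cross(B0, grad_a) + adot * E0)
             + g_aEM * (np.cross(E0, grad_a) - adot * B0))
    # Eq. (4.20): effective magnetic charge density (rhs of div B_a)
    rho_m = g_aMM * np.dot(E0, grad_a) - g_aEM * np.dot(B0, grad_a)
    # Eq. (4.21): effective electric charge density (rhs of div E_a)
    rho_e = -g_aEE * np.dot(B0, grad_a) + g_aEM * np.dot(E0, grad_a)
    return J_eff, K_eff, rho_m, rho_e

# Example: DC magnetic field along z and a non-relativistic axion background (grad_a ~ 0).
J, K, rm, re = axion_sources(E0=[0, 0, 0], B0=[0, 0, 1.0], grad_a=[0, 0, 0], adot=1e-3,
                             g_aEE=1e-16, g_aMM=1e-12, g_aEM=1e-14)
print(J, K, rm, re)
```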
### Implications of the axion Maxwell equations
The phenomenological consequences of Eqs. (4.18)-(4.21) were discussed in Refs. [43; 44; 45; 10]. Here, we would like to review the main results, without going into details.
First, let us note that Eqs. (4.19) and (4.20) show that there can appear effective magnetic charges and currents in the presence of axions in external electromagnetic fields. This means that Faraday's law and the no-magnetic-monopoles law can be violated in a given experiment with an external \(\mathbf{E_{0}}\) or \(\mathbf{B_{0}}\) field, assuming there exists some cosmic abundance of axion-like particles. As these laws cannot be violated in the case where no magnetic monopoles exist3, one can experimentally test the existence of heavy magnetic monopoles and dyons. Such an indirect probe of heavy dyons would complement the numerous experiments searching for cosmic magnetic monopoles [47], as it does not depend on the cosmic abundance of monopoles and dyons.
Footnote 3: While this is certainly true in the flat space-time case we consider here, the situation can be different in curved space-time [46]; the corresponding effects in Earth-based experiments are however expected to be much smaller than the effects discussed in this paper.
Second, note that due to the DSZ quantization condition applied to Eqs. (4.13)-(4.15), one expects \(g_{\text{\tiny{aMM}}}\gg g_{\text{\tiny{aEM}}}\gg g_{\text{\tiny{aEE}}}\). Because of this, the effective axion-induced magnetic current is expected to dominate over the effective axion-induced electric current in a given axion detection experiment, which significantly changes the response of the system to the axion dark matter signal compared to the normally
considered case described by Eqs. (1)-(4). For instance, this means that in the case where the axion wavelength is much larger than the size of the detector, the dominant effect is given by the axion-induced electric field \({\bf E}_{a}\), as opposed to \({\bf B}_{a}\). This represents a clear distinction from the conventional case, cf. Eqs. (1)-(4), where the dominant axion-induced field in the long-wavelength case is \({\bf B}_{a}\). In the case of resonant haloscope experiments, it turns out that the DC magnetic field \({\bf B}_{0}\) normally used yields a system which is not sensitive to the dominant \(g_{a_{\rm MM}}\) and \(g_{a_{\rm EM}}\) couplings in the non-relativistic limit \(|{\bf k}_{a}|/\omega_{a}\to 0\). On the contrary, applying DC electric field \({\bf E}_{0}\) to a cavity resonator, one can achieve sensitivity to the latter two couplings. In general, the new couplings \(g_{a_{\rm MM}}\) and \(g_{a_{\rm EM}}\) provide a lot of unique signatures; the corresponding effects can be easily distinguished from the effects of the conventional \(g_{a\gamma\gamma}=g_{a_{\rm EE}}\) coupling in a wide range of experiments.
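The hierarchy \(g_{a\text{MM}}\gg g_{a\text{EM}}\gg g_{a\text{EE}}\) implied by the DSZ condition can be illustrated with a few lines of code; the dyon charge spectrum, the PQ scale \(v_{a}\) and the charge-unit conventions below are assumptions chosen only for illustration and do not come from the paper.

```python
# Illustrative sketch of the hierarchy g_aMM >> g_aEM >> g_aEE implied by DSZ quantization.
# Charge spectrum, PQ scale and unit conventions are assumptions (Heaviside-Lorentz, hbar=c=1).
import numpy as np

alpha = 1 / 137.036
e0 = np.sqrt(4 * np.pi * alpha)    # electric charge unit
g0 = 2 * np.pi / e0                # minimal magnetic charge compatible with Dirac/DSZ

# hypothetical heavy dyons: (electric charge, magnetic charge, colour multiplicity d(C_psi))
dyons = [(1 * e0, 1 * g0, 3),
         (2 * e0, 1 * g0, 3)]

E = sum(e**2 * d for e, g, d in dyons)     # anomaly coefficient E, cf. Eq. (4.13)
M = sum(g**2 * d for e, g, d in dyons)     # anomaly coefficient M, cf. Eq. (4.14)
D = sum(e * g * d for e, g, d in dyons)    # anomaly coefficient D, cf. Eq. (4.15)

v_a = 1e10                                 # placeholder PQ scale in GeV
g_aEE, g_aMM, g_aEM = (x / (4 * np.pi**2 * v_a) for x in (E, M, D))
print(f"g_aEE = {g_aEE:.2e}, g_aEM = {g_aEM:.2e}, g_aMM = {g_aMM:.2e}  (GeV^-1)")
print(f"g_aMM / g_aEE = {g_aMM / g_aEE:.1e},  g_aEM / g_aEE = {g_aEM / g_aEE:.1e}")
```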
Third, the hierarchy \(g_{a_{\rm MM}}\gg g_{a_{\rm EM}}\gg g_{a_{\rm EE}}\) has important implications for the searches for the QCD axion, i.e. the axion which solves the strong CP problem. In this case, the axion decay constant \(f_{a}=v_{a}/2N\), \(2N\in\mathbb{Z}\), is fixed in terms of the axion mass \(m_{a}\), which means that the axion couplings (4.13)-(4.15) can be plotted as functions of \(m_{a}\), with uncertainties given by the anomaly coefficients \(N\), \(E\), \(M\) and \(D\). The hierarchy of the couplings implies that at a given mass \(m_{a}\), the effects of the \(g_{a_{\rm MM}}\) and \(g_{a_{\rm EM}}\) couplings are much stronger compared to the effects of the conventional \(g_{a\gamma\gamma}=g_{a_{\rm EE}}\) coupling. This means that our current experiments, if adapted to look for the new couplings, can be much more sensitive to the electromagnetic couplings of the QCD axion than previously thought. Moreover, it turns out that the \(g_{a_{\rm MM}}\) coupling of the QCD axion can explain the anomalous TeV transparency of the Universe [48, 49, 50] - an issue which was hypothesised to be resolved by an axion-like particle in many investigations, but which could not be explained in the framework of conventional QCD axion models with only the \(g_{a\gamma\gamma}=g_{a_{\rm EE}}\) coupling. Even more, the required \(g_{a_{\rm MM}}\) coupling can also account for another astrophysical axion hint, which was derived from studying the cooling of horizontal branch stars in globular clusters [51].
Fourth, the coupling \(g_{a_{\rm EM}}\) violates CP. Note that the conventional coupling \(g_{a\gamma\gamma}=g_{a_{\rm EE}}\) is necessarily CP-conserving, since in general, QED preserves CP. In the QFT with dyons, however, there exists a possible source of CP violation associated with the charge spectrum of dyons of the theory. This means that the CP violation in the electromagnetic interactions of axions is a clear signature of the existence of heavy dyons. Moreover, such CP violation would imply that the spectrum of heavy dyons is CP-violating, i.e. \(D\neq 0\) in Eq. (4.15). Experimentally, one can probe the \(g_{a_{\rm EM}}\) coupling in light-shining-through-wall experiments by varying the polarization of the incoming light [10], as well as in haloscope experiments [44, 52].
Finally, let us note that the axion Maxwell equations (4.5)-(4.6) become trivial in the case of a constant and homogeneous axion field \(a/v_{a}=\theta\). This means that there is no Witten-effect induced interaction [53] between the axion and the currents of charged particles \(j_{e}\) and \(j_{m}\) in the model we consider. In particular, charged particles do not obtain extra charges proportional to \(\theta\) from their interaction with the axion field. Contrary to the misconception which sometimes appears in the literature, the Witten-effect induced interactions of axions are _not_ a general feature of axion electrodynamics.4 The Witten-effect induced interactions arise only in particular ultraviolet (UV) models which feature an extra rotor (instanton) degree of freedom in the IR, see Ref. [10] for a detailed discussion.
## 5 Summary
Using the path integral approach, we gave a detailed step-by-step derivation of the axion Maxwell equations in dyon-philic axion models, i.e. in hadronic axion models where heavy PQ-charged quarks are allowed to carry magnetic charges. The form of the derived axion Maxwell equations (4.18)-(4.21) fully agrees with the one derived by us in the EFT approach in a previous publication [10]. In particular, we confirmed that there can exist additional axion-photon couplings \(g_{a\text{MM}}\) and \(g_{a\text{EM}}\) along with the normally considered axion-photon coupling \(g_{a\gamma\gamma}=g_{a\text{EE}}\). As these new couplings change the structure of the axion Maxwell equations significantly, we predict unique signatures in haloscope and light-shining-through-wall experiments. In particular, the detection of the effective axion-induced magnetic charges or currents in a haloscope would represent indirect evidence for the existence of magnetically charged matter. Moreover, the new electromagnetic couplings of axions can reconcile the Peccei-Quinn solution to the strong CP problem with astrophysical axion hints, such as the anomalous TeV transparency of the Universe and the anomalous energy loss of horizontal branch stars in globular clusters.
Through an example of a particular UV-complete dyon-philic axion model, the path integral approach allowed us to illustrate and explain in detail some peculiarities of the low energy description of QFTs with magnetic charges, namely the impossibility of fully integrating out heavy dyons and the ensuing continuity prescriptions for the fields in the IR. Also, by analyzing the electromagnetic interactions of axions in the dyon-philic axion models, we found that in these models, there are no Witten-effect induced interactions between axions and charged particles. Thus, we confirmed that the axion-photon couplings and the Witten-effect induced couplings need not coincide: an important fact discussed in detail in Ref. [10].
**Acknowledgements**
A.S. is funded by the UK Research and Innovation grant MR/V024566/1. A.R. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306. This work has been partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491245950.
|
2305.14098 | Balancing Explainability-Accuracy of Complex Models | Explainability of AI models is an important topic that can have a significant
impact in all domains and applications from autonomous driving to healthcare.
The existing approaches to explainable AI (XAI) are mainly limited to simple
machine learning algorithms, and the research regarding the
explainability-accuracy tradeoff is still in its infancy especially when we are
concerned about complex machine learning techniques like neural networks and
deep learning (DL). In this work, we introduce a new approach for complex
models based on the co-relation impact which enhances the explainability
considerably while also ensuring the accuracy at a high level. We propose
approaches for both scenarios of independent features and dependent features.
In addition, we study the uncertainty associated with features and output.
Furthermore, we provide an upper bound of the computation complexity of our
proposed approach for the dependent features. The complexity bound depends on
the order of logarithmic of the number of observations which provides a
reliable result considering the higher dimension of dependent feature space
with a smaller number of observations. | Poushali Sengupta, Yan Zhang, Sabita Maharjan, Frank Eliassen | 2023-05-23T14:20:38Z | http://arxiv.org/abs/2305.14098v1 | # Balancing Explainability-Accuracy of Complex Models
###### Abstract
Explainability of AI models is an important topic that can have a significant impact in all domains and applications from autonomous driving to healthcare. The existing approaches to explainable AI (XAI) are mainly limited to simple machine learning algorithms, and the research regarding the explainability-accuracy tradeoff is still in its infancy especially when we are concerned about complex machine learning techniques like neural networks and deep learning (DL). In this work, we introduce a new approach for complex models based on the co-relation impact which enhances the explainability considerably while also ensuring the accuracy at a high level. We propose approaches for both scenarios of _independent features_ and _dependent features_. In addition, we study the uncertainty associated with features and output. Furthermore, we provide an upper bound of the computation complexity of our proposed approach for the dependent features. The complexity bound depends on the order of logarithmic of the number of observations which provides a reliable result considering the higher dimension of dependent feature space with a smaller number of observations.
## 1 Introduction
Artificial Intelligence (AI) is envisioned to play an important role in various technical fields, from the energy market to clinical domains[3; 9; 10]. Machine learning algorithms, especially with the recent developments in deep learning, have proven to be an indispensable technology for any system, including critical infrastructure [12; 13; 14; 15]. As a consequence, there is a growing interest in understanding the execution and performance of these algorithms[22; 23]. Explainable AI (XAI)[1; 2] is a promising research field for understanding and interpreting the "black box" behaviour of machine learning (ML) algorithms[24; 25; 26].
Figure 1: The graph shows the trade-off between explainability and accuracy; more complex models are harder to explain.
SHAP (Shapley Additive Explanations)[18] and LIME[19] are the most widely used algorithms for the explainability of an ML model. The SHAP-based approach calculates the weighted marginal contribution, called the Shapley value, of each feature. The Shapley value of a feature represents its contribution to one or several sets of features. On the other hand, LIME focuses on the local faithfulness of a model. Although LIME[19] has the desirable property of additivity[18], it has weaknesses regarding the lack of consistency[16], missingness[17], and stability[20, 27]. SHAP fulfils these properties and hence is commonly used. The sum of the SHAP values associated with all features is equal to the final contribution. LIME assumes that the local model is linear, whereas SHAP can be applied to any simple nonlinear model[21]. However, for complex ML models, the SHAP algorithm simplifies the model first and then calculates the feature importance[28]. SHAP considers all possible combinations of features, which becomes hard to apply in a high-dimensional dependent feature space. Consequently, while more complex ML/DL models yield better accuracy, the explainability of such models is a challenging and largely unexplored problem (see figure 1 for a better understanding). In particular, enhancing the explainability of a model while also ensuring a high level of accuracy is not a trivial task[29]. Also, most of the existing algorithms, including SHAP, are hard to apply when features are dependent [30]. Furthermore, existing XAI models, including SHAP-based approaches, interpret the result without measuring the uncertainty[32, 33] regarding the features' contributions to the output. Against this background, the three main challenges addressed by us are: **1)**_the explainability-accuracy tradeoff in a complex model_, **2)**_a large number of dependent input features_, and **3)**_the uncertainty associated with each feature and output_.
Motivated by these considerations, we propose a new approach, _ExCIR: Explainability through Correlation Impact Ratio_, that can maintain the accuracy-explainability trade-off for complex ML models, considering both dependent and independent features in a higher-dimensional feature space. ExCIR also considers the uncertainty associated with a feature's contribution by measuring the Shannon entropy [31] of each feature. ExCIR adjusts the distance between the feature and output vectors in hyper-dimensional space for both the original and the lightweight model environments to ensure the accuracy of the model. Instead of Shapley values[21, 23], ExCIR calculates the correlation impact ratio (CIR) to explain the relation impact of a target feature on the output variable. The main contributions of this work are:
1. **Novel framework for Accuracy-Explainability tradeoff:** We create a lightweight model to explain the feature impacts on the output by creating a suitable environment such that the lightweight model works with sampled data containing fewer observations, instead of choosing different sets of features.
2. **Introducing a new metric to calculate the uncertainty of explainability:** To calculate this metric, we consider the notion of uncertainty [6], which measures the uncertainty between the features and the output.
3. **Novel metrics of Explainability for both feature independencies and dependencies:** We propose different measures to compute the relation impact for the cases of independent features and dependent features. Both metrics can be directly applied to a nonlinear model and, as a result, the explainability is enhanced while ensuring the same level of accuracy.
## 2 ExCIR: System Model and Design
We consider a data set with \(n\) rows and \(k\) features, denoted as \(\underset{\sim}{f_{1}},\underset{\sim}{f_{2}},...,\underset{\sim}{f_{k}}\); \(\underset{\sim}{f_{i}}\,\epsilon\,||F||^{(k\times n)}\), where \(||F||^{(k\times n)}\) is the \((k\times n)\)-dimensional feature space having \(k\) different features with \(n\) observations each. Here, the \(i^{th}\) feature is denoted as \(\underset{\sim}{f_{i}}\); \(i=1:k\), and \(\underset{\sim}{f_{i}}=(f_{i1},f_{i2},.....,f_{in})\). The machine learning model \(M(f)\) learns all data history and predicts the output vector \(\underset{\sim}{Y}=(y_{1},y_{2},...,y_{n})\). We are interested in finding how the features are related to each other and how this affects the result. A correct and detailed explanation can help the model to perform better and reduce the loss in future predictions. We derive a lightweight explanation model that works with a sample of the original data. We create a suitable environment for our XAI model in which the accuracy of the lightweight model is almost the same as that of the original model. We achieve the model output accuracy by equating the projection and the embedding distance[37]. This strategy helps to create a twin environment by securing the same positions of the input data distributions in both high-dimensional spaces. As the
environments are similar for the original and lightweight models, the explainability of the lightweight model is good enough to explain the original model. The reason for using a lightweight model is that a higher dimension of input data with a bigger feature space can make ExCIR more complicated to apply in real-life practice. Also, we calculate the mutual information and uncertainty between features and the output to explain the impact of dependent features, because a lower dimensional input space introduces less bias in computing the uncertainty associated with the features. The main difference between our ExCIR model and a surrogate model[39] is that a surrogate model is a copy of the original model which works with the same data set and chooses multiple combinations of features to calculate feature importance. In contrast, our model is equivalent to the original model but works with less data and considers all features for the calculation of the feature impact ratio. In ExCIR, we do not need to consider different combinations of feature sets; instead, we calculate the Shannon entropy capturing the uncertainty of the features and, based on that, we introduce the Conditional Multivariate Mutual Information (CMMI). The main reasons for achieving the same environment are:
1. Both the lightweight model's and the original model's environments are equivalent, so we first secure the accuracy guarantee.
2. We apply our new XAI approach to the lightweight model and, as the lightweight model works with a lower-dimensional environment, it is more tractable to calculate complex metrics like CMMI for each feature.
3. As we prove that both models have the same environment (see the next section), we can claim that the impact of a feature in the lightweight model will be the same as in the original one, i.e., the feature contributions will be the same for both models.
Let \(M^{\prime}\) denote the lightweight model, trained over the lower dimensional data set having the same number of features \(k\) as \(f_{1},f_{2},...,f_{k}\); \(f_{i}\,\epsilon\,||F||^{(k\times n^{\prime})}\) and \(f_{i}=(f_{i1},f_{i2},.....,f_{in^{\prime}})\), where \(i=1:k\) and \(n^{\prime};n^{\prime}<n\) is the number of rows in the data set. Let \(Y^{\prime}\) denote the output variable.
\(||F||^{(k\times n)}\) is a \((k\times n)\)-dimensional feature space that contains each feature's distribution. A feature distribution refers to the distribution followed by all the data points of a specific feature. So we can consider a unified \(([k+1]\times n)\)-dimensional superspace \(\mathcal{U}\) such that \(||F||^{(k\times n)}\subset\mathcal{U}\). \(\mathcal{U}\) contains all feature distributions as well as the target output distribution. We assume that the \((k+1)\)th distribution is the output distribution. Let \(\mathcal{D}_{i};\ i=1:[k+1]\) denote the distribution of the \(i^{th}\) feature, and \(\mathcal{D}(Y)\) is the output distribution of the original model. Then we can have \(\mathcal{U}=\left[\mathcal{D}_{1}(f_{1})\cup\mathcal{D}_{2}(f_{2})\cup...\cup\mathcal{D}_{k}(f_{k})\cup\mathcal{D}(Y)\right]\). On the other hand, \(\mathcal{D}(Y^{\prime})\) is the output distribution from the lightweight explainable model. Without loss of generality, \(\mathcal{U}^{\prime}\) is the superspace for the lightweight model where \(\mathcal{U}^{\prime}\subset\mathcal{U}\) and we can have \(\mathcal{U}^{\prime}=\left[\mathcal{D}_{1}(f_{1})\cup\mathcal{D}_{2}(f_{2})\cup...\cup\mathcal{D}_{k}(f_{k})\cup\mathcal{D}(Y^{\prime})\right]\). Our main idea is to find the relations among features and how they affect the output. These relations can be non-linear and the features can be dependent. ExCIR works directly with the non-linear model for dependent as well as independent features.
## 3 Accuracy of ExCIR
To maintain the lightweight model's accuracy, the environment of the lightweight model must be almost the same as that of the original model, because within the same input-output environment both models should behave in the same manner. Here, the environment refers to the features' distributions and their positions in the superspace. Every feature has some impact on generating the output. Keeping this in mind, we equate the distance between each feature and the output distribution for both spaces \(\mathcal{U}\) and \(\mathcal{U}^{\prime}\). If the distance of the same feature to the output is the same in both spaces, we can claim that the positions of the features in both spaces are similar. So, \(\mathcal{U}^{\prime}\) will create a twin environment of \(\mathcal{U}\) with a lower dimension. Once we secure the feature and output distribution positions, in the next step, we use the projection and embedding distances [37] through f-divergence so that the lightweight model output distribution becomes a mirror image of the original model output. More specifically, we can claim that the lightweight model environment is the same as the original model when
1. The distances between features and output distributions are the same in both spaces.
2. Output distributions in both spaces are mirror images of each other.
Let \(\mathcal{F}=(\mathcal{F}_{1},\mathcal{F}_{2},....,\mathcal{F}_{n})^{\prime}\) denote the input column vector, where \(\mathcal{F}_{j};j=1:n\) is the \(jth\) input containing all \(k\) features. That means, \(\mathcal{F}_{j}=[f_{1j},f_{2j},.....,f_{kj}]\); \(j=1:n\). The output vector is
\(Y=(y_{1},y_{2},....,y_{n})^{\prime}\). Now, each of the \(y_{j}\)'s is explained by the \(k\) features of the \(j\)th input vector; that means, the \(y_{j}\)'s are explained by the \(\mathcal{F}_{j}\)'s. Now if we consider a \(k\)-dimensional input space, then by the definition of Euclidean distance [38] we define the local distance for a single output \(y_{i}\) as follows:
\[D_{i}^{2}=\sum_{j=1}^{k}(y_{i}-f_{ji})^{2} \tag{1}\]
The average \(k\) dimensional distance between \(Y\) and \(\mathcal{F}\) can provide the exact position of the distribution of \(Y\) in a super space. It will help us build our lightweight model environment to produce a highly accurate output space with similar influences of features as the original data. So to calculate the distance between \(Y\) and \(\mathcal{F}\), we have to consider all local distances. The final average distance for the original and sample space can be defined as respectively:
\[D_{\text{final}}^{2}=\frac{1}{n}\sum_{j=1}^{k}\sum_{i=1}^{n}(y_{i}-f_{ji})^{2},\text{ and }D_{\text{final}}^{\prime 2}=\frac{1}{n^{\prime}}\sum_{j=1}^{k}\sum_{i=1}^{n^{ \prime}}(y_{i}^{\prime}-f_{ji})^{2} \tag{2}\]
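For concreteness, the short sketch below computes the two average distances of Eq. (2) for randomly generated data and for a random subsample standing in for the lightweight model's data; all shapes and numerical values are placeholders, not quantities from the paper.

```python
# Illustrative sketch: the two average feature-to-output distances of Eq. (2), computed
# for randomly generated data (F, y) and a random subsample (Fs, ys) playing the role of
# the lightweight model's data.
import numpy as np

rng = np.random.default_rng(42)
k, n, n_s = 5, 1000, 200                 # features, original rows, sampled rows

F = rng.normal(size=(k, n))              # original feature matrix (one row per feature)
y = rng.normal(size=n)                   # original model output
idx = rng.choice(n, size=n_s, replace=False)
Fs, ys = F[:, idx], y[idx]               # lightweight-model inputs and outputs

def avg_sq_distance(F, y):
    # D_final^2 = (1/n) * sum_j sum_i (y_i - f_{ji})^2, cf. Eq. (2)
    return np.sum((y[None, :] - F) ** 2) / y.size

D2, D2_s = avg_sq_distance(F, y), avg_sq_distance(Fs, ys)
print(D2, D2_s, abs(D2 - D2_s))          # Definition 1 requires this gap to vanish
```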
The idea is to equalize the two distances so that the output distributions of the two data sets have the same distance from their features, i.e., exactly the same positions in the high-dimensional space. We have to achieve the same output environment in the lightweight space so that we can get the exact relation impact of each feature on the output vector. To create such a situation, we have to minimize the difference between the two distance measurements, which means \(|D_{\text{final}}^{2}-D_{\text{final}}^{\prime 2}|\to 0\). We define the following condition for achieving an equivalent environment for the lightweight explainable model:
**Definition 1:** _For an ML model, let \(\mathcal{F}\) be the input vector having \(k\) features and \(Y\) be the output vector; for the lightweight model, let \(\mathcal{F}^{\prime}\) be the input vector having the same \(k\) features and \(Y^{\prime}\) be the output vector. In a multidimensional space, the average distances from the feature space to the output vector for the original ML model and for the lightweight model are denoted as \(D_{\text{final}}^{2}\) and \(D_{\text{final}}^{\prime 2}\), respectively. Then we can claim that the features and output distributions of both spaces have similar positions, with exactly the same relation impact of each feature, if and only if:_
\[\lim_{y\to y^{\prime}}|D_{\text{final}}^{2}-D_{\text{final}}^{\prime 2}|=0 \tag{3}\]
More fundamentally, when the two spaces have equivalent environments and the output distributions are in similar positions, i.e., the average distance from a local output to the corresponding feature input is approximately the same for both spaces, and given that the two distributions are exact mirror images of each other (proved later in this section), we can claim that the impact of the \(i\)th feature on the output vector is exactly the same in both multi-dimensional spaces. So, if we use the lightweight space to calculate the relation impact of each feature for generating the report, it will be able to accurately explain the original model without loss of quality in the result. Therefore, in the next step, we minimize the distance between the original and the lightweight model output distributions, which live in different dimensions, to achieve the mirror effect.
In ExCIR, after passing the sample dataset to the lightweight model, the model generates an initial output, defined as \(Y^{\prime}\). Then we minimize the loss function \(\mathcal{L}(Y,Y^{\prime})\) with respect to the distance from the output distribution to the feature distributions. Now the lightweight model superspace \(\mathcal{U}^{\prime}\) and the original model superspace \(\mathcal{U}\) can be of different dimensions, as we are working with the lower-dimensional sample data set. In this case, we need to calculate the distance between output distributions that belong to spaces of different dimensions, which makes the distance computation challenging. At first, we use two different approaches, the projection distance and the embedding distance [37], to measure the distance between the output distributions of the original and lightweight models.
Let \(m(\Omega)\) denote the set of all Borel probability measures on \(\Omega\subseteq\mathbb{R}^{n}\), and let \(m^{p}(\Omega)\subseteq m(\Omega)\) refer to those with finite \(p\)th moment, where \(p\)\(\epsilon\)\(\mathbb{N}\). Then for any \(n^{\prime}\), \(n\)\(\epsilon\)\(\mathbb{N}\) and \(n^{\prime}\)\(\leq\)\(n\), we can write
\[O(n^{\prime},n)=\{P\epsilon\mathbb{R}^{n^{\prime}\times n}:PP^{T}=I_{n^{\prime}}\} \tag{4}\]
i.e., the set of \(n^{\prime}\times n\) matrices consisting of orthonormal rows, and we can write \(O(n)=O(n,n)\) for the orthogonal group. Here \(P^{T}\) denotes the transpose of \(P\). For any \(P\)\(\epsilon\)\(O(n^{\prime},n)\) and \(b\)\(\epsilon\)\(\mathbb{R}^{n^{\prime}}\),
\[\Phi_{P,b}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n^{\prime}},\Phi_{P,b(x)}=Px+b; \tag{5}\]
and for any \(\mu\)\(\epsilon\)\(m(\mathbb{R}^{n})\), \(\Phi_{P,b}(\mu)=\mu\circ\Phi_{P,b}^{-1}\) is the pushforward measure, where \(\Phi_{P}=\Phi_{P,0}\) if \(b=0\). More generally, for any measurable map \(\Phi:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n^{\prime}}\), \(\Phi(\mu)=\mu\circ\Phi^{-1}\) is the pushforward measure.
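A minimal sketch of the maps in Eqs. (4) and (5) — building a matrix \(P\,\epsilon\,O(n^{\prime},n)\) with orthonormal rows and pushing samples forward through \(\Phi_{P,b}\) — is given below; the dimensions and data are placeholders used only for illustration.

```python
# Illustrative sketch: a matrix P in O(n', n) with orthonormal rows, obtained from a QR
# decomposition, and the pushforward Phi_{P,b}(x) = P x + b applied to samples of an
# n-dimensional measure (cf. Eqs. (4) and (5)).
import numpy as np

rng = np.random.default_rng(0)
n, n_prime = 8, 3

Q, _ = np.linalg.qr(rng.normal(size=(n, n_prime)))  # Q: n x n' with orthonormal columns
P = Q.T                                             # so P P^T = I_{n'}
b = rng.normal(size=n_prime)
assert np.allclose(P @ P.T, np.eye(n_prime))

samples = rng.normal(size=(10000, n))   # draws from a measure mu on R^n
pushed = samples @ P.T + b              # Phi_{P,b} applied sample-wise: the pushforward
print(pushed.shape)                     # (10000, n')
```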
Here, for our method, without loss of generality, we assume that \(\mu=\mathcal{D}(Y^{\prime})\) is the lightweight model output probability measure and \(\delta=\mathcal{D}(Y)\) is the original model output probability measure. We define a distance \(d(\mu,\delta)\) where \(\mu\;\epsilon\;m(\Omega_{1})\) and \(\delta\;\epsilon\;m(\Omega_{2})\); \(\Omega_{1}\subseteq\mathbb{R}^{n^{\prime}}\), and \(\Omega_{2}\subseteq\mathbb{R}^{n}\). Here, \(n,n^{\prime}\;\epsilon\;\mathbb{N}\) and \(n^{\prime}\leq n\). We consider the notion of f-divergence to define the distance function. According to [37], any f-divergence (KL divergence [42], Jensen-Shannon divergence [41], etc.) will satisfy the algorithm. Without loss of generality, we can consider,
\[\Omega_{1}=\mathbb{R}^{n^{\prime}},\Omega_{2}=\mathbb{R}^{n} \tag{6}\]
Now we are interested in calculating the distance with projection and embedding measures [37].
**Definition 2:** Let \(n^{\prime},n\in\mathbb{N}\), \(n^{\prime}\leq n\). For any \(\mu\;\epsilon\;m(\mathbb{R}^{n^{\prime}})\) and \(\delta\;\epsilon\;m(\mathbb{R}^{n})\), the embedding of \(\mu\) into \(\mathbb{R}^{n}\) is the set of \(n\)-dimensional measures
\[d^{+}(\mu,n)=\{\alpha\;\epsilon\;m(\mathbb{R}^{n}):\Phi_{P,b}(\alpha)=\mu\;\text{for some}\;P\;\epsilon\;O(n^{\prime},n),b\;\epsilon\;\mathbb{R}^{n^{\prime}}\}; \tag{7}\]
and,
**Definition 3:** Let \(n^{\prime},n\in\mathbb{N}\), \(n^{\prime}\leq n\). For any \(\mu\;\epsilon\;m(\mathbb{R}^{n^{\prime}})\) and \(\delta\;\epsilon\;m(\mathbb{R}^{n})\), the projection of \(\delta\) onto \(\mathbb{R}^{n^{\prime}}\) is the set of \(n^{\prime}\)-dimensional measures,
\[d^{-}(\delta,n^{\prime})=\{\beta\;\epsilon\;m(\mathbb{R}^{n^{\prime}}):\Phi_{ P,b}(\beta)=\delta\;\text{for some}\;\;P\;\epsilon\;O(n^{\prime},n),b\;\epsilon\;\mathbb{R}^{n^{\prime}}\} \tag{8}\]
Let \(d\) be any notion of distance on \(m(\mathbb{R}^{n})\) for any \(n\in\mathbb{N}\). Then the projection distance will be :
\[d^{-}(\mu,\delta)=\inf_{\beta\in d^{-}(\delta,n^{\prime})}\text{d}(\mu,\beta) \tag{9}\]
and the embedding distance will be,
\[d^{+}(\mu,\delta)=\inf_{\alpha\in d^{+}(\mu,n)}\text{d}(\delta,\alpha) \tag{10}\]
Both \(d^{-}(\mu,\delta)\) and \(d^{+}(\mu,\delta)\) measure the distance between two probability measures \(\mu\) and \(\delta\) of different dimensions. In [37] it is shown that if \(d\) is an f-divergence, then \(d^{-}(\mu,\delta)=d^{+}(\mu,\delta)=\hat{d}(\mu,\delta)\). The authors of [37] also generalized the theorem and proved that \(d^{-}(\mu,\delta)=d^{+}(\mu,\delta)=\hat{d}(\mu,\delta)=0\) if and only if \(\Phi_{P,b}(\delta)=\mu\) for some \(P\;\epsilon\;O(n^{\prime},n)\) and \(b\;\epsilon\;\mathbb{R}^{n^{\prime}}\). In other words, driving the expected distance \(\hat{d}(\mu,\delta)\) to \(0\) is the necessary and sufficient condition for the two probability measures \(\mu\) and \(\delta\) to be rotated and translated copies of each other, modulo embedding in a higher-dimensional ambient space where \(n^{\prime}\neq n\). Our loss function is this expected distance, and the goal is to drive the expected distance between the two output distributions \(\mu=\mathcal{D}(Y^{\prime})\) and \(\delta=\mathcal{D}(Y)\) as close to \(0\) as possible through a risk generator. This helps boost the accuracy of the model. So, we can define the loss function as:
\[\mathcal{L}(Y,Y^{\prime})=E_{\mu\,\epsilon\,m(\mathbb{R}^{n^{\prime}}),\,\delta\,\epsilon\,m(\mathbb{R}^{n})}(\hat{d}(\mu,\delta))=E_{Y\epsilon\,\mathcal{U},\,Y^{\prime}\epsilon\,\mathcal{U}^{\prime}}(\hat{d}(\mathcal{D}(Y),\mathcal{D}(Y^{\prime}))) \tag{11}\]
\[\mathcal{L}(Y,Y^{\prime})=E[\hat{d}(\mathcal{D}(Y),\mathcal{D}(Y^{\prime}))\;|\;Y\epsilon\,\mathcal{U},Y^{\prime}\epsilon\,\mathcal{U}^{\prime}] \tag{12}\]
To minimize the loss function, we introduce a risk generator function, say \(\mathcal{R}(d*)\;\epsilon\;\mathcal{H}\), which gives us the desired result for higher accuracy. Here, \(\mathcal{H}\) is a hypothesis class that contains all possible risk generators. We choose the final risk generator \(\mathcal{R}(d*)\) when \(\hat{d}(\mathcal{D}(Y),\mathcal{D}(Y^{\prime}))\to 0\). We can define \(\mathcal{R}(d*)\) as:
\[\mathcal{R}(d*)=\operatorname*{argmin}_{\mathcal{R}(d)\in\mathcal{H},\;\hat{d}\to 0}\int_{\mathcal{D}(Y)\,\epsilon\,\mathcal{U}}\int_{\mathcal{D}(Y^{\prime})\,\epsilon\,\mathcal{U}^{\prime}}E[\mathcal{L}(Y,Y^{\prime})]+\lambda(.) \tag{13}\]
Here \(\lambda\) is the regularization parameter. From the equation, we can see that when \(E[\mathcal{L}(Y,Y^{\prime})]=\mathcal{L}(Y,Y^{\prime})\to 0\), then \(\hat{d}\to 0\) implies \(Y\to Y^{\prime}\). So it achieves the necessary and sufficient condition under which the two probability measures \(\mathcal{D}(Y)\) and \(\mathcal{D}(Y^{\prime})\) are rotated and translated copies of each other with \(n^{\prime}\neq n\). As a result, the accuracy of the output is the same as that of the original model.
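As an illustration only, the following rough Monte Carlo sketch approximates the projection distance of equation 9 for the special case \(n^{\prime}=1\), using a histogram-based Jensen-Shannon divergence and a random search over \(O(1,n)\) in place of the exact infimum (the translation \(b\) is set to \(0\)); it is not the optimization procedure used by ExCIR.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)
n, n_prime = 4, 1                                   # delta lives in R^4, mu in R^1

mu_samples = rng.normal(0.0, 1.0, size=(5000, n_prime))   # lightweight outputs
delta_samples = rng.normal(0.0, 1.0, size=(5000, n))      # original outputs

bins = np.linspace(-6, 6, 61)

def js_divergence(a, b):
    """Histogram-based Jensen-Shannon divergence between two 1-D samples."""
    p, _ = np.histogram(a, bins=bins)
    q, _ = np.histogram(b, bins=bins)
    return jensenshannon(p, q) ** 2      # scipy returns the JS distance (sqrt)

best = np.inf
for _ in range(200):                     # crude random search over O(1, n)
    v = rng.standard_normal(n)
    P = (v / np.linalg.norm(v)).reshape(1, n)       # P P^T = 1
    projected = delta_samples @ P.T                 # Phi_P(delta), b = 0
    best = min(best, js_divergence(mu_samples[:, 0], projected[:, 0]))

print("approximate projection distance d^-(mu, delta):", round(best, 4))
```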
## 4 Explainability of ExCIR
Figure 2: The graph reflects the behaviour of the model in equation 14 considering the feature's uncertainty to contribute to the output.

We use CIR to measure each feature's relation impact on the final output and show how it works in two different environments regarding _independent_ and _dependent_ features. To check the relation between the features \(\underset{\sim}{f_{1},f_{2},...,f_{k}}\) and the output (linear or nonlinear), we need to hold one parameter constant and also hold the input features \(\underset{\sim}{(f_{1}...f_{k})}\) constant. Then we vary the remaining parameters and watch how \(\underset{\sim}{Y^{\prime}}\) changes. If the change in \(\underset{\sim}{Y^{\prime}}\) is non-linear with the change in the varied parameter, and if this is true for all the parameters in the model, then the model is said to be non-linear. "Non-linear" describes the model, not the graph of \(\underset{\sim}{(f_{1}...f_{k})}\) vs. \(\underset{\sim}{Y^{\prime}}\). An example of non-linear regression would be something like this:
\[M(f_{(f,\beta)})=\frac{\beta_{1}f_{1}+\beta_{2}f_{2}}{\beta_{3}f_{3}+\beta_{4}f_{4}+\cdots+\beta_{n}f_{n}} \tag{14}\]
Depending on the choice of parameters, the model in equation 14 may behave like a linear model in the general case. In our work, because we consider the uncertainty associated with each feature's contribution, i.e., a feature may or may not contribute to generating an output, equation 14 behaves like a nonlinear model. For a set of empirical outputs, figure 2 demonstrates how the model in equation 14 behaves when the features' uncertainty is taken into account. For our model, we also assume that at least two features are present, one conversely and one directly related to the output.
### Partial Correlation Impact Ratio for independent features
We have a dataset with \(n^{\prime}\) rows and \(k\) features, denoted \(\underset{\sim}{f_{1},f_{2},...,f_{k}}\), \(f_{i}\;\epsilon\;||F||^{k\times n^{\prime}}\). The machine learning model \(M^{\prime}(f)\) learns the full data history and predicts the output vector \(\underset{\sim}{Y^{\prime}}=(y^{\prime}_{1},y^{\prime}_{2},...,y^{\prime}_{n^{\prime}})\). Changing the notation for easier understanding and without loss of generality, let \(f_{ij}\) denote the \(jth\) observation of the \(ith\) feature, where \(i=1:k\) and \(j=1:n^{\prime}\), and let \(y^{\prime}_{j}\) denote the \(jth\) observation of the output vector \(\underset{\sim}{Y^{\prime}}\), \(j=1:n^{\prime}\). Then the mean of feature \(f_{i}\), \(i=1:k\), the mean of the output vector \(\underset{\sim}{Y^{\prime}}\), and the joint weighted mean of feature \(f_{i}\), \(i=1:k\), and \(\underset{\sim}{Y^{\prime}}\) are, respectively,
\[\hat{f}_{i}=\frac{\sum_{j}f_{ij}}{n^{\prime}},\ \hat{y^{\prime}}=\frac{\sum_{j}y^{\prime}_{j}}{n^{\prime}},\ \text{and}\ \hat{f_{i}}\hat{y^{\prime}}=\frac{\hat{f}_{i}+\hat{y^{\prime}}}{2} \tag{15}\]
To measure the impact of each feature on the output, we calculate the relation impact for each feature separately. Then we define the PCIR of feature \(\underset{\sim}{f_{i}}\) on the output vector \(\underset{\sim}{Y^{\prime}}\) by a ratio \(\eta_{f_{i}}=\beta_{f_{i}}{}^{2}\) which is formulated as:
\[\eta_{f_{i}}=\frac{n^{\prime}[(\hat{f_{i}}-\hat{f_{i}}\hat{y^{\prime}})^{2}+( \hat{y^{\prime}}-\hat{f_{i}}\hat{y^{\prime}})^{2}]}{\sum_{j}(f_{ij}-\hat{f_{i}} \hat{y^{\prime}})^{2}+\sum_{j}(y^{\prime}_{j}-\hat{f_{i}}\hat{y^{\prime}})^{2}} \tag{16}\]
i.e., the weighted variance of the \(ith\) feature and the output variable with respect to the joint mean \(\hat{f_{i}}\hat{y^{\prime}}\), divided by the variance of all values with respect to the joint mean \(\hat{f_{i}}\hat{y^{\prime}}\). This correlation impact ratio \(\eta_{f_{i}}\), \(i=1:k\), takes values between \(0\) and \(1\). For \(\eta_{f_{i}}=1\), the dispersion is the same for both the output and the feature: a small change in the feature observations leads to a change in the output observations, so the \(ith\) feature has a strong impact on the output. Conversely, \(\eta_{f_{i}}=0\) refers to the case where the \(ith\) feature has almost no impact on the output. We divide the features into two groups: the first group is directly related to the output, i.e., changes in the same direction, while the other group is conversely related to the output, i.e., changes in the opposite direction. For the sake of simplicity, we take \((f_{1},f_{2},...,f_{m})\), \(0<m\leq k\), to be directly related to the output and to belong to the numerator, and \((f_{(m+1)},f_{(m+2)},...,f_{k})\) to be conversely related to the output and to belong to the denominator. Then we can write the ExCIR model as:
\[M^{\prime}(f_{(f,\beta^{2})})=\frac{\beta_{f_{1}}^{2}f_{1}+\beta_{f_{2}}^{2}f_{2}+\cdots+\beta_{f_{m}}^{2}f_{m}}{\beta_{f_{m+1}}^{2}f_{m+1}+\beta_{f_{m+2}}^{2}f_{m+2}+\cdots+\beta_{f_{k}}^{2}f_{k}} \tag{17}\]
where \(0<m\leq k\) and we can modify it to the following form :
\[M^{\prime}(f_{(f,\eta)})=\frac{\eta_{f_{1}}f_{1}+\eta_{f_{2}}f_{2}+\cdots+\eta_{f_{m}}f_{m}}{\eta_{f_{m+1}}f_{m+1}+\eta_{f_{m+2}}f_{m+2}+\cdots+\eta_{f_{k}}f_{k}} \tag{18}\]
For the \(jth\), \(j=1:n^{\prime}\), observations of the features \(f_{1j},f_{2j},.....,f_{kj}\), the local output \(y^{\prime}_{j}\) can be expressed as:
\[y_{j}^{\prime}=M^{\prime}(f_{j(f,\eta)})=\frac{\eta_{f_{1}}f_{1j}+\eta_{f_{2}} f_{2j}+......+\eta_{f_{m}}f_{mj}}{\eta_{f_{m+1}}f_{m+1,j}+\eta_{f_{m+2}}f_{m+2,j}+......+ \eta_{f_{k}}f_{kj}} \tag{19}\]
Here, \(f_{ij}\), \(i=1:k\), \(j=1:n^{\prime}\), is a binary variable: \(f_{ij}=1\) refers to the presence of the \(ith\) feature in the \(jth\) entry. If we can prove that the change in the output caused by changing a single input of the \(ith\) feature actually depends on its corresponding correlation ratio \(\eta_{f_{i}}\), we can claim that our proposed correlation ratio captures the feature importance.
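A short sketch of equations 15, 16, and 19 on toy data is given below; the feature values and the output are synthetic and only illustrate how \(\eta_{f_{i}}\) and a local output are computed, with an assumed split of the features into numerator and denominator groups.

```python
import numpy as np

def pcir(f_i, y):
    """Partial Correlation Impact Ratio of a single feature (eqs. 15-16)."""
    n = len(y)
    f_bar, y_bar = f_i.mean(), y.mean()
    joint = (f_bar + y_bar) / 2.0                    # joint weighted mean
    num = n * ((f_bar - joint) ** 2 + (y_bar - joint) ** 2)
    den = ((f_i - joint) ** 2).sum() + ((y - joint) ** 2).sum()
    return num / den

rng = np.random.default_rng(2)
F = rng.random((100, 4))                             # 100 rows, 4 toy features
y = 2 * F[:, 0] + F[:, 1] - 0.5 * F[:, 2] - F[:, 3]  # synthetic output

eta = np.array([pcir(F[:, i], y) for i in range(F.shape[1])])
print("eta per feature:", eta.round(3))

# Local output of eq. 19 for one observation, assuming features 0-1 sit in
# the numerator (directly related) and features 2-3 in the denominator.
j = 0
y_hat = (eta[0] * F[j, 0] + eta[1] * F[j, 1]) / (eta[2] * F[j, 2] + eta[3] * F[j, 3])
print("local output for observation %d:" % j, round(y_hat, 3))
```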
**Theorem 1**: _A change in a particular feature input affects the local output. The resulting change must either depend on the corresponding correlation ratio of that feature or be constant, given that all the features are independent of each other, that is:_
\[\frac{dy^{\prime}}{df_{j}}=c_{1}\,\eta_{f_{j}}\;\text{when}\;j<m,\quad\text{and}\quad\frac{dy^{\prime}}{df_{j}}=\frac{c_{2}}{(2K_{2}-\eta_{f_{j}}^{2})}\;\text{when}\;j\geq m \tag{20}\]
Here, \(c_{1}\), \(c_{2}\), and \(K_{2}\) are constants. The proof is in supplementary subsection 6.2.
**Corollary 1**: _When the particular feature is positively related to the output, the expected change in the output due to the change in the input feature is directly proportional to its correlation ratio impact. On the other hand, if the particular feature is negatively related to the output, the expected change in output due to the change in input of that feature is inversely proportional to its correlation ratio impact. We can express it mathematically as follows:_
\(E(\frac{dy^{\prime}}{df_{j}})\varpropto\eta_{f_{j}}\;\text{if}\;j\leq m,\;\text{and}\;E(\frac{dy^{\prime}}{df_{j}})\varpropto\frac{1}{\eta_{f_{j}}}\;\text{if}\;j>m\)
### Mutual Correlation Impact Ratio for dependent features
In section 4.1, we introduced the correlation impact ratio for the nonlinear model under the assumption that the features are independent of each other. Although the proposed PCIR is easy to compute and keeps the balance between interpretability and accuracy, it is only suitable for environments in which the features are independent. The current section is therefore dedicated to the interpretability theory for dependent features. In a multivariate dependent feature space, it is challenging to compute the relation impact of a targeted feature on the output because of the influence of the other features. To address this issue, we propose a new metric called the Mutual Correlation Impact Ratio (MCIR). The state-of-the-art Conditional Mutual Information (CMI) [45] covers the situation where more than one feature depends on the same feature; in our case, however, every feature may depend on multiple features, so the existing CMI concept cannot be used directly. We therefore also propose a new concept called _Conditional Multivariate Mutual Information_ (CMMI) for the case where every feature depends on the other features. MCIR is calculated from CMMI under the assumption that the given features \((\underset{\sim}{f_{1}},\underset{\sim}{f_{2}},...,\underset{\sim}{f_{k}}\;\epsilon\;||F||^{k\times n^{\prime}})\) follow either a multivariate probability density function (pdf) or a multivariate probability mass function (pmf); features can be both discrete and continuous. Our proposed metric CMMI represents the mutual dependency between the targeted feature and the output variable, given that the targeted feature depends on the rest of the features in the multidimensional feature space. When two non-identical conditional probability distributions are taken, there is a divergence between their cross-entropy and their individual entropies [43]. In a multivariate environment, this divergence is called Joint Mutual Information (JMI). MCIR is calculated from JMI and CMMI, and it measures how changes in the targeted feature's input affect the output of the model while accounting for the uncertainty associated with the features' contributions. At first, for the sake of simplicity, we consider that \(f_{i}\) depends on \(f_{j}\) while being independent of the rest of the features, \(i=1(1)k\), \(j=1(1)k\), and \(i\neq j\). We then find the impact of \(f_{i}\) on \(\underset{\sim}{Y^{\prime}}\), given that \(\underset{\sim}{f_{i}}\) depends on \(f_{j}\). This impact can be explained by information theory if and only if we can compute \(I(\underset{\sim}{Y^{\prime}};\underset{\sim}{f_{i}}|f_{j})\), \(\forall i,j=1(1)k,\;i\neq j\). The previously described
MI cannot provide the desired result. To achieve our goal, we first have to calculate the Conditional Mutual Information [44][45].
The conditional mutual information between the output variable \(\underset{\sim}{Y^{\prime}}\) and the target feature \(\underset{\sim}{(f_{i}|f_{j})}\), \(\forall i,j=1(1)k;i\neq j\) is defined as [44]:
\[\begin{split}& I(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})=I(\underset{\sim}{Y^{\prime}};f_{i},f_{j})-I(\underset{\sim}{Y^{\prime}};f_{j})\\ &=\sum_{f_{j}=\text{f}}\sum_{f_{i}=\text{f}^{*}}\sum_{Y^{\prime}=y}P(\text{f}^{*},\text{f},y)\log_{2}\left[\frac{P(y,\text{f}^{*}|\text{f})}{P(y|\text{f})P(\text{f}^{*}|\text{f})}\right]\end{split} \tag{21}\]
If any of the features \(f_{i},f_{j}\), or \(Y^{\prime}\) is continuous, the summation operator can be replaced by the integral operator.
#### 4.2.1 MCIR; with two dependent features
When any two of the \(k\) features are dependent on each other and the other features are independent, the state-of-the-art CMI is sufficient to explain the mutual dependency of \(Y^{\prime}\) and \((f_{i}|f_{j})\). However, the value of CMI varies from \(0\) to \(\infty\), an open bound, which makes scaling it a major challenge. To scale it down to \([0,1]\), we derive MCIR as,
\[C(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})=\frac{I(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})}{I(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})+I(\underset{\sim}{Y^{\prime}},f_{i},f_{j})} \tag{22}\]
where \(0\leq I(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})\leq\infty\) and \(0\leq I(\underset{\sim}{Y^{\prime}},f_{i},f_{j})\leq\infty\). Then \(0\leq\frac{I(Y^{\prime};f_{i}|f_{j})}{I(Y^{\prime};f_{i}|f_{j})+I(Y^{\prime};f_{i},f_{j})}\leq 1\), so we can claim \(0\leq C(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})\leq 1\). \(C(\underset{\sim}{Y^{\prime}};f_{i}|f_{j})\) is the scaled CMI of the target feature \(\underset{\sim}{f_{i}}\) on the output variable \(\underset{\sim}{Y^{\prime}}\) when \(f_{i}\) depends on \(f_{j}\). It captures how changes in the inputs of the target feature \(f_{i}\) affect the output variable \(\underset{\sim}{Y^{\prime}}\), considering that the input of \(f_{j}\) may or may not change. \(I(\underset{\sim}{Y^{\prime}},f_{i},f_{j})\) is the Joint Mutual Information (JMI). To simplify the notation, we consider a simple environment where \(f_{1}\) and \(f_{2}\) are dependent on each other and \(f_{3},f_{4},....,f_{k}\) are independent features. We also assume that \((f_{1},f_{3},....,f_{m})\) are directly related to the output variable \(\underset{\sim}{Y^{\prime}}\) and \((\underset{\sim}{f_{2}},f_{p},....,f_{k})\) are conversely related to \(\underset{\sim}{Y^{\prime}}\), \(m,p\leq k\). Then, using equation 22 in equation (14), our proposed ExCIR model takes the form:
\[E(\underset{\sim}{Y^{\prime}})=M^{\prime}(f_{(f,C)})=\frac{C(\underset{\sim}{Y^{\prime}};f_{1}|f_{2})f_{1}+\eta_{f_{3}}f_{3}+\cdots+\eta_{f_{m}}f_{m}}{C(\underset{\sim}{Y^{\prime}};f_{2}|f_{1})f_{2}+\eta_{f_{p}}f_{p}+\cdots+\eta_{f_{k}}f_{k}} \tag{23}\]
where \(\eta_{f_{i}}\), \(i=3,...,m,....,p,...,k\), \(m,p\leq k\), is the correlation impact of the independent feature \(f_{i}\). \(C(\underset{\sim}{Y^{\prime}};f_{1}|f_{2})\) and \(C(\underset{\sim}{Y^{\prime}};f_{2}|f_{1})\) are the MCIRs of the dependent features \(f_{1}\) and \(f_{2}\), respectively. However, this theory only covers the case in which two of the features are dependent on each other. In real-life cases, a feature can depend on multiple other features, and one then has to consider a multivariate distribution with multiple cases of dependency. Keeping this in mind, we propose a new concept of CMMI in the next section.
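To make equations 21 and 22 concrete, the sketch below estimates the CMI \(I(Y^{\prime};f_{1}|f_{2})\), the JMI \(I(Y^{\prime};f_{1},f_{2})\), and the resulting MCIR from empirical counts of discrete toy variables; the data and the plug-in estimation are illustrative assumptions, not part of ExCIR's formal derivation.

```python
import numpy as np
from collections import Counter

def probs(columns):
    """Empirical joint probabilities of the tuples formed by `columns`."""
    c = Counter(zip(*columns))
    total = sum(c.values())
    return {k: v / total for k, v in c.items()}

def cmi(y, a, b):
    """I(Y; A | B) in bits, from empirical counts (eq. 21)."""
    p_yab, p_ab = probs([y, a, b]), probs([a, b])
    p_yb, p_b = probs([y, b]), probs([b])
    return sum(p * np.log2(p * p_b[(bi,)] / (p_yb[(yi, bi)] * p_ab[(ai, bi)]))
               for (yi, ai, bi), p in p_yab.items())

def jmi(y, a, b):
    """Joint mutual information I(Y; A, B) in bits."""
    p_yab, p_ab, p_y = probs([y, a, b]), probs([a, b]), probs([y])
    return sum(p * np.log2(p / (p_y[(yi,)] * p_ab[(ai, bi)]))
               for (yi, ai, bi), p in p_yab.items())

rng = np.random.default_rng(3)
f2 = rng.integers(0, 2, 1000)
f1 = (f2 + rng.integers(0, 2, 1000)) % 2        # f1 depends on f2
y = (f1 | f2).astype(int)                       # toy output variable

i_c = cmi(y, f1, f2)
mcir = i_c / (i_c + jmi(y, f1, f2))             # eq. 22
print("CMI:", round(i_c, 3), "MCIR:", round(mcir, 3))
```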
#### 4.2.2 Mutual Correlation Impact Ratio; when all features are dependent
In ExCIR, it is necessary to calculate the mutual impact of a feature on the output variable under the assumption that the target feature depends on the other features. Before we define MCIR for the multivariate case, we have to derive CMMI. In [45], the authors derived CMI for a multivariate environment in which the features depend on another variable, i.e., they considered the case where all variables depend on one common variable. In our work, we instead address the case where all the features depend on each other. So, if we want to calculate the mutual dependence between a targeted feature and the output variable, we have to calculate the CMMI given that the target feature depends on multiple features. More specifically, existing works [44][45][46] derive the notion of \(I(Y^{\prime}_{\sim};f_{1},f_{2},...,f_{k-1}|f_{k})\), which is used in many real-life cases but cannot be applied directly in our environment. We therefore introduce a new metric \(I(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\};i\neq j)\).
**Definition 4: CMMI:** _If an environment has a dataset with \(k\) dependent features \(f_{1},f_{2},...,f_{k}\;\epsilon\;||F||^{k\times n^{\prime}}\), \(f_{i}=(f_{i1},f_{i2},...,f_{in^{\prime}})\), \(i=1(1)k\), and output variable \(Y^{\prime}_{\sim}=(y^{\prime}_{1},y^{\prime}_{2},...,y^{\prime}_{n^{\prime}})\), then the CMMI between the output variable \(Y^{\prime}_{\sim}\) and any targeted feature \(f_{i}\) is:_
\[\begin{array}{l}I(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k \times n^{\prime}}-f_{i}\};i\neq j))\\ =\sum_{\phi}\int_{\psi}\int_{f_{i}}\sum_{Y^{\prime}}P(Y^{\prime},f_{1},..,f_{ i},..f_{k})\log_{2}[\frac{P(Y^{\prime}|\{f_{j}\}\subseteq||F||^{k\times n^{ \prime}})}{P(Y^{\prime}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\}; i\neq j)}]\end{array} \tag{24}\]
where \(\phi=f_{j}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\}_{d}\) is the set of discrete features and \(\psi=f_{j}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\}_{c}\) is the set of continuous features, neither of which includes \(f_{i}\). The derivation of equation 24 is based on the concept of MI [46] and can be found in the supplementary subsection 6.3. CMMI can take values between \(0\) and \(\infty\), unlike the strict bound of \([0,1]\) that the normal correlation coefficient has.
**Definition 5: Mutual Correlation Impact Ratio:** Suppose an environment with a dataset has \(k\) features \(f_{1},f_{2},...,f_{k}\;\epsilon\;||F||^{k\times n^{\prime}}\), \(f_{i}=(f_{i1},f_{i2},...,f_{in^{\prime}})\), \(i=1(1)k\), and the features are dependent on each other. The output variable is \(Y^{\prime}_{\sim}=(y^{\prime}_{1},y^{\prime}_{2},...,y^{\prime}_{n^{\prime}})\). Then, MCIR can be defined as:
\[C(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\};i\neq j)=\frac{I(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\};i\neq j)}{I(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\};i\neq j)+I(Y^{\prime}_{\sim},f_{1},f_{2},...,f_{j-1},f_{j},f_{j+1},..,f_{k})} \tag{25}\]
**Result 1:**\(C(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i }\};i\neq j))\) lies between \(0\) and \(1\). (See supplementary subsection 6.4 for the proof). Then the ExCIR model with \(k\) dependent features can be written as :
\[E(Y^{\prime}_{\sim})=M^{\prime}(f_{j(f,\mathfrak{C})})=\mathfrak{J}+\frac{\mathfrak{C}_{f_{1}}f_{1}+\cdots+\mathfrak{C}_{f_{m}}f_{m}}{\mathfrak{C}_{f_{p}}f_{p}+\cdots+\mathfrak{C}_{f_{k}}f_{k}} \tag{26}\]
where \(\mathfrak{C}_{f_{i}}=C(Y^{\prime}_{\sim};f_{i}|\{f_{j}\}\subseteq\{||F||^{k\times n^{\prime}}-f_{i}\};i\neq j)\) and \(\mathfrak{J}=C(Y^{\prime}_{\sim},f_{1},f_{2},...,f_{k})\).
## 5 Conclusion
ExCIR balances the tradeoff between explainability and accuracy irrespective of a large number of dependent or independent features. The approach also accounts for the uncertainty associated with the features and the output. We provide an upper bound on the time complexity when features are dependent; the bound depends only on the number of input observations, and since the whole approach uses a lightweight model with a sample dataset, the time complexity result is reliable.
## References
* [1] Buchanan, Bruce G., and Edward H. Shortliffe. Rule based expert systems: the mycin experiments of the stanford heuristic programming project (the Addison-Wesley series in artificial intelligence). Addison-Wesley Longman Publishing Co., Inc., 1984.
* [2] Wick, Michael R., and William B. Thompson. "Reconstructive expert system explanation." Artificial Intelligence 54.1-2 (1992): 33-70.
* [3] Guidotti, Riccardo, et al. "A survey of methods for explaining black box models." ACM computing surveys (CSUR) 51.5 (2018): 1-42.
* [4] Gunning, David. "Explainable artificial intelligence (xai)." Defense advanced research projects agency (DARPA), nd Web 2.2 (2017): 1.
* [5] Nunes, Ingrid, and Dietmar Jannach. "A systematic review and taxonomy of explanations in decision support and recommender systems." User Modeling and User-Adapted Interaction 27 (2017): 393-444.
* [6] Holzinger, Andreas. "Interactive machine learning for health informatics: when do we need the human-in-the-loop?." Brain Informatics 3.2 (2016): 119-131.
* [7] Roque, Antonio, and Suresh K. Damodaran. "Explainable AI for Security of Human-Interactive Robots." International Journal of Human-Computer Interaction 38.18-20 (2022): 1789-1807.
* [8] Zanzotto, Fabio Massimo. "Human-in-the-loop artificial intelligence." Journal of Artificial Intelligence Research 64 (2019): 243-252.
* [9] Holzinger, Andreas, et al. "Explainable AI methods-a brief overview." xxAI-Beyond Explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers. Cham: Springer International Publishing, 2022.
* [10] Dwivedi, Rudresh, et al. "Explainable AI (XAI): Core ideas, techniques, and solutions." ACM Computing Surveys 55.9 (2023): 1-33.
* [11] Hind, Michael. "Explaining explainable AI." XRDS: Crossroads, The ACM Magazine for Students 25.3 (2019): 16-19.
* [12] Atakishiyev, Shahin, et al. "Explainable artificial intelligence for autonomous driving: a comprehensive overview and field guide for future research directions." arXiv preprint arXiv:2112.11561 (2021).
* [13] Ohana, Jean Jacques, et al. "Explainable AI (XAI) models applied to the multi-agent environment of financial markets." Explainable and Transparent AI and Multi-Agent Systems: Third International Workshop, EXTRAAMAS 2021, Virtual Event, May 3-7, 2021, Revised Selected Papers 3. Springer International Publishing, 2021.
* [14] Tjoa, Erico, and Cuntai Guan. "A survey on explainable artificial intelligence (xai): Toward medical xai." IEEE transactions on neural networks and learning systems 32.11 (2020): 4793-4813.
* [15] Machlev, R., et al. "Explainable Artificial Intelligence (XAI) techniques for energy and power systems: Review, challenges and opportunities." Energy and AI (2022): 100169.
* [16] Ji, Yingchao. "Explainable AI methods for credit card fraud detection: Evaluation of LIME and SHAP through a User Study." (2021).
* [17] Dieber, Jurgen, and Sabrina Kirrane. "Why model why? Assessing the strengths and limitations of LIME." arXiv preprint arXiv:2012.00093 (2020).
* [18] Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in neural information processing systems 30 (2017).
* [19] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "" Why should i trust you?" Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.
* [20] Visani, Giorgio, Enrico Bagli, and Federico Chesani. "OptiLIME: Optimized LIME explanations for diagnostic computer algorithms." arXiv preprint arXiv:2006.05714 (2020).
* [21] Winter, Eyal. "The shapley value." Handbook of game theory with economic applications 3 (2002): 2025-2054.
* [22] Goldstein, Alex, et al. "Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation." journal of Computational and Graphical Statistics 24.1 (2015): 44-65.
* [23] Molnar, Christoph. Interpretable machine learning. Lulu. com, 2020.
[24] Galkin, Fedor, et al. "Human microbiome aging clocks based on deep learning and tandem of permutation feature importance and accumulated local effects." BioRxiv (2018): 507780.
[25] Mehdiyev, Nijat, and Peter Fettke. "Prescriptive process analytics with deep learning and explainable artificial intelligence." (2020).
[26] Ryo, Masahiro, et al. "Explainable artificial intelligence enhances the ecological interpretability of black-box species distribution models." Ecography 44.2 (2021): 199-205.
[27] Khoda Bakhshi, Arash, and Mohamed M. Ahmed. "Utilizing black-box visualization tools to interpret non-parametric real-time risk assessment models." Transportmetrica A: Transport Science 17.4 (2021): 739-765.
[28] Coroama, Loredana, and Adrian Groza. "Explainable Artificial Intelligence for Person Identification." 2021 IEEE 17th International Conference on Intelligent Computer Communication and Processing (ICCP). IEEE, 2021.
[29] Gunning, David, et al. "XAI--Explainable artificial intelligence." Science robotics 4.37 (2019): eaay7120.
[30] Molnar, Christoph, Giuseppe Casalicchio, and Bernd Bisch. "Interpretable machine learning-a brief history, state-of-the-art and challenges." ECML PKDD 2020 Workshops: Workshops of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2020): SoGood 2020, PDFL 2020, MLCS 2020, NFMCP 2020, DINA 2020, EDML 2020, XKDD 2020 and INRA 2020, Ghent, Belgium, September 14-18, 2020, Proceedings. Cham: Springer International Publishing, 2021.
[31] Lin, Jianhua. "Divergence measures based on the Shannon entropy." IEEE Transactions on Information theory 37.1 (1991): 145-151.
[32] Klir, George J. "Uncertainty and information: foundations of generalized information theory." Kybernetes 35.7/8 (2006): 1297-1299.
[33] Klir, George, and Mark Wierman. Uncertainty-based information: elements of generalized information theory. Vol. 15. Springer Science & Business Media, 1999.
[34] Altmann, Andre, et al. "Permutation importance: a corrected feature importance measure." Bioinformatics 26.10 (2010): 1340-1347.
[35] Gray, Robert M. Entropy and information theory. Springer Science & Business Media, 2011.
[36] Bromiley, P. A., N. A. Thacker, and E. Bouhova-Thacker. "Shannon entropy, Renyi entropy, and information." Statistics and Inf. Series (2004-004) 9 (2004): 2-8.
[37] Cai, Yuhang, and Lek-Heng Lim. "Distances between probability distributions of different dimensions." IEEE Transactions on Information Theory 68.6 (2022): 4020-4031.
[38] Danielsson, Per-Erik. "Euclidean distance mapping." Computer Graphics and image processing 14.3 (1980): 227-248.
[39] Adadi, Amina, and Mohammed Berrada. "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)." IEEE access 6 (2018): 52138-52160.
[40] Renyi, Alfred. "On measures of entropy and information." Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics. Vol. 4. University of California Press, 1961.
[41] Joyce, James M. "Kullback-leibler divergence." International encyclopedia of statistical science. Springer, Berlin, Heidelberg, 2011. 720-722.
[42] Menendez, M. L., et al. "The jensen-shannon divergence." Journal of the Franklin Institute 334.2 (1997): 307-318.
[43] Baram, Yoram, Ran El Yaniv, and Kobi Luz. "Online choice of active learning algorithms." Journal of Machine Learning Research 5.Mar (2004): 255-291.
[44] Batina, Lejla, et al. "Mutual information analysis: a comprehensive study." Journal of Cryptology 24.2 (2011): 269-291.
[45] Linge, Yanis, Cecile Dumas, and Sophie Lambert-Lacroix. "Maximal Information Coefficient Analysis." Cryptology ePrint Archive (2014).
[46] Gu, Xiangyuan, et al. "Conditional mutual information-based feature selection algorithm for maximal relevance minimal redundancy." Applied Intelligence 52.2 (2022): 1436-1447.
[47] Li, Ming, and Paul Vitanyi. An introduction to Kolmogorov complexity and its applications. Vol. 3. New York: Springer, 2008.
[48] Hammer, Daniel, et al. "Inequalities for Shannon entropy and Kolmogorov complexity." Journal of Computer and System Sciences 60.2 (2000): 442-464.
[49] Uspensky, Vladimir A. "Complexity and entropy: an introduction to the theory of Kolmogorov complexity." Kolmogorov complexity and computational complexity (1992): 85-102.
## 6 Supplementary
|
2304.12918 | N2G: A Scalable Approach for Quantifying Interpretable Neuron
Representations in Large Language Models | Understanding the function of individual neurons within language models is
essential for mechanistic interpretability research. We propose $\textbf{Neuron
to Graph (N2G)}$, a tool which takes a neuron and its dataset examples, and
automatically distills the neuron's behaviour on those examples to an
interpretable graph. This presents a less labour intensive approach to
interpreting neurons than current manual methods, that will better scale these
methods to Large Language Models (LLMs). We use truncation and saliency methods
to only present the important tokens, and augment the dataset examples with
more diverse samples to better capture the extent of neuron behaviour. These
graphs can be visualised to aid manual interpretation by researchers, but can
also output token activations on text to compare to the neuron's ground truth
activations for automatic validation. N2G represents a step towards scalable
interpretability methods by allowing us to convert neurons in an LLM to
interpretable representations of measurable quality. | Alex Foote, Neel Nanda, Esben Kran, Ionnis Konstas, Fazl Barez | 2023-04-22T19:06:13Z | http://arxiv.org/abs/2304.12918v1 | N2G: A scalable approach for quantifying interpretable neuron representations in Large Language Models
###### Abstract
Understanding the function of individual neurons within language models is essential for mechanistic interpretability research. We propose **Neuron to Graph (N2G)**, a tool which takes a neuron and its dataset examples, and automatically distills the neuron's behaviour on those examples to an interpretable graph. This presents a less labour intensive approach to interpreting neurons than current manual methods, that will better scale these methods to Large Language Models (LLMs). We use truncation and saliency methods to only present the important tokens, and augment the dataset examples with more diverse samples to better capture the extent of neuron behaviour. These graphs can be visualised to aid manual interpretation by researchers, but can also output token activations on text to compare to the neuron's ground truth activations for automatic validation. N2G represents a step towards scalable interpretability methods by allowing us to convert neurons in an LLM to interpretable representations of measurable quality.
## 1 Introduction
Interpretability of machine learning models is an active research topic (Hendrycks et al., 2021; Amodei et al., 2016) and can have a wide range of applications from bias detection (Vig et al., 2020) to autonomous vehicles (Barez et al., 2022) and Large Language Models (LLMs; Elhage et al. (2022)). The growing subfield of mechanistic interpretability aims to understand the behaviour of individual neurons within models as well as how they combine into larger circuits of neurons that perform a particular function (Olah et al., 2020; Olah, 2022; Goh et al., 2021), with the ultimate aim of decomposing a model into interpretable components and using this to ensure model safety.
Interpretability tools for understanding neurons in LLMs are lacking. Currently, researchers often look at dataset examples containing tokens on which a neuron strongly activates and investigate common elements and themes across examples to give some insight into neuron behaviour (Elhage et al., 2022; Geva et al., 2020). However, this can give the illusion of interpretability when real behaviour is more complex (Bolukbasi et al., 2021), and measuring the degree to which these insights are correct is challenging. Additionally, inspecting individual neurons by hand is time-consuming and unlikely to scale to entire models.
To overcome these challenges, we present **Neuron to Graph (N2G)**, which automatically converts a target neuron within an LLM to an interpretable graph that visualises the contexts in which the neuron activates. The graph can be visualised to facilitate understanding the neuron's behaviour, as well as used to process text and produce predicted token activations. This allows us to measure the correspondence between the target neuron's activations and the graph's activations, which provides a direct measurement of the degree to which a graph captures the neuron's behaviour.
Our method takes maximally activating dataset examples for a target neuron, prunes them to remove irrelevant context, identifies the tokens which are important for neuron activation, and creates additional examples by replacing the important tokens with other likely substitutes using BERT (Devlin et al., 2018). These processed examples are then given as input to the graph builder, which removes
unimportant tokens and creates a condensed representation in the form of a trie. This trie can then be used to process text and will predict activations for each token, and can be converted to a graph for visualisation.
## 2 Related Work
Prior work in neuron analysis has identified the presence of neurons correlated with specific concepts (Radford et al., 2017). For instance, Dalvi et al. (2019) explored neurons which specialised in linguistic and non-linguistic concepts in large language models, and Seyffarth et al. (2021) evaluated neurons which handle concepts such as causation in language. The existence of similar concepts embedded within models can also be found across different architectures. Wu et al. (2020) and Schubert et al. (2021) examined neuron distributions across models and found that different architectures have similar localised representations of information. Durrani et al. (2020) used a combination of neuron analysis and visualization techniques to compare transformer and recurrent models, finding that the transformer produces fewer neurons but exhibits stronger dynamics.
There are various methods of identifying concept neurons (Geva et al., 2020). Bau et al. (2018) proposed a method of identifying important neurons across models by analyzing correlations between neurons from different models. In contrast, Dai et al. (2021) developed a method to identify concept neurons in transformer feed-forward networks by computing the contribution of each neuron to the knowledge prediction. Our work instead focuses on identifying neurons using highly activating dataset examples. Mu and Andreas (2020) demonstrated how the co-variance of neuron activations on a dataset can be used to distinguish neurons that are related to a particular concept. Torroba Hennigen et al. (2020) also used neuron activations to train a probe which automatically evaluates language models for neurons correlated to linguistic concepts.
One limitation of using highly activating dataset examples is that the accurate identification of concepts correlated with a neuron is limited by the dataset itself. A neuron may represent several concepts, and Bolukbasi et al. (2021) emphasise the importance of conducting interpretability research on varied datasets, in order to avoid the "interpretability illusion", in which neurons that show consistent patterns of activation in one dataset activate on different concepts in another. Poerner et al. (2018) also showed the limitations of datasets in concept neuron identification. They demonstrated that generating synthetic language inputs that maximise the activations of a neuron surpasses naive search on a corpus.
Figure 1: **Overall architecture of N2G. Activations of the target neuron on the dataset examples are retrieved (neuron and activating tokens in red). Prompts are pruned and the importance of each token for neuron activation is measured (important tokens in blue). Pruned prompts are augmented by replacing important tokens with high-probability substitutes using BERT. The augmented set of prompts are converted to a graph. The output graph is a real example which activates on the token “except” when preceded by any of the other tokens.**
Figure 2: An example of a graph built from Neuron 2 of Layer 1 of the model.
## 3 Methodology
N2G constructs an interpretable graph representing the context required for a given neuron to activate on a particular token. The graph can be used to process text and predict whether the target neuron will fire (strongly activate) for each token in the input. N2G takes as input a model, the layer and neuron indices of the target neuron, and a set of prompts which contain one or more tokens for which the target neuron strongly activates. Figure 1 illustrates the overall process as well as an example of a real graph. Figure 2 shows another example and Appendix A.1 contains further examples with interesting behaviours.
**Prune**: Given a prompt, we process it with the model and retrieve the token activations of the target neuron using the TransformerLens library (Nanda, 2022), which allows easy access to internal neuron activations. The prune function takes the prompt and activations and finds the token with the highest activation (the key token). It removes all sentences after the key token (we study autoregressive models so these cannot affect neuron activation) and removes all tokens before the key token. It then measures the activation of the neuron on the key token in the truncated prompt to determine the change in activation. If this activation has decreased by more than a user-defined percentage (we choose \(50\%\)), then the prior token is added to the truncated prompt. This process is then repeated until the neuron activation on the key token is sufficient to pass the condition.
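A schematic sketch of this pruning loop is shown below; `get_activations` is a placeholder for a call that returns the target neuron's per-token activations (e.g., via TransformerLens hooks) and is not a real API, and the 50% retention threshold mirrors the choice described above.

```python
def prune(tokens, get_activations, threshold=0.5):
    """Shrink a prompt to the minimal left context that keeps the key-token
    activation above `threshold` * its original value (schematic sketch)."""
    acts = get_activations(tokens)
    key = max(range(len(tokens)), key=lambda i: acts[i])   # key token index
    original = acts[key]

    # Drop everything after the key token (autoregressive model: later tokens
    # cannot affect it), then grow the left context one token at a time.
    start = key
    while start > 0:
        truncated = tokens[start:key + 1]
        if get_activations(truncated)[-1] >= threshold * original:
            return truncated
        start -= 1                                           # add prior token
    return tokens[:key + 1]
```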
**Saliency**: The importance of each token for neuron activation on every other token is then computed to create a matrix of token importance. The importance \(I_{k}\) of the \(k^{th}\) token relative to the \(j^{th}\) token is calculated as \(I_{k}=1-(a_{j,\textit{masked}}/a_{j})\), where \(a_{j}\) is the activation of the neuron on the \(j^{th}\) token and \(a_{j,\textit{masked}}\) is the activation of the neuron on the \(j^{th}\) token when token \(k\) is masked with a special padding token. This method is similar to other perturbation-based saliency methods in Computer Vision (Dabkowski & Gal, 2017) and NLP (Liu et al., 2018).
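The importance matrix can be computed with the same placeholder activation function, as in the following sketch; the masking with a padding token and the formula \(I_{k}=1-(a_{j,\textit{masked}}/a_{j})\) follow the description above, while the function and argument names are assumptions.

```python
import numpy as np

def token_importance(tokens, get_activations, pad_token="<pad>"):
    """Perturbation-based importance: importance[k, j] = 1 - a_j,masked / a_j.
    `get_activations` is a stand-in for the target neuron's per-token
    activations, not a real library call."""
    base = np.asarray(get_activations(tokens), dtype=float)
    n = len(tokens)
    importance = np.zeros((n, n))
    for k in range(n):
        masked = list(tokens)
        masked[k] = pad_token                       # mask the k-th token
        masked_acts = np.asarray(get_activations(masked), dtype=float)
        with np.errstate(divide="ignore", invalid="ignore"):
            importance[k] = 1.0 - np.where(base != 0, masked_acts / base, 1.0)
    return importance
```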
**Augment**: The pruned prompt is then used to generate more varied inputs to better explore the neuron's behaviour. Each token that is important for activation on the key token is masked in turn, and BERT (Devlin et al., 2018) predicts the top \(n\) substitutions for the masked token. A new prompt is created for each substitute token, provided they cross a probability threshold. This technique is very similar to existing methods of data augmentation used during training (Ma, 2019).
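A possible implementation of this augmentation step with the Hugging Face `fill-mask` pipeline is sketched below; the model name, probability threshold, and whitespace handling are illustrative choices rather than the paper's exact configuration.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def augment(prompt_tokens, important_idx, top_n=5, min_prob=0.1):
    """Create new prompts by replacing each important token with BERT's
    likely substitutes (sketch; thresholds are illustrative)."""
    new_prompts = []
    for i in important_idx:
        masked = list(prompt_tokens)
        masked[i] = fill.tokenizer.mask_token          # "[MASK]" for BERT
        for cand in fill(" ".join(masked), top_k=top_n):
            if cand["score"] >= min_prob:
                variant = list(prompt_tokens)
                variant[i] = cand["token_str"].strip()
                new_prompts.append(variant)
    return new_prompts
```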
**Graph Building**: The pruned prompts and the augmented prompts are the input to the graph-building stage, along with normalised token activations and the importance matrix for each prompt. The normalised activation \(a_{N}\) of the \(i^{th}\) token is calculated as \(a_{N}=a_{i}/a_{\textit{max}}\), where \(a_{\textit{max}}\) is the maximum activation of the neuron on any token in the training dataset.
Each neuron graph is implemented as a trie, with each node representing a token. The first layer of nodes contains tokens on which the neuron strongly activates, and each sub-trie of one of these activating nodes represents the contexts for which the neuron will activate on that token.
For every token in the prompt with a normalised activation above a threshold, we create a top layer node in the trie. Starting at the given activating token, we work backwards through the preceding tokens, adding them to the trie if they have an importance for the activating token above a threshold or adding them as a special ignore node if the importance is below the threshold. When processing text, ignore nodes are allowed to match to any token. Experimentally, we found that a normalised activation threshold of \(0.5\) and an importance threshold of \(0.75\) worked well. The final important token is marked as a termination node, which represents a valid stopping point when processing text. It records the normalised activation of the activating node for this path in the trie. We repeat this process for all activating tokens in all the input prompts.
**Text Processing**: The resulting trie can be used to process text by beginning at the root of the trie and working backwards through the text prompt, checking if any consecutive sequence of prior tokens matches any path through the trie and reaches a termination node. We collate all valid matching paths and return the stored normalised activation on the longest matching path.
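The sketch below condenses the graph-building and text-processing steps into a minimal trie of nested dictionaries; it is a simplification (greedy matching, no separate handling of multiple activating tokens per prompt) of the structure described above.

```python
IGNORE, END = "<ignore>", "<end>"   # special node markers (sketch only)

def add_path(trie, context_tokens, activation):
    """Insert one path: the activating token followed by its important
    context tokens, read from the activating token backwards."""
    node = trie
    for tok in context_tokens:
        node = node.setdefault(tok, {})
    node[END] = activation          # termination node stores the activation

def match(trie, tokens, pos):
    """Predicted activation for tokens[pos]: the activation stored on the
    longest backwards path that terminates in the trie, else 0.0."""
    act, node = 0.0, trie
    for tok in reversed(tokens[: pos + 1]):
        node = node.get(tok, node.get(IGNORE))
        if node is None:
            break
        act = node.get(END, act)    # deeper termination = longer match
    return act

trie = {}
add_path(trie, ["except", "anything", "do"], 0.9)    # "... do anything except"
print(match(trie, ["do", "anything", "except"], 2))  # 0.9
```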
**Visualisation**: To visualize the trie, we create a condensed graph representation. We remove ignore nodes and termination nodes and create a layered graph by de-duplicating nodes by token value at each depth in the trie. We then color the activating nodes in the graph according to their normalised activation, with stronger activations corresponding to brighter red. Similarly, we color the rest of the
token nodes in blue according to their importance. Additionally, we indicate nodes connected to a termination node in the full trie with a bold outline.
## 4 Results and Discussion
As the neuron graphs built by the algorithm can be directly used to process text and predict token activations, we can evaluate the degree to which they accurately capture the target neuron's behaviour by measuring the correspondence between the activations of the neuron and the predicted activations of the graph on some evaluation text. In our experiments we use a six-layer decoder-only Transformer model with SoLU activation, which may improve model interpretability by reducing polysemanticity (Elhage et al., 2022). The model is trained on the Pile (Gao et al., 2020), and we use data from Neuroscope (Nanda, 2022), which provides token-level activations for the top \(20\) prompts in the model's training set with the highest neuron activation on any token within the prompt, for all neurons in the model.
For each neuron, we take these top \(20\) dataset examples and randomly split them in half to form a train and test set, and give the training examples to N2G to create a neuron graph. We then take the test examples and normalise the token activations as described above. We apply a threshold to the token activations, defining an activation above the threshold as a _firing_ of the neuron, and an activation below the threshold as the neuron _not firing_. In these experiments we set the threshold to \(0.5\). We then process the test prompts with the neuron graph to produce predicted token firings. We can then measure the precision, recall, and \(F1\) score of the graph's predictions compared to the ground truth firings.
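A minimal sketch of this token-level evaluation, assuming both the neuron's and the graph's activations have already been normalised to \([0,1]\), is given below using scikit-learn.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def evaluate(neuron_acts_norm, graph_acts_norm, threshold=0.5):
    """Token-level firing comparison between the real neuron and the graph.
    Inputs are normalised activations; a firing is any value >= threshold."""
    truth = np.asarray(neuron_acts_norm) >= threshold
    pred = np.asarray(graph_acts_norm) >= threshold
    p, r, f1, _ = precision_recall_fscore_support(
        truth, pred, average="binary", zero_division=0)
    return p, r, f1

# Toy usage: the graph misses one firing token and adds one false positive.
print(evaluate([0.9, 0.1, 0.7, 0.0], [0.8, 0.6, 0.2, 0.0]))
```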
Table 1 shows the average precision, recall, and \(F1\) score of the neuron graphs for a random sample of \(50\) neurons from each layer of the model, stratified by neuron firing. In layer 0, the graphs on average capture the behaviour of the neurons well, with high recall and good precision on the tokens for which the real neuron fires, whilst maintaining near-perfect recall on the much larger number of tokens for which the neuron does not fire. Note that predicting token-level firings is in general a very imbalanced problem, as neurons typically fire on a small proportion of tokens in the input prompts.
However, as we progress to deeper layers of the model, the recall and precision of the graphs generally decrease. This corresponds to neurons in the later layers on average exhibiting more complex behaviour that is less completely captured in the training examples. Specifically, neurons in early layers tend to respond to a small number of specific tokens in specific, narrow contexts, whereas later layers often respond to more abstract concepts represented by a wider array of tokens in many different contexts, as was similarly observed by Elhage et al. (2022). Precision also drops as the graphs may over-generalise and fail to capture the nuances of the context which caused a neuron to activate on a given token.
For example, Figure 3 shows a comparison between a graph from Layer 0 and Layer 3. The graph from Layer 0 is typical for that layer - a small number of activating nodes that activate in simple contexts, often requiring just one of a small set of possible prior tokens to be present, and sometimes
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Firing Tokens**} & \multicolumn{3}{c}{**Non-Firing Tokens**} \\
**Layer** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** \\ \hline
0 & 0.74 & 0.85 & 0.74 & 1.0 & 0.99 & 1.0 \\
1 & 0.66 & 0.77 & 0.64 & 1.0 & 1.0 & 1.0 \\
2 & 0.60 & 0.77 & 0.6 & 1.0 & 1.0 & 1.0 \\
3 & 0.48 & 0.70 & 0.48 & 1.0 & 0.99 & 1.0 \\
4 & 0.44 & 0.72 & 0.46 & 1.0 & 0.99 & 1.0 \\
5 & 0.45 & 0.67 & 0.42 & 1.0 & 0.99 & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Precision, recall and \(F1\)-score of the neuron graphs’ token-level predictions of neuron firing compared to ground truth on held-out test data, for 50 random neurons from each layer of the model. Tokens on which the real neuron fired and tokens on which it didn’t fire are evaluated separately as there are generally many more tokens on which a neuron didn’t fire, making it trivially easy to get near-perfect scores by always predicting the neuron will not fire.
requiring no additional context at all. In contrast, the graph from Layer 3 exhibits a more complex structure, with longer and more intricate context that captures the more abstract concept of software licensing required for activation on the activating nodes.
## 5 Conclusions and Limitations
We introduced N2G, a method for automatically converting neurons in LLMs into interpretable graphs which can be visualized. The degree to which a graph captures the behaviour of a target neuron can be directly measured by comparing the output of the graph to the activations of the neuron, making this method a step towards scalable interpretability methods for LLMs. We find the neuron graphs capture neuron behaviour well for early layers of the model, but only partially capture the behaviour for later layers due to increasingly complex neuron behaviour, and this problem would likely become more prominent in larger models. Our approach primarily used SoLU (Elhage et al., 2022a) models to reduce polysemanticity. Although also applicable to models with typical activation functions, the resulting graphs may need to be more comprehensive due to more complex neuron behaviours. Our study focused on predicting neuron behaviour on the text that most activates it, excluding weaker activations. Future work could address these limitations by utilizing more training examples, better exploring the input space, and generalizing from exact token matches to matching abstract concepts, for example by using embeddings.
|
2308.12415 | Benchmarking Causal Study to Interpret Large Language Models for Source
Code | One of the most common solutions adopted by software researchers to address
code generation is by training Large Language Models (LLMs) on massive amounts
of source code. Although a number of studies have shown that LLMs have been
effectively evaluated on popular accuracy metrics (e.g., BLEU, CodeBleu),
previous research has largely overlooked the role of Causal Inference as a
fundamental component of the interpretability of LLMs' performance. Existing
benchmarks and datasets are meant to highlight the difference between the
expected and the generated outcome, but do not take into account confounding
variables (e.g., lines of code, prompt size) that equally influence the
accuracy metrics. The fact remains that, when dealing with generative software
tasks by LLMs, no benchmark is available to tell researchers how to quantify
neither the causal effect of SE-based treatments nor the correlation of
confounders to the model's performance. In an effort to bring statistical rigor
to the evaluation of LLMs, this paper introduces a benchmarking strategy named
Galeras comprised of curated testbeds for three SE tasks (i.e., code
completion, code summarization, and commit generation) to help aid the
interpretation of LLMs' performance. We illustrate the insights of our
benchmarking strategy by conducting a case study on the performance of ChatGPT
under distinct prompt engineering methods. The results of the case study
demonstrate the positive causal influence of prompt semantics on ChatGPT's
generative performance by an average treatment effect of $\approx 3\%$.
Moreover, it was found that confounders such as prompt size are highly
correlated with accuracy metrics ($\approx 0.412\%$). The end result of our
case study is to showcase causal inference evaluations, in practice, to reduce
confounding bias. By reducing the bias, we offer an interpretable solution for
the accuracy metric under analysis. | Daniel Rodriguez-Cardenas, David N. Palacio, Dipin Khati, Henry Burke, Denys Poshyvanyk | 2023-08-23T20:32:12Z | http://arxiv.org/abs/2308.12415v1 | # Benchmarking Causal Study to Interpret Large Language Models for Source Code
###### Abstract
One of the most common solutions adopted by software researchers to address code generation is by training Large Language Models (LLMs) on massive amounts of source code. LLMs are rooted in the concept of emergent capabilities in which machines statistically learn complex patterns from code data. Although a number of studies have shown that LLMs have been effectively evaluated on popular accuracy metrics (_e.g._, BLEU, CodeHeu), previous research has largely overlooked the role of Causal Inference as a fundamental component of the interpretability of LLMs' performance. Existing benchmarks and datasets are meant to highlight the difference between the expected and the generated outcome, but do not take into account confounding variables (_e.g._, lines of code, number of tokens, prompt size) that equally influence the accuracy metrics. The fact remains that, when dealing with generative software tasks by LLMs, no benchmark is available to tell researchers how to quantify neither the causal effect of SE-based treatments nor the correlation of confounders to the model's performance. In an effort to bring statistical rigor to the evaluation of LLMs, this paper introduces a benchmarking strategy named _Galeras_ comprised of curated testbeds for three SE tasks (_i.e._, code completion, code summarization, and commit generation) to help aid the interpretation of LLMs' performance.
We illustrate the insights of our benchmarking strategy by conducting a case study on the performance of ChatGPT under distinct prompt engineering methods. The results of the case study demonstrate the positive causal influence of prompt semantics on ChatGPT's generative performance by an _average treatment effect_ of \(\approx 3\%\). Moreover, it was found that confounders such as prompt size are highly correlated with accuracy metrics (\(\approx 0.412\)). The end result of our case study is to showcase causal inference evaluations, _in practice_, to reduce _confounding bias_. By reducing the bias, we offer an interpretable solution for the accuracy metric under analysis.
Software Engineering, Testbeds, Large Language Models, dl4se, Interpretability
## I Introduction
Deep Learning for Software Engineering (_DL4SE_) is an emerging research area in the field of software maintainability that entails a paradigm shift in the form by which machines statistically learn complex patterns from code data. To support actionable downstream SE tasks (_e.g._, code completion, code summarization, or commit generation), ample evidence supports that _DL4SE_ approaches in the form of Language Models are able to generate code conditioned on a well-defined prompt [1, 2, 3]. While essential, _DL4SE_ approaches have been reduced to a group of large and self-supervised neural architectures (_i.e.,_ Large Language Models or simply LLMs) comprised of multiple self-attention layers that perform linear transformations to extract salient features from programming and natural language data. In particular, Large Language Models for Code (LLMc) have led to a renewed interest in the automation of software engineering tasks. Most of this automation is a generative process in which underlying code and natural language features interact with each other to auto-complete [4, 5, 6, 7, 8, 9], summarize [10, 11, 12], review [13, 14, 15, 16], trace [17] and translate code [18]; generate test cases [19, 20, 21], detect code clones [22, 23] or fix bugs [24, 25, 26, 27, 28, 29, 30, 31]. In fact, LLMc have been deployed in large-scale solutions to provide code generative services. Tools such as ChatGPT and GitHub Copilot, which are based on the _gpt_ architecture, exhibit good performance at the aforementioned tasks [2].
Therefore, an increased interest has emerged in further evaluating these LLMc [32, 33, 34, 35] to standardize the quality assessment of the generated code. Unfortunately, the current evaluation process overly-relies on accuracy metrics leaving no consensus as to what other features or properties are impacting the code generation process. In other words, we require to control for factors that influence the performance of LLMc if our goal is to _interpret_ models' output. Few studies have sought to examine accuracy metrics from a causal perspective to interpret LLMc [36]. Ergo, the problem remains that, when attempting to understand the prediction performance of LLMc, no benchmarks are available to articulate causal queries.
Previous research has largely overlooked the role of causal inference in evaluating LLMc. In fact, existing benchmarks are not without flaws to detect _confounding bias_, which refers to the statistical ability to control for variables that can influence models' performance beyond the SE treatments under study (_i.e.,_ evaluating the best prompting method). That is, we study causation because we need to understand not only _what_ but also _why_ LLMc arrive at performance decisions. To overcome these challenges, we pose a code-based benchmarking strategy, named _Galeras_, to interpret LLMc concentrated on answering causal queries of interest. _Galeras_ enables SE researchers to explain LLMc performance decisions from a curated set of code-based confounders, which are associated with a given SE treatment under study. _Galeras_ is comprised of three parts: 1) seven testbeds for evaluating distinct SE downstream tasks free of sampling bias and data snooping, 2) a set of confounders to compute causal effects, and 3) a pipeline to curate data from open repositories.
To illustrate how to exploit _Galeras_ to interpret LLMc, we conducted a causal study to quantify the impact of confounding variables on ChatGPT's prediction performance to assess whether certain types of _prompt engineering_ methods are excelling at automating code completion tasks. Prompt engineering is associated with the emergent ability of LLMs to learn from prompts (_i.e.,_ in-context learning). This ability comprises a set of techniques that manipulates the structure of a LLM's input sequence to attain better and less computationally expensive outputs than applying other downstream methods such as fine-tuning [33]. We organize our study around two RQs that are fundamentally centered on the problem of _prompt engineering_ for code:
**RQ\({}_{1}\) Exploratory Analysis:**_How different is the distribution of tokens between the generated and ground-truth code?_
**RQ\({}_{2}\) Causal Analysis:**_To what extent the type of Prompt Engineering is influencing the code completion performance?_
The achieved results show that prompt engineering methods indeed causally impact the accuracy of the model by an _Average Treatment Effect_ (ATE) of 3% between the semantics of the prompt and the accuracy metric. Hence, choosing an adequate prompting strategy can positively influence the code completion performance of ChatGPT. To summarize, our key contributions are: 1) A filtered testbed with non-contaminated code snippets for LLMc benchmarking; 2) a set of (confounding) features (_e.g.,_ Cyclomatic Complexity, # of AST levels) included in the testbed; 3) a pipeline to generate new testbeds for a given SE task; and 4) a causal inference benchmarking strategy to interpret LLMc.
## II Related Work
Considerable research attention has been devoted to data collection and benchmarking for LLMc. Tab. I showcases eight qualitative properties that we use to compare three state-of-the-art benchmarks (_i.e.,_ _CodeXGLUE_, _IdBench_, and _MultiPL-E_) with _Galeras_. Firstly, Husain _et al._ introduced _CodeSearchNet_ for code retrieval automation [37]. Their datasets have been mostly employed to pre-train LLMs rather than to benchmark software tasks. Later, researchers at Microsoft extended _CodeSearchNet_ and amalgamated 12 SE-related datasets for other relevant downstream tasks (_e.g.,_ clone detection, refinement, translation) [38]. These datasets and benchmarks are known as _CodeXGLUE_, which partially supports some accuracy and distance metrics. Secondly, Wainakh _et al._ proposed _IdBench_ to evaluate generated identifiers by measuring similarity distances of semantic representations [39]. Finally, Chen _et al._ notably proposed _HumanEval_ to validate the functional correctness of generated code [35]. Cassano _et al._ extended _HumanEval_ to create _MultiPL-E_ for code translation [40]. Although these three benchmarks have been successfully employed for evaluating LLMc, these benchmarking strategies were not conceived to address the _interpretation_ of models' outputs.
As LLMc are quickly evolving due to data and hyperparameter augmentation, current models (_e.g.,_ ChatGPT, AlphaCode, Copilot) could have been trained on samples already used for evaluation (_a.k.a._ data snooping), and datasets such as _BigQuery_[41], _BigPython_[42], and the _Pile_[43] have overlooked the importance of interpreting LLMc's performance. _Galeras_, however, offers curated testbeds for enabling prompt engineering evaluation. This evaluation includes an interpretability analysis based on _causal inference_ in the form of Structural Causal Models (SCM). What is more, _Galeras_ provides a pipeline to collect and access confounders and treatment data. Such data is plugged into the SCM to estimate the causal effects between treatments and outcomes. Estimating these causal effects promotes statistical rigor in evaluating SE-based generative tasks.
## III Testbed Curation Pipeline
This section describes our proposed pipeline for structuring and collecting the testbeds required for the comparative causal evaluation of LLMc. _Galeras_ is a benchmarking strategy that entails a software architecture solution for the curation process.
\begin{table}
\end{table} TABLE I: SOTA benchmark qualitative properties comparison of _CodeXGLUE_, _IdBench_, and _MultiPL-E_ against _Galeras_ (supported SE tasks, input/output types, size, metrics, prompting, and causal-inference support). Shaded cells indicate properties supported by _Galeras_ only. (Tabular contents were garbled during extraction and are omitted here.)
Fig. 1: Testbed Curation Pipeline of _Galeras_
### _Structuring Testbed's Features_
_Galeras_ testbeds are sets of Python methods that serve as evaluative data points. Each data point comprises four dimensions. The first dimension corresponds to snippets' identification, which includes the _commit_id_ (_i.e.,_ commit hash), _repository_ name, _path_, _file_name_, and _fun_name_. The second dimension corresponds to snippets' documentation, which includes the _commit_message_ and _docstring_. The _docstring_ belongs to a JSON object that is extended to complementary natural language features such as _n_words_, _vocab_size_, _language_, and _n_whitespaces_. The third dimension corresponds to the snippet's syntactic information, which includes the actual code, _n_ast_errors_, _n_ast_levels_, _n_ast_nodes_, _n_words_, _vocab_size_, _token_count_, and _n_whitespaces_. Finally, the fourth dimension corresponds to canonical software metrics, which include _nloc_, _complexity_, and _n_identifiers_.
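To make these dimensions concrete, the following minimal sketch computes a few of the lexical features for one data point; the helper name and field selection are illustrative and omit the AST-based fields, so it is not the exact extraction code behind _Galeras_.

```python
import io
import tokenize

def extract_features(code: str, docstring: str) -> dict:
    """Illustrative computation of a subset of the lexical features of a data point."""
    # Lexical tokens of the method body (blank tokens such as NEWLINE/INDENT are dropped).
    tokens = [t.string for t in tokenize.generate_tokens(io.StringIO(code).readline)
              if t.string.strip()]
    doc_words = docstring.split()
    return {
        # documentation dimension
        "n_words": len(doc_words),
        "vocab_size": len(set(doc_words)),
        # syntactic dimension (AST-based fields such as n_ast_nodes are omitted here)
        "token_count": len(tokens),
        "n_whitespaces": sum(c.isspace() for c in code),
        # canonical software metrics
        "nloc": sum(1 for line in code.splitlines() if line.strip()),
    }

print(extract_features("def add(a, b):\n    return a + b\n", "Add two numbers."))
```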
### _Collecting Code Samples_
Figure 1 describes a 4-step pipeline that _Galeras_ uses to collect code samples. In the first step (Fig. 1-1), we filtered the most popular Python GitHub repositories using the following query: \(language:Python\), \(fork:false\), \(size:>=30,000\), \(pushed:>2021-12-31\), \(stars:>1,000\). Based on the most recent ChatGPT report [44], we assumed ChatGPT and other LLMc under analysis were not trained on commits from Jan 2, 2022 to Jan 1, 2023. Therefore, we claim that our testbeds help to avoid _data snooping_, which is the misuse of data points to evaluate statistical hypotheses using training samples. Then, we collected a set of brand-new methods for each commit. This step resulted in \(\approx 338k\) data points. For each data point, we also collected its corresponding documentation without considering inline comments.
In the second step (Fig. 1-2), we engineered and pre-processed both code- and documentation-related features from the collected data points. Then we parsed the AST variables for our data points by employing the Tree-Sitter library. Once the previous features were engineered and extracted, we stored raw and preprocessed data points in a relational database to guarantee efficient data management. Next, we removed duplicated samples using a distinct query, reducing the testbed size to \(\approx 227K\) data points for code (_RawData_ in Tab. II). Of these reduced data points, \(\approx 77K\) contain a valid _docstring_ (_RawDataDocstring_ in Tab. II). A _docstring_ is valid when its text is longer than 3 words.
In the third step (Fig. 1-3), we manually validated \(960\) out of \(\approx 227K\) data points. These validated data points were randomly selected from _RawData_ and _RawDataDocstring_. The remaining data points were automatically validated. Our validation process ensures the date of each pushed commit is within the range of dates stated in the original query. We also validated that the methods attached to each commit were indeed updated within the same range of dates. In addition, we validated the meaningfulness of the _docstring_ and _commit_message_ by inspecting the consistency of the natural language descriptions with the actual code implementation, removing \(\approx 1.9\%\) of _RawDataDocstring_ and obtaining \(\approx 57K\) data points (Tab. II). Lastly, _complexity_ was validated using the Codalyze plugin in Visual Studio Code. For the sake of simplicity, we omit explaining all considered fine-grained validation steps in this paper. However, the reader can consult our online appendix for more information [45].
In the final step (Fig. 1-4), we sampled \(3k\) data points from the _RawData_ testbed to build five additional testbeds, each one for a specific SE task. _Galeras_ comprises _RandomCut_, _WithDocString_ and _FromDocString_ for _code completion_; _CommitGen_ for _code generation_; and _SummarizationGen_ for _code summarization_. These additional testbeds are described in Tab. II. To build _RandomCut_, we chose data points with more than \(10\) tokens or \(100\) characters. Next, each data point is randomly cut after the method signature. To build _SummarizationGen_ and _CommitGen_, we filtered the _RawDataDocstring_ data points with more than 10 words or 50 characters. After building the five testbeds, we removed duplicated snippets using the Jaccard similarity on preprocessed data points with the HuggingFace BPE tokenizer. Because the de-duplication between training and test sets was discarded (_i.e.,_ no multiset threshold), we set \(0.7\) as the similarity threshold for our testbeds [46, 47]. Table III shows the SE Task associated with each curated testbed, the percentage rate of detected duplicates, and the final size.
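As an illustration of the de-duplication step, the sketch below drops snippets whose Jaccard similarity over BPE token sets exceeds the \(0.7\) threshold; the `gpt2` tokenizer checkpoint and the greedy filtering loop are simplifying assumptions rather than the exact pipeline implementation.

```python
from transformers import AutoTokenizer

# Any BPE-based tokenizer works for the illustration; "gpt2" is only a placeholder choice.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the BPE token sets of two snippets."""
    ta, tb = set(tokenizer.tokenize(a)), set(tokenizer.tokenize(b))
    return len(ta & tb) / max(len(ta | tb), 1)

def deduplicate(snippets: list[str], threshold: float = 0.7) -> list[str]:
    """Greedily keep a snippet only if it is not a near-duplicate of any kept snippet."""
    kept: list[str] = []
    for s in snippets:
        if all(jaccard(s, k) < threshold for k in kept):
            kept.append(s)
    return kept

print(len(deduplicate(["def f(x): return x + 1", "def f(y): return y + 1", "print('hi')"])))
```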
## IV Causal Analysis for Interpretable LLMc
_Galeras_ is a causal benchmarking strategy for comparing the performance of LLMc against each other by controlling for _confounding variables_, which are features of the source code that can influence the prediction performance of LLMc. Ideally, researchers can use _Galeras_ to contextualize the outcomes of LLMc by presenting possible tailored treatment variables that explain the behavior of the model. _Galeras_' goal is to empower the research community to interpret typical performance metrics by stating the assumptions of the prediction problem in a _Structural Causal Model_ (SCM). The SCM comprises four random variables. The first variable is the _treatments_\(T\), which represents the input configuration prompts in our case study. The second variable is the _potential outcomes_\(Y\), which is the model prediction performance measured using distance metrics (_e.g.,_ BLEU, CodeBLEU, Levenshtein). The third variable is the _confounders_\(Z\), which represents variables affecting both
Fig. 2: _Galeras_ Structural Causal Model Benchmarking
\(T\) and \(Y\) (see Fig. 2). The last variable is the _effect modifiers_, which are the features directly affecting the outcomes \(Y\).
The purpose of the causal analysis is to eliminate _spurious correlations_ between the treatments \(T\) and the outcomes \(Y\) by controlling for confounding features \(Z\). The elimination of the confounding features can be formally described with both an SCM and the \(do\)-operator introduced by Pearl _et al._[48]. We measure the _Average Treatment Effect_ (ATE) by approximating the conditional probability \(p(Y|do(T))\) with statistical methods such as the propensity score matching, stratification, or IPW [48, 49]. An in-depth analysis and explanation of causal inference methods are beyond the scope of this paper.
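In practice, such an adjusted estimate can be obtained with an off-the-shelf causal inference library. The sketch below assumes a table with one row per evaluated prompt, a binary treatment column, a Levenshtein-similarity outcome, and the confounders used in this paper; the file and column names are illustrative placeholders rather than the exact analysis scripts.

```python
import pandas as pd
from dowhy import CausalModel

# Hypothetical export of treatments, outcomes, and confounders (one row per prompt).
df = pd.read_csv("galeras_outcomes.csv")

model = CausalModel(
    data=df,
    treatment="treatment",                 # 1 = prompt engineering method, 0 = control
    outcome="levenshtein_similarity",
    common_causes=["prompt_size", "n_whitespaces", "token_count", "nloc"],
)
estimand = model.identify_effect()
ate = model.estimate_effect(estimand, method_name="backdoor.propensity_score_matching")
print("ATE:", ate.value)

# Robustness check: replacing the treatment with a placebo should drive the effect toward zero.
refutation = model.refute_estimate(estimand, ate, method_name="placebo_treatment_refuter")
print(refutation)
```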
## V Causal Study: Interpretable Code Completion
To demonstrate how to employ _Galeras_ for causal analysis, in practice, we design a study in which we evaluate ChatGPT's performance for two prompt engineering methods \(T_{1}\) and \(T_{2}\) based on Liu _et al._[33]. Prompt engineering is the activity of optimizing the input space of a given LLM in order to generate better outcomes without resorting to expensive fine-tuning. The goal of our case study is to compare these two prompting methods after controlling for confounding features.
### _Evaluation Methodology_
The evaluation methodology of the case study is divided into three parts. The first part addresses the exploratory analysis of _Galeras_ testbeds. We employed the BPE tokenizer to normalize the vocabulary of each treatment \(T\) and outcome \(Y\) sentence. The token count categorized by taxonomy is presented in Fig. 3. Tokens within each sentence were classified based on their taxonomy, _i.e.,_ '_try_' and '_catch_' are classified as _exceptions_, and '_if_' and '_else_' as _conditionals_. Since the analysis focused solely on Python, keywords related to data types were classified as _casting_ tokens.
The second part canonically evaluates ChatGPT using our testbed _WithDocString_. CodeBLEU was computed with a default parameter value of \(0.25\). In addition, BLEU was computed with a 4-gram parameter. On the other hand, we computed the Levenshtein distance and similarity for a local evaluation (see Tab. IV-Performance Metrics).
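For reference, the sketch below shows one way the Levenshtein distance/similarity and a 4-gram BLEU score could be computed for a single prediction; it is a simplified stand-in for the actual evaluation harness.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def levenshtein_similarity(a: str, b: str) -> float:
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

reference = "def add(a, b): return a + b"
prediction = "def add(x, y): return x + y"
bleu4 = sentence_bleu([reference.split()], prediction.split(),
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(levenshtein_similarity(reference, prediction), bleu4)
```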
The third part estimates the causal effect of prompt engineering methods on ChatGPT's performance. Figure 2 illustrates our Structural Causal Models for the prompt engineering case of ChatGPT. We use _Galeras_ to compare the performance of two different treatments. The first treatment \(T_{1}\) is one prompt, which contains a command (_e.g., Complete the following a Python code, return only code and complete method: '{partial code}'_ ) followed by the actual input code to be completed. The second treatment \(T_{2}\) comprises two prompts. The first one is a context prompt that entails both the _docstring_ and the incomplete cut code. The second one is a _processing prompt_ that contains sentences asking for removing comments and optimizing code (_e.g.,_ Remember you have a Python function named '{fun_name }', the function starts with the following code '{code}'. The description for the function is: '{docstring }' ). We used the previous treatments against a _control_ group. The _control_ is a _task prompt_ that encompasses an action word or verb followed by the incomplete code input (_e.g.,_ Complete the following python method: '{partial code}'). To evaluate whether treatments \(T\) are impacting ChatGPT performance \(Y\), we controlled for confounding features \(Z\). Our confounders _prompt_size_, \(n\_whitespaces\), _token_count_, and _nloc_ were selected due to their high correlation (\([0.4{-}0.8]\)) with the Levenshtein distance in control and treatment groups. Although \(n\_ast\_nodes\) has a high correlation with the Levenshtein distance, we assumed that structural features are unaffected by the treatments. Hence, AST-based features are effect modifiers. The potential outcomes \(Y_{1}\), \(Y_{2}\), \(Y_{0}\) are observed under the treatments \(T_{1}\), \(T_{2}\), _control_, respectively. Next, we approximate the _Average Treatment Effect_\(p(Y|do(T))\) using the SCM defined in Fig. 2.
### _Results_
**RQ\({}_{1}\)**_Exploratory Analysis._ The purpose of the exploratory analysis is to expose and understand the testbeds' feature distribution grouped by prompt engineering methods \(T\). Table II depicts the average and standard deviation for each code feature. We observed high variability in \(n\_whitespaces\) (\(902.22\)) and _token_count_ (\(262.59\)), which implies the method sizes are not homogeneous across the testbeds. While the descriptive analysis showcases high variability for all code features, our testbeds are a representative sub-sample of open repositories. For instance, the _complexity_ feature has an average value of \(3.25\), suggesting that the code has a reasonable number of loops, conditionals, and operators. Therefore, our collected methods show that our pipeline guarantees data point diversity.
We observed no significant differences in the counting of tokens among potential outcomes (including the _control_)
\begin{table}
\end{table} TABLE II: Average and standard deviation of the confounders and effect modifiers for each curated testbed. (Tabular contents were garbled during extraction and are omitted here.)
and the ground truth (see Fig. 3-A). For instance, the differences between _control_ and \(T_{2}\) on declarations (around \(550\) tokens) and loops (around \(600\) tokens) are relatively small. However, the \(T_{1}\) outcome exhibited large differences and an excessive use of OOP, declarations, and loops, with differences of around \(2.6k\), \(2k\), and \(1.5k\) tokens, respectively. Figure 3-B showcases the token distribution for each testbed. We detected that the two prompt engineering methods were generating a similar amount of tokens (_i.e.,_ green and red distributions) compared to the _control_ and the ground truth. This suggests that sophisticated prompts tend to generate repetitive tokens. Figure 3-C depicts the Levenshtein similarity between the ChatGPT outputs, generated with both prompt engineering methods and the _control_, and the ground truth. We can observe from the proportion curve that \(T_{1}\) performs the worst in similarity compared to the _control_ and \(T_{2}\).
**RQ\({}_{2}\)**_Causal Analysis:_
For two basic prompt engineering methods, the code completion performance of ChatGPT is mainly affected by the following confounders: number of white spaces, lines of code, tokens in the outcome, and tokens in the prompt, with a maximum correlation of \(0.80\) with the Levenshtein distance (see Tab. IV-Correlations). This suggests that, after controlling for confounders, in terms of the _Average Treatment Effect_ (ATE), prompt engineering method\({}_{1}\), represented by \(T_{1}\), has a negative causal effect \(p_{1}(Y|do(T))=E[Y_{1}-Y_{0}]\approx-5.1\%\) compared to a positive causal effect \(p_{2}(Y|do(T))=E[Y_{2}-Y_{0}]\approx 3.3\%\) of method\({}_{2}\), represented by \(T_{2}\) (see Tab. IV-Causal Effects). This indicates that method\({}_{1}\) is negatively affecting the Levenshtein similarity (_i.e.,_ poor performance) across the _WithDocString_ testbed, while method\({}_{2}\) is actually enhancing ChatGPT prediction performance. These results are consistent with the previous section, in which we demonstrated that \(T_{2}\) performs better than \(T_{1}\). After controlling for the confounding effect of code features such as prompt size and token counts, we can claim that the reason why \(T_{2}\) is performing better than \(T_{1}\) is _purely_ due to the information contained in the prompt.
In order to validate the robustness of the computed ATEs and the proposed SCM, we refuted our estimated effects using the following methods: _Placebo, Random Common Cause (RCC)_ and _Subset_ (see DoWhy refutes in [49]). We found that, for the ATEs computed with propensity score matching, their corresponding refutation values are not stable. That is, the placebo value for \(Y_{1}\) similarity is far from zero with \(2.98\), while the RCC value differs by around \(212\) in \(Y_{2}\) distance.
**RQ\({}_{2}\)** Causal Analysis: The prompt engineering method\({}_{1}\) (treatment \(T_{1}\)) has a negative causal impact on ChatGPT's performance with an ATE estimation of \(-5\%\). Conversely, the prompt engineering method\({}_{2}\) (treatment \(T_{2}\)) has a subtle positive influence on the same performance with an ATE of \(3\%\). This suggests that, after controlling for prompt size, white spaces, # of tokens, and nloc, prompt engineering strategies are indeed affecting the quality of code completion.
## VI Conclusion & Future Work
This study used a qualitative technique to analyze the causal effect of SE-oriented treatments on the performance of LLMc. Such a technique is embedded into a benchmarking strategy named _Galeras_. Our benchmarking enables researchers to interpret _why_ a given LLMc is reporting a particular accuracy metric. We curated two raw Python testbeds: _RawData_ with only mined code and _RawDataDocstring_ with the corresponding documentation from GitHub. We also provide five SE
Python testbeds for three SE tasks (_i.e.,_ code completion, code summarization, and commit generation), and we propose a pipeline for collecting testbeds from git repositories. Finally, we conducted a rigorous evaluation of code completion with ChatGPT. Our causal study suggests that ChatGPT's performance is not only affected by the prompt size but also by the prompt semantics. Future research will focus on determining whether other unmeasured confounders are affecting LLMc's predictions by augmenting the number of testbeds.
## VII Acknowledgement
This research has been supported in part by the NSF CCF-2311469, CNS-2132281, CCF-2007246, and CCF-1955853. We also acknowledge support from Cisco Systems. Any opinions, findings, and conclusions expressed herein are the authors' and do not necessarily reflect those of the sponsors.
|
2310.08607 | Infrared Cloud Monitoring with UCIRC2 | The second generation of the Extreme Universe Space Observatory on a Super
Pressure Balloon (EUSO-SPB2) is a balloon instrument that searched for ultra
high energy cosmic rays (UHECRs) with energies above 1 EeV and very high energy
neutrinos with energies above 1 PeV. EUSO-SPB2 consists of two telescopes: a
fluorescence telescope pointed downward for the detection of UHECRs and a
Cherenkov telescope toward the limb for the detection of PeV-scale showers
produced by neutrino-sourced tau decay (just below the limb) and by cosmic rays
(just above the limb). Clouds inside the fields of view of these
telescopes--particularly that of the fluorescence telescope--reduce EUSO-SPB2's
geometric aperture. As such, cloud coverage and cloud-top altitude within the
field of view of the fluorescence telescope must be monitored throughout
data-taking. The University of Chicago Infrared Camera (UCIRC2) monitored these
clouds using two infrared cameras centered at 10 and 12 $\mu$m. By capturing
images at wavelengths spanning the cloud thermal emission peak, UCIRC2 measured
cloud color-temperatures and thus cloud-top altitudes. In this contribution, we
provide an overview of UCIRC2, including an update on its construction and
performance. We also show first results from the flight. | Rebecca Diesing, Stephan S. Meyer, Johannes Eser, Alexa Bukowski, Alex Miller, Jake Apfel, Gerard Beck, Angela V. Olinto | 2023-10-11T15:13:18Z | http://arxiv.org/abs/2310.08607v1 | # Infrared Cloud Monitoring with UCIRC2
###### Abstract:
The second generation of the Extreme Universe Space Observatory on a Super Pressure Balloon (EUSO-SPB2) is a balloon instrument that searched for ultra high energy cosmic rays (UHECRs) with energies above 1 EeV and very high energy neutrinos with energies above 1 PeV. EUSO-SPB2 consists of two telescopes: a fluorescence telescope pointed downward for the detection of UHECRs and a Cherenkov telescope toward the limb for the detection of PeV-scale showers produced by neutrino-sourced tau decay (just below the limb) and by cosmic rays (just above the limb). Clouds inside the fields of view of these telescopes--particularly that of the fluorescence telescope--reduce EUSO-SPB2's geometric aperture. As such, cloud coverage and cloud-top altitude within the field of view of the fluorescence telescope must be monitored throughout data-taking. The University of Chicago Infrared Camera (UCIRC2) monitored these clouds using two infrared cameras centered at 10 and 12 \(\mu\)m. By capturing images at wavelengths spanning the cloud thermal emission peak, UCIRC2 measured cloud color-temperatures and thus cloud-top altitudes. In this contribution, we provide an overview of UCIRC2, including an update on its construction and performance. We also show first results from the flight.
## 1 Introduction
Ultra High Energy Cosmic Rays (UHECRs), cosmic rays (CRs) with energies above \(10^{18}\) eV, are currently detected with ground-based observatories such as the Telescope Array [5] in Utah and the Pierre Auger Observatory [4] in Argentina. In particular, UHECRs can be detected via the characteristic particle shower, called an Extensive Air-Shower (EAS), that occurs when an UHECR interacts with Earth's atmosphere. This EAS produces fluorescence of atmospheric nitrogen molecules, detectable in the 300-400 nm spectral band, as well as optical Cherenkov light. Because UHECRs are rare, (\(<1\) per km\({}^{2}\) per century close to \(10^{20}\) eV), charged-particle astronomy requires extremely large detector volumes. One way to increase detector volume is to observe the atmosphere from above. This technique was tested by the Extreme Universe Space Observatory on a Super Pressure Balloon (EUSO-SPB2) during a brief flight in the spring of 2023.
A pathfinder to a more ambitious satellite mission, EUSO-SPB2 can detect UHECRs via two complementary techniques: looking down upon the atmosphere with a fluorescence telescope and looking towards the limb of the Earth to observe the Cherenkov signals produced by UHECRs above the limb. EUSO-SPB2 also searched for the signatures of neutrinos above \(10^{16}\) eV via the Cherenkov light from upward-going tau leptons produced when a tau neutrino interacts near the surface of the Earth (see Figure 1) [1].
The presence of high clouds within the detectors' field of view (FoV), particularly that of the fluorescence telescope, can significantly reduce the UHECR event detection rate and degrade the event energy calibration. Namely, it is possible for some of the EAS signal to occur behind high clouds. Determining EUSO-SPB2's exposure to UHECRs thus requires knowledge of the effective detector volume, i.e., the volume of atmosphere within the FoV above the clouds. Thus, EUSO-SPB2 requires continuous information about cloud coverage and altitude. This is the responsibility of the second generation of the University of Chicago Infrared Camera (UCIRC2). In this proceeding, we present an overview of the UCIRC2 instrument, including preliminary results from its 2023 flight.
Figure 1: EUSO-SPB2's three detection modes: fluorescence from UHECRs (purple), Cherenkov from UHECRs (red), and Cherenkov from neutrinos (green).
## 2 Method
When EUSO-SPB2 is in observing (night) mode, IR images of the environmental conditions in and around the effective UHECR detection area are captured by UCIRC2 every 120 seconds. These images can be used to collect information about cloud coverage and altitude (cloud top height, CTH) within the field of view of the UHECR detectors (see Figure 2).
Because the clouds are at the temperature of the surrounding air, CTH can be inferred from the cloud temperature, \(T_{\rm c}\), which can be estimated using two brightness temperatures in bands near the cloud blackbody peak in wavelength. More specifically, one of UCIRC2's two IR cameras observes at a wavelength of 10\(\mu\)m and the other at 12\(\mu\)m. A calibrated image in a single frequency band can be used to determine the temperature of an object of known emissivity (\(\epsilon\)), but cloud emissivity is highly variable and significantly less than 1. Thus, a multifrequency observation is required to break the degeneracy between \(\epsilon\) and \(T_{\rm c}\). For a single layer of clouds above an ocean of known surface temperature and reflectivity (and thus power, \(P_{\rm E}\)), one can estimate the power on the detector, \(P_{\rm tot}\), as
Figure 2: Uncalibrated images of mountains (top) during the daytime and clouds (bottom) during the night captured by UCIRC2, which flew on EUSO-SPB2 in the spring of 2023. Also visible in the foreground is a portion of the EUSO-SPB2 gondola (lower left corners of each image) as well as cables and antennas hanging from the gondola. Using an IR camera, cloud coverage can be easily determined. Cloud temperature (and thus altitude) can be determined by observing at two wavelengths near the cloud blackbody peak, in our case 10 and 12 \(\mu\)m.
\[P_{\rm tot}=\epsilon P_{c}+(1-\epsilon)P_{E}. \tag{1}\]
Here, \(P_{c}\) is the power of the cloud, from which \(T_{c}\) and thus CTH, can be inferred. Other methods for reconstructing CTH can be found in [2].
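As a rough illustration of how Eq. (1), applied in two bands, breaks the \(\epsilon\)–\(T_{\rm c}\) degeneracy, the sketch below inverts the two-band system numerically, assuming monochromatic Planck radiances at the 10 and 12 \(\mu\)m band centers and an idealized blackbody ocean term; the function names and numeric values are illustrative and this is not the flight analysis code.

```python
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m: float, temp_k: float) -> float:
    """Monochromatic Planck spectral radiance [W m^-2 sr^-1 m^-1]."""
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temp_k))

def retrieve_cloud(p_tot_10, p_tot_12, t_ocean=293.0):
    """Solve Eq. (1) in both bands for the (emissivity, cloud temperature) pair."""
    lam10, lam12 = 10e-6, 12e-6
    p_e_10, p_e_12 = planck(lam10, t_ocean), planck(lam12, t_ocean)  # idealized ocean term

    def eps(lam, p_tot, p_e, t_c):
        return (p_tot - p_e) / (planck(lam, t_c) - p_e)

    # The cloud temperature is the value at which both bands imply the same emissivity.
    f = lambda t_c: eps(lam10, p_tot_10, p_e_10, t_c) - eps(lam12, p_tot_12, p_e_12, t_c)
    t_cloud = brentq(f, 180.0, t_ocean - 1.0)
    return eps(lam10, p_tot_10, p_e_10, t_cloud), t_cloud

# Synthetic check: a cloud at 240 K with emissivity 0.6 should be recovered.
t_true, e_true = 240.0, 0.6
p10 = e_true * planck(10e-6, t_true) + (1 - e_true) * planck(10e-6, 293.0)
p12 = e_true * planck(12e-6, t_true) + (1 - e_true) * planck(12e-6, 293.0)
print(retrieve_cloud(p10, p12))
```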
For a more precise calculation, we use the Coupled Ocean-Atmosphere Radiative Transfer Model (COART) presented in [3], which calculates the radiance at any frequency by solving the radiative transfer equation from the ocean to a specified level in the atmosphere, including clouds of arbitrary altitude and emissivity (see Figure 3).
## 3 Design
IR Cameras. UCIRC2 is outfitted with two \(640\times 480\) pixel Teledyne DALSA Calibir GXF uncooled IR cameras with 14mm lenses, focused at infinity. The cameras have a \(42^{\circ}\times 32^{\circ}\) FoV, chosen to be somewhat larger than that of EUSO-SPB2's fluorescence telescope. When the payload is in "night mode", which occurs when the atmosphere is dark enough to allow for proper functioning of photodetection modules (PDMs), UCIRC2 takes a pair of pictures every two minutes. The wide field of view of the IR cameras makes it possible to extrapolate the cloud conditions in the section of the atmosphere swept out by the PDM field of view in the time between pictures.
The native spectral response of the cameras is 8 to 14 \(\mu\)m, but each camera is fitted with a filter to facilitate the radiative CTH reconstruction. One of the cameras is fitted with an Edmund Optics bandpass light filter that transmits wavelengths between 9.6 and 11.6\(\mu\)m (denoted 10\(\mu\)m); the other
Figure 3: Upgoing spectral radiance as a function of wavelength as calculated using COART [3], assuming different ocean temperatures with no clouds (red, green, and blue lines) and the presence of clouds with tops 4 km above a 20 C ocean (black lines). The two solid lines shown in each case correspond to 0 and 30 degree zenith angles. The blackbody curves corresponding to each ocean temperature are also shown for reference (dotted lines), as are the bandpasses of UCIRC2’s two filters. We will use this model to determine the altitudes of clouds beneath EUSO-SPB2’s fluorescence detector.
is fitted with a SPECTROGON bandpass light filter which transmits wavelengths between 11.5 and 12.9\(\mu\)m (denoted 12\(\mu\)m). These bands are spaced to obtain brightness temperature data that facilitates both the Blackbody Power Ratio CTH reconstruction and the Radiative Transfer Equation CTH reconstruction methods discussed in the preceding section.
The cameras are powered via a 12V connection and communicate via Gigabit Ethernet with a single-board, industrial-grade CPU that can operate at temperatures between -40C and 85C.
Environment Control. UCIRC2 is designed to operate in a high-altitude (\(\approx\) 33km) environment during both daytime, when ambient temperatures reach approximately 40C, and nighttime, when ambient temperatures reach approximately -40C. Temperature management is therefore a central design concern. In particular, the camera response is temperature dependent, meaning that camera temperature must be held approximately constant during operation (night mode). To maintain a stable temperature, the two cameras are housed in a 300mm\(\times\)300mm\(\times\)300mm aluminum box coated with high emissivity flat white paint. This box splits into two halves to allow easy access to the cameras and electronics (see Figure 4).
A temperature management system consisting of resistive heaters and thermometers enables precise temperature monitoring and control. This heating system is controlled by a Meerstetter Engineering HV-1123 thermoelectric cooling and heating controller (TEC). Note that the system is designed to be most effective at heating because, in general, UCIRC2 collects data during the nighttime, when the environment is cold. The temperature-regulated camera stage is a machined
Figure 4: Drawing of UCIRC2, including its 3D-printed frame and two cameras located near the center of the box (pointed toward the viewer). To show the interior structure of the box, this drawing does not include the painted aluminum panels mounted to UCIRC2’s sides.
aluminum plate to which both IR cameras are thermally coupled. Note that the set point temperature for the cameras can be modified by telemetry command, with daytime and nighttime operating temperatures chosen to minimize power consumption.
## 4 Testing and Calibration
To replicate the expected flight environment, UCIRC2 was tested in a thermovac chamber pumped down to 0.3 mbar and a shroud cooled with liquid nitrogen vapor. The temperature management system was tested over all possible environmental temperatures to ensure that the cameras can be maintained within their operating temperature range.
To calibrate the cameras, UCIRC2 was then positioned above a calibration target consisting of a highly emissive, temperature-controlled material. By taking images of the calibration target at multiple temperatures, this target can be used to perform a pixel-by-pixel calibration of each camera (see Figure 5). Because the cameras' thermal response depends on their temperature, calibration images were also taken at multiple camera temperatures.
## 5 Preliminary Results
Flight Overview. During EUSO-SPB2's brief (\(\lesssim\) 2 day) flight, UCIRC2 was able to capture one full night's worth of cloud data (i.e., with images captured every two minutes), in conjunction with observations taken by EUSO-SPB2's fluorescence telescope. All of these data were successfully telemetered to the ground. During nighttime operations, UCIRC2 maintained a constant temperature of 10 and later 15C (see Figure 6).
In-Flight Calibration. In addition to the thermovac tests described in the preceding section, we performed in-flight calibration by using the ocean as a flat field. Our method proceeds as follows:
1. Choose a relatively cloud-free image in which the ocean is visible in most pixels.
Figure 5: Sample calibration (flat-field) image from each camera, with the camera centered at 10 \(\mu\)m (12 \(\mu\)m) on the left (right), and both the cameras and calibration target set to 20C. The resulting pattern is a camera-specific additive offset, which is subtracted from the images taken during flight.
2. Subtract a flat-field image (\(I_{\rm{ff}}\)) taken in the thermovac (e.g., Figure 5). Use the same calibration data to estimate camera responsivity (counts per unit of radiance) in each pixel (\(I_{\rm{r}}\)).
3. Using a mask, remove pixels that correspond to foreground objects.
4. Fit the remaining, flat-field subtracted image to a second-order surface, \(I_{\rm{ocean}}\) (i.e., use the ocean as an additional calibrator).
To then calibrate an arbitrary image, \(I_{\rm{init}}\) (i.e., to measure the radiance impinging on each pixel), we estimate,
\[I_{\rm{cal}}=\frac{I_{\rm{init}}-I_{\rm{ff}}-I_{\rm{ocean}}}{I_{\rm{r}}}+I_{ \rm{COART}}, \tag{2}\]
where \(I_{\rm{COART}}\) is the radiance from a 10 C ocean as a function of zenith angle, as predicted by the COART model described previously. A sample calibrated image is shown in Figure 7.
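A minimal NumPy sketch of applying Eq. (2) to one frame is shown below; the arrays follow the symbols in the equation, and the synthetic values stand in for the thermovac flat field, per-pixel responsivity, ocean-surface fit, and COART lookup.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (480, 640)                          # camera resolution

# Placeholder inputs; in practice these come from the thermovac calibration,
# the second-order ocean-surface fit, and a COART radiance lookup per zenith angle.
i_init = rng.normal(5000.0, 50.0, shape)    # raw counts of the frame to calibrate
i_ff = rng.normal(200.0, 5.0, shape)        # additive flat-field offset (Figure 5)
i_r = np.full(shape, 40.0)                  # responsivity: counts per unit radiance
i_ocean = rng.normal(300.0, 2.0, shape)     # ocean flat-field surface
i_coart = np.full(shape, 7.2)               # modeled 10 C ocean radiance term

# Eq. (2): pixel-wise calibrated radiance.
i_cal = (i_init - i_ff - i_ocean) / i_r + i_coart
print(i_cal.mean())
```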
Outlook. Our calibrated images clearly show cloud coverage, as well as variations in cloud temperature. Going forward, we will use the COART model to make detailed estimates of CTH in order to constrain EUSO-SPB2's aperture during flight. This exercise will also inform design decisions for IR cameras on future missions to detect EASs from above.
## 6 Acknowledgements
The authors acknowledge the support by NASA awards 11-APRA-0058, 16-APROBES16-0023, 17-APRA17-0066, NNX17AJ82G, NNX13AH54G, 80NSSC18K0246, 80NSSC18K0473, 80NSSC19K0626, 80NSSC18K0464, 80NSSC22K1488, 80NSSC19K0627 and 80NSSC22K0426, the French space agency CNES, National Science Centre in Poland grant n. 2017/27/B/ST9/02162, and by ASI-INFN agreement n. 2021-8-HH.0 and its amendments. This research used resources of the US National Energy Research Scientific Computing Center (NERSC), the DOE Science User Facility operated under Contract No. DE-AC02-05CH11231. We acknowledge the NASA BPO and CSBF staffs for their extensive support. We also acknowledge the invaluable contributions of the administrative and technical staffs at our home institutions.
Figure 6: UCIRC2 camera temperatures during flight as a function of time. During nighttime operations, the cameras were maintained at a constant temperature, with control lost only during EUSO-SPB2’s descent. |
2305.02822 | Enabling High-Precision 5G mmWave-Based Positioning for Autonomous
Vehicles in Dense Urban Environments | 5G-based mmWave wireless positioning has emerged as a promising solution for
autonomous vehicle (AV) positioning in recent years. Previous studies have
highlighted the benefits of fusing a line-of-sight (LoS) 5G positioning
solution with an Inertial Navigation System (INS) for an improved positioning
solution. However, the highly dynamic environment of urban areas, where AVs are
expected to operate, poses a challenge, as non-line-of-sight (NLoS)
communication can deteriorate the 5G mmWave positioning solution and lead to
erroneous corrections to the INS. To address this challenge, we exploit 5G
multipath and LoS signals to improve positioning performance in dense urban
environments. In addition, we integrate the proposed 5G-based positioning with
low-cost onboard motion sensors (OBMS). Moreover, the integration is realized
using an unscented Kalman filter (UKF) as an alternative to the widely utilized
EKF as a fusion engine to avoid ignoring the higher-order and non-linear terms
of the dynamic system model. We also introduce techniques to evaluate the
quality of each LoS and multipath measurement prior to incorporation into the
filter's correction stage. To validate the proposed methodologies, we performed
two test trajectories in the dense urban environment of downtown Toronto,
Canada. For each trajectory, quasi-real 5G measurements were collected using a
ray-tracing tool incorporating 3D map scans of real-world buildings, allowing
for realistic multipath scenarios. For the same trajectories, real OBMS data
were collected from two-different low-cost IMUs. Our integrated positioning
solution was capable of maintaining a level of accuracy below 30 cm for
approximately 97% of the time, which is superior to the accuracy level achieved
when multipath signals are not considered, which is only around 91% of the
time. | Qamar Bader, Sharief Saleh, Mohamed Elhabiby, Aboelmagd Noureldin | 2023-05-04T13:37:17Z | http://arxiv.org/abs/2305.02822v1 | Enabling High-Precision 5G mmWave-Based Positioning for Autonomous Vehicles in Dense Urban Environments
###### Abstract
5G-based mmWave wireless positioning has emerged as a promising solution for autonomous vehicle (AV) positioning in recent years. Previous studies have highlighted the benefits of fusing a line-of-sight (LoS) 5G positioning solution with an Inertial Navigation System (INS) for an improved positioning solution. However, the highly dynamic environment of urban areas, where AVs are expected to operate, poses a challenge, as non-line-of-sight (NLoS) communication can deteriorate the 5G mmWave positioning solution and lead to erroneous corrections to the INS. To address this challenge, we exploit 5G multipath and LoS signals to improve positioning performance in dense urban environments. In addition, we integrate the proposed 5G-based positioning with low-cost onboard motion sensors (OBMS). Moreover, the integration is realized using an unscented Kalman filter (UKF) as an alternative to the widely utilized EKF as a fusion engine to avoid ignoring the higher-order and non-linear terms of the dynamic system model. We also introduce techniques to evaluate the quality of each LoS and multipath measurement prior to incorporation into the filter's correction stage. To validate the proposed methodologies, we performed two test trajectories in the dense urban environment of downtown Toronto, Canada. For each trajectory, quasi-real 5G measurements were collected using a ray-tracing tool incorporating 3D map scans of real-world buildings, allowing for realistic multipath scenarios. For the same trajectories, real OBMS data were collected from two-different low-cost IMUs. Our integrated positioning solution was capable of maintaining a level of accuracy below \(30\) cm for approximately \(97\%\) of the time, which is superior to the accuracy level achieved when multipath signals are not considered, which is only around \(91\%\) of the time.
Autonomous vehicles are gaining popularity but require highly accurate positioning to operate safely. Achieving decimeter-level accuracy for at least \(95\%\) of the time is challenging in dense urban environments where GPS signals may be blocked. This paper proposes using 5G wireless networks to provide high-precision positioning services to address this issue, as 5G base stations are expected to be densely deployed in urban areas. However, maintaining a line-of-sight (LoS) communication with 5G base stations may not always be possible in dense urban areas due to the multi-path from surrounding buildings. Therefore, we suggest fusing LoS measurements with non-line-of-sight (NLoS) measurements to improve positioning accuracy in challenging urban environments. To guarantee seamless positioning even in scenarios involving 5G signal outages, we also incorporate onboard motion sensors like accelerometers, gyroscopes, and odometers to ensure that the autonomous vehicle's positioning remains accurate and reliable even in all challenging urban environments.
5G; angle of departure (AoD); autonomous vehicles (AVs); Kalman filter (KF); loosely-coupled (LC) integration; mm-Wave; multipath; positioning; onboard motion sensors (OBMS); round trip time (RTT).
## I Introduction
Autonomous vehicles (AVs) are becoming increasingly important in the transportation industry as they have the potential to greatly improve safety, reduce traffic congestion, and provide more efficient transportation. However, AVs rely heavily on absolute positioning systems to navigate and operate safely [1]. While global navigation satellite systems (GNSS) are often used for this purpose, they can be unreliable in urban areas due to the high-rise buildings that can block or reflect GNSS signals [2]. This can make it difficult for AVs to accurately determine their location and orientation, which is essential for safe operation. On the other hand, onboard motion sensors (OBMS), like accelerometers, gyroscopes, and odometers, do not suffer from the aforementioned problems as they are self-contained. OBMS measurements can be processed using a dead reckoning algorithm like the inertial navigation system (INS) to compute the vehicle's position, velocity and attitude. INS has the advantage of providing the positioning solution at a high data rate. However, the inherent errors of the OBMS may result in growing position errors if they work in standalone mode, which can be resolved by pairing it with other reliable positioning technologies of superior accuracy (such as 5G wireless positioning) to estimate and reset the INS errors [3].
Recently, 5G NR mmWave has been explored as a potential positioning technology for AVs [4, 5]. The high-frequency band of the 5G wireless spectrum provides a high bandwidth of \(400\) MHz, allowing for accurate time-based measurements like time of arrival (ToA), round-trip time (RTT), and time
difference of arrival (TDoA), as well as the ability to resolve multipath components (MPC) in the time domain. Massive multi-input-multi-output (MIMO) capabilities of mmWave allow for accurate angle-based measurements such as angle of arrival (AoA) and angle of departure (AoD). 5G mmWave also features low latency communications, making it ideal for supporting the real-time decision-making and navigation of AVs. By leveraging the unique propagation characteristics of mmWave signals, it is possible to achieve decimeter-level positioning accuracy, which is essential for AVs' safe and reliable operation. Finally, 5G small cells are expected to be densely deployed every \(200\) m to \(500\) m, which means that vehicular systems will enjoy a higher chance of LoS connectivity with the deployed gNBs [6].
Despite the higher line-of-sight (LoS) probability associated with 5G technology in comparison to LTE, the user equipment (UE) may still encounter non-line-of-sight (NLoS) communication. This is attributed to the dynamic nature of urban environments, where various physical obstacles, such as buildings, trees, pedestrians, buses, and trucks, can impede the 5G signal. Directly using NLoS signals for positioning using LoS-based algorithms will significantly bias the positioning solution. The literature offers several approaches to address the NLoS issue. One line of research focuses on techniques that minimize the effect of NLoS links on positioning accuracy [7, 8, 9, 10]. In contrast, others aim to detect and discard NLoS signals to avoid positioning errors caused by multipath [11, 12]. However, recent work explores multipath rays as an additional source of positioning information during 5G outages [13, 14]. This paper introduces a new, high-precision accurate positioning solution that combines LoS, multipath 5G mmWave-based signals, and OBMS to offer an uninterrupted positioning solution at a high data rate, suitable for AV operation in dense urban areas. To the best of our knowledge, no existing literature fuses multipath signals with OBMS. To expand on this, we propose a measurement selection scheme to evaluate each multipath measurement before fusion. Additionally, we suggest using the unscented Kalman filter (UKF) as an alternative to the commonly used extended Kalman filter (EKF) to avoid the errors associated with the linearization of the dynamic and measurement system models, as described in [15] and as demonstrated in our analysis later in this paper. Through rigorous testing, we demonstrate that the proposed solution achieves exceptional performance over two distinct trajectories with varying dynamics and 5G outage probabilities and with different suites of low-cost OBMS.
The contributions of this paper are as follows:
1. We present an enhanced positioning solution based on the loosely-coupled (LC) integration of 5G LoS and multipath signals with OBMS utilizing a UKF.
2. We employ a measurement exclusion scheme that relies on the UE and BS propagation link.
3. We propose an additional validation stage for 5G NLoS measurements using constraints derived from odometer measurements.
4. For validation, we conducted two road test trajectories in downtown Toronto (Ontario, Canada) involving actual OBMS measurements collected from sensors mounted inside the test vehicle and integrated with quasi-real 5G mmWave observables generated by the \(S_{5}G\) simulation software, which accurately emulates the complex urban environment of Toronto's downtown area, where the road tests were conducted.
The paper is structured as follows: Section II presents a literature review. Section III outlines the system model, covering the foundations of 5G and INS measurements and various Kalman filter implementations. Section IV proposes a 5G/OBMS LC integration approach using a UKF. Section V provides information about the experimental and road test setup. Section VI presents the results and discussions. Finally, Section VII concludes the paper.
## II Literature Review
Very limited works have integrated 5G measurements with OBMS [16, 17, 18]. The work in [16] utilized federated filtering to integrate INS/5G/GPS/LEO by means of sub-filters reporting to a central filter. In their 5G/INS sub-filter, they fuse 5G pseudo-range measurements with INS measurements by means of tight coupling (TC) utilizing an EKF. Such integration will yield high linearization error as the transition and observation models are non-linear. Furthermore, excluding angle-based 5G measurements can also constrain positioning accuracy and mandate the UE to establish connections with at least three BSs concurrently to obtain a precise 3D positioning solution. Relying on trilateration assuming access to three or more base stations may not be possible in dense urban areas and would result in a severe multipath effect that deteriorates the positioning accuracy. In reference to [17], the authors fused INS with 5G ToA and AoA. During the prediction stage, they rely on accelerometer readings to estimate velocity and position by incorporating a constant acceleration model. However, it is worth noting that such a model may be considered unusual given that INS mechanization techniques are already established in the literature and could offer more reliable computation of position and velocity at higher data rates without imposing limitations on the vehicle dynamics (e.g. constant acceleration model). In addition, using an EKF for filtering in the presence of non-linear transition and measurement models may result in sub-optimal performance. Lastly, their IMU measurements are simulated, making it hard to generalize or compare with other positioning solutions. For instance, simulated IMU data may not account for the effects of external factors such as temperature changes, magnetic interference, and mechanical vibrations. As a result, the performance of the positioning solution based on simulated data may not generalize well to real-world scenarios, which is addressed in this paper. Finally, the approach proposed in [18] suggests the use of a constant acceleration model for the prediction stage and 5G ToA, AoA, and IMU accelerations in the \(x\) and \(y\) directions for corrections. The EKF is utilized for the final integration, where the UE position, velocity, and acceleration are considered system states. However, this method does not consider estimating the azimuth (heading) angle, an essential variable for the navigation solution in real-life operations. Additionally, the direct use of ToA and AoA measurements
in the measurements vector leads to linearization errors, as mentioned earlier.
## III System Model
### 5G System Model
We take into account a down-link 3D positioning scenario with a single base station (BS) and a single UE with positions \(\boldsymbol{p}_{b_{3D}}=\begin{bmatrix}x_{b}&y_{b}&z_{b}\end{bmatrix}^{T}\) and \(\boldsymbol{p}_{3D}=\begin{bmatrix}x&y&z\end{bmatrix}^{T}\) respectively. We assume that the position of the BS is known and that the BS and the UE are oriented in a given manner. We use the channel parameters like AoA, denoted by \(\beta\), AoD, denoted by \(\alpha\), and ToA, denoted by \(\tau\), for each path. ToA can be used to compute the range between the BS and the UE through the following formula:
\[\tau=\frac{d_{3D}}{c}, \tag{1}\]
where \(d_{3D}\) is the total propagation distance, and \(c\) is the speed of light. The use of ToA requires tight synchronization between the UE and the BS. Else, time bias will afflict the measurements, causing positioning errors. RTT and TDoA measurements, on the other hand, do not require synchronization between the UE and the BS. In this paper, RTT measurements are utilized. To compute the range between the BS and the UE, the RTT measurement should be first divided by two to account for the total distance travelled. This work assumes that the UE will only be connected to the nearest BS. To determine the type of communication link between the BS and the UE, an NLoS detection technique based on range comparisons between RTT and RSS measurements, proposed in [19], will be used.
#### Iii-A1 5G LoS Positioning
The AoD and RTT information obtained from a single BS are used to determine the 3D position of the UE. The AoD provides the direction of the signal sent to the UE, while the RTT information can be used to calculate the distance from the BS to the UE. These measurements can then be used to determine the 3D position of the UE as seen in (2).
\[\boldsymbol{p}_{3D}=\boldsymbol{p}_{b_{3D}}+d_{3D}\begin{bmatrix}\sin\alpha \cos\phi\\ \cos\alpha\cos\phi\\ \sin(\phi)\end{bmatrix} \tag{2}\]
Where \(d_{3D}\) is the measured 3D distance between the BS and the UE, and \(\alpha\) and \(\phi\) are the estimated horizontal and elevation AoD angles, respectively. If a constant height assumption can be made about the UE, then the 3D positioning equation can be simplified to estimate the 2D position of the UE as seen in (3).
\[\boldsymbol{p}=\boldsymbol{p}_{b}+d\begin{bmatrix}\sin\alpha\\ \cos\alpha\end{bmatrix} \tag{3}\]
Where \(\boldsymbol{p}\) is the estimated 2D position of the UE, \(\boldsymbol{p}_{b}\) is the 2D position of the BS, and \(d\) is the 2D distance from the BS to the UE.
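A minimal sketch of Eq. (3) is shown below; the angle convention (AoD measured from north, matching the sin/cos ordering above) and the numeric example are assumptions for illustration only.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def ue_position_2d(p_bs: np.ndarray, rtt_s: float, aod_rad: float) -> np.ndarray:
    """Eq. (3): 2D UE position from a single BS using an RTT range and the horizontal AoD.

    The RTT is halved to obtain the one-way propagation distance.
    """
    d = 0.5 * rtt_s * C
    return p_bs + d * np.array([np.sin(aod_rad), np.cos(aod_rad)])

# Synthetic example: BS at the origin, UE 150 m away at a 30 degree AoD.
rtt = 2 * 150.0 / C
print(ue_position_2d(np.array([0.0, 0.0]), rtt, np.deg2rad(30.0)))
```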
#### Iii-A2 5G Multipath Positioning
The SBR-based positioning scheme introduced in [20] is utilized. The algorithm determines the segment of possible UE position by utilizing AoD, AoA, and the distance \(d\) of the strongest propagation path, as depicted in Fig. 1.
The figure displays the system model of a single-bounce reflection scenario. The scatterer's coordinates, \(\boldsymbol{p_{s}}=\begin{bmatrix}x_{s}&y_{s}\end{bmatrix}^{T}\), and the UE's coordinates, \(\boldsymbol{p}\), are calculated as seen in (4) and (5).
\[\boldsymbol{p_{s}}=\boldsymbol{p_{b}}+r\begin{bmatrix}\sin\beta\\ \cos\beta\end{bmatrix},\qquad r\in(0,d) \tag{4}\]
\[\boldsymbol{p}=\boldsymbol{p_{s}}-(d-r)\begin{bmatrix}\sin\alpha\\ \cos\alpha\end{bmatrix},\qquad r\in(0,d) \tag{5}\]
Where \(r\) is the distance between the BS and the scatterer. The possible position of the UE can be represented by a straight-line equation as seen in (6).
\[y=k(\alpha,\beta)x+b(\alpha,\beta,d) \tag{6}\]
Where,
\[k(\alpha,\beta)=\frac{\cos\alpha+\cos\beta}{\sin\alpha+\sin\beta}, \tag{7}\]
and,
\[b(\alpha,\beta,d)=-k(\alpha,\beta)(x_{b}-d\sin\alpha)+y_{b}-d\cos\alpha. \tag{8}\]
Accordingly, the position of the UE can be determined by finding the intersection between two lines of two propagation paths, if available. This work uses an order-of-reflection identification (OoRI) technique to filter out higher-order reflections. The technique is based on ensemble learning and relies on 5G channel parameters, such as AoA, AoD, ToA, and RSS [21].
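The sketch below illustrates Eqs. (6)-(8) and the intersection of two locus lines; the base station location, path distances, and angles in the example are hypothetical.

```python
import numpy as np

def reflection_line(x_b, y_b, d, aod, aoa):
    """Eqs. (6)-(8): slope k and intercept b of the locus of possible UE positions."""
    k = (np.cos(aod) + np.cos(aoa)) / (np.sin(aod) + np.sin(aoa))
    b = -k * (x_b - d * np.sin(aod)) + y_b - d * np.cos(aod)
    return k, b

def intersect(line1, line2):
    """UE estimate from two single-bounce paths: intersection of their locus lines."""
    k1, b1 = line1
    k2, b2 = line2
    x = (b2 - b1) / (k1 - k2)
    return np.array([x, k1 * x + b1])

# Hypothetical measurements for two resolved single-bounce paths from one BS at (0, 0).
path_a = reflection_line(0.0, 0.0, d=220.0, aod=np.deg2rad(40.0), aoa=np.deg2rad(110.0))
path_b = reflection_line(0.0, 0.0, d=260.0, aod=np.deg2rad(-25.0), aoa=np.deg2rad(80.0))
print(intersect(path_a, path_b))
```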
## IV OBMS System Model
### _INS Measurables_
A typical INS comprises an IMU unit consisting of three accelerometers and three gyroscopes. These sensors measure, along three mutually orthogonal directions, the accelerations \(f_{x},f_{y}\) and \(f_{z}\) and angular rates \(\omega_{x},\omega_{y}\), and \(\omega_{z}\) of a moving body in a 3D space. Such measurements are often used
Fig. 1: System model of a single-bounce reflection scenario.
for dead-reckoning positioning, which involves estimating the current position of a moving body based on its previous position, velocity, and orientation states. By integrating the specific forces and angular rate measurements from an IMU over time, it is possible to estimate the displacement and orientation of the object relative to its starting position. To achieve this, the accelerometer and gyroscope readings must be converted from the body frame, also known as the b-frame, to a global Earth-fixed coordinate frame. A local-level frame, also known as the l-frame, is frequently used, as seen in (9). Such transformation utilizes the \(\mathbf{R}_{b}^{l}\) rotation matrix as defined in [3] that transforms the measurement from the body frame (b) to the local navigation frame (l).
\[\begin{split}\mathbf{f}_{l}&=\mathbf{R}_{b}^{l}\mathbf{f}_{b}\\ \mathbf{\omega}_{l}&=\mathbf{R}_{b}^{l}\mathbf{\omega}_{b}, \end{split} \tag{9}\]
Among the errors associated with the OBMS are the sensors' noise and bias. Sensor noise refers to the random fluctuations in the sensor output due to the inherent sensor design and possibly the surrounding environment. The bias has two components. The first is a deterministic offset that can be removed by calibration. The second is the bias drift, which is stochastic in nature and changes over time, even when no external forces or rotations are present. Sensor fusion and calibration are frequently used to combine data from multiple sensors to estimate and correct such errors [22].
### _Odometers_
A wheel odometer that provides the vehicle's forward speed in the b-frame is utilized. However, since our states are in the l-frame, we need to transform the odometer velocity from the b-frame, denoted as \(v_{b}=\begin{bmatrix}0&v_{Odo}&0\end{bmatrix}^{T}\), to the l-frame using the second column of the rotation matrix \(\mathbf{R}_{b}^{l}\), as shown in (11).
\[\begin{bmatrix}v_{e}\\ v_{n}\\ v_{u}\end{bmatrix}=\begin{bmatrix}\sin a\cos p\\ \cos a\cos p\\ \sin p\end{bmatrix}v_{Odo} \tag{11}\]
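A one-line realization of the odometer transformation in (11) is sketched below (Python/NumPy; names are illustrative).

```python
import numpy as np

def odometer_to_lframe(v_odo, azimuth, pitch):
    """Eq. (11): map the forward odometer speed (b-frame) to ENU velocities
    using the current azimuth a and pitch p (radians)."""
    return v_odo * np.array([np.sin(azimuth) * np.cos(pitch),
                             np.cos(azimuth) * np.cos(pitch),
                             np.sin(pitch)])
```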
## V 5G-OBMS Integration Scheme
Within this section, we propose the utilization of a UKF [23] to incorporate 5G measurements, derived from both LoS and multipath sources, with OBMS in a loosely-coupled (LC) manner. This method of integration fuses independent position estimates obtained from OBMS, 5G LoS, and 5G NLoS measurements, in contrast to tightly-coupled (TC) integration that directly fuses raw 5G and OBMS measurements. The latter approach leads to high linearization errors, as discussed in [15]. The states vector \(\mathbf{x}\), the state transition model \(f(\mathbf{x},\mathbf{u})\), and the process covariance matrix \(\mathbf{Q}\) of the proposed method will be displayed first. Then, the proposed measurement vector \(\mathbf{z}\), together with the measurement model \(h(\mathbf{x})\) and the noise covariance matrix \(\mathbf{R}\) are presented next. Finally, we showcase the proposed measurement assessment strategy based on the vehicle's movement constraints. The overall block diagram of the proposed system is shown in Fig.2.
### _States and States Transition Model_
The proposed method estimates the positioning states in the geodetic reference frame, namely, latitude \(\varphi\), longitude \(\lambda\), and altitude \(h\). In addition to the positioning states, the velocity components along the east, north, and up (ENU) directions are also estimated, denoted by \(v_{e}\), \(v_{n}\), and \(v_{u}\), respectively. Lastly, the attitude components comprise the pitch \(p\), roll \(r\), and azimuth \(A\) angles. The aforementioned states are collectively referred to as the PVA states and are shown in (12).
\[\mathbf{x}_{PVA}=\begin{bmatrix}\varphi&\lambda&h&v_{e}&v_{n}&v_{u}&p&r&A\end{bmatrix} ^{T} \tag{12}\]
The aforementioned states are momentarily augmented with the system inputs, represented by the vector \(\mathbf{u}=\begin{bmatrix}f_{x}&f_{y}&f_{z}&\omega_{x}&\omega_{y}&\omega_{z}\end{bmatrix}\), which encompasses acceleration and angular velocity measurements. This preliminary stage precedes the generation of sigma points, with the objective of producing a uniform set of \(2n+1\) sigma points for INS measurements. This facilitates the capacity of the Unscented Kalman Filter (UKF) to characterize the impact of the inputs on the system state, thereby improving the accuracy of the system's actual state estimation.
The proposed transition model \(f(\mathbf{x},\mathbf{u})\) is governed by the INS mechanization process. INS mechanization is the process of computing the navigation PVA states from the raw inertial measurements. The mathematical representation of INS mechanization in the l-frame can be summarized in Eqs. (13-17):
\[\begin{bmatrix}\dot{\varphi}\\ \dot{\lambda}\\ \dot{h}\end{bmatrix}=\begin{bmatrix}0&\frac{1}{R_{M}+h}&0\\ \frac{1}{(R_{N}+h)\cos\varphi}&0&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}v_{e}\\ v_{n}\\ v_{u}\end{bmatrix} \tag{13}\]
Eq. (13) expresses the relationship between the time derivatives of the geodetic coordinates, denoted by \(\dot{\varphi}\), \(\dot{\lambda}\), and \(\dot{h}\), and the velocities along the l-frame axes, denoted by \(v_{e}\), \(v_{n}\), and \(v_{u}\). \(R_{N}\) is the radius of curvature in the Prime Vertical, and \(R_{M}\) is the radius of curvature in the Meridian. Eq. (14) represents the velocity mechanization in the l-frame.
\[\dot{\mathbf{v}}^{l}=\mathbf{R}_{b}^{l}\mathbf{f}^{b}-(2\mathbf{\Omega}_{ie}^{l}+\mathbf{\Omega}_{ el}^{l})\mathbf{v}^{l}+\mathbf{g}^{l}, \tag{14}\]
where \(\dot{\mathbf{v}}^{l}\) is the kinematic acceleration in the l-frame. The components \(2\mathbf{\Omega}_{ie}^{l}\cdot\mathbf{v}^{l}\), and \(\mathbf{\Omega}_{el}^{l}\cdot\mathbf{v}^{l}\) denote the acceleration observed in the l-frame with respect to the Earth frame (e-frame), and the Coriolis acceleration, respectively. In particular, \(\mathbf{\Omega}_{ie}^{l}\) is the skew-symmetric matrix of \(\mathbf{\omega}_{ie}^{l}\), which is a vector that represents the Earth's rotation rate in the l-frame as seen in (15).
\[\mathbf{\omega}_{ie}^{l}=[0\ \ \omega^{e}\cos\varphi\ \ \omega^{e}\sin\varphi]^{T} \tag{15}\]
\(\mathbf{\Omega}_{el}^{l}\) is a skew-symmetric matrix of \(\mathbf{\omega}_{el}^{l}\) representing the rotation rate of the l-frame relative to the e-frame and expressed in the l-frame as seen in (16).
\[\mathbf{\omega}_{el}^{l}=\begin{bmatrix}\frac{-v_{n}}{R_{M}+h}&\frac{v_{e}}{R_{N}+h}&\frac{v_{e}\tan\varphi}{R_{N}+h}\end{bmatrix}^{T} \tag{16}\]
Furthermore, \(\mathbf{g}^{l}=\begin{bmatrix}0&0&-g\end{bmatrix}^{T}\) is the gravity vector. Lastly, solving the time derivative equation of the transformation matrix \(\mathbf{R}_{l}^{b}\) yields the attitude (orientation) of the moving body as seen in (17).
\[\dot{\mathbf{R}}_{b}^{l}=\mathbf{R}_{b}^{l}(\mathbf{\Omega}_{ib}^{b}+\mathbf{\Omega}_{il}^{b}) \tag{17}\]
Where \(\mathbf{\Omega}_{ib}^{b}\) is a skew-symmetric matrix of \(\mathbf{\omega}_{ib}^{b}\) representing the gyroscope measurements that encode the rotation rate of the b-frame relative to the earth-centred-inertial (ECI) frame and expressed in the b-frame. The \(\mathbf{\Omega}_{il}^{b}\) is the skew-symmetric matrix of \(\mathbf{\omega}_{il}^{b}\) representing the rotation rate of the l-frame relative to the inertial frame expressed in the b-frame. It can be computed by adding \(\mathbf{\omega}_{ie}^{l}\) and \(\mathbf{\omega}_{el}^{l}\) as seen in (18).
\[\mathbf{\omega}_{il}^{b}=\mathbf{R}_{l}^{b}\cdot(\mathbf{\omega}_{ie}^{l}+\mathbf{\omega}_{el} ^{l}) \tag{18}\]
The summary of the transition system model \(f(\mathbf{x}_{k-1}^{+},\mathbf{u}_{k})\) can be seen in (19).
\[\begin{bmatrix}\dot{\mathbf{r}}^{l}\\ \dot{\mathbf{v}}^{l}\\ \dot{\mathbf{R}}_{b}^{l}\end{bmatrix}=\begin{bmatrix}\mathbf{D}^{-1}\mathbf{v}^{l}\\ \mathbf{R}_{\mathbf{b}}^{l}\mathbf{f}^{b}-(2\mathbf{\Omega}_{ie}^{l}+\mathbf{\Omega}_{el}^{l})\mathbf{ v}^{l}+\mathbf{g}^{l}\\ \mathbf{R}_{\mathbf{b}}^{l}(\mathbf{\Omega}_{ib}^{b}+\mathbf{\Omega}_{il}^{b})\end{bmatrix} \tag{19}\]
Where \(\dot{\mathbf{r}}^{l}\) is the time rate of change of the three position components, \(\varphi,\lambda\), and \(h\), and \(\mathbf{D}^{-1}\) is defined as follows:
\[\mathbf{D}^{-1}=\begin{bmatrix}0&\frac{1}{R_{M}+h}&0\\ \frac{1}{(R_{N}+h)\cos\varphi}&0&0\\ 0&0&1\end{bmatrix} \tag{20}\]
Fig. 3 presents the detailed block diagram of INS mechanization.
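The mechanization equations (13)-(19) can be collected into a single routine that evaluates the state derivatives for one epoch, as sketched below (Python/NumPy; the Earth-rate constant and the default gravity value are standard, all other names are illustrative, and the signs follow the equations exactly as written above).

```python
import numpy as np

OMEGA_E = 7.292115e-5  # Earth rotation rate in rad/s

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def mechanization_rates(lat, h, v_l, R_bl, f_b, w_ib_b, R_M, R_N, g=9.80665):
    """Eqs. (13)-(19): time derivatives of position, velocity, and attitude
    in the l-frame. Angles are in radians and v_l = [v_e, v_n, v_u]."""
    # Position rates, Eqs. (13) and (20)
    D_inv = np.array([[0.0, 1.0 / (R_M + h), 0.0],
                      [1.0 / ((R_N + h) * np.cos(lat)), 0.0, 0.0],
                      [0.0, 0.0, 1.0]])
    r_dot = D_inv @ v_l

    # Earth-rate and transport-rate vectors, Eqs. (15)-(16)
    w_ie_l = np.array([0.0, OMEGA_E * np.cos(lat), OMEGA_E * np.sin(lat)])
    w_el_l = np.array([-v_l[1] / (R_M + h),
                        v_l[0] / (R_N + h),
                        v_l[0] * np.tan(lat) / (R_N + h)])

    # Velocity rates, Eq. (14)
    g_l = np.array([0.0, 0.0, -g])
    v_dot = R_bl @ f_b - (2.0 * skew(w_ie_l) + skew(w_el_l)) @ v_l + g_l

    # Attitude rates, Eqs. (17)-(18)
    w_il_b = R_bl.T @ (w_ie_l + w_el_l)
    R_dot = R_bl @ (skew(w_ib_b) + skew(w_il_b))
    return r_dot, v_dot, R_dot
```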
#### V-B1 Quaternions
The parameterization of the rotation matrix \(\mathbf{R}_{\mathbf{b}}^{l}\) is necessary to solve the mechanization equations. The use of quaternions is a widely adopted technique in many fields of study, owing to its numerous advantageous features [3]. For instance, the quaternion solution does not suffer from the problem of gimbal lock, which can be a major issue when using other rotation representations, such as Euler angles. A Gimbal lock occurs when two or more of the rotation axes align, resulting in a loss of one degree of freedom and making certain rotations impossible to represent. Additionally, quaternion computations are relatively simple to perform. Quaternions are composed of four components: a scalar part and a vector part. The scalar part is a real number, while the vector part is a three-dimensional vector, and is defined as follows:
Fig. 3: Detailed INS mechanization block diagram
Fig. 2: Block diagram of the proposed integrated positioning system.
\[\mathbf{q}=\begin{bmatrix}\frac{0.25(r_{32}-r_{23})}{q_{4}}\\ \frac{0.25(r_{13}-r_{31})}{q_{4}}\\ \frac{0.25(r_{21}-r_{12})}{q_{4}}\\ 0.5\sqrt{1+r_{11}+r_{22}+r_{33}}\end{bmatrix} \tag{21}\]
Where the notation \(r_{12}\) indicates the first row and second column element of the rotation matrix \(\mathbf{R}_{b}^{l}\), and \(q_{4}\) denotes the fourth element of the quaternion vector \(\mathbf{q}\). The components of a quaternion are typically subject to certain constraints. Specifically, in some contexts, the components of a quaternion may be required to have a norm or magnitude of 1 as seen in (22). This norm constraint ensures that the quaternion represents a rotation, and it is often referred to as the unit quaternion constraint.
\[q_{1}^{2}+q_{2}^{2}+q_{3}^{2}+q_{4}^{2}=1 \tag{22}\]
The aforementioned equivalence might not hold true due to computational errors. To compensate for this, the quaternion parameter vector \(\mathbf{q}\) needs to be re-normalized after each computational step as follows:
\[\hat{\mathbf{q}}=\frac{\mathbf{q}}{\sqrt{1-\Delta}}\cong\mathbf{q}\left(1+\frac{\Delta}{2 }\right) \tag{23}\]
where,
\[\Delta=1-(q_{1}^{2}+q_{2}^{2}+q_{3}^{2}+q_{4}^{2}) \tag{24}\]
In order to predict quaternion components \(\mathbf{q}_{k+1}\) based on \(\mathbf{q}_{k}\), the following formula is used:
\[\mathbf{q}_{k+1}=\mathbf{q}_{k}+\left(\frac{1}{2}\Omega_{il}^{b}\left(\omega_{k} \right)\mathbf{q}_{k}\right)\Delta t, \tag{25}\]
where \(\omega_{k}\) denotes the angular velocities of the body rotation. Once the quaternion parameters have been established, the following direct relationship can be used to find the rotation matrix \(\mathbf{R}_{b}^{l}\), as seen in (26).
According to the rotation matrix \(\mathbf{R}_{b}^{l}\) defined in (10), the attitude angles can be computed using the newly computed matrix utilizing the following relationships:
\[p=\tan^{-1}\left(\frac{r_{32}}{\sqrt{r_{12}^{2}+r_{22}^{2}}}\right) \tag{27}\]
\[r=-\tan^{-1}\left(\frac{r_{31}}{r_{33}}\right) \tag{28}\]
\[A=\tan^{-1}\left(\frac{r_{12}}{r_{22}}\right) \tag{29}\]
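The quaternion relations (21), (23)-(24), and (27)-(29) translate directly into the following helper functions (a sketch only; the indices follow the row/column convention \(r_{ij}\) used above, with zero-based array indexing in code).

```python
import numpy as np

def quat_from_dcm(R):
    """Eq. (21): quaternion [q1, q2, q3, q4] from the rotation matrix R_b^l."""
    q4 = 0.5 * np.sqrt(1.0 + R[0, 0] + R[1, 1] + R[2, 2])
    q1 = 0.25 * (R[2, 1] - R[1, 2]) / q4
    q2 = 0.25 * (R[0, 2] - R[2, 0]) / q4
    q3 = 0.25 * (R[1, 0] - R[0, 1]) / q4
    return np.array([q1, q2, q3, q4])

def quat_normalize(q):
    """Eqs. (23)-(24): restore the unit-norm constraint after each step."""
    delta = 1.0 - np.sum(q ** 2)
    return q * (1.0 + 0.5 * delta)

def attitude_from_dcm(R):
    """Eqs. (27)-(29): pitch, roll, and azimuth angles from R_b^l."""
    p = np.arctan2(R[2, 1], np.sqrt(R[0, 1] ** 2 + R[1, 1] ** 2))
    r = -np.arctan2(R[2, 0], R[2, 2])
    A = np.arctan2(R[0, 1], R[1, 1])
    return p, r, A
```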
#### V-B2 Process Covariance Matrix
In contrast to prior works, we adopt a diagonal process noise covariance matrix \(\mathbf{Q}\) representing the noises of the accelerometers and gyroscopes only, rather than encompassing all state noises, as seen in (30).
\[\mathbf{Q}=diag([\sigma_{\omega_{x}}^{2}\ \sigma_{\omega_{y}}^{2}\ \sigma_{\omega_{z}}^{2}\ \sigma_{f_{x}}^{2}\ \sigma_{f_{y}}^{2}\ \sigma_{f_{z}}^{2}]) \tag{30}\]
Where \(\sigma_{\omega_{x}}^{2}\), \(\sigma_{\omega_{y}}^{2}\), and \(\sigma_{\omega_{z}}^{2}\) are the gyroscope noise variances and \(\sigma_{f_{x}}^{2}\), \(\sigma_{f_{y}}^{2}\), and \(\sigma_{f_{z}}^{2}\) are the accelerometer noise variances, all of which are modeled as additive white Gaussian noise (AWGN). Designing the process covariance matrix in this way makes it easily tunable, as the uncertainties of the system states are influenced by the uncertainties of the system inputs, which are propagated to the states through the transition model. In order to produce the sigma points, it becomes necessary to augment the \(\mathbf{P}\) and \(\mathbf{Q}\) matrices to account for the sensor noises.
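A sketch of assembling the diagonal \(\mathbf{Q}\) of (30) and of the state and covariance augmentation performed before sigma-point generation is given below (illustrative; the block layout of the augmentation is an assumption consistent with the description above).

```python
import numpy as np

def process_noise(sigma_gyro, sigma_accel):
    """Eq. (30): diagonal Q built from the three gyroscope and three
    accelerometer noise standard deviations."""
    return np.diag(np.concatenate([np.square(sigma_gyro),
                                   np.square(sigma_accel)]))

def augment_for_sigma_points(x, P, u, Q):
    """Append the IMU inputs to the PVA state and Q to P before drawing
    the 2n+1 sigma points."""
    x_aug = np.concatenate([x, u])
    P_aug = np.block([[P, np.zeros((P.shape[0], Q.shape[1]))],
                      [np.zeros((Q.shape[0], P.shape[1])), Q]])
    return x_aug, P_aug
```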
### _Measurements and Measurements Model_
#### V-C1 Measurements
In the proposed method, the measurement vector \(\mathbf{z}\) comprises the 3D position of the UE from both LoS and NLoS measurements. Additionally, it consists of the vehicle velocity with respect to the l-frame as acquired from a wheel odometer, as shown in (31).
\[\mathbf{z}=\begin{bmatrix}\mathbf{\varphi}_{5G}&\mathbf{\lambda}_{5G}&\mathbf{h}_{5G}&v_{e_{Odo}}&v_{n_{Odo}}&v_{u_{Odo}}\end{bmatrix}^{T} \tag{31}\]
Where \(\mathbf{\varphi}_{5G}\), \(\mathbf{\lambda}_{5G}\), and \(\mathbf{h}_{5G}\) are the 3D UE position measurements provided by 5G LoS and NLoS signals; and \(v_{e_{Odo}}\), \(v_{n_{Odo}}\), and \(v_{u_{Odo}}\) are the vehicle velocity measurements provided by the odometer in the l-frame.
#### V-C2 Measurement Exclusion
It is crucial to highlight that the measurement vector \(\mathbf{z}\) is subject to dynamic changes depending on the availability of LoS signals and SBRs. Prior to any positioning estimation, a measurement exclusion process is performed to filter out NLoS signals, allowing only LoS signals to be utilized by the LoS-based positioning module. This process follows our previous work described in [19]. The approach relies on the distinction in distance computation between the UE and the BS through the utilization of time-based and received signal strength-based calculations. On the other hand, when multipath signals are used for positioning, channel parameters are passed to an OoRI module, which filters out higher-order reflections by allowing only single-bounce reflections to be passed on to the multipath positioning module. The functioning of this OoRI module is presented in [21]. The machine learning model was trained on a dataset comprising \(3.6\) million observations, which consisted of 5G channel parameters such as ToA, AoA, AoD, and Received Signal Strength (RSS). The training process involved using ensemble learning, where a total of \(14\) decision tree learners were trained. Upon completion of the training, the model attained a classification accuracy of \(99.8\%\).
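As an illustration of the OoRI filtering step, a generic tree-ensemble classifier over the 5G channel parameters could be set up as sketched below (this is not the trained model of [21]; the synthetic data, the class labels, and the scikit-learn estimator are placeholders for illustration only).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))      # stand-in features: ToA, AoA, AoD, RSS
y = rng.integers(0, 3, size=1000)   # stand-in labels: 0 = LoS, 1 = SBR, 2 = higher order

oori = RandomForestClassifier(n_estimators=14, random_state=0)
oori.fit(X, y)

# Keep only paths classified as single-bounce for the multipath positioning module.
sbr_paths = X[oori.predict(X) == 1]
```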
#### V-C3 Measurement Assessment
Given that the proposed OoRI model is based on machine learning, it is essential to address the issue of misclassified SBRs, which could result in substantial errors in the computed position if they are passed to the multipath positioning module. Hence, position computations resulting from multipath positioning undergo a second stage of validation, which is contingent upon the vehicle's motion constraints. These constraints are determined using odometer measurements and posterior estimations from the previous epoch \(k-1\), as illustrated in equations (32) and (33). These equations are derived from the non-holonomic constraints of land vehicles [24].
\[\mathbf{R}_{b}^{l}=\begin{bmatrix}\mathbf{q}_{(1)}^{2}-\mathbf{q}_{(2)}^{2}-\mathbf{q}_{(3)}^{2}+ \mathbf{q}_{(4)}^{2}&2\mathbf{q}_{(1)}\mathbf{q}_{(2)}+2\mathbf{q}_{(3)}\mathbf{q}_{(4)}&2\mathbf{q}_{(1)} \mathbf{q}_{(3)}-2\mathbf{q}_{(2)}\mathbf{q}_{(4)}\\ 2\mathbf{q}_{(1)}\mathbf{q}_{(2)}-2\mathbf{q}_{(3)}\mathbf{q}_{(4)}&-\mathbf{q}_{(1)}^{2}+\mathbf{q}_{ (2)}^{2}-\mathbf{q}_{(3)}^{2}+\mathbf{q}_{(4)}^{2}&2\mathbf{q}_{(2)}\mathbf{q}_{(3)}+2\mathbf{q}_{( 1)}\mathbf{q}_{(4)}\\ 2\mathbf{q}_{(1)}\mathbf{q}_{(3)}+2\mathbf{q}_{(2)}\mathbf{q}_{(4)}&2\mathbf{q}_{(2)}\mathbf{q}_{(3)}- 2\mathbf{q}_{(1)}\mathbf{q}_{(4)}&-\mathbf{q}_{(1)}^{2}-\mathbf{q}_{(2)}^{2}+\mathbf{q}_{(3)}^{2}+ \mathbf{q}_{(4)}^{2}\end{bmatrix} \tag{26}\]
\[\Delta\varphi_{const.}=\frac{\cos r_{k-1}^{+}\cos A_{k-1}^{+}(v_{Odo_{k}}+ \epsilon)dt}{R_{M}+h_{k-1}^{+}} \tag{32}\]
\[\Delta\lambda_{const.}=\frac{\sin r_{k-1}^{+}\cos A_{k-1}^{+}(v_{Odo_{k}}+ \epsilon)dt}{(R_{N}+h_{k-1}^{+})\cos\varphi_{k-1}^{+}} \tag{33}\]
Where \(\epsilon\) denotes the quantization error of the odometer, and \(dt\) denotes the sampling time. The SBR measurements are then incorporated in the measurement vector if they satisfy the motion constraint of the vehicle, as shown in (34).
\[\text{SBR}=\begin{cases}\text{Include},&\Delta\varphi<\Delta\varphi_{const.} \wedge\Delta\lambda<\Delta\lambda_{const.}\\ \text{Discard},&\text{otherwise}.\end{cases} \tag{34}\]
Where \(\Delta\varphi\) and \(\Delta\lambda\) are the geodetic position increments implied by the SBR measurement and are computed as seen in (35).
\[\Delta\varphi=\varphi_{k-1}^{+}-\varphi_{k_{SBR}} \tag{35}\]
\[\Delta\lambda=\lambda_{k-1}^{+}-\lambda_{k_{SBR}}\]
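The motion-constraint gate of (32)-(35) can be sketched as follows (illustrative; absolute values are used here to apply the bounds symmetrically, and the default odometer quantization error is a placeholder value).

```python
import numpy as np

def sbr_passes_motion_check(lat_prev, lon_prev, h_prev, r_prev, A_prev,
                            lat_sbr, lon_sbr, v_odo, dt, R_M, R_N, eps=0.05):
    """Eqs. (32)-(35): accept an SBR fix only if the implied latitude and
    longitude increments stay within the odometer-derived bounds."""
    d_lat_max = np.cos(r_prev) * np.cos(A_prev) * (v_odo + eps) * dt / (R_M + h_prev)
    d_lon_max = (np.sin(r_prev) * np.cos(A_prev) * (v_odo + eps) * dt
                 / ((R_N + h_prev) * np.cos(lat_prev)))
    d_lat = lat_prev - lat_sbr
    d_lon = lon_prev - lon_sbr
    return abs(d_lat) < abs(d_lat_max) and abs(d_lon) < abs(d_lon_max)
```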
#### V-C4 Observation Model
The observation model representing the relationship between states and observations is linear, as demonstrated in (36).
\[\mathbf{H}=\begin{bmatrix}\mathbf{I}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 9} \\ \mathbf{0}_{3\times 3}&\mathbf{I}_{3\times 3}&\mathbf{0}_{3\times 9}\end{bmatrix} \tag{36}\]
#### V-C5 Measurement Noise Covariance
The measurement noise covariance matrix is shown in (37). Entries for positioning that rely on 5G, whether LoS measurements or SBRs, are denoted by \(\sigma_{\varphi_{5G}}^{2}\), \(\sigma_{\lambda_{5G}}^{2}\), and \(\sigma_{h_{5G}}^{2}\).
\[\mathbf{R}=\text{diag}\left(\left[\sigma_{\varphi_{5G}}^{2}\ \ \sigma_{\lambda_{5G}}^{2}\ \ \sigma_{h_{5G}}^{2}\ \ \sigma_{v_{e_{Odo}}}^{2}\ \ \sigma_{v_{n_{Odo}}}^{2}\ \ \sigma_{v_{u_{Odo}}}^{2}\right]\right) \tag{37}\]
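For completeness, the construction of \(\mathbf{H}\) in (36) and \(\mathbf{R}\) in (37) is sketched below (illustrative; the 15-dimensional augmented state ordering, position first and velocity second, is assumed from (12) and the input augmentation described earlier).

```python
import numpy as np

def observation_model(n_states=15):
    """Eq. (36): linear H selecting the 3 position and 3 velocity states."""
    H = np.zeros((6, n_states))
    H[0:3, 0:3] = np.eye(3)   # latitude, longitude, altitude
    H[3:6, 3:6] = np.eye(3)   # v_e, v_n, v_u
    return H

def measurement_noise(sigma_pos_5g, sigma_v_odo):
    """Eq. (37): diagonal R for the 5G position fix and the odometer velocity."""
    return np.diag(np.concatenate([np.square(sigma_pos_5g),
                                   np.square(sigma_v_odo)]))
```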
## VI Road Tests Setup
A quasi-real 5G simulation configuration offered by Siradel was used for validation. Siradel 5G Channel suite incorporates LiDAR-based maps of the structures, vegetation, and water bodies in downtown regions of cities like Toronto, as shown in Fig. 4. The simulation tool uses its ray-tracing capabilities and propagation models to calculate necessary positioning measurables like RSS, ToA, AoA, and AoD based on the position of the UE and the virtually connected BSs. A car equipped with NovAtel's high-end positioning solution, which includes a tactical grade KVH 1750 IMU, and a tactical grade GNSS receiver, was driven in Downtown Toronto to simulate a real urban navigation situation. Then, in accordance with the Release 16 guidelines of the 3GPP, BSs were placed approximately \(250\) m apart along the driven trajectory. Finally, Siradel was used to create the required 5G measurables using the imported BS positions and NovAtel's reference solution. The mmWave transmissions used by Siradel have a carrier frequency of \(28\) GHz and a bandwidth of \(400\) MHz. The UE was equipped with an omnidirectional antenna, while the BSs had \(8\times 1\) ULAs.
Two test trajectories, namely NavINST 1 and NavINST 2, are used for validation in this work, as seen in Figs. 5 and 6 respectively. The characteristics of each trajectory, along with the equipment used, are summarized in Table VI.
The trajectories were carried out during rush hour, resulting in numerous instances of sudden car acceleration and stopping dynamics. Furthermore, the trajectories included many turns and challenging maneuvers.
Fig. 4: Downtown Toronto, ON, Google Earth (Top) vs Siradel simulation tool (Bottom).
## VII Results and Discussions
### _Standalone Positioning_
This section presents the positioning solution error statistics for the standalone (SA) operation of INS, 5G-LoS, and 5G-SBRs. Tables II-III summarize the error statistics of trajectories NavINST 1 and 2, respectively. Fig. 7 shows the error cumulative distribution function (CDF) of all 5G SA positioning solutions. In Table II, it can be seen that 5G LoS- and SBRs-based positioning have close error statistics, with SBRs providing slightly better results when they are available. However, their RMS and max errors are drastically higher as they cause severe positioning errors when the available SBRs are insufficient (i.e. less than two). The dissimilarity in error statistics is evident from Table III, where the trajectory exhibits a reduced likelihood of LoS communication with the BS. This finding indicates that the probability of obtaining a sufficient number of SBRs in urban settings is higher than the probability of LoS communication.
A close-up of the positioning solution of the proposed integration using UKF compared to that of SA 5G LoS measurements is shown in Fig. 13.
Another close-up is shown in Fig. 13 where the 5G LoS outage as previously observed in Fig. 10 has been successfully bridged with the aid of OBMS.
### _Integration with SBR-based Positioning_
In this section, we expand our integration approach to incorporate SBRs using UKF, building upon earlier findings. The results are summarized in Table V. Furthermore, the positioning error CDF is shown in Figs. 15-16. Once again, it is evident that the disparity in results is more pronounced in the NavINST 2 trajectory than in NavINST 1, primarily due to the more frequent occurrence of outages in NavINST 2. Upon examining the results of NavINST 2, it is apparent that integrating multipath signals can maintain a level of accuracy below \(30\) cm for \(97\%\) of the time, compared to only \(91\%\) without utilizing multipath. As a benchmark, reliable operation of autonomous vehicles requires a decimeter level positioning accuracy of \(<30\) cm for at least \(2\sigma\), (\(>95\%\)) of the time [25].
Figs. 17-19 show close-up comparisons between the positioning solution of the proposed integration using UKF with and without SBRs. The results indicate that prolonged LoS outages should be bridged since the IMU positioning solution is prone to drift. Multipath signals are more likely to be present than LoS communication and can serve as a bridge to fill these gaps.
Fig. 8: Close-up scenario that showcases the capability of multipath positioning accuracy during LoS outage.
Fig. 10: Close-up scenario that shows an instance where both LoS and SBRs are not available.
Fig. 7: CDF of the positioning errors of standalone 5G-LoS (solid) positioning vs. 5G-SBRs (dashed) for trajectories NavINST 1 and NavINST 2.
Fig. 9: Close-up scenario that showcases the positioning solution of utilizing LoS measurements during SBRs outage.
## VIII Conclusion
In conclusion, this paper presents an improved positioning solution for AVs that incorporates 5G mmWave LoS and multipath signals as well as integration with OBMS. The work employs a UKF fusion engine as an alternative to the commonly used EKF. To evaluate the health of the 5G measurements, two techniques were used. The first was based on the communication link between the BS and the UE, while the second relied on the motion restrictions of the vehicle. To validate the proposed methods, two trajectories with real-vehicle dynamics and different low-end IMU units were utilized. A novel quasi-real 5G simulator with ray-tracing capabilities was used to obtain 5G measurements. In the course of our analysis, it was observed that SBRs are more easily accessible compared to LoS links. Moreover, it was found that UKF outperforms EKF, particularly during extended periods of 5G outages. Finally, we demonstrated the integration capabilities with multipath measurements. Our findings indicate that exploiting available multipath signals is necessary to achieve decimeter-level accuracy. With the proposed positioning solution, the system achieved a sub-30 cm level of accuracy for about \(97\%\) of the time, compared to only \(91\%\) of the time without incorporating multipath signals.
|
2307.12290 | Remark on the Stability of Energy Maximizers for the 2D Euler equation
on $\mathbb{T}^2$ | It is well-known that the first energy shell, \[\mathcal{S}_1^{c_0}:=\{\alpha
\cos(x+\mu)+\beta\cos(y+\lambda): \alpha^2+\beta^2=c_0\,\, \&\,\,
(\mu,\lambda)\in\mathbb{R}^2\}\] of solutions to the 2d Euler equation is
Lyapunov stable on $\mathbb{T}^2$. This is simply a consequence of the
conservation of energy and enstrophy. Using the idea of Wirosoetisno and
Shepherd \cite{WS}, which is to take advantage of conservation of a properly
chosen Casimir, we give a simple and quantitative proof of the $L^2$ stability
of single modes up to translation. In other words, each
\[\mathcal{S}_1^{\alpha,\beta}:=\{\alpha \cos(x+\mu)+\beta\cos(y+\lambda):
(\mu,\lambda)\in\mathbb{R}^2\}\] is Lyapunov stable. Interestingly, our
estimates indicate that the extremal cases $\alpha=0,$ $\beta=0$, and
$\alpha=\pm\beta$ may be markedly less stable than the others. | Tarek M. Elgindi | 2023-07-23T10:54:32Z | http://arxiv.org/abs/2307.12290v2 | # Remark on the Stability of Energy Maximizers for the 2D Euler equation on \(\mathbb{T}^{2}\)
###### Abstract
It is well-known that the first energy shell,
\[\mathcal{S}_{1}^{c_{0}}:=\{\alpha\cos(x+\mu)+\beta\cos(y+\lambda):\alpha^{2}+ \beta^{2}=c_{0}\ \&\ (\mu,\lambda)\in\mathbb{R}^{2}\}\]
of solutions to the 2d Euler equation is Lyapunov stable on \(\mathbb{T}^{2}\). This is simply a consequence of the conservation of energy and enstrophy. Using the idea of Wirosoetisno and Shepherd [15], which is to take advantage of conservation of a properly chosen Casimir, we give a simple and quantitative proof of the \(L^{2}\) stability of single modes up to translation. In other words, each
\[\mathcal{S}_{1}^{\alpha,\beta}:=\{\alpha\cos(x+\mu)+\beta\cos(y+\lambda):(\mu, \lambda)\in\mathbb{R}^{2}\}\]
is Lyapunov stable. Interestingly, our estimates indicate that the extremal cases \(\alpha=0\), \(\beta=0\), and \(\alpha=\pm\beta\) may be markedly less stable than the others.
_Dedicated to Vladimir Sverak on the occasion of his 65th birthday._
###### Contents
* 1 Introduction
* 1.1 Main Theorem
* 1.2 Remarks on the Main Theorem
* 2 Proof of the Main Theorem
* 2.1 \(L^{2}\) Stability of the First Shell
* 2.2 Two calculus lemmas
* 2.3 A conserved quantity that distinguishes values of \((\alpha,\beta)\)
* 3 Acknowledgements
## 1 Introduction
Recall the 2d Euler equation in vorticity form
\[\partial_{t}\omega+u\cdot\nabla\omega=0, \tag{1}\] \[u=\nabla^{\perp}(-\Delta)^{-1}\omega. \tag{2}\]
In studying the dynamics of solutions to (1)-(2), it is important to recall the conservation laws enjoyed by regular solutions. Indeed, on any smooth domain, we have that the energy
\[E(\omega):=\int|u|^{2} \tag{3}\]
is conserved1. We also have the conservation of all Casimirs:
Footnote 1: It is customary to take the no-penetration boundary condition \(u\cdot n=0\) on the boundary of the domain, though we will only be concerned with domains without boundary here.
\[H_{f}(\omega):=\int f(\omega). \tag{4}\]
The conservation of the Casimirs (4) is equivalent to the statement that any regular solution to the Euler equation is always a volume preserving rearrangement of its initial data. All of these conservation laws put infinitely many constraints on solutions to (1)-(2) and thus play a crucial role in describing their dynamics. In particular, they play a vital role in the study of (nonlinear) stability of steady states.
The most powerful tool for studying nonlinear stability that we are aware of is Kelvin's variational principle [13], which was placed in a general setting by Arnold [1, 2] in the 1960's. Arnold's theory of stability, which is expounded upon extensively in the notes of V. Sverak [12], is based on a simple minimization principle using the conserved quantities. A simple example of this principle is
**Lemma 1.1**.: _Consider the ordinary differential equation on \(\mathbb{R}^{d}:\)_
\[\frac{d}{dt}x=N(x), \tag{5}\]
_for some smooth \(N:\mathbb{R}^{d}\to\mathbb{R}^{d}.\) Assume that \(N\) has a first integral \(E:\mathbb{R}^{d}\to\mathbb{R},\) in other words that \(\frac{d}{dt}E(x)=0,\) for every solution \(x\) to (5). If \(x_{*}\) is a strict local extremizer of \(E,\) then \(x_{*}\) is a Lyapunov stable steady solution of (5)._
The proof is elementary. Note that this result can be extended to the case when (5) has other first integrals \(H_{i}\) with the weaker assumption that \(x_{*}\) is an extremizer of \(E\) on a single leaf \(\cap_{i}\{H_{i}=c_{i}\}\) (see [2, Theorem 3.3]). For the Euler equation, the energy (3) is a first integral; moreover, a single solution is always a rearrangement of its initial data. This motivates Arnold's notion of hydrodynamic stability2:
Footnote 2: In fact, we are giving a more general notion than the one given in Arnold’s original paper that was given by Burton in [3] (it can be argued that this concept is even present in Kelvin’s short note [13]).
**Definition 1.2**.: _Fix a two dimensional domain \(\Omega.\) We say that \(\omega_{*}\) is an Arnold-stable 2d Euler steady state if it is a strict local extremizer of \(E\) among all volume preserving rearrangements of \(\omega_{*}.\)_
Arnold-stable steady states exist on all simply connected domains; in fact, one can construct an infinite-dimensional family of Arnold-stable steady states on any simply connected domain [4, 5]. See also [10] for recent advances on Arnold stable solutions on \(\mathbb{R}^{2}\) and the vanishing viscosity limit. On spatially periodic domains (like rectangular or square tori), the situation is quite different. It is not difficult to see that there are no non-trivial Arnold stable steady states on \(\mathbb{T}^{2},\) due to translation invariance. In fact, the author is unaware of _any_ non-trivial Lyapunov stable steady state on \(\mathbb{T}^{2}\). The purpose of this work is to review a part of the picture on steady states on \(\mathbb{T}^{2}\) of particular interest: the global maxima of energy.
### Main Theorem
If we restrict our attention to vorticities with unity \(L^{2}\) norm, we find that any element of the three-dimensional manifold
\[\mathcal{S}^{1}_{1}=\{\alpha\cos(x+\mu)+\beta\cos(y+\lambda):\alpha^{2}+\beta ^{2}=1,(\mu,\lambda)\in\mathbb{R}^{2}\}\]
is a global maximizer of energy. Since all of these are obviously not _strict_ global maximizers, we just get that \(\mathcal{S}^{1}_{1}\) itself is \(L^{2}\) stable. Whether any particular element of \(\mathcal{S}^{1}_{1}\) is stable is an open problem; however, it turns out that one can use conservation of the Casimirs to slightly restrict the dynamics. Now we state our main theorem regarding the sets \(\mathcal{S}^{\alpha,\beta}_{1}:\)
\[\mathcal{S}^{\alpha,\beta}_{1}=\{\alpha\cos(x+\mu)+\beta\cos(y+\lambda):(\mu, \lambda)\in\mathbb{R}^{2}\}.\]
For ease of exposition, we take \(\alpha^{2}+\beta^{2}=1.\)
**Theorem 1.3**.: _Define the "extremal set" \(\mathcal{E}:=\{(\pm 1,0),(0,\pm 1),(\pm\frac{1}{\sqrt{2}},\pm\frac{1}{\sqrt{2}})\},\) consisting of eight points on the unit circle. If \((\alpha,\beta)\in\mathcal{E},\) there exists \(C>0\) so that for all \(\epsilon\) sufficiently small, we have that_
\[d(\omega_{0},\mathcal{S}^{\alpha,\beta}_{1})<\epsilon\implies d(\omega(t), \mathcal{S}^{\alpha,\beta}_{1})<C\sqrt{\epsilon},\]
_for all \(t\in\mathbb{R}.\) In contrast, for each \((\alpha,\beta)\in\{\alpha^{2}+\beta^{2}=1\}\setminus\mathcal{E},\) there exists a constant \(C>0\) so that for all \(\epsilon\) sufficiently small, we have that_
\[d(\omega_{0},\mathcal{S}^{\alpha,\beta}_{1})<\epsilon\implies d(\omega(t), \mathcal{S}^{\alpha,\beta}_{1})<C\epsilon,\]
_for all \(t\in\mathbb{R}.\)_
**Remark 1.4**.: _As examples, the steady states \(\cos(y)\) and \(\frac{1}{\sqrt{2}}(\cos(x)+\cos(y))\) are extremal while the steady state \(\frac{1}{2}\cos(x)+\frac{\sqrt{3}}{2}\cos(y)\) is not extremal (thus possibly more stable)._
### Remarks on the Main Theorem
The idea of the proof originates in the paper of Wirosoetisno and Shepherd [15], which is to combine the original argument of Arnold with the conservation of higher norms of vorticity (to distinguish between different points on the first shell). The argument of [15] does not imply \(L^{2}\) stability because of its use of higher order norms. The stability argument here is a bit different and we also make use of a different conserved quantity that is well-defined for \(L^{2}\) solutions. We should remark that, very recently, the authors of [14] established a _non-quantitative_ stability result based again on the idea of [15]. The argument of [14] is by contradiction and uses the whole transport of vorticity rather than just a single Casimir (in the spirit of Burton's work [3]). One advantage of the argument given here is that we give an elementary proof along with quantitative bounds and also provide evidence that different elements of the first energy shell may be more stable than others (i.e. extremal vs. non-extremal values of \((\alpha,\beta)\) in Theorem 1.3). In particular, exact shear flows and exact cellular flows on the first shell appear to be less stable than those flows with regions of both shearing as well as so-called cat's-eye structures.
It would be very interesting to determine whether the first stability bound in Theorem 1.3 is actually sharp. One potential explanation for the weaker stability of extremal points, which was offered by T. Drivas, is that if one varies \((\alpha,\beta)\) continuously, the extremal points are precisely the points at which there is a break in the topology of the streamlines. This implies that the foliation of the space of vorticities by the isovortical leaves may be degenerate precisely at the extremal points. To further investigate whether extremal points truly enjoy different stability properties, it would be interesting to study the structure of the set of steady states close to extremal and non-extremal points. A strong degeneracy was shown to be present near the shear flows on the first shell (which are extremal) in [7]. For comparison, it would be good to study the non-extremal case; perhaps there one can establish a result in the spirit of [4]. It would also be very interesting to know whether _individual_ elements of the first shell are stable (i.e. taking the velocity to have zero-mean and removing the translation); perhaps the ideas of [9] could be applied to that problem. Let us remark finally that the stability of the first Fourier shell is relevant also in the analysis of the long-time behavior of solutions to the Navier-Stokes equations [6]. In fact, there is a recent interesting numerical computation, done by T. Drivas, of the stochastically forced Navier-Stokes system that shows the solution travelling on the first shell [https://www.youtube.com/watch?v=8H4Xee6-_7g](https://www.youtube.com/watch?v=8H4Xee6-_7g). The computation qualitatively shows that the exact shear states do not appear to be as stable as states with cat's eyes.
## 2 Proof of the Main Theorem
The basic idea of the proof is to first recall that the whole of the first shell, \(\mathcal{S}_{1}^{1}\) is \(L^{2}\) stable. This follows from the conservation of energy and enstrophy. This means that in order to establish the stability of \(S_{1}^{\alpha,\beta},\) we need only show that there exists an invariant of the Euler equation, \(I:L^{2}\to\mathbb{R},\) with the property that \(I(\mathcal{S}^{\alpha,\beta})\) (which is in fact only a single number that depends on \((\alpha,\beta)\)) varies as we vary \((\alpha,\beta).\) Conservation of \(I\) will then imply the all-time stability of \(\mathcal{S}_{1}^{\alpha,\beta}\). This can be made quantitative by showing that the derivative of the value of \(I(\mathcal{S}^{\alpha,\beta})\) along the circle \(\alpha^{2}+\beta^{2}=1\) either doesn't vanish (which is the case at the non-extremal points) or vanishes but only to first order (which is the case at extremal points).
### \(L^{2}\) Stability of the First Shell
We start by showing the stability of \(\mathcal{S}_{1}^{1}.\) Let us begin with some notation. We denote by
\[\mathbb{Z}_{0}^{2}:=\mathbb{Z}^{2}\setminus\{(0,0)\}.\]
We define \(\mathbb{P}\) to be the orthogonal \(L^{2}\) projector onto \(\mathcal{S}_{1}\) and denote by
\[\mathbb{P}^{\perp}=\mathrm{Id}-\mathbb{P}.\]
**Proposition 2.1**.: _Let \(\omega\) be a smooth solution to the 2d Euler equation. Then,_
\[|\mathbb{P}^{\perp}\omega|_{L^{2}}\leq\sqrt{2}|\mathbb{P}^{\perp}\omega_{0}|_{ L^{2}}.\]
Proof.: We have that
\[\frac{1}{2}|\mathbb{P}^{\perp}\omega|_{L^{2}}^{2}\leq\sum_{k\in\mathbb{Z}_{0}^ {2}}(1-\frac{1}{|k|^{2}})|\hat{\omega}(k)|^{2}=|\omega|_{L^{2}}^{2}-|u|_{L^{2} }^{2}=|\omega_{0}|_{L^{2}}^{2}-|u_{0}|_{L^{2}}^{2}=\sum_{k\in\mathbb{Z}_{0}^{2 }}(1-\frac{1}{|k|^{2}})|\hat{\omega}_{0}(k)|^{2}\leq|\mathbb{P}^{\perp}\omega_ {0}|_{L^{2}}^{2},\]
where we used on both sides that the multiplier is zero when \(|k|=1.\)
### Two calculus lemmas
We will need two simple calculus lemmas. First, let us state the one we will apply at non-extremal points:
**Lemma 2.2**.: _Assume \(f:\mathbb{R}\to\mathbb{S}^{1}\) is continuous. Assume that \(F:\mathbb{S}^{1}\to\mathbb{R}\) is smooth and that \(F^{\prime}(\theta_{0})\neq 0.\) Then, there exists a fixed \(C:=C(F,\theta_{0})>0\) so that if \(\epsilon>0\) is sufficiently small and_
\[|F(f(t))-F(\theta_{0})|+|f(0)-\theta_{0}|<\epsilon,\]
_then we have that_
\[|f(t)-\theta_{0}|<C\epsilon,\]
_for all \(t\in\mathbb{R}.\)_
The second lemma will be applied at extremal points:
**Lemma 2.3**.: _Under the same assumptions as Lemma 2.2, except that \(F^{\prime}(\theta_{0})=0,\)\(F^{\prime\prime}(\theta_{0})\neq 0,\) we conclude that_
\[|f(t)-\theta_{0}|<C\sqrt{\epsilon},\]
_for all \(t\in\mathbb{R}.\)_
Both lemmas are elementary and follow from Taylor expanding \(F\) around \(\theta_{0}\) and using the continuity of \(f.\)
### A conserved quantity that distinguishes values of \((\alpha,\beta)\)
The reason that the standard Arnold method, which here would consist of maximizing the energy for fixed enstrophy, only gives stability of \(\mathcal{S}_{1}^{1}\) and not \(\mathcal{S}_{1}^{\alpha,\beta}\) is that both the energy and the enstrophy are constant on \(\mathcal{S}_{1}^{\alpha,\beta}\). In particular, one cannot distinguish the sets \(\mathcal{S}_{1}^{\alpha,\beta}\) using only the knowledge that the enstrophy and energy are conserved. As brilliantly observed by Wirosoetisno and Shepherd in [15], we can distinguish the various values of \((\alpha,\beta)\) using a higher order Casimir, like the \(L^{4}\) norm of vorticity. Unfortunately, using the \(L^{4}\) norm will not allow us to deduce stability in \(L^{2}\). We use a small modification of the \(L^{4}\) norm with a conserved quantity that is not technically a Casimir. Indeed, for any measurable function \(\omega\in\dot{H}^{-1},\) we may define the following quantity:
\[I(\omega)=\int|\omega|^{4}\chi\Big{(}\frac{\omega}{10|u|_{L^{2}}}\Big{)},\]
where \(\chi\in C^{\infty}(\mathbb{R})\) is even and equal to \(1\) on \([0,1]\) and equal to \(0\) on \([2,\infty).\) When \(\omega\) solves the 2d Euler equation, we of course have that
\[I(\omega)=I(\omega_{0}).\]
Let us now define:
\[\mathcal{S}_{*}^{1}=\{\alpha\cos(x)+\beta\cos(y):\alpha^{2}+\beta^{2}=1\}.\]
Now writing \((\alpha,\beta)=(\cos(\theta),\sin(\theta)),\) we may define
\[F(\theta)=I(\cos(\theta)\cos(x)+\sin(\theta)\cos(y)),\]
for \(\theta\in\mathbb{S}^{1}.\) The key observation is the following.
**Lemma 2.4**.: \(F^{\prime}(\theta)=0\) _if and only if \(\theta=\frac{\pi}{4}k,\)\(k\in\mathbb{Z}.\) Moreover, \(F^{\prime\prime}(\frac{\pi}{4}k)\neq 0,\) for \(k\in\mathbb{Z}.\)_
Proof.: By definition of \(I\), we have that
\[F(\theta) =\int_{\mathbb{T}^{2}}(\cos(\theta)\cos(x)+\sin(\theta)\cos(y))^{ 4}dxdy\] \[=2\pi\cos^{4}(\theta)\int_{0}^{2\pi}\cos^{4}(x)dx+6\pi^{2}\cos^{2 }(\theta)\sin^{2}(\theta)+2\pi\sin^{4}(\theta)\int_{0}^{2\pi}\cos^{4}(y)dy\] \[=\frac{3\pi^{2}}{2}(\cos^{4}(\theta)+4\cos^{2}(\theta)\sin^{2}( \theta)+\sin^{4}(\theta))=\frac{3\pi^{2}}{2}(1+2\cos^{2}(\theta)-2\cos^{4}( \theta))\] \[=\frac{3\pi^{2}}{2}(\frac{5}{4}-\frac{1}{4}\cos(4\theta)).\]
We thus see that \(F^{\prime}(\theta)=\frac{3\pi^{2}}{2}\sin(4\theta),\) from which the result follows.
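The closed form above can also be checked numerically; the short script below (Python/NumPy, arbitrary grid resolution) compares a quadrature of the integral with \(\frac{3\pi^{2}}{2}(\frac{5}{4}-\frac{1}{4}\cos(4\theta))\), using that the cutoff \(\chi\) equals \(1\) on the first shell.

```python
import numpy as np

n = 400
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
dA = (2.0 * np.pi / n) ** 2

for theta in np.linspace(0.0, np.pi / 2.0, 7):
    integrand = (np.cos(theta) * np.cos(X) + np.sin(theta) * np.cos(Y)) ** 4
    numeric = integrand.sum() * dA
    closed = 1.5 * np.pi ** 2 * (1.25 - 0.25 * np.cos(4.0 * theta))
    print(f"theta={theta:.3f}  quadrature={numeric:.6f}  closed form={closed:.6f}")
```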
Next, we define \(\mathbb{P}_{*}:\mathcal{S}_{1}\rightarrow\mathcal{S}_{*}\) by
\[\mathbb{P}_{*}(\alpha\cos(x+\mu)+\beta\cos(y+\lambda))=\alpha\cos(x)+\beta\cos (y).\]
Now we see that Lemma 2.4 combined with the calculus Lemmas 2.2-2.3 below easily imply the Main Theorem 1.3. Indeed, fix \((\alpha,\beta)\) and suppose that \(\omega_{0}\) is smooth and that \(dist(\omega_{0},\mathcal{S}_{1}^{\alpha,\beta})<\epsilon.\) It follows that
\[|\mathbb{P}^{\perp}\omega|_{L^{2}}\leq\sqrt{2}|\mathbb{P}^{\perp}\omega_{0}|_{L^{2}}<\sqrt{2}\epsilon.\]
It follows that
\[|I(\omega)-I(\mathbb{P}\omega)|\leq C|\mathbb{P}^{\perp}\omega|_{L^{2}}\leq C\epsilon,\]
since \(I\) is clearly Lipschitz continuous on \(L^{2}.\) Now, by translation invariance of \(I,\) it follows that
\[|I(\omega)-I(\mathbb{P}_{*}(\omega))|\leq C\epsilon.\]
Since \(I(\omega)=I(\omega_{0}),\) it follows (from two applications of the inequality) that
\[|I(\mathbb{P}_{*}\omega_{0})-I(\mathbb{P}_{*}\omega)|\leq C\epsilon.\]
Conservation of the \(L^{2}\) norm and Lemmas 2.4,2.2, and 2.3 now give the result:
\[|\mathbb{P}_{*}\omega-\mathbb{P}_{*}\omega_{0}|_{L^{2}}\leq C\epsilon,\]
when \((\alpha,\beta)\) is not extremal, while
\[|\mathbb{P}_{*}\omega-\mathbb{P}_{*}\omega_{0}|_{L^{2}}\leq C\sqrt{\epsilon},\]
when \((\alpha,\beta)\) is extremal, so long as \(\epsilon\) is sufficiently small. This concludes the proof of the Main Theorem 1.3.
**Remark 2.5**.: _Let us close by noting that the reason that the extremal points \(\mathcal{E}\) enjoy weaker stability estimates is that they are critical points of the Casimir we have chosen when restricted to \(\mathcal{S}_{*}\). It can be checked directly that the criticality of the extremal points is independent of the choice of the Casimir (as long as it is smooth). This indicates that the extremal points may really be special, though we have no proof that the first estimate in Theorem 1.3 is actually sharp._
## 3 Acknowledgements
The author thanks V. Sverak for his guidance and many helpful discussions over the years that helped shape the author's view of fluid mechanics and PDE. The author also thanks T. Drivas for multiple comments that improved this paper. He finally acknowledges funding from the NSF DMS-2043024 and the Alfred P. Sloan foundation.
|
2303.16501 | AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot
AV-ASR | Audiovisual automatic speech recognition (AV-ASR) aims to improve the
robustness of a speech recognition system by incorporating visual information.
Training fully supervised multimodal models for this task from scratch, however
is limited by the need for large labelled audiovisual datasets (in each
downstream domain of interest). We present AVFormer, a simple method for
augmenting audio-only models with visual information, at the same time
performing lightweight domain adaptation. We do this by (i) injecting visual
embeddings into a frozen ASR model using lightweight trainable adaptors. We
show that these can be trained on a small amount of weakly labelled video data
with minimum additional training time and parameters. (ii) We also introduce a
simple curriculum scheme during training which we show is crucial to enable the
model to jointly process audio and visual information effectively; and finally
(iii) we show that our model achieves state of the art zero-shot results on
three different AV-ASR benchmarks (How2, VisSpeech and Ego4D), while also
crucially preserving decent performance on traditional audio-only speech
recognition benchmarks (LibriSpeech). Qualitative results show that our model
effectively leverages visual information for robust speech recognition. | Paul Hongsuck Seo, Arsha Nagrani, Cordelia Schmid | 2023-03-29T07:24:28Z | http://arxiv.org/abs/2303.16501v1 | # AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot AV-ASR
###### Abstract
Audiovisual automatic speech recognition (AV-ASR) aims to improve the robustness of a speech recognition system by incorporating visual information. Training fully supervised multimodal models for this task from scratch, however is limited by the need for large labelled audiovisual datasets (in each downstream domain of interest). We present AVFormer, a simple method for augmenting audio-only models with visual information, at the same time performing lightweight domain adaptation. We do this by (i) injecting visual embeddings into a frozen ASR model using lightweight trainable adaptors. We show that these can be trained on a small amount of weakly labelled video data with minimum additional training time and parameters. (ii) We also introduce a simple curriculum scheme during training which we show is crucial to enable the model to jointly process audio and visual information effectively; and finally (iii) we show that our model achieves state of the art zero-shot results on three different AV-ASR benchmarks (How2, VisSpeech and Ego4D), while also crucially preserving decent performance on traditional audio-only speech recognition benchmarks (LibriSpeech). Qualitative results show that our model effectively leverages visual information for robust speech recognition.
## 1 Introduction
Robustness or adaptation to new, unconstrained domains is a key challenge for automatic speech recognition (ASR) systems. In multimodal video (e.g., TV, online edited videos), the visual stream can provide strong cues for improving the robustness of ASR systems, particularly in cases where the audio is noisy - this is called audiovisual ASR (AV-ASR). Unlike works that simply focus on lip motion [1, 7, 23, 24, 29, 33, 37, 41], we investigate the contribution of entire visual frames. This is particularly useful for videos 'in the wild', where the mouth is not necessarily visible (e.g., egocentric viewpoints, face coverings, and low resolution) [11]. The task is illustrated in Figure 1.
Building audiovisual datasets for training AV-ASR models, however, is challenging. Datasets such as How2 [36] and VisSpeech [11] have been created from instructional videos online, but they are small in size. Not only are datasets for this task small, but models are typically large and consist of both visual and audio encoders. For example the latest AV-ASR model AVATAR [11] shows impressive performance on both datasets, but requires the end-to-end training of visual and audio components in tandem, and consequently a large amount of compute. Like other AV-ASR works [4, 17, 25, 30, 38], it is also only trained and tested on instructional videos, and as we show in the experiments, generalizes poorly to new domains in the zero-shot setting.
On the other hand, there have been a number of recently released large-scale audio-only models [6, 8, 19] that are heavily optimised via self-supervised pretraining and large-scale supervised training on _audio-only_ data obtained from audio books such as LibriLight [20] and LibriSpeech [31]. These models contain billions of parameters, are readily available, and show strong _generalization across domains_.
Our goal is to reuse the extensive expertise and training time that has been invested in such models, by using their weights. We are inspired by recent works adapting _frozen_ foundation models for multi-modal tasks. A popular example is [2] that injects visual information into large language models (LLMs) for vision-text tasks. The benefit of
Figure 1: **Unconstrained audiovisual speech recognition.** We inject vision into a frozen speech model (BEST-RQ, in grey) for zero-shot audiovisual ASR via lightweight modules to create a parameter and data efficient model called AVFormer (blue). The visual context can provide helpful clues for robust speech recognition especially when the audio signal is noisy (the visual of a loaf of bread helps correct the audio-only mistake clove to loaf in the generated transcript).
building on strong frozen LLMs for these tasks is the hope that this will enable the visual-text model to retain powerful _language-only_ abilities such as few-shot language adaptation or external knowledge retrieval. Our goal is simple - we wish to do the same for AV-ASR, using strong audio-only ASR models. We add visual inputs to these models in a lightweight manner to enable AV-ASR, but still maintain the benefits of audio-only pretraining for zero-shot generalization.
Our framework is called AVFormer, and injects visual information into a frozen ASR model using lightweight projection layers and trainable adaptors. We show that these can be trained on a small amount of weakly labelled video data (only 5% of the data used by existing state of the art methods [11]) with minimum additional training time and parameters, minimizing the domain shift and catastrophic forgetting that can accompany end-to-end finetuning. In order to further ensure stability during the finetuning of these adapters, we also introduce a simple curriculum scheme during training which we show is crucial to enable the model to jointly process audio and visual information effectively. Finally, we show that our model outperforms existing state of the art zero-shot methods on three different AV-ASR benchmarks (How2, VisSpeech and Ego4D) across different domains, while also crucially preserving decent performance on traditional audio-only speech recognition benchmarks (LibriSpeech).
## 2 Related Works
**State-of-the-Art Speech Recognition** Recent state-of-the-art ASR models [45, 46, 6, 8, 19] almost all adopt transformer based audio encoders [16, 19, 40] embedding input audio signals into a set of token features thereby extracting local information within a temporal window. Encoders are trained end-to-end using losses such as CTC [15], RNN-T [14] and LAS [5]. In many cases, these encoders are pre-trained [45, 46, 8, 19, 6] on large-scale unannotated datasets such as LibriLight [20], and then finetuned for downstream ASR. Consequently, such models incorporate a number of highly-engineered training tricks and techniques suitable for ASR, which we want to reuse for multimodal inference. Rebuilding a multimodal model from scratch incorporating these learnings is expensive and must be redone for each new model. As models get larger and larger [46, 8, 19], this requires a prohibitive amount of compute. Our goal is to reuse this knowledge in a lightweight manner by injecting visual understanding capability into a readily available state-of-the-art ASR model.
**Audiovisual Speech Recognition** Most AV-ASR works are focused on lip motion, right from early works that use pre-extracted features [41, 29] to more recent end-to-end approaches that work on pixels directly [37, 1, 23, 24, 33, 7]. In contrast, the setting explored in this work is full frame AV-ASR beyond the speaker's mouth movements (also known as 'context-aware' speech recognition). Here the de facto strategy is to use pre-extracted visual context features (due to the high dimensionality of full frame video) - either action features [36, 32, 4, 12], or place and object features [38, 30, 25, 17, 4]. Unlike these works which all use visual features from classification models trained on a closed set of pre-defined objects, places or actions, we use features from CLIP [35], which is trained on image and text paired data, and known to have strong generalization and zero-shot capabilities. This makes our features more suited to unconstrained videos 'in the wild'. An outlier is the recently proposed AVATAR [11], which uses full frame pixels and trains end-to-end on HowTo100M. It is the state of the art for this task, achieving good performance on How2 and introducing a new dataset called VisSpeech. Unlike AVATAR, our method reuses strong frozen pretrained models, thereby requiring only 5% of the audiovisual data used in AVATAR, and generalises much better across different domains in the zero-shot setting.
**Adapting Large Frozen Pretrained Models** There has been a recent flurry of works that adapt frozen foundation models for multi-modal tasks, most notably for injecting visual information to large language models (LLMs) [2]. Architectural details vary: for example MAGMA [10] and Frozen-BiLM [42] add bottleneck adapters [39, 18] to the frozen LLM injecting some visual information; Clip-Cap [28] learns a vision-to-prefix bridging transformer to map vision features into a prefix for GPT-2, while VC-GPT [22] adds new learnt layers to the frozen LLM. In the AV-ASR domain specifically, multiple works use pre-extracted visual features to improve audio-only ASR [25]. Early work [25] leverages objects and places features from visual classifiers by projecting them to the same space as the audio features in a process known as Visual Adaptive Training (VAT). [17] also uses similar features, but adopts them as the beginning token of each sentence in a language modelling framework. [4] also uses VAT, but for a sequence to sequence model. Unlike these works which use a single visual feature, we show that having multiple visual features improves performance. The closest to our work is LLD [12], which also uses a stream of visual features extracted from the MIL-NCE model [26]. Their fusion method, however, consists of a complicated deliberation decoder, and while they initialize their model with audio-only pretraining, they then finetune the entire audiovisual model end-to-end. In contrast, most of our model remains frozen, and only lightweight adapters are tuned on a small amount of audio-visual data. All previous works are also only focused on the instructional video domain, reporting results either on internally collected datasets or the publicly released How2 [36]. Our focus instead is on zero-shot generalisation across multiple domains, including audio-only
Librispeech [31] (from audiobooks) and Ego4D [13] (egocentric video). We believe this is a more useful setting for actual deployment of such models.
## 3 Method
Unlike previous AV-ASR works which test only on instructional videos [4, 17, 25, 30, 38], our goal is _zero-shot_ generalization across multiple AV domains, while still maintaining good performance on traditional audio-only benchmarks. To do this, we start with an _existing_ state-of-the-art ASR model, and adapt it for unconstrained AV-ASR. Visual features are obtained from a strong pretrained visual model, and added to the model via the following two components - (i) we linearly project visual features into the audio token embedding space, and (ii) we inject lightweight adapters into the encoder of the frozen ASR model to allow domain adaptation. During training, we only tune these two sets of additional parameters, while both the ASR model and the visual feature extractor are _frozen_ (see Figure 2).
We do this because there are two forms of adaption that are required here - (i) adapting to new video domains and (ii) adapting to multimodal input, both of which we would like to do _without_ catastrophic forgetting. Because of the challenges with this setup, we also introduce a curriculum learning strategy to stabilize the learning process, without which the model fails to utilize the visual features effectively. In this section, we first describe the main components of our network architecture (Sec. 3.1) and then introduce our zero-shot curriculum learning strategy and training loss functions (Sec. 3.2).
### Model Architecture
In this section we describe the key components of our architecture - (i) the frozen conformer encoder and decoder, (ii) the visual encoder and projection layers for visual feature extraction and projection, and (iii) additional adaptation layers in the backbone for audio-only domain adaptation. A diagram is show in Figure 2.
#### 3.1.1 Frozen Conformer ASR Model
We start with a frozen ASR model that achieves state-of-the-art performance on traditional ASR benchmarks [31]. Specifically, we use BEST-RQ [6] that adopts a Conformer [16] model with an RNN-Transducer (RNN-T) [14]. The model is pretrained on LibriLight [20] in a self-supervised manner using a random projection quantization technique, after which it is then finetuned for ASR on LibriSpeech [31] using supervised training. The conformer consists of convolution-augmented transformer blocks (conformer blocks), which operate on audio token features that are extracted from a spectrogram via a stack of convolution and linear layers [16]. BEST-RQ uses ConformerXL as a backbone, which has 0.6B parameters [46] - note that training such a large model end-to-end is extremely compute heavy - and requires a large pretraining dataset (made possible by self-supervised learning on LibriLight). This self-supervised training also enables the model to generalize well across numerous domains. After pretraining, an RNN-T decoder is added to Conformer to generate text output for ASR with 1,024 WordPiece tokens [44]. The RNN-T decoder generates a sequence of tokens consisting of grapheme tokens or a special output token, which represents moving to the next input token (See Figure 2, right for a diagram of the decoder).
Formally speaking, given the log-mel spectrogram \(\mathbf{X}\in\mathbb{R}^{\hat{N}\times S}\) with \(S\) mel spectrogram bins in a length of \(\hat{N}\) converted from the input audio waveform, the tokenizer outputs a set of audio tokens \(\{\mathbf{t}_{i}\}_{1}^{N}=h_{\mathrm{tok}}(\mathbf{X})\) where \(D\) is the token embedding dimensionality and \(N=\hat{N}/4\). The encoder then contextualizes the audio tokens through a series of conformer blocks, each of which is a stack of feed-forward, multi-head self-attention, convolution layers followed by another feed-forward layer. The output of each layer is added with a residual connection. This process produces \(N\) contextualized tokens \(\hat{\mathbf{t}}_{i}\in\mathbb{R}^{D}\), _i.e._, \(\{\hat{\mathbf{t}}_{i}\}_{1}^{N}=h_{\mathrm{enc}}(\{\mathbf{t}_{i}\}_{1}^{N})\). The decoder finally generates the transcripts by predicting a sequence of \(K\) graphemes from the contextualized audio tokens. Given a token \(\hat{\mathbf{t}}_{i}\) and previously generated grapheme \(w_{j-1}\), the decoder generates the next grapheme \(w_{j}=h_{\mathrm{dec}}(\hat{\mathbf{t}}_{i},w_{j-1})\) where \(w_{j}\in\mathcal{V}\cup\{\epsilon\}\) with the vocabulary of the predefined graphemes \(\mathcal{V}\) and a special blank token \(\epsilon\) that represents moving to the next token \(\hat{\mathbf{t}}_{i+1}\) in the generation process. The decoder \(h_{\mathrm{dec}}\) is implemented as a two layer LSTM module with a grapheme classification head. Note that at a single audio token index \(i\), multiple graphemes can be emitted (vertical arrows) until an \(\epsilon\) is emitted (horizontal arrows) as depicted in Figure 2.
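For intuition, the greedy RNN-T decoding loop described above can be sketched as follows (illustrative Python; decoder_step stands in for the two-layer LSTM plus grapheme classification head, and the per-token emission cap is an added safeguard, not part of the original model).

```python
def rnnt_greedy_decode(encoder_tokens, decoder_step, blank="<eps>", max_symbols=10):
    """Greedy RNN-T decoding: at each contextualized audio token t_i, emit
    graphemes (vertical moves) until the blank symbol moves decoding on to
    t_{i+1} (horizontal move)."""
    transcript, prev = [], None
    for t_i in encoder_tokens:
        for _ in range(max_symbols):   # cap the number of emissions per audio token
            w = decoder_step(t_i, prev)
            if w == blank:             # horizontal move: advance to the next token
                break
            transcript.append(w)       # vertical move: emit a grapheme
            prev = w
    return transcript
```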
#### 3.1.2 Lightweight Adapters
In order to enable domain adaptation in the model, we interleave an adapter layer within each conformer block of the encoder. Note that the BEST-RQ model has strong generalization capability, which we want to maintain. Hence, we design our adapters to be lightweight, to prevent drastic domain shift and catastrophic forgetting. Given \(N\) audio tokens \(\mathbf{t}_{i}\) and \(M\) projected visual tokens \(\mathbf{t}_{j}^{v}\) (which will be described next) at a certain layer \(l\), we compute the adapted token features \(\tilde{\mathbf{t}}_{i}\) and \(\tilde{\mathbf{t}}_{j}^{v}\) using an adapter layer by \(\{\tilde{\mathbf{t}}_{i}\}\cup\{\tilde{\mathbf{t}}_{j}^{v}\}=\mathrm{adapt}(\{\mathbf{t}_{i}\}\cup\{\mathbf{t}_{j}^{v}\};\phi)\), where \(\mathrm{adapt}(\cdot)\) is an adapter layer parameterized by \(\phi\). We introduce and experiment with the following two types of lightweight adapters:
**Feed-forward Adapters (FF).** The simplest design is to independently project each token. To achieve this, we use a two-layered MLP with a residual connection as our adapter. To make the layer lightweight, we set the dimensionality of the hidden layer to \(B\), where \(B\ll D\). This allows the adapter to effectively act as a bottleneck and reduces the total number of additional parameters.
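To make the bottleneck structure concrete, the following is a minimal JAX sketch of such a feed-forward adapter; the parameter names, the ReLU nonlinearity, and the near-zero initialization are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Minimal sketch of a bottleneck feed-forward adapter: project D -> B -> D with a
# residual connection, so that B << D keeps the number of added parameters small.
import jax
import jax.numpy as jnp

def init_ff_adapter(key, d_model, d_bottleneck, scale=1e-3):
    k1, k2 = jax.random.split(key)
    return {
        "w_down": scale * jax.random.normal(k1, (d_model, d_bottleneck)),
        "b_down": jnp.zeros(d_bottleneck),
        "w_up": scale * jax.random.normal(k2, (d_bottleneck, d_model)),
        "b_up": jnp.zeros(d_model),
    }

def ff_adapter(params, tokens):
    """tokens: (num_tokens, d_model) -> adapted tokens of the same shape."""
    hidden = jax.nn.relu(tokens @ params["w_down"] + params["b_down"])
    return tokens + hidden @ params["w_up"] + params["b_up"]  # residual connection

# Example: 16 tokens of dimension 512 adapted through a 64-dimensional bottleneck.
params = init_ff_adapter(jax.random.PRNGKey(0), d_model=512, d_bottleneck=64)
adapted = ff_adapter(params, jnp.zeros((16, 512)))
```

The near-identity initialization keeps the adapted features close to the frozen backbone's features at the start of training, which is one common way to avoid disturbing the pretrained model early on.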
**Feed-forward Adapters with Self-Attention (FF+SA).** The feed-forward adapters described above operate independently for each token. We can perform an additional contextualization across the input tokens via a self-attention layer [43]. To reduce additional parameters, we apply the same bottleneck projection technique as before, where each input token is transformed into a \(B\)-dimensional query, key and value for attention, after which the attended feature is projected back into the \(D\)-dimensional feature space. For multi-head self-attention, each head instead projects features into \(B/H\)-dimensional spaces, where \(H\) is the number of heads. This module is used with a residual connection and the feed-forward module described above; the combination of these forms a transformer block with bottlenecks. While this (FF+SA) allows additional contextualization across tokens, it introduces four times more parameters than vanilla FF adapters.
#### 3.1.3 Visual Feature Extraction and Projection
Given a sequence of \(M\) video frames \(\mathbf{f}_{i}\), we extract a \(\hat{D}\) dimensional visual feature \(\mathbf{v}_{i}=g(\mathbf{f}_{i})\) per frame using a pretrained visual encoder \(g\). Specifically, we use the CLIP encoder [34] with ViT-L/14 [9] as our visual backbone, which is known to have strong zero-shot generalization capability [34]. Because the CLIP encoder is frozen, we add a linear layer2 to project the visual features into the audio token embedding space, _i.e_., \(\mathbf{t}_{i}^{v}=\mathrm{proj}(\mathbf{v}_{i};\theta)\) where \(\mathbf{t}_{i}^{v}\in\mathbb{R}^{D}\) and \(\theta\) is a set of the parameters in the projection layer. The projected visual tokens are fed to the Conformer encoder together with audio tokens \(\mathbf{t}_{i}\). Note that these visual projection layers are essentially performing a type of prompt tuning [28, 21] since the rest of the ASR model is frozen.
Footnote 2: We tested more complex MLP projectors and found that a single linear layer is sufficient for good performance as detailed in the appendix.
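As a rough illustration of this step, the sketch below maps precomputed per-frame visual features into the audio token space with a single linear layer and concatenates them with the audio tokens; the feature dimensions, parameter names, and the choice to prepend (rather than append) the visual tokens are illustrative assumptions only.

```python
# Sketch of the visual projection: per-frame features (M, D_hat) -> visual tokens (M, D),
# which are then concatenated with the N audio tokens before the Conformer encoder.
import jax
import jax.numpy as jnp

def init_projection(key, d_visual, d_model, scale=1e-3):
    return {"w": scale * jax.random.normal(key, (d_visual, d_model)),
            "b": jnp.zeros(d_model)}

def project_and_concat(params, visual_feats, audio_tokens):
    visual_tokens = visual_feats @ params["w"] + params["b"]
    return jnp.concatenate([visual_tokens, audio_tokens], axis=0)  # (M + N, D)

params = init_projection(jax.random.PRNGKey(0), d_visual=768, d_model=512)
tokens = project_and_concat(params, jnp.zeros((4, 768)), jnp.zeros((80, 512)))
```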
### Training Strategy
It is well known that AV-ASR is an audio-dominant task, which is why previous works are forced to devise training strategies that prevent the audio stream from dominating training [11]. We observe a similar phenomenon when jointly training both sets of additional parameters (adapters and visual projections): the visual information is not used (similar performance with and without it), and training is dominated by the model adapting only to the finetuning _audio_ domain. We hence introduce a curriculum training strategy. We first describe our finetuning data, then the loss function, and finally the curriculum in the next few paragraphs.
**Zero-shot Training with Web Videos.** Our extended model has two sets of new parameters \(\theta\) and \(\phi\) introduced for the visual projection layer and the adapters respectively. Since it is labor-intensive and costly to collect new training benchmarks for AV-ASR, we train these new parameters without manually labeled data. We use unlabeled web videos online along with the outputs of an ASR model as
Figure 2: **Overall architecture and training procedure for AVFormer.** Our architecture consists of a frozen Conformer encoder-decoder model [6], and a frozen CLIP [35] encoder (frozen layers shown in grey with a lock symbol), in conjunction with two lightweight trainable modules - (i) a visual projection layer (orange) and (ii) bottleneck adapters (blue) to enable multi-modal domain adaptation. We propose a two-phase curriculum learning strategy - the adapters (blue) are first trained without any visual tokens, after which the visual projection layer (orange) is tuned while all the other parts are kept frozen.
pseudo ground truth. Our goal is to aid the pretrained ASR model with visual understanding capability using only these automatically collected transcripts; the trained model is then tested in a zero-shot setting on manually annotated public AV-ASR benchmarks.
**Loss Function.** As the RNN-T decoder in the pretrained ASR model is kept frozen in AVFormer, we adopt the same loss function that is used for ASR pretraining. With an RNN-T decoder, the probability of a transcript \(W=\{w_{1},w_{2},\cdots,w_{K}\}\) is obtained by marginalizing the probabilities of all valid generation paths \(y\) (_e.g._, the path with bold arrows in Figure 2), _i.e._,
\[P(W|X)=\sum_{y\in\mathcal{Y}}\prod_{(i,j)\in y}P(w_{j}|\hat{\mathbf{t}}_{i},w_{ 0:j-1}) \tag{1}\]
where \(\mathcal{Y}\) is a set of all valid paths \(y\) (paths on the grid from \((0,0)\) to \((N+1,K)\) in Figure 2) which is a sequence of pairs of token and output grapheme indices \((i,j)\), and \(P(w_{j}|\hat{\mathbf{t}}_{i},w_{0:j-1})\) is estimated by our decoder \(h_{\text{dec}}(\hat{\mathbf{t}}_{i},w_{j-1})\). We train our model by minimizing the negative log-likelihood of the pseudo-GT transcripts \(\hat{W}\) of input videos:
\[\mathcal{L}(\theta,\phi)=-\sum_{i}\log P(\hat{W}_{i}|X_{i};\theta,\phi). \tag{2}\]
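For intuition, the following NumPy sketch evaluates the marginal in Equation (1) with the standard RNN-T forward algorithm over the token-grapheme grid, assuming the per-cell grapheme and blank log-probabilities have already been computed by the decoder; names and the exact grid convention (a final blank closing every path) follow the description above but are otherwise illustrative.

```python
# Forward algorithm for the RNN-T marginal of Eq. (1).
# log_emit[i, j]  = log P(w_{j+1} | t_i, w_{0:j})   (vertical move in the grid)
# log_blank[i, j] = log P(eps     | t_i, w_{0:j})   (horizontal move in the grid)
import numpy as np

def rnnt_log_likelihood(log_emit, log_blank):
    """log_emit: (N, K), log_blank: (N, K + 1); returns log P(W | X)."""
    N, K = log_emit.shape
    alpha = np.full((N, K + 1), -np.inf)
    alpha[0, 0] = 0.0
    for i in range(N):
        for j in range(K + 1):
            terms = []
            if i > 0:   # arrive by emitting a blank at token i-1
                terms.append(alpha[i - 1, j] + log_blank[i - 1, j])
            if j > 0:   # arrive by emitting grapheme w_j at token i
                terms.append(alpha[i, j - 1] + log_emit[i, j - 1])
            if terms:
                alpha[i, j] = np.logaddexp.reduce(terms)
    return alpha[N - 1, K] + log_blank[N - 1, K]  # final blank closes the path
```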
**Curriculum Learning for Visual Processing.** We discover empirically that with a naive single round of joint training, our model struggles to learn both the adapters and the visual projectors in one go (as shown in the experiments, the issue becomes more severe as more visual tokens are added). To mitigate this issue, we propose a two-phase curriculum learning strategy that decouples these two factors (domain adaptation and visual feature integration) and trains the network in a sequential manner. In the first phase, the adapter parameters \(\phi\) are optimized using \(\operatorname*{argmin}_{\phi}\mathcal{L}(\theta,\phi)\) as an objective. Note that in this phase, we do not feed visual tokens at all and thus \(\theta\) is an empty set. Once \(\phi\) is trained, we add the visual tokens and train the visual projection layers \(\theta\) using \(\operatorname*{argmin}_{\theta}\mathcal{L}(\theta,\phi)\). During this second phase of training, \(\phi\) is kept frozen.
The first stage focuses on audio domain adaptation. By the second phase, the adapters are completely frozen and the visual projector must simply learn to generate visual prompts that project the visual tokens into the audio space. In this way, our curriculum learning strategy allows the model to incorporate visual inputs as well as adapt to new audio domains in AV-ASR benchmarks. We apply each phase just once, as an iterative application of alternating phases leads to performance degradation. This is further discussed in the appendix.
**Content Word Masking.** We adopt the content word masking from [11] to encourage the models to further focus on visual understanding. We observe that the original zero-padded masking introduced in [11] causes instabilities and therefore we add Gaussian noise to the audio input corresponding to masked words, which stabilizes optimization.
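A minimal sketch of this variant is given below: instead of zero-padding the spectrogram frames of masked content words, Gaussian noise is added to them. The frame-level mask and the noise scale are assumed inputs; the exact masking granularity used in [11] may differ.

```python
# Replace zero-masking of content-word frames with additive Gaussian noise.
import numpy as np

def noise_mask(spectrogram, masked_frames, noise_std=1.0, seed=0):
    """spectrogram: (num_frames, num_mel_bins); masked_frames: boolean mask (num_frames,)."""
    rng = np.random.default_rng(seed)
    noisy = spectrogram.copy()
    noisy[masked_frames] += rng.normal(scale=noise_std, size=spectrogram.shape)[masked_frames]
    return noisy
```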
## 4 Experiments
### Experimental Settings
**Implementation Details.** As mentioned earlier, we use BEST-RQ [6] as the frozen ASR model. Since it has 24 conformer blocks, we add 24 adapters (one in each layer) in all experiments. When added, all adapters and visual projectors are randomly initialized. The decoder predicts WordPiece tokenized graphemes with a vocabulary size of 1,024. In the adapters, we apply layer norm [3] at every residual connection. For both phases of training, we use standard SGD with momentum with a moving average coefficient of 0.9 and a cosine learning rate schedule; the initial learning rate is set to 0.4. We train for 40K and 30K iterations in phase 1 and 2 respectively, with a batch size of 256 on 32 TPU v4 chips. We run 5 independent experiments and report the mean scores for ablation studies. When testing Audiovisual models on audio-only benchmarks, we feed dummy visual inputs (zero tensors).
**Metrics.** We use word error rate (WER) for all evaluation (lower is better). The alignment between predicted words and ground truth is computed using dynamic programming. The WER is then computed by the number of errors (deletions, substitutions and insertions) across the whole test set divided by the number of ground truth words.
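For completeness, a small self-contained sketch of this computation (word-level edit distance, aggregated over the whole test set) is given below.

```python
# WER = (deletions + substitutions + insertions) / number of reference words.
import numpy as np

def edit_distance(ref_words, hyp_words):
    R, H = len(ref_words), len(hyp_words)
    d = np.zeros((R + 1, H + 1), dtype=int)
    d[:, 0] = np.arange(R + 1)   # all deletions
    d[0, :] = np.arange(H + 1)   # all insertions
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = d[i - 1, j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[R, H]

def corpus_wer(references, hypotheses):
    errors = sum(edit_distance(r.split(), h.split()) for r, h in zip(references, hypotheses))
    return 100.0 * errors / sum(len(r.split()) for r in references)

print(corpus_wer(["the cat sat"], ["the cat sat down"]))  # 33.3 (one insertion)
```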
**Baselines.** We compare AVFormer to two strong recent baselines - (i) the state-of-the-art AV-ASR model AVATAR [11] and (ii) the state-of-the-art ASR (audio-only) model BEST-RQ [6]. We apply both models in the same settings as AVFormer for a fair comparison.
### Datasets
The additional parameters in our model are finetuned on the HowTo100M dataset, which contains instructional videos from YouTube. In order to assess generalization, we evaluate across different domains - LibriSpeech (audiobooks), How2 and VisSpeech (YouTube instructional videos), and Ego4D (egocentric video of daily-life activities). Note that VisSpeech consists of more unconstrained video (background noise, challenging accents etc.) than How2. More details for each dataset are provided below.
**LibriLight [20] and LibriSpeech [31].** LibriLight is an unlabelled speech dataset that is used to pretrain BEST-RQ. The model is then finetuned for ASR on LibriSpeech, which contains 960 hours of audio with manually annotated GT transcripts. For a fair comparison, we also use LibriSpeech for pretraining some of our baselines in the ablations.
**HowTo100M [27].** This dataset contains 1.2M instructional videos without manual annotations. ASR is used to obtain pseudo-GT transcripts for training our adapters and visual projector. We remove videos present in VisSpeech and How2 (described next).
**How2 [36].** We use the 300hr version of How2, which consists of instructional videos with automatically collected user-uploaded captions. The videos are segmented into short 5.8s clips with 20-word transcripts on average. We use the validation (2,022 clips) and test (2,305 clips) splits to evaluate our model in a zero-shot setting.
**VisSpeech [11].** VisSpeech is an AV-ASR test benchmark that consists of 503 video clips with manually annotated transcripts, which are sampled from HowTo100M. The dataset curation process focuses on samples where an audio-only ASR model fails and where strong visual correlations are observed.
**Ego4D [13].** Ego4D consists of egocentric video from 74 worldwide locations and 9 countries, with over 3,670 hours of daily-life activity video. We use the audiovisual diarization benchmark in the Ego4D challenge3. It consists of 585 five-minute-long egocentric video clips split into train (397 clips), validation (51 clips) and test (137 clips) sets. We report zero-shot results on the validation set as the test annotations are not released. We evaluate transcripts on segmented clips based on GT boundaries.
Footnote 3: [https://ego4d-data.org/docs/challenge/](https://ego4d-data.org/docs/challenge/)
### Results
In this section, we show ablations of the various design choices in our model (adapter architecture and bottleneck dimension), and then discuss the impact of curriculum learning and the benefit of adding visual tokens (including the impact of the number of visual tokens). We then show an ablation discussing the impact of adding both adapters and visual tokens, and the impact of finetuning dataset size. Finally, we show zero-shot performance of our model compared to state of the art baselines. Note that all ablations and results are provided on all \(3\) downstream datasets in a zero-shot setting - How2, VisSpeech and Ego4D.
**Adapter Architecture and Bottleneck Dimensionality.** Figure 3 compares results with feed-forward adapters (FF) only vs adapters with both feed-forward and self-attention (FF+SA). We also vary the bottleneck dimension from 32 to 256. We observe that on How2 (Figure 3(a)) and VisSpeech (Figure 3(b)), both adapter types perform similarly although FF+SA uses significantly more parameters than FF (Figure 3(d)), indicating that a simple projection is enough for strong adaptation. On Ego4D (Figure 3(c)), simple FF outperforms FF+SA by a large margin, potentially because of the larger domain gap (from edited instructional videos online to egocentric daily-activity videos). The greater number of parameters in FF+SA may result in a larger shift to the instructional video domain and away from Ego4D.
Figure 4: **Effects of curriculum learning and the number of visual tokens \(M\) on performance.** Red and blue lines are for audiovisual models and are shown on 3 datasets in the zero-shot setting (lower WER% is better). Using the curriculum helps on all 3 datasets (for How2 (a) and Ego4D (c) it is crucial for outperforming audio-only performance). Performance improves up until 4 visual tokens, at which point it saturates. Best viewed in color.
Figure 3: **Effects of different architectures (feed-forward (FF) vs feed-forward + self-attention (FF+SA)) and the bottleneck dimensionality \(B\) of adaptor layers on performance.** Results are for audiovisual models trained with our curriculum learning, and are shown on 3 datasets in the zero-shot setting (lower WER% is better). We show that a bottleneck dimension of 64 with FF layers achieves the best or almost the best performance (a,b,c) with the least number of additional parameters (d). Best viewed in color.
Figure 3 also shows the effect of different bottleneck dimensions. In general, the WER decreases as \(B\) grows from \(32\) to \(64\) and then saturates at \(B=64\) across all datasets with FF, while introducing only a few additional parameters (0.6% of the number of parameters in BEST-RQ). Hence, in the rest of the experiments, we adopt FF adapters with \(B=64\).
**Curriculum Learning and Visual Tokens.** We show the results of AVFormer with and without the proposed two-stage curriculum in Figure 4, and also compare to an audio-only baseline which had only FF adapters with \(B=64\) and no visual information. Without curriculum learning, our AV-ASR model is worse than the audio-only baseline across all datasets, with the gap increasing as more visual tokens are added. In contrast, when the proposed two-phase curriculum is applied, our AV-ASR model performs significantly better than the baseline audio-only model. We also test our model with different number of visual input tokens (where one token corresponds to one frame). More visual tokens improves the model up until \(M=4\) with up to 7.0% relative improvement, after which performance begins to degrade. Hence we set \(M=4\) in all experiments.
**Complementary Gain of Additional Components.** Table 1 shows the effect of our additional lightweight components (projection layer for visual tokens and adapter layers) for zero-shot AV-ASR. The first row is simply the vanilla baseline (frozen BEST-RQ). We observe that adding projected visual tokens and adapters brings individual gains over the baseline (the former adding visual information and the latter aiding with audio-domain adaptation), and when
\begin{table}
\begin{tabular}{c c|c c c}
\hline \hline
**VT** & **Adapters** & **How2** & **VisSpeech** & **Ego4D** \\
\hline
 & & 21.90 & 31.61 & 77.98 \\
✓ & & 19.74 \(\pm\) 0.04 & 31.13 \(\pm\) 0.06 & 76.50 \(\pm\) 0.11 \\
 & ✓ & 14.66 \(\pm\) 0.03 & 17.18 \(\pm\) 0.15 & 65.45 \(\pm\) 0.14 \\
✓ & ✓ & 13.63 \(\pm\) 0.10 & 16.39 \(\pm\) 0.11 & 64.63 \(\pm\) 0.79 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: **Effect of visual tokens (VT) and adapter layers.** Results on 3 datasets are obtained in the zero-shot setting (lower WER% is better). The first row corresponds to the vanilla pretrained BEST-RQ. Visual projector is added only when feeding VT. The gains from both VT and adapters are complementary.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Training-set size** & **How2** & **VisSpeech** & **Ego4D** \\ \hline
5\% & 13.69 \(\pm\) 0.17 & 16.60 \(\pm\) 0.17 & 64.75 \(\pm\) 1.05 \\
100\% & 13.63 \(\pm\) 0.10 & 16.39 \(\pm\) 0.11 & 64.63 \(\pm\) 0.79 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Effect of training dataset size.** Results are for audio-visual models trained with our curriculum learning, and are shown on 3 datasets in the zero-shot setting (lower WER% is better). Only 5% of HowTo100M is required.
Figure 5: **Qualitative Results on How2 (top), VisSpeech (middle) and Ego4D (bottom).** We show the ground truth (GT), and predictions from the audio only BEST-RQ model (B-RQ) and our audiovisual AVFormer (Ours) in the zero-shot setting. For each clip we show a single visual frame. Note how the visual context helps with visual objects (tuxedos, veil, scooter, bowl, cake, carrot _etc_), as well as actions (exhale, drive over) and works well even in the ego-centric domain (learns driving from input of road in row 3, column 3). Errors in the predicted words compared to the GT are highlighted in red. Faces are blurred for privacy.
combined with our curriculum learning, the two are complementary, achieving the lowest WER.
**Training Dataset Size.** Given that our additional components are so lightweight, we test whether adaptation can be done with a small amount of weakly labelled data. The results in Table 2 show that only 5% of the HowTo100M training data performs on par with the full dataset - the pretrained knowledge in BEST-RQ and CLIP gives the model considerable data efficiency. Ablation results with more data fractions are provided in the appendix.
**Comparisons to Zero-shot Baselines on AV-ASR.** We compare our model to baselines in Table 3 for zero-shot performance on all 3 AV-ASR benchmarks.4 AVFormer outperforms AVATAR and BEST-RQ on all of them, even when both are fully finetuned on LibriSpeech and then on 100% of HowTo100M (3rd and 5th row). Note that for BEST-RQ, this involves finetuning 0.6B params. Our model, in contrast, only finetunes 4M params on 5% of HowTo100M.
Footnote 4: Note that the original AVATAR and BEST-RQ papers do not report this. We apply these models in the same setting as ours for a fair comparison.
**Comparisons to Zero-shot Baselines on LibriSpeech.** Even though this is not the main goal of this work, we also investigate performance on LibriSpeech, which is audio-only (Table 3). Note other AV-ASR works do not do this, but we believe it is important for deployment of AV-ASR models. We first note that AVATAR pretrained on LibriSpeech and then finetuned on HowTo100M performs poorly when re-evaluated on LibriSpeech (showing severe catastrophic forgetting between rows 1 and 3). We believe this is because all parameters are trained end-to-end. On the other hand, AVFormer performs much better on LibriSpeech (4.36 vs 24.08), and is much closer to BEST-RQ's 1.60 which is a model tuned only for LibriSpeech and incapable of AV-ASR, while AVFormer achieves SOTA on AV-ASR as well.
**Qualitative Results.** Qualitative examples are provided in Fig. 5 comparing our method to audio-only BEST-RQ for zero-shot ASR. We show that for all 3 downstream AV-ASR datasets, visual context corrects mistakes made on objects (_e.g._, tuxedos, veil and scooter in the top row) and actions (exhale - top row, second column), and even corrects a homophone 5 (colonels to kernels, row 2, column 4).
Footnote 5: same pronunciation, different spelling
**Comparisons to SOTA after Finetuning.** For completeness, we also show finetuning results on two domains - instructional (How2) and egocentric (Ego4D) videos - in Table 4. We outperform all previous works on How2 that use frozen visual features. Our model is also only slightly worse (How2) or on par (Ego4D) compared to AVATAR, even though AVATAR is trained end-to-end with all parameters (including a large visual encoder) finetuned.
## 5 Conclusion
We present AVFormer, a lightweight method for adapting existing, frozen state-of-the-art ASR models for AV-ASR. Our approach is practical and achieves impressive zero-shot performance. As ASR models get larger and larger, tuning the entire parameter set of pre-trained models becomes impractical for different domains. Our method seamlessly allows both domain transfer and visual input mixing in the same, parameter efficient model.
\begin{table}
\begin{tabular}{l c|c c}
\hline \hline
**Method** & **Frozen visual feats** & **How2** & **Ego4D** \\
\hline
VAT [4] & ✓ & 18.0 & – \\
MultiRes [32] & ✓ & 20.5 & – \\
LLD [12] & ✓ & 16.7 & – \\
AVATAR [11] & & 9.11 & 55.27 \\
\hline
AVFormer (Ours) & ✓ & 10.22 & **55.23** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: **Finetuning performance on How2 and Ego4D. We outperform all previous works on How2 that use frozen visual features. AVATAR is trained end-to-end, with all visual parameters finetuned. Scores are in WER %.**
\begin{table}
\begin{tabular}{c|c|c|c c|c c c c}
\hline \hline
 & & & \multicolumn{2}{c|}{**HowTo100M PT**} & & & & \\
**Method** & **Modality** & **LibriSpeech PT** & Pretrained params & **Data \%** & **LibriSpeech** & **How2** & **VisSpeech** & **Ego4D** \\
\hline
AVATAR [11] & A & ✓ & – & – & 8.85 & 39.43 & 65.33 & 110.86 \\
AVATAR [11] & A+V & – & All & 100 & 24.65 & 17.23 & 35.66 & 92.03 \\
AVATAR [11] & A+V & ✓ & All & 100 & 24.08 & 18.37 & 35.59 & 71.97 \\
\hline
BEST-RQ [6] & A & ✓ & – & – & 1.60* & 21.90 & 28.62 & 77.98 \\
BEST-RQ [6] & A & ✓ & All & 100 & 5.60 & 15.32 & 16.69 & 68.34 \\
\hline
**AVFormer (Ours)** & **A+V** & ✓ & VP + Adapters & 5 & 4.36 & **13.69** & **16.60** & **64.75** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: **Comparison to state-of-the-art methods for zero-shot performance across different AV-ASR datasets. We also show performance on LibriSpeech which is audio-only. Results are reported as WER % (lower is better). Note that AVATAR and BEST-RQ are finetuned end-to-end (all parameters) on HowTo100M, whereas for our model, only the visual projectors (VP) and adapters are finetuned on 5% of the dataset. PT means pretraining. When a model is marked with both LibriSpeech and HowTo100M pretraining, we first train the model on LibriSpeech and then on HowTo100M next. For LibriSpeech evaluation, we report numbers on test-clean set. *LibriSpeech trained model is evaluated directly on LibriSpeech test set.** |
2310.01145 | Parallel-in-Time Probabilistic Numerical ODE Solvers | Probabilistic numerical solvers for ordinary differential equations (ODEs)
treat the numerical simulation of dynamical systems as problems of Bayesian
state estimation. Aside from producing posterior distributions over ODE
solutions and thereby quantifying the numerical approximation error of the
method itself, one less-often noted advantage of this formalism is the
algorithmic flexibility gained by formulating numerical simulation in the
framework of Bayesian filtering and smoothing. In this paper, we leverage this
flexibility and build on the time-parallel formulation of iterated extended
Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE
solver. Instead of simulating the dynamical system sequentially in time, as
done by current probabilistic solvers, the proposed method processes all time
steps in parallel and thereby reduces the span cost from linear to logarithmic
in the number of time steps. We demonstrate the effectiveness of our approach
on a variety of ODEs and compare it to a range of both classic and
probabilistic numerical ODE solvers. | Nathanael Bosch, Adrien Corenflos, Fatemeh Yaghoobi, Filip Tronarp, Philipp Hennig, Simo Särkkä | 2023-10-02T12:32:21Z | http://arxiv.org/abs/2310.01145v2 | # Parallel-in-Time Probabilistic Numerical ODE Solvers
###### Abstract
Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation. Aside from producing posterior distributions over ODE solutions and thereby quantifying the numerical approximation error of the method itself, one less-often noted advantage of this formalism is the algorithmic flexibility gained by formulating numerical simulation in the framework of Bayesian filtering and smoothing. In this paper, we leverage this flexibility and build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a _parallel-in-time_ probabilistic numerical ODE solver. Instead of simulating the dynamical system sequentially in time, as done by current probabilistic solvers, the proposed method processes all time steps in parallel and thereby reduces the span cost from _linear_ to _logarithmic_ in the number of time steps. We demonstrate the effectiveness of our approach on a variety of ODEs and compare it to a range of both classic and probabilistic numerical ODE solvers.
Probabilistic numerics provides a framework for treating classic numerical problems as problems of probabilistic inference (Hennig et al., 2015; Oates and Sullivan, 2019; Hennig et al., 2022). In the context of ODEs, methods based on Gaussian process regression (Skilling, 1992; Hennig and Hauberg, 2014) and in particular Gauss-Markov regression (Schober et al., 2019; Kersting et al., 2020; Tronarp et al., 2019) provide an efficient and flexible approach to compute posterior distributions over the solution of ODEs (Bosch et al., 2021; Kramer and Hennig, 2020), and even partial differential equations (Kramer et al., 2022) and differential-algebraic equations (Bosch et al., 2022). These so-called _ODE filters_ typically scale cubically in the ODE dimension (as do most _implicit_ ODE solvers) and specific approximations enable linear scaling (shared by most _explicit_ solvers) (Kramer et al., 2022). But to date, their linear scaling with the number of time steps remains.
For very large-scale simulations with very long time horizons, the sequential processing in time of most ODE solvers can become a bottleneck. This motivates the development of _parallel-in-time_ methods: By leveraging the ever-increasing parallelization capabilities of modern computer hardware, parallel-in-time methods can achieve _sub-linear_ scaling in the number of time steps (Gander, 2015). One well-known method of this kind is Parareal (Lions et al., 2001). It achieves temporal parallelism by combining an expensive, accurate solver with a cheap, coarse solver, in such a way that the fine solver is only ever applied to individual time slices in a parallel manner, leading to a square-root scaling (in ideal conditions). But, due to its sequential coarse-grid solve, Parareal still has only limited concurrency (Gander and Vandewalle, 2007), and while it has recently been extended probabilistically by Pentland et al. (2021, 2022) to improve its performance and convergence, these methods do not provide probabilistic solutions to ODEs per se.
In this paper, we leverage the time-parallel formulation of Gaussian filters and smoothers (Sarkka and Garcia-Fernandez, 2021; Yaghoobi et al., 2021, 2023) to formulate a parallel-in-time probabilistic numerical ODE solver. The paper is structured as follows. Section 2 formulates numerical ODE solutions as Bayesian state estimation problems and presents the established, sequential, filtering-based probabilistic ODE solvers. Section 3 then presents our proposed parallel-in-time probabilistic ODE solver; first as exact inference for affine ODEs, then as an iterative, approximate algorithm for general nonlinear ODEs. Section 4 then presents experiments on a variety of ODEs and compares the performance of our proposed method to that of existing, both probabilistic and non-probabilistic, ODE solvers. Finally, Section 5 concludes with a discussion of our results and an outlook on future work.
## 2 Numerical ODE Solutions as Bayesian State Estimation
Consider an initial value problem (IVP) of the form
\[\dot{y}(t)=f(y(t),t),\quad t\in[0,T],\qquad y(0)=y_{0}, \tag{1}\]
with vector field \(f:\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R}^{d}\) and initial value \(y_{0}\in\mathbb{R}^{d}\). To capture the numerical error that arises from temporal discretization, the quantity of interest in probabilistic numerics for ODEs is the _probabilistic numerical ODE solution_, defined as
\[p\left(y(t)\ \Big{|}\ y(0)=y_{0},\left\{\dot{y}(t_{n})=f(y(t_{n}),t_{n})\right\}_{n= 1}^{N}\right), \tag{2}\]
for some prior \(p\left(y(t)\right)\) and with \(\{t_{n}\}_{n=1}^{N}\subset[0,T]\) the chosen time-discretisation.
In the following, we pose the probabilistic numerical ODE solution as a problem of Bayesian state estimation, and we define the prior, likelihood, data, and approximate inference scheme. For a more detailed description of the transformation of an IVP into a Gauss-Markov regression problem, refer to Tronarp et al. (2019).
### Gauss-Markov Process Prior
We model the solution \(y\) of the IVP with a \(\nu\)-times integrated Wiener process prior (IWP\((\nu)\)). More precisely, let \(Y(t)=\left[Y^{(0)}(t),Y^{(1)}(t),\ldots,Y^{(\nu)}(t)\right]\) be the solution of the following linear, time-invariant stochastic differential equation with Gaussian initial condition
\[\mathrm{d}Y^{(i)}(t) =Y^{(i+1)}(t)\,\mathrm{d}t,\qquad i=0,\ldots,\nu-1, \tag{3a}\] \[\mathrm{d}Y^{(\nu)}(t) =\Gamma\,\mathrm{d}W(t),\] (3b) \[Y(0) \sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right), \tag{3c}\]
with initial mean and covariance \(\mu_{0}\in\mathbb{R}^{d(\nu+1)}\), \(\Sigma_{0}\in\mathbb{R}^{d(\nu+1)\times d(\nu+1)}\), diffusion \(\Gamma\in\mathbb{R}^{d\times d}\), and \(d\)-dimensional Wiener process \(W:\mathbb{R}\to\mathbb{R}^{d}\). Then, \(Y^{(i)}\) is chosen to model the \(i\)-th derivative of the IVP solution \(y\). By construction, accessing the \(i\)-th derivative can be done by multiplying the state \(Y\) with a projection matrix \(E_{i}\coloneqq I_{d}\otimes e_{i}\), that is, \(Y^{(i)}(t)=E_{i}Y(t)\).
This continuous-time prior satisfies discrete transition densities (Sarkka and Solin, 2019)
\[Y(t+h)\mid Y(t)\sim\mathcal{N}\left(\Phi(h)Y(t),Q(h)\right), \tag{4}\]
with transition matrix and process noise covariance \(\Phi(h),Q(h)\in\mathbb{R}^{d(\nu+1)\times d(\nu+1)}\) and step \(h\in\mathbb{R}_{+}\). For the IWP\((\nu)\) these can be computed in closed form (Kersting et al., 2020), as
\[\Phi(h) =I_{d}\otimes\breve{\Phi}(h),\qquad\left[\breve{\Phi}(h)\right]_{ij}=\mathbb{1}_{i\leq j}\frac{h^{j-i}}{(j-i)!}, \tag{5a}\] \[Q(h) =I_{d}\otimes\breve{Q}(h),\qquad\left[\breve{Q}(h)\right]_{ij}=\frac{h^{2\nu+1-i-j}}{(2\nu+1-i-j)(\nu-i)!(\nu-j)!}. \tag{5b}\]
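As a quick sanity check of Equation (5), the following NumPy sketch evaluates \(\breve{\Phi}(h)\) and \(\breve{Q}(h)\) for a single state dimension; the full matrices then follow from the Kronecker products above. Function and variable names are illustrative.

```python
# IWP(nu) transition matrix and process-noise covariance for one state dimension, Eq. (5).
import numpy as np
from math import factorial

def iwp_transition(nu, h):
    phi = np.zeros((nu + 1, nu + 1))
    q = np.zeros((nu + 1, nu + 1))
    for i in range(nu + 1):
        for j in range(nu + 1):
            if i <= j:
                phi[i, j] = h ** (j - i) / factorial(j - i)
            q[i, j] = h ** (2 * nu + 1 - i - j) / (
                (2 * nu + 1 - i - j) * factorial(nu - i) * factorial(nu - j)
            )
    return phi, q

phi, q = iwp_transition(nu=1, h=0.1)
# phi = [[1, 0.1], [0, 1]],  q = [[h^3/3, h^2/2], [h^2/2, h]] with h = 0.1
```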
**Remark 1** (Alternative Gauss-Markov priors).: _While \(\nu\)-times integrated Wiener process priors have been the most common choice for filtering-based probabilistic ODE solvers in recent years, the methodology is not limited to this choice. Alternatives include the \(\nu\)-times integrated Ornstein-Uhlenbeck process and the class of Matern processes, both of which have a similar continuous-time SDE representation as well as Gaussian transition densities in discrete time. Refer to Tronarp et al. (2021) and Sarkka and Solin (2019)._
The initial distribution \(\mathcal{N}(\mu_{0},\Sigma_{0})\) is chosen such that it encodes the initial condition \(y(0)=y_{0}\). Furthermore, to improve the numerical stability and the quality of the posterior, we initialize not only the function value \(Y^{(0)}(0)=y_{0}\), but also the higher-order derivatives, that is, \(Y^{(i)}(0)=\frac{\mathrm{d}^{i}y}{\mathrm{d}t^{i}}(0)\) for all \(i\leq\nu\) (Kramer and Hennig, 2020). These terms can be efficiently computed via Taylor-mode automatic differentiation (Griewank, 2000; Bettencourt et al., 2019). As a result, we obtain an initial distribution with mean
\[\mu_{0}=\left[y_{0},\frac{\mathrm{d}y}{\mathrm{d}t}(0),\ldots,\frac{\mathrm{ d}^{\nu}y}{\mathrm{d}t^{\nu}}(0)\right]^{T}, \tag{6}\]
and zero covariance \(\Sigma_{0}=0\), since the initial condition has to hold exactly.
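Taylor-mode automatic differentiation is the efficient way to obtain these higher-order terms; as a simpler (less efficient) illustration, the sketch below computes the same initial derivatives for an _autonomous_ vector field via recursive Jacobian-vector products, using \(\frac{\mathrm{d}}{\mathrm{d}t}F(y(t))=J_{F}(y(t))\,f(y(t))\). It is a sketch under that autonomy assumption, not the initialization routine used here.

```python
# Recursive computation of y(0), y'(0), ..., y^(nu)(0) for y' = f(y) via jax.jvp:
# if F_k(y(t)) = y^(k)(t), then F_{k+1}(y) = J_{F_k}(y) f(y).
import jax
import jax.numpy as jnp

def initial_derivatives(f, y0, nu):
    derivatives = [y0]
    F = f                                             # F_1 gives the first derivative
    for _ in range(nu):
        derivatives.append(F(y0))
        F_prev = F
        F = lambda y, F_prev=F_prev: jax.jvp(F_prev, (y,), (f(y),))[1]
    return derivatives

# Example: logistic growth y' = y (1 - y); at y0 = 0.5 the second derivative is 0.
f = lambda y: y * (1.0 - y)
derivs = initial_derivatives(f, jnp.array([0.5]), nu=3)
```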
### Observation Model and Data
To relate the introduced Gauss-Markov prior to the IVP problem from Equation (1), we define an observation model in terms of the information operator
\[\mathcal{Z}[y](t)\coloneqq\dot{y}(t)-f\left(y(t),t\right). \tag{7}\]
By construction, \(\mathcal{Z}\) maps the true IVP solution \(y\)_exactly_ to the zero function, that is, \(\mathcal{Z}[y]\equiv 0\). In terms of the continuous process \(Y\), the information operator can be expressed as
\[\mathcal{Z}[Y](t)=E_{1}Y(t)-f\left(E_{0}Y(t),t\right), \tag{8}\]
where \(E_{0}\) and \(E_{1}\) are the projection matrices introduced in Section 2.1 which select the zeroth and first derivative from the process \(Y\), respectively. There again, if \(Y\) corresponds to the true IVP solution (and its true derivatives), then \(\mathcal{Z}[Y]\equiv 0\).
Conversely, inferring the true IVP solution requires conditioning the process \(Y(t)\) on \(Z(t)=0\) over the whole continuous interval \(t\in[0,T]\). Since this is in general intractable, we instead condition \(Y(t)\) only on discrete observations \(Z(t_{n})=0\) on a grid \(\mathbb{T}=\{t_{n}\}_{n=1}^{N}\). This leads to the Dirac likelihood model commonly used in ODE filtering (Tronarp et al., 2019):
\[Z(t_{n})\mid Y(t_{n})\sim\delta\left(Y^{(1)}(t_{n})-f\left(Y^{(0)}(t_{n}),t_{n }\right)\right), \tag{9}\]
with zero-valued data \(Z(t_{n})=0\) for all \(t_{n}\in\mathbb{T}\).
**Remark 2** (Information operators for other differential equation problems).: _Similar information operators can be defined for other types of differential equations that are not exactly of the first-order form as given in Equation (1), such as higher-order differential equations, Hamiltonian dynamics, or differential-algebraic equations (Bosch et al., 2022)._
### Discrete-Time Inference Problem
The combination of prior, likelihood, and data results in a Bayesian state estimation problem
\[Y(0) \sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right), \tag{10a}\] \[Y(t_{n+1})\mid Y(t_{n}) \sim\mathcal{N}\left(\Phi(t_{n+1}-t_{n})Y(t_{n}),Q(t_{n+1}-t_{n})\right),\] (10b) \[Z(t_{n})\mid Y(t_{n}) \sim\delta\left(Y^{(1)}(t_{n})-f\left(Y^{(0)}(t_{n}),t_{n}\right)\right), \tag{10c}\]
with zero data \(Z(t_{n})=0\) for all \(t_{n}\in\mathbb{T}\). The posterior distribution over \(Y^{(0)}(t)\) then provides a probabilistic numerical ODE solution to the given IVP, as formulated in Equation (2).
This is a standard nonlinear Gauss-Markov regression problem, for which many approximate inference algorithms have previously been studied (Sarkka and Svensson, 2023). In the context of probabilistic ODE solvers, a popular approach for efficient approximate inference is Gaussian filtering and smoothing, where the solution is approximated with Gaussian distributions
\[p\left(Y(t)\mid\{Z(t_{n})=0\}_{n=1}^{N}\right)\approx\mathcal{N}\left(\mu(t), \Sigma(t)\right). \tag{11}\]
This is most commonly performed with extended Kalman filtering (EKF) and smoothing (EKS) (Schober et al., 2019; Tronarp et al., 2019; Kersting et al., 2020); though other methods
have been proposed, for example based on numerical quadrature (Kersting and Hennig, 2016) or particle filtering (Tronarp et al., 2019). _Iterated_ extended Kalman smoothing (e.g. Bell, 1994; Sarkka and Svensson, 2023) computes the "maximum a posteriori" estimate of the probabilistic numerical ODE solution (Tronarp et al., 2021). This will be the basis for the parallel-in-time ODE filter proposed in this work, explained in detail in Section 3.
### Practical Considerations for Probabilistic Numerical ODE Solvers
While Bayesian state estimation methods such as the extended Kalman filter and smoother can, in principle, be directly applied to the formulated state estimation problem, there are a number of modifications and practical considerations that should be taken into account:
* _Square-root formulation:_ Gaussian filters often suffer from numerical stability issues when applied to the ODE inference problem defined in Equation (10), in particular when using high orders and small steps. To alleviate these issues, probabilistic numerical ODE solvers are typically formulated in square-root form (Kramer and Hennig, 2020); this is also the case for the proposed parallel-in-time method.
* _Preconditioned state transitions:_Kramer and Hennig (2020) suggest a coordinate change preconditioner to make the state transition matrices step-size independent and thereby improve the numerical stability of EKF-based probabilistic ODE solvers. This preconditioner is also used in this work.
* _Uncertainty calibration:_ The Gauss-Markov prior as introduced in Section 2.1 has a free parameter, the diffusion \(\Gamma\), which directly influences the uncertainty estimates returned by the ODE filter. In this paper, we consider scalar diffusions \(\Gamma=\sigma\cdot I\) and compute a quasi-maximum likelihood estimate for the parameter \(\sigma\) post-hoc, as suggested by Tronarp et al. (2019).
* _Approximate linearization:_ Variants of the standard EKF/EKS-based inference have been proposed in which the linearization of the vector-field is done only approximately. Approximating the Jacobian of the ODE vector field with zero enables inference with a complexity which scales only linearly with the ODE dimension (Kramer et al., 2022), while still providing polynomial convergence rates (Kersting et al., 2020). A diagonal approximation of the Jacobian preserves the linear complexity, but improves the stability properties of the solver (Kramer et al., 2022). In this work, we only consider the exact first-order Taylor linearization.
* _Local error estimation and step-size adaptation:_ Rather than predefining the time discretization grid, certain solvers employ an adaptive approach where the solver dynamically constructs the grid while controlling an internal estimate of the numerical error. Step-size adaptation based on _local_ error estimates have been proposed for both classic (Hairer et al., 1993, Chapter II.4) and probabilistic ODE solvers (Schober et al., 2019; Bosch et al., 2021). On the other hand, _global_ step-size selection is often employed in numerical boundary value problem (BVP) solvers (Ascher et al., 1995, Chapter 9), and has been extended to filtering-based probabilistic BVP solvers (Kramer and Hennig, 2021). For our purposes, we will focus on fixed grids.
## 3 Parallel-in-Time Probabilistic Numerical ODE Solvers
This section develops the main method proposed in this paper: a parallel-in-time probabilistic numerical ODE solver.
### Parallel-Time Exact Inference in Affine Vector Fields
Let us first consider the simple case: An initial value problem with affine vector field
\[\dot{y}(t)=L(t)y(t)+d(t),\quad t\in[0,T],\qquad y(0)=y_{0}. \tag{12}\]
The corresponding information model of the probabilistic solver is then also affine, with
\[Z(t)\mid Y(t) \sim\delta\left(H(t)Y(t)-d(t)\right), \tag{13a}\] \[H(t) \coloneqq E_{1}-L(t)E_{0}. \tag{13b}\]
Let \(\mathbb{T}=\{t_{n}\}_{n=1}^{N}\subset[0,T]\) be a discrete time grid. To simplify the notation in the following, we will denote a function evaluated at time \(t_{n}\) by a subscript \(n\), that is \(Y(t_{n})=:Y_{n}\), except for the transition matrices where we will use \(\Phi_{n}\coloneqq\Phi(t_{n+1}-t_{n})\) and \(Q_{n}\coloneqq Q(t_{n+1}-t_{n})\). Then, the Bayesian state estimation problem from Equation (10) reduces to inference of \(Y(t)\) in the model
\[Y_{0} \sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right), \tag{14a}\] \[Y_{n+1}\mid Y_{n} \sim\mathcal{N}\left(\Phi_{n}Y_{n},Q_{n}\right),\] (14b) \[Z_{n}\mid Y_{n} \sim\delta\left(H_{n}Y_{n}-d_{n}\right), \tag{14c}\]
with zero data \(Z_{n}=0\) for all \(n=1,\ldots,N\). Since this is an affine Gaussian state estimation problem, it can be solved exactly with Gaussian filtering and smoothing (Kalman, 1960; Rauch et al., 1965; Sarkka and Svensson, 2023); see also (Tronarp et al., 2019, 2021) for explicit discussions of probabilistic numerical solvers for affine ODEs.
Recently, Sarkka and Garcia-Fernandez (2021) presented a parallel-time formulation of Bayesian filtering and smoothing, as well as a concrete algorithm for exact linear Gaussian filtering and smoothing--which could be directly applied to the problem formulation in Equation (14). But as mentioned in Section 2.4, the resulting ODE solver might suffer from numerical instabilities. Therefore, we use the square-root formulation of the parallel-time linear Gaussian filter and smoother by Yaghoobi et al. (2023). In the following, we review the details of the algorithm.
#### 3.1.1 Parallel-Time General Bayesian Filtering and Smoothing
First, we follow the presentation of Sarkka and Garcia-Fernandez (2021) and formulate Bayesian filtering and smoothing as prefix sums. We define elements \(a_{n}=(f_{n},g_{n})\) with
\[f_{n}(Y_{n}\mid Y_{n-1}) =p(Y_{n}\mid Z_{n},Y_{n-1}), \tag{15a}\] \[g_{n}(Y_{n-1}) =p(Z_{n}\mid Y_{n-1}), \tag{15b}\]
where for \(n=1\) we have \(p(Y_{1}\mid Z_{1},Y_{0})=p(Y_{1}\mid Z_{1})\) and \(p(Z_{1}\mid Y_{0})=p(Z_{1})\), together with a binary operator \(\otimes_{f}:(f_{i},g_{i})\otimes_{f}(f_{j},g_{j})\mapsto(f_{ij},g_{ij})\) defined by
\[f_{ij}(x\mid z) \coloneqq\frac{\int g_{j}(y)f_{j}(x\mid y)f_{i}(y\mid z)\,\mathrm{ d}y}{\int g_{j}(y)f_{i}(y\mid z)\,\mathrm{d}y}, \tag{16a}\] \[g_{ij}(z) \coloneqq g_{i}(z)\int g_{j}(y)f_{i}(y\mid z)\,\mathrm{d}y. \tag{16b}\]
Then, Sarkka and Garcia-Fernandez (2021, Theorem 3) show that \(\otimes_{f}\) is associative and that
\[a_{1}\otimes_{f}\dots\otimes_{f}a_{n}=\begin{bmatrix}p(Y_{n}\mid Z_{1:n})\\ p(Z_{1:n})\end{bmatrix}, \tag{17}\]
that is, the filtering marginals and the marginal likelihood of the observations at step \(n\) are the results of a cumulative sum of the elements \(a_{1:n}\) under \(\otimes_{f}\). Since the operator \(\otimes_{f}\) is associative, this quantity can be computed in parallel with prefix-sum algorithms, such as the parallel scan algorithm by Blelloch (1989).
**Remark 3** (On Prefix-Sums).: _Prefix sums, also known as cumulative sums or inclusive scans, play an important role in parallel computing. Their computation can be efficiently parallelized and, if enough parallel resources are available, their (span) computational cost can be reduced from linear to logarithmic in the number of elements. One such algorithm is the well-known parallel scan algorithm by Blelloch (1989) which, given \(N\) elements and \(N/2\) processors, computes the prefix sum in \(2\lceil\log_{2}N\rceil\) sequential steps with \(2N-2\) invocations of the binary operation. This algorithm is implemented in both tensorflow (Abadi et al., 2015) and JAX (Bradbury et al., 2018); the latter is used in this work._
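To illustrate the mechanism on a toy example, the JAX sketch below applies `jax.lax.associative_scan` to the composition of affine maps \(x_{n}=A_{n}x_{n-1}+b_{n}\); their composition is associative in the same way as the filtering and smoothing operators above, so all partial compositions (and hence all states) are obtained in one scan. This is a simplified stand-in, not the Kalman filtering operator itself.

```python
# Prefix sum over affine maps (A_n, b_n): result[n] is the composed map from x_0 to x_{n+1}.
import jax
import jax.numpy as jnp

def compose(first, second):
    """Associative combination of batched affine maps; `first` acts before `second`."""
    A_i, b_i = first
    A_j, b_j = second
    return A_j @ A_i, jnp.einsum("...ij,...j->...i", A_j, b_i) + b_j

N, d = 8, 2
A = jnp.stack([0.9 * jnp.eye(d)] * N)                        # (N, d, d)
b = 0.1 * jnp.ones((N, d))                                   # (N, d)
A_cum, b_cum = jax.lax.associative_scan(compose, (A, b))     # parallel prefix sum
x0 = jnp.zeros(d)
x_all = jnp.einsum("nij,j->ni", A_cum, x0) + b_cum           # states x_1, ..., x_N
```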
The time-parallel smoothing step can be constructed similarly: We define elements \(b_{n}=p(Y_{n}\mid Z_{1:n},Y_{n+1})\), with \(b_{N}=p(Y_{N}\mid Z_{1:N})\), and a binary operator \(b_{i}\otimes_{s}b_{j}=b_{ij}\), with
\[b_{ij}(x\mid z)=\int b_{i}(x\mid y)b_{j}(y\mid z)\,\mathrm{d}y. \tag{18}\]
Then, \(\otimes_{s}\) is associative and the smoothing marginal at time step \(n\) is the result of a reverse cumulative sum of the elements \(b_{n:N}\) under \(\otimes_{s}\)(Sarkka and Garcia-Fernandez, 2021):
\[b_{n}\otimes_{s}\dots\otimes_{s}b_{N}=p(Y_{n}\mid Z_{1:N}). \tag{19}\]
Again, since the smoothing operator \(\otimes_{s}\) is associative, this cumulative sum can be computed in parallel with a prefix-sum algorithm (Blelloch, 1989).
#### 3.1.2 Parallel-Time Linear Gaussian Filtering in Square-Root Form
In the linear Gaussian case, the filtering elements \(a_{n}=(f_{n},g_{n})\) can be parameterized by a set of parameters \(\{A_{n},b_{n},C_{n},\eta_{n},J_{n}\}\) as follows:
\[f_{n}(Y_{n}\mid Y_{n-1}) =p(Y_{n}\mid Z_{n},Y_{n-1})=\mathcal{N}\left(Y_{n};A_{n}Y_{n-1}+b _{n},C_{n}\right), \tag{20a}\] \[g_{n}(Y_{n-1}) =p(Z_{n}\mid Y_{n-1})\propto\mathcal{N}_{I}\left(Y_{n-1};\eta_{n},J_{n}\right), \tag{20b}\]
where \(\mathcal{N}_{I}\) denotes a Gaussian density parameterized in information form, that is, \(\mathcal{N}_{I}(x;\eta,J)=\mathcal{N}(x;J^{-1}\eta,J^{-1})\). The parameters \(\{A_{n},b_{n},C_{n},\eta_{n},J_{n}\}\) can be computed explicitly from the given state-space model (Sarkka and Garcia-Fernandez, 2021, Lemma 7). But since probabilistic numerical ODE solvers require a numerically stable implementation of the underlying filtering and smoothing algorithm (Kramer and Hennig, 2020), we formulate the parallel-time linear Gaussian filtering algorithm in square-root form, following Yaghoobi et al. (2023).
To this end, let \(\sqrt{M}\) denote a left square-root of a positive semi-definite matrix \(M\), that is, \(\sqrt{M}\sqrt{M}^{\mathsf{T}}=M\); the matrix \(\sqrt{M}\) is sometimes also called a "generalised Cholesky factor" of \(M\) (S. Grewal and P. Andrews, 2014). To operate on square-root matrices, we also define the _triangularization_ operator: Given a wide matrix \(M\in\mathbb{R}^{n\times m}\), \(m\geq n\), the triangularization operator \(\operatorname{tria}(M)\) first computes the QR decomposition of \(M^{\mathsf{T}}\), that is, \(M^{\mathsf{T}}=QR\), with \(Q\in\mathbb{R}^{m\times n}\) having orthonormal columns and square upper-triangular \(R\in\mathbb{R}^{n\times n}\), and then returns \(R^{\mathsf{T}}\). This operator plays a central role in square-root filtering algorithms as it enables the numerically stable addition of covariance matrices, provided square-roots are available: Given two positive semi-definite matrices \(A,B\in\mathbb{R}^{n\times n}\) with square-roots \(\sqrt{A},\sqrt{B}\), a square-root of the sum \(A+B\) can be computed as
\[\sqrt{A+B}=\operatorname{tria}\left(\begin{bmatrix}\sqrt{A}&\sqrt{B}\end{bmatrix} \right). \tag{21}\]
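A direct sketch of the \(\operatorname{tria}\) operator and of Equation (21), based on the thin QR decomposition, could look as follows; the resulting factor is only unique up to orthogonal transformations, which is sufficient here.

```python
# Triangularization operator and square-root of a sum of covariances, Eq. (21).
import jax.numpy as jnp

def tria(M):
    """Wide matrix M (n x m, m >= n) -> lower-triangular square-root factor (n x n)."""
    _, R = jnp.linalg.qr(M.T, mode="reduced")   # M^T = Q R, R upper triangular
    return R.T

def sqrt_sum(sqrt_A, sqrt_B):
    """Square-root of A + B given square-roots of A and B."""
    return tria(jnp.concatenate([sqrt_A, sqrt_B], axis=1))

# Check: S @ S.T equals sqrt_A @ sqrt_A.T + sqrt_B @ sqrt_B.T.
S = sqrt_sum(jnp.array([[2.0, 0.0], [1.0, 1.0]]), jnp.eye(2))
```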
With these definitions in place, we briefly review the parallel-time linear Gaussian filtering algorithm in square-root form as provided by Yaghoobi et al. (2023) in the following.
Parameterization of the filtering elements. Let \(m_{0}=\mu_{0}\), \(P_{0}=\Sigma_{0}\), and \(m_{n}=0\), \(P_{n}=0\) for all \(n\geq 1\), and define
\[m_{n}^{-} =\Phi_{n-1}m_{n-1}, \tag{22a}\] \[\sqrt{P_{n}^{-}} =\operatorname{tria}\left(\begin{bmatrix}\Phi_{n-1}\sqrt{P_{n-1}}& \sqrt{Q_{n-1}}\end{bmatrix}\right). \tag{22b}\]
Then, the square-root parameterization of the filtering elements \(a_{n}\) is given by
\[A_{n} =(I-K_{n}H_{n})\Phi_{n-1}, \tag{23a}\] \[b_{n} =m_{n}^{-}-K_{n}\left(H_{n}m_{n}^{-}-d_{n}\right),\] (23b) \[\sqrt{C_{n}} =\Psi_{22},\] (23c) \[\eta_{n} =\sqrt{J_{n}}\sqrt{S_{n}}^{-1}d_{n},\] (23d) \[\sqrt{J_{n}} =\Phi_{n-1}^{\mathsf{T}}H_{n}^{\mathsf{T}}\sqrt{S_{n}}^{-\mathsf{ T}}, \tag{23e}\]
where \(I\) is the identity matrix and \(\Psi_{22}\), \(\sqrt{S_{n}}\) and \(K_{n}\) are defined via
\[\begin{bmatrix}\Psi_{11}&0\\ \Psi_{21}&\Psi_{22}\end{bmatrix} =\operatorname{tria}\left(\begin{bmatrix}H_{n}\sqrt{P_{n}^{-}}& \sqrt{R_{n}}\\ \sqrt{P_{n}^{-}}&0\end{bmatrix}\right), \tag{24a}\] \[\sqrt{S_{n}} =\Psi_{11},\] (24b) \[K_{n} =\Psi_{21}\Psi_{11}^{-1}. \tag{24c}\]
For generality, the formulas include an observation noise covariance \(R_{n}\); note that in the context of probabilistic ODE solvers we have a noiseless measurement model with \(\sqrt{R_{n}}=0\).
Associative filtering operator. Let \(a_{i},a_{j}\) be two filtering elements, parameterized in square-root form by \(a_{i}=\{A_{i},b_{i},\sqrt{C_{i}},\eta_{i},\sqrt{J_{i}}\}\) and \(a_{j}=\{A_{j},b_{j},\sqrt{C_{j}},\eta_{j},\sqrt{J_{j}}\}\). Then, the associative filtering operator \(\otimes_{f}\) computes the filtering element \(a_{ij}=a_{i}\otimes_{f}a_{j}\) as
\[A_{ij} =A_{j}A_{i}-A_{j}\sqrt{C_{i}}\Xi_{11}^{-\mathsf{T}}\Xi_{21}^{ \mathsf{T}}A_{i}, \tag{25a}\] \[b_{ij} =A_{j}\left(I-\sqrt{C_{i}}\Xi_{11}^{-\mathsf{T}}\Xi_{21}^{ \mathsf{T}}\right)(b_{i}+\sqrt{C_{i}}\sqrt{C_{i}}^{\mathsf{T}}\eta_{j})+b_{j},\] (25b) \[\sqrt{C_{ij}} =\operatorname{\mathrm{tria}}\left(\left[A_{j}\sqrt{C_{i}}\Xi_{1 1}^{-1}\quad\sqrt{C_{j}}\right]\right),\] (25c) \[\eta_{ij} =A_{i}^{\mathsf{T}}\left(I-\Xi_{21}\Xi_{11}^{-1}\sqrt{C_{i}}^{ \mathsf{T}}\right)\left(\eta_{j}-\sqrt{J_{j}}\sqrt{J_{j}}^{\mathsf{T}}b_{i} \right)+\eta_{i},\] (25d) \[\sqrt{J_{ij}} =\operatorname{\mathrm{tria}}\left(\left[A_{i}^{\mathsf{T}}\Xi_{2 2}\quad\sqrt{J_{i}}\right]\right), \tag{25e}\]
where \(\Xi_{11}\), \(\Xi_{21}\), \(\Xi_{22}\) are defined via
\[\begin{bmatrix}\Xi_{11}&0\\ \Xi_{21}&\Xi_{22}\end{bmatrix}=\operatorname{\mathrm{tria}}\left(\begin{bmatrix} \sqrt{C_{i}}^{\mathsf{T}}\sqrt{J_{j}}&I\\ \sqrt{J_{j}}&0\end{bmatrix}\right). \tag{26}\]
See Yaghoobi et al. (2023) for the detailed derivation.
The filtering marginals. The filtering marginals are then given by
\[p(Y_{n}|Z_{1:n})=\mathcal{N}\left(Y_{n};m_{n}^{f},P_{n}^{f}\right),\qquad \text{with}\qquad m_{n}^{f}\coloneqq b_{1:n},\quad\sqrt{P_{n}^{f}}\coloneqq \sqrt{C_{1:n}}. \tag{27}\]
This concludes the parallel-time linear Gaussian square-root filter.
#### 3.1.3 Parallel-Time Linear Gaussian Smoothing in Square-Root Form
Similarly to the filtering equations, the linear Gaussian smoothing can also be formulated in terms of smoothing elements \(b_{n}\) and an associative operator \(\otimes_{s}\), and the smoothing marginals can also be computed with a parallel prefix-sum algorithm.
Parameterization of the smoothing elements. The smoothing elements \(b_{n}\) can be described by a set of parameters \(\{E_{n},g_{n},\sqrt{L_{n}}\}\), as
\[b_{n}=p(Y_{n}\mid Z_{1:n},Y_{n+1})=\mathcal{N}\left(Y_{n};E_{n}Y_{n+1}+g_{n}, \sqrt{L_{n}}\sqrt{L_{n}}^{\mathsf{T}}\right). \tag{28}\]
The smoothing element parameters can be computed as
\[E_{n} =\Pi_{21}\Pi_{11}^{-1}, \tag{29a}\] \[g_{n} =m_{n}^{f}-E_{n}\Phi_{n}m_{n}^{f},\] (29b) \[\sqrt{L_{n}} =\Pi_{22}, \tag{29c}\]
where the matrices \(\Pi_{11}\), \(\Pi_{21}\), \(\Pi_{22}\) are defined via
\[\begin{bmatrix}\Pi_{11}&0\\ \Pi_{21}&\Pi_{22}\end{bmatrix}=\operatorname{\mathrm{tria}}\left(\begin{bmatrix} \Phi_{n}\sqrt{P_{n}^{f}}&\sqrt{Q_{n}}\\ \sqrt{P_{n}^{f}}&0\end{bmatrix}\right). \tag{30}\]
Associative smoothing operator. Given two smoothing elements \(b_{i}\) and \(b_{j}\), parameterized in square-root form by \(b_{i}=\{E_{i},g_{i},\sqrt{L_{i}}\}\) and \(b_{j}=\{E_{j},g_{j},\sqrt{L_{j}}\}\), the associative smoothing operator \(\otimes_{s}\) computes the smoothing element \(b_{ij}=b_{i}\otimes_{s}b_{j}\) as
\[E_{ij} =E_{i}E_{j}, \tag{31a}\] \[g_{ij} =E_{i}g_{j}+g_{i},\] (31b) \[\sqrt{L_{ij}} =\operatorname{tria}\left(\begin{bmatrix}E_{i}\sqrt{L_{j}}&\sqrt{L_{i}}\end{bmatrix}\right). \tag{31c}\]
The smoothing marginals. The smoothing marginals can then be retrieved from the reverse cumulative sum of the smoothing elements as
\[p(Y_{n}\mid Z_{1:N}) =\mathcal{N}\left(Y_{n};m_{n}^{s},P_{n}^{s}\right), \tag{32a}\] \[m_{n}^{s} =g_{n:N},\] (32b) \[\sqrt{P_{n}^{s}} =\sqrt{L_{n:N}}. \tag{32c}\]
Refer to Yaghoobi et al. (2023) for a thorough derivation. The full parallel-time Rauch-Tung-Striebel smoother is summarized in Algorithm 1.
```
0: Initial distribution \((\mu_{0},\Sigma_{0})\), linear transition models \(\{(\Phi_{n},Q_{n})\}_{n=1}^{N}\), affine observation models \(\{(H_{n},d_{n})\}_{n=1}^{N}\), data \(Z_{1:N}\).
1: Compute the filtering elements: \(a_{n}=(A_{n},b_{n},\sqrt{C_{n}},\eta_{n},\sqrt{J_{n}})\) for all \(n=1,\ldots,N\)\(\triangleright\) Eq. (23)
2: Run the time-parallel Kalman filter: \(\left\{\left(A_{n}^{f},b_{n}^{f},\sqrt{C_{n}^{f}},\eta_{n}^{f},\sqrt{J_{n}^{f }}\right)\right\}_{n=1}^{N}\leftarrow\texttt{AssociativeScan}\left(\otimes_{f},(a_{n})_{n=1}^{N}\right)\)\(\triangleright\) Eq. (25)
3: Compute the smoothing elements: \(b_{n}=(E_{n},g_{n},\sqrt{L_{n}})\) for all \(n=0,\ldots,N\)\(\triangleright\) Eq. (28)
4: Run the time-parallel Rauch-Tung-Striebel smoother: \(\left\{\left(E_{n}^{s},g_{n}^{s},\sqrt{L_{n}^{s}}\right)\right\}_{n=1}^{N} \leftarrow\texttt{ReverseAssociativeScan}\left(\otimes_{s},(b_{n})_{n=1}^{N}\right)\)\(\triangleright\) Eq. (31)
5: Smoothing marginals \(p(Y_{n}\mid Z_{1:N})=\mathcal{N}\left(Y_{n};g_{n}^{s},L_{n}^{s}\right)\)
```
**Algorithm 1** Parallel-time Rauch-Tung-Striebel Smoother (ParRTS)
This concludes the parallel-in-time probabilistic numerical ODE solver for affine ODEs: since affine ODEs result in state-estimation problems with affine state-space models, as discussed in the beginning of this section, the parallel-time Rauch-Tung-Striebel smoother presented here can be used to solve affine ODEs in parallel time.
### Parallel-Time Approximate Inference in Nonlinear Vector Fields
Let us now consider the general case: An IVP with nonlinear vector field
\[\dot{y}(t)=f(y(t),t),\quad t\in[0,T],\qquad y(0)=y_{0}. \tag{33}\]
As established in Section 2, the corresponding state estimation problem is
\[Y_{0} \sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right), \tag{34a}\] \[Y_{n+1}\mid Y_{n} \sim\mathcal{N}\left(\Phi_{n}Y_{n},Q_{n}\right),\] (34b) \[Z_{n}\mid Y_{n} \sim\delta\left(E_{1}Y_{n}-f\left(E_{0}Y_{n},t_{n}\right)\right), \tag{34c}\]
with temporal discretization \(\mathbb{T}=\left\{t_{n}\right\}_{n=1}^{N}\subset[0,T]\) and zero data \(Z_{n}=0\) for all \(n=1,\ldots,N\). In this section, we describe a parallel-in-time algorithm for solving this state estimation problem: the _iterated extended Kalman smoother_ (IEKS).
#### 3.2.1 Globally Linearizing the State-Space Model
To make inference tractable, we will linearize the whole state-space model along a reference trajectory. And since the observation model (specified in Equation (34c)) is the only nonlinear part of the state-space model, it is the only part that requires linearization. In this paper, we only consider linearization with a first-order Taylor expansion, but other methods are possible; see Remarks 4 and 5.
For any time-point \(t_{n}\in\mathbb{T}\), we approximate the nonlinear observation model
\[Z_{n}\mid Y_{n}\sim\delta\left(E_{1}Y_{n}-f\left(E_{0}Y_{n},t_{n}\right)\right) \tag{35}\]
with an affine observation model by performing a first-order Taylor series expansion around a linearization point \(\eta_{n}\in\mathbb{R}^{d(\nu+1)}\). We obtain the affine model
\[Z_{n}\mid Y_{n}\sim\delta\left(H_{n}Y_{n}-d_{n}\right), \tag{36}\]
with \(H_{n}\) and \(d_{n}\) defined as
\[H_{n} \coloneqq E_{1}-F_{y}(E_{0}\eta_{n},t_{n})E_{0}, \tag{37a}\] \[d_{n} \coloneqq f(E_{0}\eta_{n},t_{n})-F_{y}(E_{0}\eta_{n},t_{n})E_{0}\eta_{n}, \tag{37b}\]
where \(F_{y}\) denotes the Jacobian of \(f\) with respect to \(y\).
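In code, this linearization amounts to one Jacobian evaluation of the vector field per time point; a minimal JAX sketch of Equation (37) is given below, with the projection matrices \(E_{0}\), \(E_{1}\) and the linearization point passed in as dense arrays. Names are illustrative.

```python
# First-order Taylor linearization of the ODE information model, Eq. (37).
import jax
import jax.numpy as jnp

def linearize_observation(f, t_n, eta_n, E0, E1):
    """Return (H_n, d_n) such that Eq. (35) is approximated by delta(H_n Y - d_n)."""
    y_lin = E0 @ eta_n                                  # projected linearization point
    F_y = jax.jacfwd(lambda y: f(y, t_n))(y_lin)        # Jacobian of the vector field
    H_n = E1 - F_y @ E0
    d_n = f(y_lin, t_n) - F_y @ y_lin
    return H_n, d_n
```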
In the IEKS, this linearization is performed _globally_ on all time steps simultaneously along a trajectory of linearization points \(\left\{\eta_{n}\right\}_{n=1}^{N}\subset\mathbb{R}^{d(\nu+1)}\). We obtain the following linearized inference problem:
\[Y_{0} \sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right), \tag{38a}\] \[Y_{n+1}\mid Y_{n} \sim\mathcal{N}\left(\Phi_{n}Y_{n},Q_{n}\right),\] (38b) \[Z_{n}\mid Y_{n} \sim\delta\left(H_{n}Y_{n}-d_{n}\right), \tag{38c}\]
with zero data \(Z_{n}=0\) for all \(n=1,\ldots,N\). This is now a linear state-space model with linear Gaussian observations. It can therefore be solved exactly with the numerically stable, time-parallel Kalman filter and smoother presented in Section 3.1.
**Remark 4** (Linearizing with approximate Jacobians (EK0 & DiagonalEK1)).: _To reduce the computational complexity with respect to the state dimension of the ODE, the vector field can also be linearized with an approximate Jacobian. Established choices include \(F_{y}\approx 0\) and \(F_{y}\approx\operatorname{diag}(\nabla_{y}f)\), which result in probabilistic ODE solvers known as the EK0 and DiagonalEK1, respectively. See Kramer et al. (2022) for more details._
**Remark 5** (Statistical linear regression).: _Statistical linear regression (SLR) is a more general framework for approximating conditional distributions with affine Gaussian distributions, and many well-established filters can be understood as special cases of SLR. This includes notably the Taylor series expansion used in the EKF/EKS, but also sigma-point methods such as the unscented Kalman filter and smoother (Julier et al., 2000; Julier and Uhlmann, 2004; Sarkka, 2008), and more. For more information on SLR-based filters and smoothers refer to Sarkka and Svensson (2023, Chapter 9)._
#### 3.2.2 Iterated Extended Kalman Smoothing
The IEKS (Bell, 1994; Sarkka and Svensson, 2023) is an approximate Gaussian inference method for nonlinear state-space models, which iterates between linearizing the state-space model along the current best-guess trajectory and computing a new state trajectory estimate by solving the linearized model exactly. It can equivalently also be seen as an efficient implementation of the Gauss-Newton method, applied to maximizing the posterior density of the state trajectory (Bell, 1994). This also implies that the IEKS computes not just some Gaussian estimate, but the _maximum a posteriori_ (MAP) estimate of the state trajectory. In the context of probabilistic numerical ODE solvers, the IEKS has been previously explored by Tronarp et al. (2021), and the resulting MAP estimate has been shown to satisfy polynomial convergence rates to the true ODE solution. Here, we formulate an IEKS-based probabilistic ODE solver in a parallel-in-time manner, by exploiting the time-parallel formulation of the Kalman filter and smoother from Section 3.1.
The IEKS is an iterative algorithm, which starts with an initial guess of the state trajectory and then iterates between the following two steps:
1. _Linearization step:_ Linearize the state-space model along the current best-guess trajectory. This can be done independently for each time step and is therefore fully parallelizable.
2. _Linear smoothing step:_ Solve the resulting linear state-space model exactly with the time-parallel Kalman filter and smoother from Section 3.1.
The algorithm terminates when a stopping criterion is met, for example when the change in the MAP estimate between two iterations is sufficiently small. A pseudo-code summary of the method is provided in Algorithm 2.
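In code, the iteration can be organised as follows. This is a schematic sketch only: `linearize_observation` is the helper from above, `parallel_kalman_smoother` stands in for the time-parallel filter/smoother of Section 3.1 (not an actual API), and the simple convergence test shown here is refined further below.

```python
import jax
import jax.numpy as jnp

def ieks_solve(f, E0, E1, ts, Phis, Qs, mu0, Sigma0, eta_init,
               max_iters=100, rtol=1e-13):
    """Iterated extended Kalman smoother for the state-space model (34) (sketch)."""
    etas = eta_init                                   # (N+1, D) initial trajectory guess
    for _ in range(max_iters):
        # 1. Linearization step: independent per time point, hence vmap-parallel.
        Hs, ds = jax.vmap(
            lambda eta, t: linearize_observation(f, E0, E1, eta, t)
        )(etas[1:], ts[1:])
        # 2. Linear smoothing step: exact inference in the linearized model (38),
        #    using the time-parallel Kalman filter/smoother (placeholder name).
        means, covs = parallel_kalman_smoother(mu0, Sigma0, Phis, Qs, Hs, ds)
        # Simple stopping check on the trajectory change; the objective-based
        # criterion of Eq. (39) is sketched below.
        if jnp.linalg.norm(means - etas) < rtol * jnp.linalg.norm(etas):
            return means, covs
        etas = means
    return means, covs
```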
As with the sequential filtering-based probabilistic ODE solvers presented in Section 2, the mean and covariance of the initial distribution \(Y(0)\sim\mathcal{N}\left(\mu_{0},\Sigma_{0}\right)\) are chosen such that \(\mu_{0}\) corresponds to the exact solution of the ODE and its derivatives and \(\Sigma_{0}\) is set to zero; see also Kramer and Hennig (2020). The initial state trajectory estimate \(\{\eta_{n}\}_{n=0}^{N}\) is chosen to be constant, that is, \(\eta_{n}=\mu_{0}\) for all \(n=0,\ldots,N\). Note that since only \(E_{0}\eta_{n}\) is required to perform the linearization, it could equivalently be set to \(\eta_{n}=[y_{0},0,\ldots,0]\) for all \(n\).
Finally, the stopping criterion should be chosen such that the algorithm terminates when the MAP estimate of the state trajectory has converged. In our experiments, we chose a combination of two criteria: (i) the change in the state trajectory estimate between two iterations is sufficiently small, or (ii) the change in the _objective value_ between two iterations is sufficiently small, where the objective value is defined as the negative log-density of the
state trajectory:
\[\mathcal{V}(\eta_{0:N})=\frac{1}{2}\sum_{n=1}^{N}\|\eta_{n}-\Phi(h_{n})\eta_{n-1}\|_{Q^{-1}(h_{n})}^{2}\,. \tag{39}\]
In our experiments, we use a relative tolerance of \(10^{-13}\) for the first criterion and absolute and relative tolerances of \(10^{-9}\) and \(10^{-6}\) for the second criterion, respectively.
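A corresponding sketch of the objective in Eq. (39) and of the combined stopping criterion is given below; `Phis` and `Qinvs` denote the stacked transition matrices and inverse process-noise covariances of the prior, and the exact way the absolute and relative tolerances are combined is one possible choice, not the only one.

```python
import jax.numpy as jnp

def objective(etas, Phis, Qinvs):
    """Negative log-density of the state trajectory under the prior, cf. Eq. (39)."""
    diffs = etas[1:] - jnp.einsum("nij,nj->ni", Phis, etas[:-1])
    return 0.5 * jnp.einsum("ni,nij,nj->", diffs, Qinvs, diffs)

def converged(etas_old, etas_new, Phis, Qinvs,
              rtol_traj=1e-13, atol_obj=1e-9, rtol_obj=1e-6):
    """Stop when either the trajectory or the objective has stopped changing."""
    traj_change = jnp.linalg.norm(etas_new - etas_old) / jnp.linalg.norm(etas_old)
    obj_old = objective(etas_old, Phis, Qinvs)
    obj_new = objective(etas_new, Phis, Qinvs)
    obj_change = jnp.abs(obj_new - obj_old)
    return (traj_change < rtol_traj) | (obj_change < atol_obj + rtol_obj * jnp.abs(obj_old))
```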
### Computational Complexity of the Time-Parallel Probabilistic ODE Solver
The standard, sequential formulation of a Kalman smoother has a computational cost that scales linearly in the number of data points \(N\), of the form
\[C_{\text{KS}}^{s}=N\cdot\left(C_{\text{predict}}^{s}+C_{\text{update}}^{s}+C_{ \text{smooth}}^{s}\right), \tag{40}\]
where \(C_{\text{predict}}^{s},C_{\text{update}}^{s},C_{\text{smooth}}^{s}\) are the costs of the sequential formulation of the predict, update, and smoothing steps, respectively. For nonlinear models, the extended Kalman filter/smoother linearizes the observation model sequentially at each prediction mean. With \(C_{\text{linearize}}\) the cost of linearization, which requires evaluating the vector field and computing its Jacobian, the cost for a sequential extended Kalman smoother becomes
\[C_{\text{EKS}}^{s}=N\cdot\left(C_{\text{predict}}^{s}+C_{\text{linearize}}+C _{\text{update}}^{s}+C_{\text{smooth}}^{s}\right). \tag{41}\]
The proposed IEKS differs in two ways: (i) the prefix-sum formulation of the Kalman smoother enables a time-parallel inference with logarithmic complexity, and (ii) the linearization is not done locally in a sequential manner but can be performed globally, fully in parallel. Assuming a large enough number of processors / threads, the span cost of a single parallelized IEKS iteration becomes
\[C_{\text{EKS}}^{p}=C_{\text{linearize}}+\log(N)\cdot\left(C_{\text{filter}}^ {p}+C_{\text{smooth}}^{p}\right), \tag{42}\]
where \(C_{\text{filter}}^{p},C_{\text{smooth}}^{p}\) are the costs of the associative filtering and smoothing operation as used in the _parallel_ Kalman filter formulation, respectively. They differ from the costs of the sequential formulation in a constant manner.
## 4 Experiments
This section investigates the utility and performance of the proposed parallel IEKS-based ODE filter on a range of experiments. It is structured as follows: First, Section 4.1 investigates the runtime of a single IEKS step in its sequential and parallel formulation, over a range of grid sizes and for different GPUs. Section 4.2 then compares the performance of both ODE solver implementations on multiple test problems. Finally, Section 4.3 benchmarks the proposed method against other well-established ODE solvers, including both classic and probabilistic numerical methods.
**Implementation.** All experiments are implemented in the Python programming language with the JAX software framework (Bradbury et al., 2018). Reference solutions are computed with SciPy (Virtanen et al., 2020) and Diffrax (Kidger, 2021). Unless specified otherwise, experiments are run on an NVIDIA V100 GPU. Code for the implementation and experiments is publicly available on GitHub.1
Footnote 1: [https://github.com/nathanaelbosch/parallel-in-time-ode-filters](https://github.com/nathanaelbosch/parallel-in-time-ode-filters)
### Runtime of a Single Extended Kalman Smoother Step
We first evaluate the runtime of the proposed method for only a single IEKS iteration, which consists of one linearization of the model along a trajectory and one extended Kalman smoother step. To this end, we consider the logistic ordinary differential equation
\[\dot{y}(t)=y(t)\left(1-y(t)\right),\qquad t\in[0,10],\qquad y(0)=0.01; \tag{43}\]
though, since here we only investigate the runtime of a single IEKS iteration and thus do not actually solve the problem by iteratively re-linearizing, the precise choice of ODE is not very important. We then compare the runtime of the sequential and parallel EKS formulations for different grid sizes, resulting from time discretizations with step sizes \(h=2^{0},2^{-1},\ldots,2^{-14}\), and for multiple GPUs with varying numbers of CUDA cores. Figure 1 shows the results.
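A small sketch of the corresponding experimental setup (the grid construction and the vector field; the timing itself, which simply wraps one linearize-and-smooth step, is omitted):

```python
import jax.numpy as jnp

def logistic(y, t):
    # Logistic ODE from Eq. (43): dy/dt = y * (1 - y).
    return y * (1.0 - y)

t0, tmax, y0 = 0.0, 10.0, jnp.array([0.01])

# Grids induced by step sizes h = 2^0, 2^-1, ..., 2^-14 on [0, 10].
step_sizes = [2.0 ** (-k) for k in range(15)]
grids = [jnp.linspace(t0, tmax, int(round(tmax / h)) + 1) for h in step_sizes]
grid_sizes = [len(ts) for ts in grids]    # from 11 up to ~1.6e5 points
```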
First, we observe the expected logarithmic scaling of the parallel EKS with respect to the grid size, for grids of size up to around \(5\cdot 10^{3}\) (Figure 1(a)). For larger grid sizes the runtime of the parallel EKS starts to grow linearly. This behaviour is expected: the NVIDIA V100 GPU used in this experiment has only 5120 CUDA cores, so for larger grids the filter and smoother pass cannot be fully parallelized anymore and additional grid points need to be processed sequentially. Nevertheless, the overall runtime of the parallel EKS is still significantly lower than the runtime of the sequential EKS throughout all grid sizes.
Figure 1(b) shows runtimes for different GPUs with varying numbers of CUDA cores for a grid of size \(N=81920\). We observe that both the sequential EKS and the classic Dopri5 and Kvaerno5 solvers (Dormand and Prince, 1980; Shampine, 1986; Kvaerno, 2004) do not show a benefit from the improved GPU hardware. This is expected as these methods do not explicitly aim to leverage parallelization. On the other hand, the runtime of the parallel EKS decreases as the number of CUDA cores increases, and we observe speed-ups of up to an order of magnitude by using a different GPU. Recall once more that these evaluations only considered a single IEKS step, so they do not show the runtimes for computing the actual probabilistic numerical ODE solutions--these will be the subject of interest in the next sections.
### The Parallel-IEKS ODE Filter Compared to its Sequential Version
In this experiment we compare the proposed parallel-in-time ODE solver to a probabilistic solver based on the sequential implementation of the IEKS. In addition to the logistic ODE as introduced in Equation (43), we consider two more problems: An initial value problem based on the rigid body dynamics (Hairer et al., 1993)
\[\dot{y}(t)=\begin{bmatrix}-2y_{2}(t)y_{3}(t)\\ 1.25y_{1}(t)y_{3}(t)\\ -0.5y_{1}(t)y_{2}(t)\end{bmatrix},\qquad t\in[0,20],\qquad y(0)=\begin{bmatrix} 1\\ 0\\ 0.9\end{bmatrix}, \tag{44}\]
and the Van der Pol oscillator (Van der Pol, 1920)
\[\dot{y}(t)=\begin{bmatrix}y_{2}(t)\\ \mu\left(\left(1-y_{1}(t)^{2}\right)y_{2}(t)-y_{1}(t)\right)\end{bmatrix}, \qquad t\in[0,6.3],\qquad y(0)=\begin{bmatrix}2\\ 0\end{bmatrix}, \tag{45}\]
here in a non-stiff version with parameter \(\mu=1\).
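For reference, the two additional test problems read as follows in code (plain JAX vector fields with the initial values and time spans of Eqs. (44) and (45)):

```python
import jax.numpy as jnp

def rigid_body(y, t):
    # Rigid body dynamics, Eq. (44).
    return jnp.array([-2.0 * y[1] * y[2],
                      1.25 * y[0] * y[2],
                      -0.5 * y[0] * y[1]])

def van_der_pol(y, t, mu=1.0):
    # Van der Pol oscillator, Eq. (45), non-stiff version with mu = 1.
    return jnp.array([y[1],
                      mu * ((1.0 - y[0] ** 2) * y[1] - y[0])])

problems = {
    "rigid_body":  (rigid_body,  jnp.array([1.0, 0.0, 0.9]), (0.0, 20.0)),
    "van_der_pol": (van_der_pol, jnp.array([2.0, 0.0]),      (0.0, 6.3)),
}
```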
We first solve the three problems with the parallel IEKS on grids of sizes 30, 200, and 100, respectively for the logistic, rigid body, and Van der Pol problem, with a two-times integrated Wiener process prior. Reference solutions are computed with diffrax's Kvaerno5
Figure 1: _The parallel EKS shows logarithmic scaling and benefits from GPU improvements_. In comparison, the sequential EKS and the classic Dopri5 and Kvaerno5 solvers show the expected linear runtime complexity (left). They also do not show relevant changes in runtime for GPUs with higher numbers of CUDA cores (right).
solver using adaptive steps and very low tolerances \(\tau_{\{\text{abs},\text{rel}\}}=10^{-12}\)(Kidger, 2021; Kvaerno, 2004). Figure 2 shows the resulting solution trajectories, together with numerical errors and error estimates. For these grid sizes, the parallel IEKS computes accurate solutions on all three problems. Regarding calibration, the posterior appears underconfident for the logistic and Van der Pol problems as it overestimates the numerical error by more than one order of magnitude. This is likely not due to the proposed method itself as underconfidence of ODE filters in low-error regimes has been previously observed (Bosch et al., 2021). For the rigid body problem, the posterior appears reasonably confident and the error estimate is of similar magnitude as the numerical error.
Next, we investigate the performance of the parallel IEKS and compare it to its sequential implementation. We solve the three problems with the parallel and sequential IEKS on a range of grid sizes, with both a one- and two-times integrated Wiener process prior. Reference solutions are computed with diffrax's Kvaerno5 solver using adaptive steps and very low tolerances (\(\tau_{\text{abs}}=10^{-16},\tau_{\text{rel}}=10^{-13}\)). Figure 3 shows the achieved root-mean-square errors (RMSE) for different grid sizes in a work-precision diagram. As expected, both the parallel and the sequential IEKS always achieve the same error for each problem and grid size, as both versions compute the same quantities and only differ in their implementation. However, the methods differ significantly in actual runtime, as shown in Figure 4. In our experiments on an NVIDIA V100 GPU, the parallel IEKS is always strictly faster than the sequential implementation across all problems, grid sizes, and priors, and we observe speed-ups of multiple orders of magnitude. Thus, when working with a GPU, the parallel IEKS appears to be strictly superior to the sequential IEKS.
Figure 2: Trajectories, errors, and error estimates computed by the parallel-in-time solver. Top row: ODE solution trajectories. Visually, all three test problems seem to be solved accurately. Bottom row: Numerical errors (lines) and error estimates (shaded area). Ideally, for good calibration, the error should be of similar magnitude as the error estimate. The posterior appears underconfident on the logistic and Van der Pol ODEs, and reasonably confident for the rigid body problem.
Figure 4: _Work-precision diagrams for the sequential and parallel IEKS-based ODE solver._ Top row: Runtime in seconds per error (lower-left is better). Bottom row: Speed-up of the parallel over the sequential IEKS (higher is better). Across all problems, grid sizes, and priors, the parallel IEKS outperforms the sequential IEKS.
Figure 3: _The sequential and parallel IEKS compute numerically identical solutions._ For all three problems and all considered grid sizes, the sequential and parallel IEKS achieve (numerically) identical errors. This is expected, as both versions compute the same quantities and only differ in their implementation.
### Benchmarking the Parallel-IEKS ODE Filter
Finally, we compare the proposed method to a range of well-established ODE solvers, including both classic and probabilistic numerical methods: we compare against the implicit Euler method, the Kvaerno3 (KV3) and Kvaerno5 (KV5) solvers (Kvaerno, 2004) provided by Diffrax (Kidger, 2021), as well as the sequential EKS with local linearization, which is one of the currently most popular probabilistic ODE solvers. Note that since the IEKS is considered to be an implicit solver (Tronarp et al., 2021), we only compare to other implicit and semi-implicit methods, and therefore neither include explicit Runge-Kutta methods nor the EKS with zeroth order linearization (also known as EK0) in our comparison. Reference solutions are computed with diffrax's Kvaerno5 solver with adaptive steps and a very low error tolerance setting (\(\tau_{\mathrm{abs}}=10^{-16},\tau_{\mathrm{rel}}=10^{-13}\)).
Figure 5 shows the results as work-precision diagrams. For small grid sizes (low accuracy), the logarithmic time complexity of the parallel IEKS does not seem to matter much and the IEKS is outperformed by the non-iterated EKS. In the particular case of the logistic ODE, it further seems that the MAP estimate differs significantly from the ODE solution and thus the error on coarse grids is high (lower left figure). However, for larger grid sizes (medium-to-high accuracy), the parallel IEKS outperforms both its sequential, non-iterated counterpart, as
Figure 5: _Benchmarking the parallel IEKS against other common numerical ODE solvers._ Top row: Work-precision diagrams showing runtimes per error for a range of different ODE solvers (lower-left is better). Bottom row: Errors per specified grid size (lower-left is better). Per grid size, the closely related EKS and IEKS solvers often coincide; KV5 achieves the lowest error per step as it has the highest order. In terms of runtime, the IEKS outperforms both the EKS and KV5 on medium-to-high accuracy settings due to its logarithmic time complexity.
well as the classic methods. In particular, the parallel IEKS with IWP(2) prior often shows runtimes lower than those of the classic KV5 method, even though it has a lower order of convergence and is an iterative method; see also Figure 6 for runtimes per grid size and for the number of iterations performed by the IEKS. Overall, the logarithmic time complexity of the proposed parallel IEKS appears to be very beneficial for high accuracy settings on GPUs and makes the parallel IEKS a very competitive ODE solver in this comparison.
## 5 Conclusion
In this work, we have developed a _parallel-in-time_ probabilistic numerical ODE solver. The method builds on iterated extended Kalman smoothing to compute the maximum a posteriori estimate of the probabilistic ODE solution, and by using the time-parallel formulation of the IEKS it is able to efficiently leverage modern parallel computer hardware such as GPUs to parallelize its computations. Given enough processors or cores, the proposed algorithm shares the logarithmic cost per time step of the parallel IEKS and the underlying parallel prefix-sum algorithm, as opposed to the linear time complexity of standard, sequentially-operating ODE solvers. We evaluated the performance of the proposed method in a number of experiments, and have seen that the proposed parallel-in-time solver can provide speed-ups of multiple
Figure 6: _Runtimes of the ODE solvers for each grid size, and number of IEKS iterations._ While all sequential solvers demonstrate linear scaling with the number of grid points, the parallel IEKS shows sub-linear scaling up to a certain grid size (top). The number of IEKS iterations until convergence can vary with the grid size and the problem, but it seems that in many cases ~10 iterations suffice (bottom). The sequential methods solve the ODE in one sweep.
orders of magnitude over the sequential IEKS-based solver. We also compared the proposed method to a range of well-established, both probabilistic and classical ODE solvers, and we have shown that the proposed parallel-in-time method is competitive with respect to the state-of-the-art in both accuracy and runtime.
This work opens up a number of interesting avenues for future research in the intersection of probabilistic numerics and parallel-in-time methods. Potential opportunities for improvement include the investigation of other optimization algorithms, such as Levenberg-Marquardt or ADMM, or the use of line search, all of which have been previously proposed for the sequential IEKS. Furthermore, combining the solver with adaptive grid refinement approaches could also significantly improve its performance in practice. A different avenue would be to extend the proposed method to other related differential equation problems for which sequentially-operating probabilistic numerical methods already exist, such as higher-order ODEs, differential-algebraic equations, or boundary value problems. Finally, the improved utilization of GPUs by our parallel-in-time method could be particularly beneficial to applications in the field of machine learning, where GPUs are often required to accelerate the computations of deep neural networks. In summary, the proposed parallel-in-time probabilistic numerical ODE solver not only advances the efficiency of probabilistic numerical ODE solvers, but also paves the way for a range of future research on parallel-in-time probabilistic numerical methods and their application across various scientific domains.
## Acknowledgments
The authors gratefully acknowledge financial support by the German Federal Ministry of Education and Research (BMBF) through Project ADIMEM (FKZ 01IS18052B), and financial support by the European Research Council through ERC StG Action 757275 / PANAMA; the DFG Cluster of Excellence "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645; the German Federal Ministry of Education and Research (BMBF) through the Tubingen AI Center (FKZ: 01IS18039A); and funds from the Ministry of Science, Research and Arts of the State of Baden-Wurttemberg. The authors would like to thank the Research Council of Finland for funding. Filip Tronarp was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Nathanael Bosch. The authors are grateful to Nicholas Kramer for many valuable discussions and to Jonathan Schmidt for feedback on the manuscript.
## Individual Contributions
The original idea for this article came independently from SS and from discussions between FT and NB. The joint project was initiated and coordinated by SS and PH. The methodology was developed by NB in collaboration with AC, FT, PH, and SS. The implementation is primarily due to NB, with help from AC. The experimental evaluation was done by NB with support from FT and PH. The first version of the article was written by NB, after which all authors reviewed the manuscript. |
2308.03518 | Off-the-grid Blind Deconvolution and Demixing | We consider the problem of gridless blind deconvolution and demixing (GB2D)
in scenarios where multiple users communicate messages through multiple unknown
channels, and a single base station (BS) collects their contributions. This
scenario arises in various communication fields, including wireless
communications, the Internet of Things, over-the-air computation, and
integrated sensing and communications. In this setup, each user's message is
convolved with a multi-path channel formed by several scaled and delayed copies
of Dirac spikes. The BS receives a linear combination of the convolved signals,
and the goal is to recover the unknown amplitudes, continuous-indexed delays,
and transmitted waveforms from a compressed vector of measurements at the BS.
However, in the absence of any prior knowledge of the transmitted messages and
channels, GB2D is highly challenging and intractable in general. To address
this issue, we assume that each user's message follows a distinct modulation
scheme living in a known low-dimensional subspace. By exploiting these subspace
assumptions and the sparsity of the multipath channels for different users, we
transform the nonlinear GB2D problem into a matrix tuple recovery problem from
a few linear measurements. To achieve this, we propose a semidefinite
programming optimization that exploits the specific low-dimensional structure
of the matrix tuple to recover the messages and continuous delays of different
communication paths from a single received signal at the BS. Finally, our
numerical experiments show that our proposed method effectively recovers all
transmitted messages and the continuous delay parameters of the channels with a
sufficient number of samples. | Saeed Razavikia, Sajad Daei, Mikael Skoglund, Gabor Fodor, Carlo Fischione | 2023-08-07T12:11:53Z | http://arxiv.org/abs/2308.03518v1 | # Off-the-grid Blind Deconvolution and Demixing
###### Abstract
We consider the problem of gridless blind deconvolution and demixing (GB2D) in scenarios where multiple users communicate messages through multiple unknown channels, and a single base station (BS) collects their contributions. This scenario arises in various communication fields, including wireless communications, the Internet of Things, over-the-air computation, and integrated sensing and communications. In this setup, each user's message is convolved with a multi-path channel formed by several scaled and delayed copies of Dirac spikes. The BS receives a linear combination of the convolved signals, and the goal is to recover the unknown amplitudes, continuous-indexed delays, and transmitted waveforms from a compressed vector of measurements at the BS. However, in the absence of any prior knowledge of the transmitted messages and channels, GB2D is highly challenging and intractable in general. To address this issue, we assume that each user's message follows a distinct modulation scheme living in a known low-dimensional subspace. By exploiting these subspace assumptions and the sparsity of the multipath channels for different users, we transform the nonlinear GB2D problem into a matrix tuple recovery problem from a few linear measurements. To achieve this, we propose a semidefinite programming optimization that exploits the specific low-dimensional structure of the matrix tuple to recover the messages and continuous delays of different communication paths from a single received signal at the BS. Finally, our numerical experiments show that our proposed method effectively recovers all transmitted messages and the continuous delay parameters of the channels with a sufficient number of samples.
Atomic norm minimization, blind channel estimation, blind data recovery, blind deconvolution, blind demixing.
## I Introduction
In the near future, the Internet of Things (IoT) is expected to connect billions of wireless devices, surpassing the capacity of the current fifth-generation (5G) wireless system both technically and economically. One of the primary challenges that 6G, the future wireless communication system, will face is managing the massive number of IoT devices that generate sporadic traffic. As the 6G market grows, this sporadic traffic will significantly increase, and it is generally agreed among communications engineers that the current 5G channel access procedures cannot handle this volume of traffic.
Traditional channel access methods, which rely on classical information and communication theory, require a large number of pilots or training signals to estimate the channel, leading to significant resource waste that does not scale towards IoT requirements. Thus, minimizing the overhead caused by exchanging certain types of training information, such as channel estimation and data slot assignment, is necessary. This is especially critical for communications over dynamic channels, such as millimeter-wave or terahertz, where channel coherence times are short, and the channel state information changes rapidly. In these cases, the assumption of block fading no longer holds. One approach to addressing this issue is to incorporate channel aging effects into the channel estimation process to maximize spectral efficiency (see, e.g., [1]), but this requires knowledge of the channel correlation structure at different times, which might be challenging to obtain in general channel environments. Therefore, for situations where a large number of devices transmit small amounts of data sporadically over dynamic channels, and the channel correlation structure is unknown, it is crucial to avoid transmitting a signal with much longer overhead information than actual data. This raises the question of whether this is feasible.
To facilitate explanation, we consider a scenario where multiple users transmit messages through multiple frequency-selective channels towards a central BS (as described in [2, Eq. 19]). The BS receives a combined signal comprising contributions from all users, which is then processed through a sensing filter (see Fig. 1). The goal is to simultaneously estimate the transmitted messages and channels from the received measurements at the BS, which is a challenging nonlinear problem.
This scenario appears in a variety of applications, including over-the-air computation [3, 4], super-resolution single-molecule imaging [5, 6, 7, 8, 9], multi-user multipath channel estimation [10, 11], blind calibration in multi-channel sampling systems [12, 13], random access [14] and integrated (radar) sensing and communications [15, 16, 17].
### _Related work_
The problem of recovering messages and channels in the model described above falls into the class of blind deconvolution techniques used to solve inverse problems. These techniques have made notable progress in addressing blind deconvolution problems, with a focus on sparse signals consisting of a single user [18, 19, 20, 21, 22]. The conventional method involves assuming that the continuous channel parameters lie on a predefined domain of grids, which can be estimated |
2308.04277 | Topologically protected subradiant cavity polaritons through linewidth
narrowing enabled by dissipationless edge states | Cavity polaritons derived from the strong light-matter interaction at the
quantum level provide a basis for efficient manipulation of quantum states via
cavity field. Polaritons with narrow linewidth and long lifetime are appealing
in applications such as quantum sensing and storage. Here, we propose a
prototypical arrangement to implement a whispering-gallery-mode resonator with
topological mirror moulded by one-dimensional atom array, which allows to boost
the lifetime of cavity polaritons over an order of magnitude. This considerable
enhancement attributes to the coupling of polaritonic states to dissipationless
edge states protected by the topological bandgap of atom array that suppresses
the leakage of cavity modes. When exceeding the width of Rabi splitting,
topological bandgap can further reduce the dissipation from polaritonic states
to bulk states of atom array, giving arise to subradiant cavity polaritons with
extremely sharp linewidth. The resultant Rabi oscillation decays with a rate
even below the free-space decay of a single quantum emitter. Inheriting from
the topologically protected properties of edge states, the subradiance of
cavity polaritons can be preserved in the disordered atom mirror with moderate
perturbations involving the atomic frequency, interaction strengths and
location. Our work opens up a new paradigm of topology-engineered quantum
states with robust quantum coherence for future applications in quantum
computing and network. | Yuwei Lu, Jingfeng Liu, Haoxiang Jiang, Zeyang Liao | 2023-08-08T14:20:35Z | http://arxiv.org/abs/2308.04277v1 | Topologically protected subradiant cavity polaritons through linewidth narrowing enabled by dissipationless edge states
###### Abstract
Cavity polaritons derived from the strong light-matter interaction at the quantum level provide a basis for efficient manipulation of quantum states via the cavity field. Polaritons with narrow linewidth and long lifetime are appealing in applications such as quantum sensing and storage. Here, we propose a prototypical arrangement to implement a whispering-gallery-mode resonator with a topological mirror moulded by a one-dimensional atom array, which allows the lifetime of cavity polaritons to be boosted by over an order of magnitude. This considerable enhancement is attributed to the coupling of polaritonic states to dissipationless edge states protected by the topological bandgap of the atom array, which suppresses the leakage of the cavity modes. When it exceeds the width of the Rabi splitting, the topological bandgap can further reduce the dissipation from polaritonic states to the bulk states of the atom array, giving rise to subradiant cavity polaritons with extremely sharp linewidth. The resultant Rabi oscillation decays at a rate even below the free-space decay of a single quantum emitter. Inheriting the topologically protected properties of the edge states, the subradiance of cavity polaritons is preserved in a disordered atom mirror with moderate perturbations of the atomic frequency, interaction strengths and locations. Our work opens up a new paradigm of topology-engineered quantum states with robust quantum coherence for future applications in quantum computing and networks.
Footnote †: Corresponding Author: [email protected]
## I Introduction
Cavity quantum electrodynamics (QED) constitutes one of the cornerstones of quantum optics, where the coherent exchange of a single photon between a quantum emitter (QE) and a cavity mode, known as Rabi oscillation, can take place in the strong-coupling regime and results in the formation of polaritonic states consisting of entangled atom and photon components [1; 2]. The corresponding bosonic quasiparticles, termed cavity polaritons, offer a scheme for controllable storage and transfer of quantum states and a rich variety of technologies and applications, such as on-chip quantum light sources [3; 4], quantum sensing [5; 6], and scalable quantum computing and quantum information processing [7; 8; 9; 2]. Great effort has been devoted to achieving strong coupling in various QED platforms [10; 11; 12; 13], while less attention has been paid to reducing the linewidth of cavity polaritons [14; 15; 16; 17], which is beneficial for diverse quantum-optics applications [18; 19; 20; 21; 22; 23; 24]. For instance, reducing the linewidth of resonant systems enables the detection of weak signals and yields better measurement sensitivity for precision sensing in experiments [22; 23; 24; 25; 26; 27]. Moreover, the linewidth represents the decay rate, so quantum states with narrower linewidths have longer lifetimes, a feature highly desirable for quantum storage and quantum memory [28; 29; 30; 31; 32]. The lifetime of cavity polaritons is often limited by the quality (\(Q\)) factor of the cavity, since the linewidth of the QE is usually smaller than that of the cavity in many cavity-QED systems [11; 13; 33; 34]. However, a high-\(Q\) cavity in general features a large volume [35; 36; 37] or requires a sophisticated design [38; 39] that is demanding for nanofabrication.
Beyond conventional quantum optics, topological quantum optics has recently emerged as a rapidly growing field for controlling light-matter interaction in many-body quantum systems by exploiting the concept of topology [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. In analogy to photonic topological insulators, the emergence of exotic topological states in quantum systems, characterized by localized edge states and interface states, demonstrates intriguing optical responses and has motivated the development of functional quantum devices that are robust against structural disorder and impurities, such as topological single-photon circulators [51], topologically protected qubits [42; 45; 52], unconventional photon transport [47; 49; 53], and fault-tolerant topological quantum computing [54; 55; 56], to mention a few. Among these topological quantum systems, atom arrays can serve as a versatile platform for topological light manipulation, with functionalities beyond those of a classical mirror that merely reflects light [42; 44; 57; 58]. In particular, topological quantum states can become subradiant through collective interference [42; 44; 52], so that their radiative loss is strongly suppressed and can be significantly smaller than the free-space decay rate of a single atom. This unique feature of atom arrays, combined with topological protection, provides extra degrees of freedom to manipulate quantum states.
Triggered by the prospect of manipulating cavity polaritons through topological effects, we propose a topological-edge-state-engineered cavity QED system consisting of a whispering-gallery-mode (WGM) resonator coupled to a one-dimensional (1D) topological atom mirror with long-range hoppings mediated by a waveguide. With sufficiently strong atom-waveguide coupling, the edge states become dissipationless through a topological phase transition [46]. By virtue of the exponential localization and topological protection of the dissipationless edge states, this simple configuration enables unprecedented linewidth narrowing and decay suppression of polaritonic states with a small atom array. By analyzing the energy spectrum and spectral properties of the composed system, we predict that typically a dozen atoms are adequate to produce subradiance for cavity polaritons, and that the resultant subnatural linewidth can be experimentally evidenced from either the reflection spectrum of the waveguide or the fluorescence of the QE. Our scheme can provide a viable approach to realizing the long-time storage of quantum states in QED systems with cavities of moderate \(Q\) factor and to exploring the topological manipulation of quantum states on integrated optoelectronic platforms.
## II Results and Discussion
### Model and Theory
The system under investigation is depicted in Fig. 1 and comprises a hybrid cavity QED system based on a WGM ring resonator and a waveguide QED system [59]. The resonator supports a series of WGM resonances, but only a pair of degenerate clockwise (CW) and counterclockwise (CCW) modes with the same WGM order is considered. This simplification is reasonable for a realistic WGM resonator operated in the visible and near-infrared ranges, where the linewidth of the QE can be much smaller than the frequency spacing between adjacent WGM resonances [60; 61; 20; 33]. A QE is embedded inside the resonator, which couples to a waveguide with a topological atom mirror at the right end. The nearest-neighbor interactions between the topological atoms alternate to form a 1D diatomic chain mimicking the Su-Schrieffer-Heeger (SSH) model [62; 40], in addition to the long-range interactions mediated by the waveguide. An extended cascaded quantum master equation is derived in Appendix A and employed to describe the quantum dynamics of the composed system (see also Refs. [19; 59] for details)
\[\dot{\rho}=-i[H,\rho]+\mathcal{D}[\rho] \tag{1}\]
with Lindblad operator
Figure 1: Schematic of a whispering-gallery-mode (WGM) ring cavity coupled to a quantum emitter (QE) and a waveguide with a one-dimensional topological atom mirror at the right end. \(x_{j}\) indicates the location of the \(j\)th element. The staggered hoppings between nearest-neighbor sites in topological atom mirror simulate the Su-Schrieffer-Heeger (SSH) chain that supports the topological edge states. The pair of sites with stronger coupling defines a unit cell, as the pink translucent box indicates (\(J_{+}>J_{-}\)). The inset shows the real energy spectrum of topological atom mirror versus atom spacing \(d\) for nine atoms. Vertical dashed line indicates the dissipationless topological edge state at \(d=3\lambda_{0}/4\) under investigation. Other parameters are \(J_{0}=8\Gamma\), \(\phi_{1}=0\) and \(\phi=0.3\pi\). \(a_{in}\) and \(a_{out}\), \(b_{out}\) stand for the input and output fields for planewave excitation, respectively.
\[\begin{split}\mathcal{D}[\rho]&=\frac{\kappa_{R}}{2}\mathcal{L}\left[c_{ccw}\right]\rho+\frac{\kappa_{L}}{2}\mathcal{L}\left[c_{cw}\right]\rho+\sum_{j=0}^{N}\frac{\gamma_{0}}{2}\mathcal{L}\left[\sigma_{-}^{(j)}\right]\rho+\sum_{\lambda=R,L}\sum_{j=1}^{N}\frac{\gamma_{\lambda}}{2}\mathcal{L}\left[\sigma_{-}^{(j)}\right]\rho\\ &\qquad+\sum_{\lambda=R,L}\sum_{\begin{subarray}{c}j,l\geq 1\\ k_{\lambda}x_{j}>k_{\lambda}x_{l}\end{subarray}}^{N}\gamma_{\lambda}\left(e^{ik_{\lambda}(x_{j}-x_{l})}\left[\sigma_{-}^{(l)}\rho,\sigma_{+}^{(j)}\right]+e^{-ik_{\lambda}(x_{j}-x_{l})}\left[\sigma_{-}^{(j)},\rho\sigma_{+}^{(l)}\right]\right)\\ &\qquad+\sum_{j=1}^{N}\sqrt{\kappa_{R}\gamma_{R}}\left(e^{ik_{R}x_{j}}\left[c_{ccw}\rho,\sigma_{+}^{(j)}\right]+e^{-ik_{R}x_{j}}\left[\sigma_{-}^{(j)},\rho c_{ccw}^{\dagger}\right]\right)\\ &\qquad+\sum_{j=1}^{N}\sqrt{\kappa_{L}\gamma_{L}}\left(e^{-ik_{L}x_{j}}\left[c_{cw}\rho,\sigma_{+}^{(j)}\right]+e^{ik_{L}x_{j}}\left[\sigma_{-}^{(j)},\rho c_{cw}^{\dagger}\right]\right)\end{split} \tag{2}\]
where the first line introduces the dissipation of the individual components, the second line describes the waveguide-mediated interaction between atoms, and the third (fourth) line accounts for the chiral coupling between the atoms and the CCW (CW) mode through the right-propagating (left-propagating) guided mode of the waveguide. \(\mathcal{L}[O]\rho=2O\rho O^{\dagger}-O^{\dagger}O\rho-\rho O^{\dagger}O\) is the Liouvillian superoperator for the dissipation of operator \(O\). \(c_{ccw}\) (\(c_{cw}\)) is the bosonic annihilation operator of the CCW (CW) mode, while \(\kappa_{R}\) (\(\kappa_{L}\)) is the corresponding decay rate stemming from the evanescent coupling to the waveguide. The intrinsic decay of the cavity modes is omitted in consideration of the high-\(Q\) feature of WGM resonators. \(k_{R}=-k_{L}=k_{0}\) is the wave vector of the photons. \(\sigma_{-}^{(j)}\) is the lowering operator of the \(j\)th atom located at \(x_{j}\); in particular, \(\sigma_{-}^{(0)}\) represents the atom inside the cavity, which we refer to as the QE hereafter to distinguish it from the atoms in the mirror. \(N\) and \(x_{0}\) denote the number of atoms and the location of the waveguide-cavity junction, respectively. \(\gamma_{0}\) and \(\gamma_{\lambda}\) (\(\lambda=R,L\)) stand for the free-space decay and the waveguide-induced decay of the atoms, respectively. Throughout the paper, we consider symmetric coupling of the atoms (\(\gamma_{R}=\gamma_{L}=\Gamma\)) and cavity modes (\(\kappa_{R}=\kappa_{L}=\kappa\)) to the two chiral guided modes of the waveguide. Meanwhile, the coherent interaction between atoms can be tailored by adjusting the atom spacing [63; 64]. Without loss of generality, we focus on the case of equal atom spacing, i.e., \(x_{j+1}-x_{j}=d\) for \(j\geq 1\). The total Hamiltonian reads
\[H=H_{0}+H_{I}+H_{\text{topo}} \tag{3}\]
with the free Hamiltonian
\[H_{0}=\omega_{c}c_{ccw}^{\dagger}c_{ccw}+\omega_{c}c_{cw}^{\dagger}c_{cw}+\omega_{c}\sigma_{+}^{(0)}\sigma_{-}^{(0)}+\sum_{j=1}^{N}\omega_{j}\sigma_{+}^{(j)}\sigma_{-}^{(j)} \tag{4}\]
and the interaction Hamiltonian for cavity QED system
\[H_{I}=g\left(c_{ccw}^{\dagger}\sigma_{-}^{(0)}+\sigma_{+}^{(0)}c_{ccw}\right) +g\left(c_{cw}^{\dagger}\sigma_{-}^{(0)}+\sigma_{+}^{(0)}c_{cw}\right) \tag{5}\]
and the Hamiltonian describing the coherent coupling between adjacent atoms
\[H_{\text{topo}}=\sum_{j=1}^{N-1}J_{j}\left(\sigma_{+}^{(j)}\sigma_{-}^{(j+1)} +\sigma_{+}^{(j+1)}\sigma_{-}^{(j)}\right) \tag{6}\]
where \(\omega_{c}\) is the frequency of the cavity modes, which resonantly couple to the QE with strength \(g\). \(\omega_{j}\) is the transition frequency of the \(j\)th atom, and we assume \(\omega_{j}=\omega_{c}\) unless noted otherwise. The staggered hoppings \(J_{j}=J_{-}\) (\(J_{+}\)) for an odd (even) \(j\) result in dimerized interactions between the atoms (see the schematic presented in Fig. 1). Explicitly, the staggered hoppings can be written as \(J_{\pm}=J_{0}[1\pm\cos(\phi)]\), with \(J_{0}\) and \(\phi\) being the interaction strength and the tunable dimerization parameter that control the bandgap and the localization of the edge states, respectively. In the absence of dimerized interactions (\(J_{0}=0\)), the band structure of the atom mirror is topologically trivial; it is centrosymmetric with respect to \(d=\lambda_{0}/2\) and plotted in the inset of Fig. 1. The band structure is modified by the dimerized interactions and gives rise to localized edge states in the strong topological regime with \(J_{0}\gg\gamma_{0}\), and it exhibits a periodicity of \(\lambda_{0}=2\pi/k_{0}\) in \(d\). The inset of Fig. 1 also plots the band structure of the topological atom mirror with an odd number of sites and \(J_{0}=8\Gamma\), which shows that a single edge state survives and is isolated from the bulk states due to the presence of an energy gap. It also shows that the edge state is exactly protected from the waveguide-mediated interaction for two atom separations, \(d=\lambda_{0}/4\) and \(3\lambda_{0}/4\) (indicated by the vertical dashed line in the inset of Fig. 1), where the coupling between topological atoms is fully dispersive [63; 64] but no energy shift is observed. This protection stems from the chiral symmetry of the SSH chain; however, the topological phases of \(d=\lambda_{0}/4\) and \(3\lambda_{0}/4\) are distinct [46]: the former is dissipative while the latter is dissipationless. A brief discussion of the topological phase transition can be found in Appendix C. Hereafter, atom mirrors with and without the dimerized interactions are called the topological and trivial atom mirrors, respectively; their interaction with the polaritonic states of the strong-coupling cavity QED system yields the topological and non-topological cavity polaritons. In the following study, we focus on the case of \(d=3\lambda_{0}/4\), where the dissipationless edge state can produce prominent anisotropic scattering of photons [47]. The coupling of the cavity QED system to the dissipationless edge state can suppress the cavity dissipation and results in significant linewidth narrowing of cavity polaritons in both the weak- and strong-coupling regimes.
We consider a single excitation in the composed system, where the subradiant single-photon states hold great promise for applications related to quantum memory and quantum information storage [28; 30]. To better understand how the topological edge state affects the quantum dynamics and photon transport, we derive the effective Hamiltonian from Eqs. (1)-(6) under the open boundary condition for the atom mirror, which is given by
\[H_{\text{eff}}=H_{0}^{n}+H_{I}+H_{\text{top}}\,+H_{\text{vp}} \tag{7}\]
with the non-Hermitian free Hamiltonian
\[H_{0}^{n} =\left(\omega_{c}-i\frac{\kappa}{2}\right)c_{ccw}^{\dagger}c_{ ccw}+\left(\omega_{c}-i\frac{\kappa}{2}\right)c_{cw}^{\dagger}c_{cw}\] \[+\left(\omega_{c}-i\frac{\gamma_{0}}{2}\right)\sigma_{+}^{(0)} \sigma_{-}^{(0)}+\sum_{j=1}^{N}\left[\omega_{c}-i\left(\Gamma+\frac{\gamma_{0} }{2}\right)\right]\sigma_{+}^{(j)}\sigma_{-}^{(j)} \tag{8}\]
and the virtual-photon interaction Hamiltonian accounting for the waveguide-mediated long-range hoppings
\[H_{\text{vp}} =-i\sum_{\begin{subarray}{c}j,l=1\\ j\neq l\end{subarray}}^{N}\Gamma e^{i|\phi_{j}-\phi_{l}|}\left(\sigma_{+}^{(j)}\sigma_{-}^{(l)}+\sigma_{+}^{(l)}\sigma_{-}^{(j)}\right) \tag{9}\] \[-i\sum_{j=1}^{N}\sqrt{\kappa\Gamma}e^{i\phi_{j}}\left(\sigma_{+}^{(j)}c_{ccw}+c_{cw}^{\dagger}\sigma_{-}^{(j)}\right)\]
where the first and second lines characterize the non-local interactions between atoms and between the cavity modes and the atoms, respectively. \(\phi_{j}=k_{0}x+(j-1)\varphi\) is the effective phase of light propagating from the waveguide-cavity junction to the \(j\)th atom, where \(\varphi=k_{0}d\). Due to the open boundaries of the system, we directly diagonalize \(H_{\text{eff}}\) to obtain the single-photon band structure and the corresponding eigenstates. For the case of \(N=31\) atoms, Fig. 2(a) displays the probability distributions of all eigenstates, which is indexed as \(m=1,2,\ldots,34\) by increasingly sorting the decay rates (i.e., \(-\text{Im}[E]\), the imaginary parts of eigenenergies \(E\)) versus system components, including the QE and two cavity modes of cavity QED system and the dimer cells of topological atom mirror. A remarkable feature is that the probability presents a substantial atom content for
most of the eigenstates, while it concentrates in the cavity QED system for the eigenstates labelled \(m=1,2\) and \(32-34\), i.e., the first two eigenstates with the smallest decay rates and the last three with the fastest decay. The eigenenergies shown in Fig. 2(b) reveal that the first two eigenstates are essentially the same as the cavity polaritons of the bare cavity QED system (i.e., without the atom mirror), but their decay rates are significantly reduced by over an order of magnitude and are even smaller than the free-space decay rate \(\gamma_{0}\) of the QE; we refer to them as _subradiant cavity polaritons_. On the contrary, the subradiance cannot be generated for non-topological cavity polaritons, whose decay rate is nearly triple that of the topological ones.
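For readers who wish to reproduce this diagnosis, the single-excitation effective Hamiltonian of Eqs. (7)-(9) can be assembled and diagonalized numerically along the following lines. This is a minimal NumPy sketch in the basis \([\mathrm{QE},\,\mathrm{CCW},\,\mathrm{CW},\,\sigma^{(1)},\ldots,\sigma^{(N)}]\) with energies measured from \(\omega_{c}\); the parameter values follow the strong-coupling case quoted in the text, and the indexing conventions (e.g., each ordered atom pair contributing once to the waveguide-mediated coupling) are assumptions of the example rather than part of our actual implementation.

```python
import numpy as np

# Parameters in units of gamma_0 (strong-coupling case of the text).
gamma0, Gamma, kappa, g = 1.0, 5.0, 20.0, 20.0
J0, phi, N = 8.0 * Gamma, 0.3 * np.pi, 31
varphi, phi1 = 1.5 * np.pi, 0.0                  # varphi = k0*d with d = 3*lambda0/4

Jm, Jp = J0 * (1.0 - np.cos(phi)), J0 * (1.0 + np.cos(phi))
phases = phi1 + varphi * np.arange(N)            # phi_j for atoms j = 1, ..., N

D = 3 + N                                        # basis: [QE, CCW, CW, atom_1..atom_N]
H = np.zeros((D, D), dtype=complex)
H[0, 0] = -0.5j * gamma0                         # QE, Eq. (8)
H[1, 1] = H[2, 2] = -0.5j * kappa                # CCW and CW modes
for j in range(N):
    H[3 + j, 3 + j] = -1j * (Gamma + 0.5 * gamma0)
H[0, 1] = H[1, 0] = H[0, 2] = H[2, 0] = g        # QE-cavity coupling, Eq. (5)
for j in range(N - 1):                           # dimerized hoppings, Eq. (6)
    J = Jm if j % 2 == 0 else Jp                 # J_- for odd (1-based) j, J_+ for even j
    H[3 + j, 4 + j] = H[4 + j, 3 + j] = J
for j in range(N):                               # waveguide-mediated terms, Eq. (9)
    for l in range(N):
        if j != l:
            H[3 + j, 3 + l] += -1j * Gamma * np.exp(1j * abs(phases[j] - phases[l]))
    H[3 + j, 1] += -1j * np.sqrt(kappa * Gamma) * np.exp(1j * phases[j])   # atom <- CCW
    H[2, 3 + j] += -1j * np.sqrt(kappa * Gamma) * np.exp(1j * phases[j])   # CW <- atom

E, V = np.linalg.eig(H)                          # complex eigenenergies / eigenstates
decay_rates = -E.imag                            # decay rates -Im[E], as in the text
```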
Fig. 2(c) compares the decay rates of topological and non-topological cavity polaritons versus the number of atoms \(N\), showing that non-topological cavity polaritons have the advantage of slow decay with only a few atoms; however, the decay rate of topological cavity polaritons rapidly drops with increasing \(N\) and becomes smaller than that of non-topological cavity polaritons with around 10 atoms. For \(J_{0}=8\Gamma\), 21 atoms are sufficient to reduce the decay rate of topological cavity polaritons to its minimum, and this minimum value is stable as the atom number increases. In contrast, the decay rate of non-topological cavity polaritons gradually increases beyond an optimal \(N\) due to the accumulated loss in the system. Fig. 2(c) also indicates that there exists an optimal interaction strength \(J_{0}\approx 8\Gamma\), denoted as \(J_{0}^{\rm opt}\), corresponding to the smallest decay rate of topological cavity polaritons, which is about \(\gamma_{0}/2\), half the decay rate of a bare atom. This implies that the reduction of decay is achieved by suppressing the dissipation of the cavity modes, since on resonance the decay rate of cavity polaritons is the average of the atom and photon components. Therefore, cavity polaritons can acquire permanent coherence with a QE whose free-space decay vanishes (i.e., \(\gamma_{0}=0\)). This claim is confirmed by the results presented in the inset of Fig. 2(c), which show that in such a case the decay of topological cavity polaritons can be completely suppressed for different \(\kappa\). It reveals the formation of _bound_ cavity polaritons in a fully open architecture, which is not found with a trivial atom mirror.
Besides the decay rate, the probability distributions of non-topological and topological cavity polaritons are also distinct, as Fig. 2(d) shows. For non-topological cavity polaritons, the probability is uniformly distributed at each cell of atom mirror due to the translational symmetry of homogeneous chain, while localizes at the left boundary for topological cavity polaritons with \(J_{0}=8\Gamma\) and converges to zero after 10 cells. Furthermore, the probability distributions of topological cavity polaritons manifest the behavior of exponential decay from the left boundary, except for the first two cells since the chiral symmetry is broken in our model. These features indicate the formation of edge state in topological atom mirror and its efficient coupling to the cavity QED system, which is the foundation to realize the topology-engineered cavity polaritons. Similar phenomena are also observed for \(J_{0}=10\Gamma\), but the probability distribution is less concentrated and extended closer to the right boundary with a slow decay. The strong delocalization of probability distributions for a large \(J_{0}\) is a consequence of waveguide-mediated long-range hoppings between topological atoms [47, 53, 65]. The delocalization tends to populate all topological atoms and leads to the significant decline of reflection when \(J_{0}\) exceeds a critical value where the probability is extended to the last cell of topological atom mirror, as illustrated in Fig. 6(b) of Appendix C. As a result, the delocalization weakens the ability of topological atom mirror in suppressing the leakage of cavity modes. In this situation, more atoms are required to hinder the extension of probability to the right boundary and reduce the decay of topological cavity polaritons. On the other hand, Fig. 2(c) also shows the increased decay of topological cavity polaritons for a small \(J_{0}\). It is attributed to the coupling of topological cavity polaritons to bulk states and yields \(J_{0}^{\rm opt}\) for the minimum decay, which we will discuss in the later part of the work.
### Linewidth narrowing and the enhanced lifetime of subradiant cavity polaritons
The subradiance of topological cavity polaritons enables the enhancement of quantum coherence, manifested as slow population decay and linewidth narrowing in the spectrum. To investigate the quantum dynamics, we derive the equations of motion from the extended cascaded quantum master equation [Eqs. (1)-(6)]
\[\frac{d}{dt}\tilde{\sigma}_{-}^{(0)}=-\frac{\gamma_{0}}{2}\tilde{\sigma}_{-}^ {(0)}-ig\left(\tilde{c}_{cw}+\tilde{c}_{ccw}\right) \tag{10}\]
\[\frac{d}{dt}\tilde{c}_{ccw}=-\frac{\kappa}{2}\tilde{c}_{ccw}-ig\tilde{\sigma}_ {-}^{(0)} \tag{11}\]
\[\frac{d}{dt}\tilde{c}_{cw}=-\frac{\kappa}{2}\tilde{c}_{ew}-ig\tilde{\sigma}_{ -}^{(0)}-\sqrt{\kappa\Gamma}\sum_{j=1}^{N}\tilde{\sigma}_{-}^{(j)}e^{i\phi_{ j}} \tag{12}\]
\[\frac{d}{dt}\tilde{\sigma}_{-}^{(j)}= -\frac{\gamma_{0}}{2}\tilde{\sigma}_{-}^{(j)}-\sqrt{\kappa\Gamma}e^{i\phi_{j}}\tilde{c}_{ccw}-\Gamma\sum_{l=1}^{N}\tilde{\sigma}_{-}^{(l)}e^{i|\phi_{j}-\phi_{l}|} \tag{13}\] \[-iJ_{j-1}\tilde{\sigma}_{-}^{(j-1)}-iJ_{j}\tilde{\sigma}_{-}^{(j+1)}\]
where the substitution \(\langle O\rangle=\text{Tr}[O\rho]=\tilde{O}e^{-i\omega_{c}t}\) is applied and the single-photon approximation \([\sigma_{+},\sigma_{-}]=\sigma_{z}=-1\) is imposed. Figs. 3(a) and (b) plot the population dynamics \(\left|\tilde{\sigma}_{-}^{(0)}\right|^{2}\) of an initially excited QE with parameters of the weak- and strong-coupling regimes for bare cavity QED system, respectively. By reducing the losses associated with the leakage of cavity modes through the topological atom mirror, a weak-coupling
cavity QED system can enter into the strong-coupling regime, which is evidenced by Rabi oscillation in the population dynamics of both QE and cavity modes and shown in Fig. 3(a). While for a bare cavity QED system already in the strong-coupling regime, Fig. 3(b) shows that the period of Rabi oscillation is almost unchanged after introducing the topological atom mirror, while its decay is strongly suppressed and even slower than a bare QE in the free space. It implies that the coupling strengths of QE-cavity interaction are comparable in two configurations but the linewidth of cavity polaritons is significantly narrowed. As a consequence, the lifetime of cavity polaritons can be prolonged by over an order of magnitude, see the lifetime enhancement \(\tau_{\rm TO}/\tau_{0}\) shown in Figs. 3(b) and (c), where \(\tau_{\rm TO}\) and \(\tau_{0}\) are the lifetimes of topological and non-topological cavity polaritons, respectively. We find that for topological cavity polaritons, \(\tau_{\rm TO}\) allows more than 11 cycles of energy exchange between the QE and the cavity, while the non-topological cavity polaritons cannot accomplish a complete period of Rabi oscillation within \(\tau_{0}\). We also find that \(\tau_{\rm TO}/\tau_{0}\) depends on the choice of \(\varphi\) and there is a narrow window of \(\varphi\) for significant enhancement of lifetime, offering a degree of tuning tolerance to fabrication errors and experimental uncertainties. The maximum enhancement of lifetime, or equivalently, the greatest linewidth narrowing, is found at \(\varphi=3\pi/2\).
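The population dynamics in Fig. 3 follow directly from the linear equations of motion (10)-(13). Reusing the matrix \(H\) and its eigendecomposition \((E,V)\) from the sketch above, the time evolution of an initially excited QE can be evaluated as in the minimal sketch below; the time window and the plotted quantities are illustrative choices.

```python
import numpy as np

# Initial state: only the QE excited; all other amplitudes zero.
s0 = np.zeros(H.shape[0], dtype=complex)
s0[0] = 1.0
c0 = np.linalg.solve(V, s0)                       # expansion coefficients in eigenbasis

times = np.linspace(0.0, 2.0, 400)                # in units of 1/gamma_0
s_t = V @ (np.exp(-1j * np.outer(E, times)) * c0[:, None])   # s(t) = V e^{-iEt} V^{-1} s0
pop_qe = np.abs(s_t[0]) ** 2                      # |sigma_-^(0)(t)|^2, cf. Fig. 3(a),(b)
pop_cavity = np.abs(s_t[1]) ** 2 + np.abs(s_t[2]) ** 2        # CCW + CW populations
```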
To observe the linewidth narrowing associated with the subradiance of topological cavity polaritons, we calculate the spectra of the composed system for two excitation configurations, both experimentally relevant: the reflection and transmission for a left-incident planewave, and the fluorescence of the QE addressed through free space. For the first configuration, the dynamics of the system can be written in the compact form (see Appendix B and also Ref. [67] for a detailed derivation)
\[\frac{d}{dt}\langle\mathbf{s}(t)\rangle=-iV^{-1}EV\langle\mathbf{s}(t)\rangle- P_{in} \tag{14}\]
with
\[\mathbf{s}(t)=\left[\sigma_{-}^{(0)}(t),c_{ccw}(t),c_{cw}(t),\sigma_{-}^{(1)}, \cdots,\sigma_{-}^{(N)}(t)\right]^{T} \tag{15}\]
and
\[P_{in}=\left[0,\sqrt{\kappa},0,\sqrt{\Gamma}e^{i\phi_{1}},\cdots,\sqrt{\Gamma} e^{i\phi_{N}}\right]^{T}a_{in} \tag{16}\]
Figure 3: (a) and (b) Population dynamics of QE and cavity mode with and without the topological atom mirror in the weak- and strong-coupling regimes, respectively. The inset in (b) shows the short-time dynamics. The lifetime of cavity polaritons is defined as the time that the population decays from 1 to \(e^{-1}\). The black dashed line shows the dynamics of initially excited bare QE in the free space. (c) Topology-enhanced lifetime \(\tau_{\rm TO}/\tau_{0}\) versus \(\varphi=k_{0}d\) in the strong coupling. (d) and (e) Reflection and transmission for left-incident planewave (upper panel) and normalized emission spectrum (lower panel) corresponding to the parameters of (a) and (b), respectively. The non-shaded region indicates the topological bandgap. The light gray line in (e) shows the emission spectrum of QE with \(\phi=0.85\pi\). (f) Reflection with topological atom mirror versus \(\phi\). Parameters for weak coupling are \(g=5\gamma_{0}\), \(\kappa=20\gamma_{0}\), \(J_{0}=5\Gamma\), \(\Gamma=5\gamma_{0}\), \(N=31\), \(\phi_{1}=0\) and \(\phi=0.3\pi\). While the strong coupling is \(g=20\gamma_{0}\), \(J_{0}=8\Gamma\) and other parameters remain unchanged. The critical coupling strength for strong coupling is \(g_{c}=\left(\kappa+\gamma_{0}\right)/2\sqrt{2}\)[66]. The subscripts ’TO’ and ’bare’ indicate the spectra with and without the topological atom mirror, respectively.
where \(E\) and \(V\) are the eigenvalues and the corresponding left eigenvectors of \(H_{\text{eff}}\). \(a_{\text{in}}\) is the amplitude of input field. The solution in the frequency domain is given by
\[\mathbf{s}(\Delta)=iV^{-1}(\Delta I-E)^{-1}VP_{in} \tag{17}\]
where \(\Delta=\omega-\omega_{c}\) is the frequency detuning. The output fields of waveguide are given by
\[a_{\text{out}}\,=R_{\text{out}}\,\,\mathbf{s}(\Delta) \tag{18}\]
\[b_{\text{out}}\,=a_{in}e^{i\phi_{N}}+T_{\text{out}}\,\,\mathbf{s}(\Delta) \tag{19}\]
with
\[R_{\text{out}}\,=\left[0,0,\sqrt{\kappa}e^{i\phi_{N}},\sqrt{\Gamma}e^{i(N-1) \varphi},\cdots,\sqrt{\Gamma}e^{i\varphi},\sqrt{\Gamma}\right]^{T} \tag{20}\]
\[T_{\text{out}}\,=\left[0,\sqrt{\kappa}e^{i\phi_{N}},0,\sqrt{\Gamma}e^{i(N-1) \varphi},\cdots,\sqrt{\Gamma}e^{i\varphi},\sqrt{\Gamma}\right]^{T} \tag{21}\]
Subsequently, we can obtain the reflection and transmission spectra as \(R(\Delta)=a_{\text{out}}^{\dagger}(\Delta)a_{\text{out}}(\Delta)/\left|a_{ \text{in}}\right|^{2}\) and \(T(\Delta)=b_{\text{out}}^{\dagger}(\Delta)b_{\text{out}}(\Delta)/\left|a_{ \text{in}}\right|^{2}\), respectively. Eqs.(18)-(21) indicate that in this configuration, the pump photons can interfere with the scattering photons. While for the configuration of fluorescence, only the photons emitted by QE are detected. The emission spectrum of QE is defined as \(S(\omega)=\lim_{t\rightarrow\infty}\text{Re}\left[\int_{0}^{\infty}d\tau \left<\sigma_{+}^{(0)}(t)\sigma_{-}^{(0)}(t+\tau)\right>e^{i\omega\tau}\right]\), where the two-time correlation function \(\left<\sigma_{+}^{(0)}(t)\sigma_{-}^{(0)}(t+\tau)\right>\) can be obtained by using the quantum regression theorem [19, 1]. The equation for two-time correlation functions is as follows:
\[\frac{d}{d\tau}\mathbf{c}(\tau)=-iV^{-1}EV\mathbf{c}(\tau) \tag{22}\]
where \(\mathbf{c}(\tau)=\left[\left<\sigma_{+}^{(0)}(0)\mathbf{s}(\tau)\right>\right] ^{T}\), with \(\mathbf{s}(\tau)\) given in Eq. (15). With initial condition \(c_{0}=[1,0,\cdots,0]^{T}\), we have \(\mathbf{c}(\omega)=iV^{-1}(\omega I-E)^{-1}Vc_{0}\), which yields the solution of \(\left<\sigma_{+}^{(0)}\sigma_{-}^{(0)}(\omega)\right>\) in the frequency domain. On the other hand, the emission spectrum of QE can be expressed as \(S(\omega)=\text{Re}\left[\left<\sigma_{+}^{(0)}\sigma_{-}^{(0)}(\omega)\right>\right]\) by using the Fourier transform relations. We thus can obtain the emission spectrum of QE by substituting \(\left<\sigma_{+}^{(0)}\sigma_{-}^{(0)}(\omega)\right>\) into \(S(\omega)\).
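For concreteness, the driven response of Eqs. (14)-(21) and the fluorescence spectrum obtained from Eq. (22) can be evaluated numerically once the effective Hamiltonian \(H_{\text{eff}}\) has been assembled from the model parameters. The following is a minimal sketch (an illustration, not the code used to produce the figures): it assumes \(H_{\text{eff}}\) is supplied as a complex matrix with the amplitude ordering of Eq. (15) and that \(\phi_{1}=0\) so that \(\phi_{j}=(j-1)\varphi\); the direct linear solve is equivalent to the eigen-decomposition form \(iV^{-1}(\Delta I-E)^{-1}VP_{in}\) of Eq. (17).

```python
import numpy as np

def driven_spectra(H_eff, kappa, Gamma, varphi, N, deltas, a_in=1.0):
    """Sketch of Eqs. (14)-(21): reflection/transmission for a left-incident drive.
    Amplitude ordering follows Eq. (15): [QE, c_ccw, c_cw, atom_1, ..., atom_N]."""
    dim = N + 3
    phases = varphi * np.arange(N)                  # phi_j = (j-1)*varphi, phi_1 = 0
    phi_N = phases[-1]

    P_in = np.zeros(dim, complex)                   # drive vector, Eq. (16)
    P_in[1] = np.sqrt(kappa)
    P_in[3:] = np.sqrt(Gamma) * np.exp(1j * phases)
    P_in *= a_in

    R_out = np.zeros(dim, complex)                  # Eq. (20): taps c_cw and the atoms
    R_out[2] = np.sqrt(kappa) * np.exp(1j * phi_N)
    R_out[3:] = np.sqrt(Gamma) * np.exp(1j * (phi_N - phases))
    T_out = R_out.copy()                            # Eq. (21): taps c_ccw instead of c_cw
    T_out[1], T_out[2] = R_out[2], 0.0

    I, R, T = np.eye(dim), [], []
    for delta in deltas:
        s = 1j * np.linalg.solve(delta * I - H_eff, P_in)        # Eq. (17)
        a_out = R_out @ s                                        # Eq. (18)
        b_out = a_in * np.exp(1j * phi_N) + T_out @ s            # Eq. (19)
        R.append(abs(a_out) ** 2 / abs(a_in) ** 2)
        T.append(abs(b_out) ** 2 / abs(a_in) ** 2)
    return np.array(R), np.array(T)

def emission_spectrum(H_eff, omegas):
    """Sketch of Eq. (22) with the quantum regression theorem: fluorescence of the
    initially excited QE, S(w) = Re<sigma_+ sigma_-(w)>, w in the frame of H_eff."""
    dim = H_eff.shape[0]
    c0 = np.zeros(dim, complex)
    c0[0] = 1.0                                     # initial condition c_0 = [1,0,...,0]
    return np.array([np.real(1j * np.linalg.solve(w * np.eye(dim) - H_eff, c0)[0])
                     for w in omegas])
```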
For the weak-coupling cavity QED system corresponding to Fig. 3(a), no spectral splitting is observed in either the transmission/reflection spectrum or the emission spectrum of the QE for the bare cavity QED system without the topological atom mirror, as shown by the dashed curves in Fig. 3(d). However, we note that the emission spectrum of the QE is obviously broadened; it is the superposition of two eigenmodes and can be called 'dark' strong coupling [68]. In contrast, the topological atom mirror brings the system into the strong-coupling regime, characterized by a resolvable Rabi splitting with a width of \(\sim 2\sqrt{2}g\) seen in both the reflection spectrum and the emission spectrum of the QE. In this case, the transmission is approximately zero since the edge state is localized at the left boundary of the atom mirror [see Fig. 2(c) and Fig. 6(c) in Appendix C]. For a strong-coupling cavity QED system, Fig. 3(e) shows that the Rabi peaks corresponding to topological cavity polaritons exhibit an extremely sharp linewidth, which can be clearly observed in both the reflection spectrum and the emission spectrum of the QE. Besides the Rabi splitting, multiple resonances located at the left and right sides of the reflection and transmission spectra are reminiscent of bulk states.
The topological bandgap separates the edge states and bulk states and plays a central role in determining the optical response of the atom mirror. The gap of the topological band can be controlled by the parameter \(\phi\). It is interesting to see how the reflection spectrum varies as \(\phi\) is tuned, which reveals how the change of the underlying band structure alters the decay of topological cavity polaritons. As the reflection spectrum of Fig. 3(f) shows, the locations of the upper and lower bands can be clearly identified through the boundary where the reflection suddenly drops. The two bands are symmetric with respect to \(\phi=\pi/2\), but the linewidth of the topological cavity polaritons is obviously increased as \(\phi\) goes through \(\pi/2\). In the parameter range \(\pi/2<\phi\leq\pi\), the coupling of the strong-coupling cavity QED system to the topological edge state slightly broadens, instead of narrowing, the linewidth of the polaritonic states, as the light gray line in the lower panel of Fig. 3(e) shows; meanwhile, there are two hybrid edge modes with a finite gap around zero energy, similar to an SSH chain with an even number of sites [40, 53]. The dramatic change of linewidth results from the opposite localization of the edge states, which localize at the left (right) boundary of the atom mirror for \(0\leq\phi\leq\pi/2\) (\(\pi/2<\phi\leq\pi\)); see the examples shown in Figs. 6(c) and (d) in Appendix C. The results presented in Fig. 3(f) demonstrate the capacity of topological edge states to efficiently tune the lifetime and linewidth of cavity polaritons at the single-quantum level. The emission spectrum of the QE shown in Fig. 7(a) of Appendix D demonstrates similar features to those observed in the reflection spectrum, but the signal of the multiple resonances corresponding to bulk states is weak. Therefore, it is preferable to investigate the properties of topological cavity polaritons through the fluorescence of the QE.
\(J_{0}\) is another important parameter that determines the topological bandgap, and hence the lifetime \(\tau_{\text{TO}}\) exhibits a strong dependence on \(J_{0}\). Fig. 4(a) plots the lifetime enhancement \(\tau_{\text{TO}}/\tau_{0}\) as a function of the interaction strength \(J_{0}\) and the number of atoms \(N\), showing that \(\tau_{\text{TO}}>\gamma_{0}^{-1}\) can be achieved in a wide range of \(J_{0}\) with a few tens of atoms in the mirror. Remarkably, \(\tau_{\text{TO}}/\tau_{0}\) demonstrates an abrupt increase at a critical interaction strength \(J_{0}^{c}\) regardless of \(N\). This phenomenon can be understood by inspecting the emission spectrum of the QE versus \(J_{0}\), as shown in Fig. 4(b). We can see that the significant
linewidth narrowing of the Rabi peaks occurs at \(J_{0}^{c}\), corresponding to a topological bandgap slightly larger than the width of the Rabi splitting, i.e., \(J_{0}^{c}\gtrsim g/\sqrt{2}\cos(\varphi)\). The reason is that for \(J_{0}>J_{0}^{c}\), the cavity polaritons are detuned from the superradiant bulk states and the corresponding dissipation is strongly suppressed, resulting in the significant enhancement of the lifetime. However, a large \(J_{0}\) is not always beneficial for enhancing the lifetime. As is seen in Fig. 4(a) and discussed earlier, there exists an optimal interaction strength \(J_{0}^{\mathrm{opt}}\) for maximal \(\tau_{\mathrm{TO}}/\tau_{0}\) as a consequence of the tradeoff between the delocalization of edge states and the dissipation induced by bulk states. We also notice that an anticrossing behavior can be observed at \(J_{0}\sim 4\Gamma\) [see Fig. 7(c) in Appendix D for a closeup and further discussion], implying strong coupling between the topological cavity polaritons and the bulk states.
To gain insight into the dissipative properties of topological cavity polaritons, we diagonalize the Lindblad operator [Eq. (2)] to obtain the dissipative matrix \(\gamma=\sum_{m}\chi_{m}\ket{v_{m}}\bra{v_{m}}\), with \(\chi_{m}\) being the dissipation spectrum, which is shown in Fig. 8(a) (see Appendix E for more details). Four dissipative modes are found to have a large dissipation rate: two are polarized radiating modes corresponding to the even and odd sites in the topological atom mirror, and the other two are related to the cavity modes, as seen from the wave functions shown in Fig. 8(b) of Appendix E. In our model, the dissipation of the odd-polarized radiating mode is greater than that of the even-polarized radiating mode due to the odd number of atoms. The dissipation rate \(\Gamma_{\pm}\) from the topological cavity polaritons to the environment is given by the overlap between the polaritonic states and the radiating modes in the Lindblad operator, which is evaluated as [46]
\[\Gamma_{\pm}=\bra{\psi_{\pm}}\gamma\ket{\psi_{\pm}}=\sum_{m}\chi_{m}\left\langle\psi_{\pm}\mid v_{m}\right\rangle^{2} \tag{23}\]
where \(\ket{\psi_{\pm}}\) is the eigenstate of \(\mathrm{Re}\left[H_{\mathrm{eff}}\right]\) corresponding to the cavity polaritons. The results are plotted in Fig. 4(c), which shows that the dissipation rate of the odd-polarized radiating mode approaches zero around \(J_{0}^{\mathrm{opt}}\), while the dissipation rate contributed by the even-polarized radiating mode (cavity modes) is monotonically decreasing (increasing) with increasing \(J_{0}\). We also find that the dissipation of the radiating modes dramatically increases as \(J_{0}\) approaches \(J_{0}^{c}\). In addition, a small \(\phi\) that produces a large topological bandgap [see Fig. 3(f)] can reduce the dissipation rate of the radiating modes. These results suggest that the system energy mainly dissipates from bulk states to the environment in the parameter range \(J_{0}<J_{0}^{\mathrm{opt}}\). For \(J_{0}>J_{0}^{\mathrm{opt}}\), the dissipation of the cavity modes dominates over that of the radiating modes and, as a result, the minimum of \(\Gamma_{\pm}\) is achieved around the \(J_{0}\) corresponding to a vanishing dissipation rate (\(\sim 10^{-3}\gamma_{0}\)) of the odd-polarized radiating mode.
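A compact numerical sketch of this decomposition is given below; it assumes the dissipation matrix \(\gamma\) built from Eq. (2) and \(H_{\text{eff}}\) are available as arrays, and that the polaritonic states are selected by hand from the eigenstates of \(\mathrm{Re}[H_{\text{eff}}]\) (e.g. by their energies near the Rabi peaks).

```python
import numpy as np

def polariton_dissipation(H_eff, gamma_matrix, polariton_indices):
    """Sketch of Eq. (23): split the decay of the cavity polaritons into the
    contributions of the dissipation-spectrum modes (chi_m, |v_m>)."""
    chi, v = np.linalg.eigh(gamma_matrix)            # dissipation spectrum and modes
    _, psi = np.linalg.eig(np.real(H_eff))           # eigenstates of Re[H_eff]

    rates = {}
    for n in polariton_indices:
        psi_n = psi[:, n] / np.linalg.norm(psi[:, n])
        overlap = np.abs(v.conj().T @ psi_n) ** 2    # |<v_m|psi_pm>|^2
        rates[n] = {"per_mode": chi * overlap, "total": float(np.sum(chi * overlap))}
    return rates
```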
### Robustness against the disorder in topological atom mirror
In practice, the perturbations on system and imperfections of structure are inevitable. In this subsection, we investigate the impact of local disorder on the enhancement of lifetime for topological cavity polaritons. In Fig. 5(a), we plot \(\tau_{\mathrm{TO}}/\tau_{0}\) with disordered positions for topological atoms, where the position of the \(j\)th atom is \(d_{j}+\Delta d_{j}\), with \(-0.02d\leq\Delta d_{j}\leq 0.02d\). It shows that the disorder in atom positions has small impact on the lifetime for \(J_{0}<J_{0}^{\mathrm{opt}}\), especially when \(\tau_{\mathrm{TO}}/\tau_{0}<10\). Though it affects the lifetime in a negative manner, we find that \(\tau_{\mathrm{TO}}/\tau_{0}>10\) can still be obtained around \(J_{0}^{\mathrm{opt}}\). Different from the disorder in atom positions, the positive impact on lifetime is observed for disorder in interaction
Figure 4: (a) Topology-enhanced lifetime of cavity polaritons \(\tau_{\mathrm{TO}}/\tau_{0}\) versus the number of atoms \(N\) and the interaction strength \(J_{0}\). Orange dashed dotted line indicates the critical \(J_{0}\) (i.e., \(J_{0}^{c}\)) for topological bandgap with a width of \(E_{\mathrm{gap}}=2\sqrt{2}g\). Dashed white line surrounds the parameters range of \(\tau_{\mathrm{TO}}>\gamma_{0}^{-1}\). Orange star denotes \(\tau_{\mathrm{TO}}/\tau_{0}\) for dynamics shown in Fig. 3(b). (b) Emission spectrum of QE versus \(J_{0}\). The horizontal and vertical white dashed lines indicate the locations of cavity polaritons and \(J_{0}^{c}\), respectively. (c) Dissipation from topological cavity polaritons to two cavity modes (blue lines) and the even- and odd-polarized radiating modes (light gray and red lines) for \(\phi=0.3\pi\) (solid lines with dots) and \(0.2\pi\) (dashed lines with circles). Parameters not mentioned are the same as Fig. 3(b).
strengths with moderate disorder \(-0.2J_{0}\leq\Delta J_{j}\leq 0.2J_{0}\) and \(J_{0}>J_{0}^{\rm opt}\), as Fig. 5(b) shows. In this case, the interaction strength between the \(j\)th and \((j+1)\)th atoms is given by \(J_{j}+\Delta J_{j}\). The inset of Fig. 5(b) displays that \(\tau_{\rm TO}/\tau_{0}\) at \(J_{0}=J_{0}^{\rm opt}\) manifests high robustness against the disorder in interaction strengths for various atom spacing. The results presented in Figs. 5(a) and (b) indicate that the disorder in atom positions has greater impact on the lifetime of topological cavity polaritons, as we can see that the lifetime variation of \(2\%\) disorder in atom positions is comparable and even slightly larger than that of \(20\%\) disorder in atom interactions around \(J_{0}^{\rm opt}\). It is because the impact of disorder in atom positions is nonlocal, which affects the coupling between the disordered atom and all other atoms through the waveguide-mediated long-range hoppings. On the contrary, the disorder in atom interactions is local perturbation, which only alters the coupling between neighboring sites. Therefore, the lifetime enhancement manifests higher robustness against disorder in atom interactions. As for disorder in atom frequencies, its main effect is on the energies of edge and bulk states, thus the impact on the lifetime of topological cavity polaritons is not obvious if the topological bandgap is sufficiently large. Fig. 5(c) shows the lifetime enhancement in presence of disorder in atom frequencies, where the strong disorder strength is considered. The frequency of the \(j\)th atom is randomly distributed in range of \(\omega_{j}\in[\omega_{c}-g/\sqrt{2},\omega_{c}+g/\sqrt{2}]\), where the maximal disorder strength is a half of the width of Rabi splitting. We can see that similar to Fig. 5(b), disorder in atom frequencies also begins to have noticeable effect around \(J_{0}^{c}\) (vertical dashed dotted line), but in this case the lifetime of topological cavity polaritons is less sensitive to disorder when \(J_{0}>J_{0}^{\rm opt}\) (vertical dashed line) as expected. Therefore, it implies that disorder in atom interactions and frequencies mainly affect the edge states and bulk states of topological atom mirror, respectively. We conclude from Fig. 5 that the presence of moderate disorder in atom mirror will not severely spoil the enhanced lifetime of topological cavity polaritons.
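The disorder averages behind Fig. 5 can be reproduced along the following lines. The sketch assumes a routine `build_H_eff(positions)` (a placeholder name, not defined in this paper) that assembles the effective Hamiltonian for a given disorder realization, and it extracts the lifetime from the single-excitation dynamics as the time at which the remaining excited population drops below \(e^{-1}\), following the definition in the caption of Fig. 3.

```python
import numpy as np

def lifetime(H_eff, t_grid):
    """Time at which the excitation of an initially excited QE decays below 1/e
    (here taken as the total remaining excitation, an assumption)."""
    psi0 = np.zeros(H_eff.shape[0], complex)
    psi0[0] = 1.0                                    # QE excited at t = 0
    E, W = np.linalg.eig(H_eff)                      # single-excitation amplitudes:
    coeff = np.linalg.solve(W, psi0)                 # psi(t) = W exp(-i E t) W^{-1} psi0
    for t in t_grid:
        pop = np.sum(np.abs(W @ (np.exp(-1j * E * t) * coeff)) ** 2)
        if pop < np.exp(-1.0):
            return t
    return t_grid[-1]                                # not yet decayed within t_grid

def disordered_lifetimes(build_H_eff, d, N, t_grid, n_samples=100,
                         rel_pos_disorder=0.02, rng=np.random.default_rng(0)):
    """Positional disorder as in Fig. 5(a): x_j -> x_j + Delta d_j, |Delta d_j| <= 0.02 d."""
    taus = []
    for _ in range(n_samples):
        dx = rng.uniform(-rel_pos_disorder * d, rel_pos_disorder * d, size=N)
        positions = d * np.arange(N) + dx
        taus.append(lifetime(build_H_eff(positions), t_grid))
    return np.array(taus)
```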
It should be emphasized that the parameters used in this work are attainable in nanophotonic platforms with semiconductor QEs. Taking InGaAs quantum dots as an example, the intrinsic decay is \(\gamma_{0}\approx 10\mu\)eV at cryogenic temperatures [69]. For the strong-coupling regime under investigation, \(\kappa=20\gamma_{0}\) corresponds to a cavity with a \(Q\) factor of \(\sim 6\times 10^{3}\). The QE-cavity coupling strength \(g=20\gamma_{0}=200\mu\)eV can be obtained, for instance, in a WGM microdisk with a \(3\mu\)m radius [20; 33]. For quantum dot arrays, the switchable coupling between two neighboring sites is usually provided by the tunnel barrier of an electrostatic potential, which can be tuned through a control gate [43; 70; 71]. Very recently, the experimental realization of an SSH chain based on ten semiconductor QEs with tunable interaction strengths has been reported [43]. With state-of-the-art technology, the precision of positioning a QE can reach \(\sim 15\)nm [72], which is less than \(2\%\) of the emission wavelength of the QE. As the results of Fig. 5(a) indicate, such experimental uncertainties have a limited impact on the lifetime of topological cavity polaritons. In addition, for the case of single-photon excitation studied in this work, the topological atom mirror can be replaced by its cavity counterpart [51], which is a more feasible experimental configuration for tuning the system parameters. Besides solid-state QEs, the technology of optical tweezers has already been applied to construct one- and two-dimensional atom arrays with up to \(200\) cold atoms [73; 74; 75]. Alternatively, cavity-magnon systems [13] and superconducting circuits [30; 42] are also promising candidates to implement the topological atom mirror, given the advances in realizing multi-atom interactions over extended distances. Therefore,
the considerable enhancement of lifetime by over an order of magnitude predicted here is achievable for cavity polaritons in diverse quantum systems.
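As a quick numerical check of the quoted cavity figure of merit (taking an InGaAs quantum-dot emission energy of roughly \(1.3\) eV, which is our assumption and not a number stated above):
\[
\kappa=20\gamma_{0}\approx 200\,\mu\mathrm{eV},\qquad Q=\frac{\hbar\omega_{c}}{\kappa}\approx\frac{1.3\times 10^{6}\,\mu\mathrm{eV}}{200\,\mu\mathrm{eV}}\approx 6.5\times 10^{3},
\]
consistent with the \(Q\sim 6\times 10^{3}\) quoted above.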
## III Conclusion
In summary, we propose a scheme for narrowing the linewidth of cavity polaritons, combined with robustness, by coupling a one-dimensional topological atom mirror to a cavity QED system based on a WGM resonator. The cavity polaritons become subradiant, with a linewidth smaller than that of a single QE, through the coupling of the cavity mode to edge states in the dissipationless topological phase. Accordingly, the lifetime can be improved by over an order of magnitude. The subradiance of the cavity polaritons is protected by the topological bandgap and hence can survive in a disordered atom mirror.
Our architecture exhibits prominent advantages in at least three aspects. Firstly, the maximal enhancement of the lifetime is achieved in a cavity with a moderate \(Q\) factor of \(10^{3}-10^{4}\), which avoids the drawback of poor excitation and collection efficiencies in the conventional approach of reducing the linewidth by the use of a high-\(Q\) cavity. This feature, combined with the openness of the semi-infinite waveguide, benefits practical applications. Importantly, several unit cells, typically \(10-20\) atoms, are sufficient to narrow the linewidth of the cavity polaritons to a value comparable to that of a single QE in free space. A topological atom mirror of this scale has been demonstrated with state-of-the-art nanofabrication technology. Last but not least, the property of topological protection endows the subradiant cavity polaritons with a high tolerance for fabrication imperfections and experimental uncertainties. Moving forward, future endeavors can be devoted to exploring the effects of coherent time-delayed feedback on lifetime enhancement [76], or to conceiving schemes for _in situ_ and dynamical topological manipulation of quantum states [70]. Therefore, our scheme offers a promising platform for exploring topological quantum optics and may potentially be used for long-time storage of quantum states in experiments, which is crucial to push quantum technologies toward practical applications.
###### Acknowledgements.
Y.W. Lu acknowledges the support of National Natural Science Foundation of China (Grant Nos. 62205061, 12274192). Z. Liao acknowledges the support of National Key R&D Program of China (Grant No. 2021YFA1400800), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023B1515040023) and the Natural Science Foundations of Guangdong (Grant No. 2021A1515010039).
## Appendix A Derivation of the extended cascaded quantum master equation
We derive the extended cascaded quantum master equation [Eqs. (1)-(2)] by tracing out the degrees of freedom of the waveguide. The system Hamiltonian including the waveguide modes is given by (\(\hbar=1\))
\[H_{S}=H+H_{w}+H_{sw} \tag{10}\]
where \(H=H_{0}+H_{I}+H_{\text{topo}}\) is given in Eqs. (3)-(6). \(H_{w}\) is the free Hamiltonian of waveguide
\[H_{w}=\sum_{\lambda=R,L}\int d\omega\omega b_{\lambda}^{\dagger}b_{\lambda} \tag{11}\]
and \(H_{sw}\) is the interaction Hamiltonian that describes the cavity-waveguide and atom-waveguide interactions
\[H_{sw}=i\sum_{\lambda=R,L}\int d\omega\sqrt{\frac{\kappa_{\lambda}}{2\pi}}b_{ \lambda}^{\dagger}e^{-ik_{\lambda}x_{0}}c_{\lambda}+i\sum_{\lambda=R,L}\sum_{ j=1}^{N}\int d\omega\sqrt{\frac{\gamma_{\lambda}}{2\pi}}b_{\lambda}^{\dagger}e^{-ik_{ \lambda}x_{j}}\sigma_{-}^{(j)}+H.c. \tag{12}\]
where \(b_{L}\) (\(b_{R}\)) is the bosonic annihilation operator of the left-propagating (right-propagating) waveguide mode with frequency \(\omega\) and wave vector \(k_{R}=-k_{L}=k_{0}\equiv\omega_{c}/v\), with \(v\) being the group velocity. Note that, for the sake of convenience, we have used the notations \(c_{R}=c_{ccw}\) and \(c_{L}=c_{cw}\) since the CCW and CW modes are coupled to the right- and left-propagating guided modes, respectively. \(x_{0}\) is the location of the cavity-waveguide junction and \(x_{j}\) indicates the location of the \(j\)th atom that couples to the waveguide. Applying the transformation \(\widetilde{H}=UHU^{\dagger}-idU/dtU^{\dagger}\) with \(U=\exp\left[i\left(\omega_{c}\sum_{\lambda=R,L}c_{\lambda}^{\dagger}c_{\lambda}+\sum_{j=1}^{N}\omega_{j}\sigma_{+}^{(j)}\sigma_{-}^{(j)}+\sum_{\lambda=R,L}\int d\omega\omega b_{\lambda}^{\dagger}b_{\lambda}\right)t\right]\), we have
\[\widetilde{H}_{sw}(t)=i\sum_{\lambda=R,L}\left[\int d\omega\sqrt{\frac{\kappa_ {\lambda}}{2\pi}}b_{\lambda}^{\dagger}e^{i(\omega-\omega_{c})t}e^{-i\omega x_ {0}/v}c_{\lambda}+\sum_{j=1}^{N}\int d\omega\sqrt{\frac{\gamma_{\lambda}}{2 \pi}}b_{\lambda}^{\dagger}e^{i(\omega-\omega_{j})t}e^{-i\omega x_{j}/v}\sigma _{-}^{(j)}\right]+H.c. \tag{13}\]
The equation of \(b_{\lambda}\) can be obtained from the Heisenberg equation
\[\frac{d}{dt}b_{\lambda}(t)=\sqrt{\frac{\kappa_{\lambda}}{2\pi}}c_{\lambda}(t)e^{i (\omega-\omega_{c})t}e^{-i\omega x_{0}/v}+\sum_{j=1}^{N}\sqrt{\frac{\gamma_{ \lambda}}{2\pi}}\sigma_{-}^{(j)}(t)e^{i(\omega-\omega_{j})t}e^{-i\omega x_{j}/v} \tag{10}\]
Formally integrating the above equation, we have
\[b_{\lambda}(t)=\int_{0}^{t}d\tau\sqrt{\frac{\kappa_{\lambda}}{2\pi}}c_{\lambda} (\tau)e^{i(\omega-\omega_{c})\tau}e^{-i\omega x_{0}/v}+\sum_{j=1}^{N}\int_{0}^{ t}d\tau\sqrt{\frac{\gamma_{\lambda}}{2\pi}}\sigma_{-}^{(j)}(\tau)e^{i(\omega- \omega_{j})\tau}e^{-i\omega x_{j}/v} \tag{11}\]
where the initial condition \(b_{\lambda}(0)=0\) is imposed since the waveguide is in the vacuum state. On the other hand, the equation of motion of arbitrary operator \(O\) reads
\[\begin{split}\frac{d}{dt}O(t)&=\sum_{\lambda=R,L}\int d\omega\sqrt{\frac{\kappa_{\lambda}}{2\pi}}\left\{b_{\lambda}^{\dagger}(t)e^{i(\omega-\omega_{c})t}e^{-i\omega x_{0}/v}\left[O(t),c_{\lambda}(t)\right]-\left[O(t),c_{\lambda}^{\dagger}(t)\right]b_{\lambda}(t)e^{-i(\omega-\omega_{c})t}e^{i\omega x_{0}/v}\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\int d\omega\sqrt{\frac{\gamma_{\lambda}}{2\pi}}\left\{b_{\lambda}^{\dagger}(t)e^{i(\omega-\omega_{j})t}e^{-i\omega x_{j}/v}\left[O(t),\sigma_{-}^{(j)}(t)\right]-\left[O(t),\sigma_{+}^{(j)}(t)\right]b_{\lambda}(t)e^{-i(\omega-\omega_{j})t}e^{i\omega x_{j}/v}\right\}\end{split} \tag{12}\]
Substituting \(b_{\lambda}(t)\) into the above equation, we obtain
\[\begin{split}\frac{d}{dt}O(t)&=\sum_{\lambda=R,L} \int_{0}^{t}d\tau\int d\omega\left\{\left[\frac{\kappa_{\lambda}}{2\pi}c_{ \lambda}^{\dagger}(\tau)+\sum_{j=1}^{N}\frac{\sqrt{\kappa_{\lambda}\gamma_{ \lambda}}}{2\pi}\sigma_{+}^{(j)}(\tau)e^{-i\omega x_{0j}/v}\right]\left[O(t),c _{\lambda}(t)\right]e^{i(\omega-\omega_{c})(t-\tau)}\\ &-\left[O(t),c_{\lambda}^{\dagger}(t)\right]\left[\frac{\kappa_{ \lambda}}{2\pi}c_{\lambda}(\tau)+\sum_{j=1}^{N}\frac{\sqrt{\kappa_{\lambda} \gamma_{\lambda}}}{2\pi}\sigma_{-}^{(j)}(\tau)e^{i\omega x_{0j}/v}\right]e^{- i(\omega-\omega_{c})(t-\tau)}\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\int_{0}^{t}d\tau\int d\omega \left\{\left[\frac{\sqrt{\kappa_{\lambda}\gamma_{\lambda}}}{2\pi}c_{\lambda}^ {\dagger}(\tau)e^{-i\omega x_{j0}/v}\right.\right.\left.+\sum_{l=1}^{N}\frac{ \gamma_{\lambda}}{2\pi}\sigma_{+}^{(l)}(\tau)e^{-i\omega x_{jl}/v}\right] \left[O(t),\sigma_{-}^{(j)}(t)\right]e^{i(\omega-\omega_{c})(t-\tau)}\right] \\ &-\left[O(t),\sigma_{+}^{(j)}(t)\right]\left[\frac{\sqrt{\kappa_{ \lambda}\gamma_{\lambda}}}{2\pi}c_{\lambda}(\tau)e^{i\omega x_{j0}/v}+\sum_{l =1}^{N}\frac{\gamma_{\lambda}}{2\pi}\sigma_{-}^{(l)}(\tau)e^{i\omega x_{jl}/v} \right]e^{-i(\omega-\omega_{c})(t-\tau)}\right\}\end{split} \tag{13}\]
where \(x_{j0}=x_{j}-x_{0}\) and \(x_{jl}=x_{j}-x_{l}\). We perform the Markov approximation by assuming that the time delays \(x_{jl}/v\) between the atoms and \(x_{j0}/v\) between the cavity modes and the atoms are sufficiently small and can be neglected. Therefore, we have
\[\begin{split}\frac{\kappa_{\lambda}}{2\pi}\int_{0}^{t}d\tau\int d \omega e^{i(\omega-\omega_{c})(t-\tau)}c_{\lambda}^{\dagger}(\tau)=\kappa_{ \lambda}\int_{0}^{t}d\tau\delta(t-\tau)c_{\lambda}^{\dagger}(\tau)=\frac{ \kappa_{\lambda}}{2}c_{\lambda}^{\dagger}(t)\end{split} \tag{14}\]
\[\begin{split}\frac{\sqrt{\kappa_{\lambda}\gamma_{\lambda}}}{2\pi} \int_{0}^{t}d\tau\int d\omega e^{i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{j0}/v }c_{\lambda}^{\dagger}(\tau)=\sqrt{\kappa_{\lambda}\gamma_{\lambda}}\int_{0}^{ t}d\tau\delta\left(t-\frac{x_{j0}}{v}-\tau\right)e^{-i\omega_{c}x_{j0}/v}c_{ \lambda}^{\dagger}(\tau)\\ \approx\sqrt{\kappa_{\lambda}\gamma_{\lambda}}\Theta\left(t-\frac{ x_{j0}}{v}\right)e^{-ik_{\lambda}x_{j0}}c_{\lambda}^{\dagger}(t)\end{split} \tag{15}\]
\[\begin{split}\frac{\gamma_{\lambda}}{2\pi}\sum_{l=1}^{N}\int_{0}^{ t}d\tau\int d\omega e^{i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{jl}/v}\sigma_{+}^{(l)}(\tau)= \gamma_{\lambda}\sum_{l=1}^{N}\int_{0}^{t}d\tau\delta\left(t-\frac{x_{jl}}{v}- \tau\right)e^{-i\omega_{c}x_{jl}/v}\sigma_{+}^{(l)}(\tau)\\ \approx\frac{\gamma_{\lambda}}{2}\sigma_{+}^{(j)}(t)+\gamma_{ \lambda}\sum_{l=1}^{N}\Theta\left(t-\frac{x_{jl}}{v}\right)e^{-ik_{\lambda}x_{ jl}}\sigma_{+}^{(l)}(t)\end{split} \tag{16}\]
where \(x_{j0},x_{jl}>0\) and \(\Theta(t)\) is the step function. Substituting the Markov-approximated expressions, Eqs. (14)-(16), into Eq. (13) and taking the averages, we obtain
\[\begin{split}\frac{d}{dt}\langle O(t)\rangle=& \sum_{\lambda=R,L}\frac{\kappa_{\lambda}}{2}\left\{\left\langle c_{ \lambda}^{\dagger}(t)\left[O(t),c_{\lambda}(t)\right]\right\rangle-\left\langle \left[O(t),c_{\lambda}^{\dagger}(t)\right]c_{\lambda}(t)\right\rangle\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\frac{\gamma_{\lambda}}{2} \left\{\left\langle\sigma_{+}^{(j)}(t)\left[O(t),\sigma_{-}^{(j)}(t)\right] \right\rangle-\left\langle\left[O(t),\sigma_{+}^{(j)}(t)\right]\sigma_{-}^{(j) }(t)\right\rangle\right\}\\ &+\sum_{\lambda=R,L}\sum_{j,l=1}^{N}\gamma_{\lambda}\left\{e^{- ik_{\lambda}x_{jl}}\left\langle\sigma_{+}^{(l)}\left[O(t),\sigma_{-}^{(j)}(t) \right]\right\rangle-e^{ik_{\lambda}x_{jl}}\left\langle\left[O(t),\sigma_{+}^ {(j)}(t)\right]\sigma_{-}^{(l)}(t)\right\rangle\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\sqrt{\kappa_{\lambda}\gamma_{ \lambda}}\left\{e^{-ik_{\lambda}x_{j0}}\left\langle c_{\lambda}^{\dagger}(t) \left[O(t),\sigma_{-}^{(j)}(t)\right]\right\rangle-e^{ik_{\lambda}x_{j0}}\left \langle\left[O(t),\sigma_{+}^{(j)}(t)\right]c_{\lambda}(t)\right\rangle\right\} \end{split} \tag{112}\]
Since \(\langle O(t)\rangle=\text{Tr}\left[O(t)\rho(0)\right]=\text{Tr}\left[O\rho(t)\right]\), we can simplify the average of operators in the above equation by using the cyclic property of trace. For example, the terms in the first and last lines of Eq. (112) can be written as
\[\left\langle c_{\lambda}^{\dagger}(t)\left[O(t),c_{\lambda}(t) \right]\right\rangle=\text{Tr}\left[c_{\lambda}^{\dagger}Oc_{\lambda}\rho(t)- c_{\lambda}^{\dagger}c_{\lambda}O\rho(t)\right]=\text{Tr}\left[Oc_{\lambda} \rho(t)c_{\lambda}^{\dagger}-O\rho(t)c_{\lambda}^{\dagger}c_{\lambda}\right]= \text{Tr}\left\{O\left[c_{\lambda},\rho(t)c_{\lambda}^{\dagger}\right]\right\} \tag{113}\] \[\left\langle\left[O(t),c_{\lambda}^{\dagger}(t)\right]c_{\lambda} (t)\right\rangle=\text{Tr}\left[Oc_{\lambda}^{\dagger}c_{\lambda}\rho(t)-c_{ \lambda}^{\dagger}Oc_{\lambda}\rho(t)\right]=\text{Tr}\left[Oc_{\lambda}^{ \dagger}c_{\lambda}\rho(t)-Oc_{\lambda}\rho(t)c_{\lambda}^{\dagger}\right]= \text{Tr}\left\{O\left[c_{\lambda}^{\dagger},c_{\lambda}\rho(t)\right]\right\}\] (114) \[\left\langle c_{\lambda}^{\dagger}(t)\left[O(t),\sigma_{-}^{(j)}( t)\right]\right\rangle=\text{Tr}\left[c_{\lambda}^{\dagger}O\sigma_{-}^{(j)} \rho(t)-c_{\lambda}^{\dagger}\sigma_{-}^{(j)}O\rho(t)\right]=\text{Tr}\left[O \sigma_{-}^{(j)}\rho(t)c_{\lambda}^{\dagger}-O\rho(t)c_{\lambda}^{\dagger} \sigma_{-}^{(j)}\right]\] (115) \[\left\langle\left[O(t),\sigma_{+}^{(j)}(t)\right]c_{\lambda}(t) \right\rangle=\text{Tr}\left[O\sigma_{+}^{(j)}c_{\lambda}\rho(t)-\sigma_{+}^{( j)}Oc_{\lambda}\rho(t)\right]=\text{Tr}\left[O\sigma_{+}^{(j)}c_{\lambda}\rho(t)-Oc_{ \lambda}\rho(t)\sigma_{+}^{(j)}\right]\] (116) \[=\text{Tr}\left\{O\left[\sigma_{+}^{(j)},c_{\lambda}\rho(t)\right]\right\}\]
Therefore, we can obtain a quantum master equation in the following form
\[\begin{split}\frac{d}{dt}\rho(t)=-i[H,\rho(t)]&+\sum_{\lambda=R,L}\frac{\kappa_{\lambda}}{2}\left\{\left[c_{\lambda},\rho(t)c_{\lambda}^{\dagger}\right]-\left[c_{\lambda}^{\dagger},c_{\lambda}\rho(t)\right]\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\frac{\gamma_{\lambda}}{2}\left\{\left[\sigma_{-}^{(j)},\rho(t)\sigma_{+}^{(j)}\right]-\left[\sigma_{+}^{(j)},\sigma_{-}^{(j)}\rho(t)\right]\right\}\\ &+\sum_{\lambda=R,L}\sum_{j,l=1}^{N}\gamma_{\lambda}\left\{e^{-ik_{\lambda}x_{jl}}\left[\sigma_{-}^{(j)},\rho(t)\sigma_{+}^{(l)}\right]-e^{ik_{\lambda}x_{jl}}\left[\sigma_{+}^{(j)},\sigma_{-}^{(l)}\rho(t)\right]\right\}\\ &+\sum_{\lambda=R,L}\sum_{j=1}^{N}\sqrt{\kappa_{\lambda}\gamma_{\lambda}}\left\{e^{-ik_{\lambda}x_{j0}}\left[\sigma_{-}^{(j)},\rho(t)c_{\lambda}^{\dagger}\right]-e^{ik_{\lambda}x_{j0}}\left[\sigma_{+}^{(j)},c_{\lambda}\rho(t)\right]\right\}\end{split} \tag{117}\]
Note that we define \(x_{0}=0\) in the main text, thus \(x_{j0}=x_{j}\) in the last line on the right-hand side. In addition, the second and third terms on the right-hand side can be expressed using the Liouvillian superoperator. Taking into account the free-space decay \(\gamma_{0}\) of atoms, we arrive at the extended cascaded quantum master equation given in Eqs. (1) and (2) in the main text.
Appendix B Inclusion of planewave excitation in the extended cascaded quantum master equation and the input-output boundary condition
The extended cascaded quantum master equation in Eqs. (1)-(6) does not account for the excitation of the system. In this subsection, we consider a specific excitation configuration, i.e., planewave excitation through the right-propagating guided mode of waveguide (\(b_{R}\)), and derive the corresponding quantum master equation. To include the
excitation, we start by formally integrating Eq. (10) and obtain
\[b_{R}(t)=b_{R}(0)+\int_{0}^{t}d\tau\sqrt{\frac{\kappa_{R}}{2\pi}}c_{R}(\tau)e^{i (\omega-\omega_{c})\tau}e^{-i\omega x_{0}/v}+\sum_{j=1}^{N}\int_{0}^{t}d\tau \sqrt{\frac{\gamma_{R}}{2\pi}}\sigma_{-}^{(j)}(\tau)e^{i(\omega-\omega_{j})\tau }e^{-i\omega x_{j}/v} \tag{12}\]
here \(b_{R}(0)\neq 0\) due to the existence of incident waveguide photons. Substituting the above equation into Eq. (11), we find the additional terms compared to Eq. (12)
\[\sqrt{\frac{\kappa_{R}}{2\pi}}\int d\omega\left\{b_{R}^{\dagger}(0)e^{i(\omega -\omega_{c})t}e^{-i\omega x_{0}/v}\left[O(t),c_{R}(t)\right]-\left[O(t),c_{R}^ {\dagger}(t)\right]b_{R}(0)e^{-i(\omega-\omega_{c})t}e^{i\omega x_{0}/v}\right\} \tag{13}\]
and
\[\sqrt{\frac{\gamma_{R}}{2\pi}}\sum_{j=1}^{N}\int d\omega\left\{b_{R}^{\dagger }(0)e^{i(\omega-\omega_{j})t}e^{-i\omega x_{j}/v}\left[O(t),\sigma_{-}^{(j)}( t)\right]-\left[O(t),\sigma_{+}^{(j)}(t)\right]b_{R}(0)e^{-i(\omega-\omega_{j})t}e^{ i\omega x_{j}/v}\right\} \tag{14}\]
which yields the following Lindblad operator for excitation
\[\mathcal{D}_{p}[\rho]=[c_{R},\rho_{c}(t)]-\left[c_{R}^{\dagger},\rho_{c}^{ \dagger}(t)\right]+\sum_{j=1}^{N}\left\{\left[\sigma_{-}^{(j)},\rho_{e}(t) \right]-\left[\sigma_{+}^{(j)},\rho_{e}^{\dagger}(t)\right]\right\} \tag{15}\]
where \(\rho_{c}(t)=\rho(t)p_{c}^{\dagger}\) and \(\rho_{e}(t)=\rho(t)p_{e}^{\dagger}\) describe the driving from the incident source for the CCW mode and the atoms, respectively, with \(p_{c}(t)=\sqrt{\frac{\kappa_{R}}{2\pi}}\int d\omega b_{R}(0)e^{-i(\omega-\omega_{c})t}e^{i\omega x_{0}/v}\) and \(p_{e}(t)=\sqrt{\frac{\gamma_{R}}{2\pi}}\int d\omega b_{R}(0)e^{-i(\omega-\omega_{j})t}e^{i\omega x_{j}/v}\) accounting for the absorption of the incident waveguide photons. We can see that for a monochromatic planewave, \(p_{c}\) and \(p_{e}\) reduce to complex numbers. In this case, we have
\[\mathcal{D}_{p}[\rho]=\left[c_{R},\rho(t)\right]p_{c}^{*}-\left[c_{R}^{ \dagger},\rho(t)\right]p_{c}+\sum_{j=1}^{N}\left\{\left[\sigma_{-}^{(j)},\rho (t)\right]p_{e}^{*}-\left[\sigma_{+}^{(j)},\rho(t)\right]p_{e}\right\} \tag{16}\]
Accordingly, the extended cascaded quantum master equation is given by
\[\frac{d}{dt}\rho=-i[H,\rho]+\mathcal{D}[\rho]+\mathcal{D}_{p}[\rho] \tag{17}\]
which yields Eqs. (14)-(16) in the main text.
To derive the input-output relations, we integrate Eq. (10) from \(t\) to \(t_{f}\) (i.e., \(t_{f}>t\)) and obtain
\[b_{R}(t)=b_{R}(t_{f})+\int_{t}^{t_{f}}d\tau\sqrt{\frac{\kappa_{R}}{2\pi}}c_{R} (\tau)e^{i(\omega-\omega_{c})\tau}e^{-i\omega x_{0}/v}+\sum_{j=1}^{N}\int_{t}^ {t_{f}}d\tau\sqrt{\frac{\gamma_{R}}{2\pi}}\sigma_{-}^{(j)}(\tau)e^{i(\omega- \omega_{j})\tau}e^{-i\omega x_{j}/v} \tag{18}\]
By comparing with Eq. (12), we have
\[b_{R}(t)=b_{R}(0)+\int_{0}^{t_{f}}d\tau\sqrt{\frac{\kappa_{R}}{2\pi}}c_{R}( \tau)e^{i(\omega-\omega_{c})\tau}e^{-i\omega x_{0}/v}+\sum_{j=1}^{N}\int_{0}^{ t_{f}}d\tau\sqrt{\frac{\gamma_{R}}{2\pi}}\sigma_{-}^{(j)}(\tau)e^{i(\omega- \omega_{j})\tau}e^{-i\omega x_{j}/v} \tag{19}\]
We use the following definition of input-output operators [1, 67]
\[b_{\text{out}}(t)=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}d\omega b_{R}\left(t_{ f}\right)e^{i(\omega-\omega_{c})x_{N}/v}e^{-i(\omega-\omega_{c})t} \tag{20}\]
\[b_{\text{in}}(t)=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}d\omega b_{R}\left(0 \right)e^{-i(\omega-\omega_{c})t} \tag{21}\]
where a phase factor corresponding to the light propagating from the waveguide-cavity junction to the rightmost atom appears in Eq. (20). It means that the right output field propagates freely after being scattered by the rightmost atom.
Using Eqs. (14) and (15), we can obtain the input-output relation from Eq. (13)
\[\begin{split}\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}d\omega b_{R}\left( t_{f}\right)& e^{i(\omega-\omega_{c})x_{N}/v}e^{-i(\omega-\omega_{c})t}\\ &=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}d\omega b_{R}(0)e^{i( \omega-\omega_{c})x_{N}/v}e^{-i(\omega-\omega_{c})t}\\ &+\frac{1}{\sqrt{2\pi}}e^{i(\omega-\omega_{c})x_{N}/v}\int_{0}^{ t_{f}}d\tau\int_{0}^{\infty}d\omega\sqrt{\frac{\kappa_{R}}{2\pi}}c_{R}(\tau)e^{-i( \omega-\omega_{c})(t-\tau)}e^{-i\omega x_{0}/v}\\ &+\frac{1}{\sqrt{2\pi}}e^{i(\omega-\omega_{c})x_{N}/v}\sum_{j=1}^ {N}\int_{0}^{t}d\tau\int_{0}^{\infty}d\omega\sqrt{\frac{\gamma_{R}}{2\pi}} \sigma_{-}^{(j)}(\tau)e^{-i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{j}/v} \end{split} \tag{16}\]
\[\begin{split} b_{\text{out}}(t)=b_{\text{in}}\left(t-\frac{x_{N}}{ v}\right)+\frac{1}{\sqrt{2\pi}}\int_{0}^{t_{f}}d\tau\int_{0}^{\infty}d\omega \sqrt{\frac{\kappa_{R}}{2\pi}}c_{R}(\tau)e^{-i(\omega-\omega_{c})(t-\tau-x_{N 0}/v)}e^{-i\omega_{c}x_{0}/v}\\ &\qquad+\frac{1}{\sqrt{2\pi}}\sum_{j=1}^{N}\int_{0}^{t_{f}}d\tau \int_{0}^{\infty}d\omega\sqrt{\frac{\gamma_{R}}{2\pi}}\sigma_{-}^{(j)}(\tau)e^ {-i(\omega-\omega_{c})(t-\tau-x_{Nj}/v)}e^{-i\omega_{c}x_{j}/v}\end{split} \tag{17}\]
\[\begin{split} b_{\text{out}}(t)=b_{\text{in}}\left(t-\frac{x_{N}}{v}\right)+\sqrt{\kappa_{R}}\int_{0}^{t_{f}}d\tau c_{R}(\tau)\delta\left(t-x_{N0}/v-\tau\right)e^{-i\omega_{c}x_{0}/v}\\ &\qquad+\sqrt{\gamma_{R}}\sum_{j=1}^{N}\int_{0}^{t_{f}}d\tau\sigma_{-}^{(j)}(\tau)\delta\left(t-x_{Nj}/v-\tau\right)e^{-i\omega_{c}x_{j}/v}\end{split} \tag{18}\]
Applying the Markovian approximation, the above equation becomes
\[b_{\text{out}}(t)\approx b_{\text{in}}(t)+\sqrt{\kappa_{R}}c_{R}(t)e^{-ik_{R} x_{0}}+\sqrt{\gamma_{R}}\sum_{j=1}^{N}\sigma_{-}^{(j)}(t)e^{-ik_{R}x_{j}} \tag{19}\]
In a similar fashion to Eqs. (13)-(19), we can obtain the input-output relation for the left-propagating guided mode
\[a_{\text{out}}(t)\approx a_{\text{in}}(t)+\sqrt{\kappa_{L}}c_{L}(t)e^{ik_{L} x_{0}}+\sqrt{\gamma_{L}}\sum_{j=1}^{N}\sigma_{-}^{(j)}(t)e^{ik_{L}x_{j}} \tag{20}\]
with
\[a_{\text{out}}(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0}d\omega b_{L}\left( t_{f}\right)e^{-i(\omega-\omega_{c})x_{N}/v}e^{-i(\omega-\omega_{c})t} \tag{21}\]
and
\[a_{\text{in}}(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{0}d\omega b_{L}\left(0 \right)e^{-i(\omega-\omega_{c})t} \tag{22}\]
where a phase factor is added in \(a_{\text{out}}\left(t\right)\) since the left output field propagates freely after being scattered by the waveguide-cavity junction. Since \(a_{\text{in}}\left(t\right)=0\) for a left-incident planewave, the input-output relations Eqs. (19) and (20) yield Eqs. (18)-(21) in the main text. Note that \(a_{\text{in}}\left(t\right)\) [Eq. (22)] is different from the field amplitude \(a_{\text{in}}\) in Eq. (16).
## Appendix C Dissipative and dissipationless edge states of bare topological atom mirror
To demonstrate the dissipationless feature of the edge states, the SE rate of the atoms is omitted, i.e., \(\gamma_{0}=0\). In Fig. 6(a), we plot the decay rates \(\Gamma_{\text{edge}}=-2\operatorname{Im}\left[E_{\text{edge}}\right]\) of the edge states with atom spacings \(d=\lambda_{0}/4\) and \(3\lambda_{0}/4\) versus the interaction strength \(J_{0}\). It shows that \(\Gamma_{\text{edge}}\) of the edge states with \(d=3\lambda_{0}/4\) suddenly drops to zero at a critical \(J_{0}\approx\Gamma/2\) regardless of \(\phi\), demonstrating the dissipationless feature; in contrast, the edge states in the dissipative topological phase (\(d=\lambda_{0}/4\))
manifest the distinct feature of a slowly decreasing \(\Gamma_{\text{edge}}\) without a transition as \(J_{0}\) increases. As we see in Figs. 6(c) and (d), the delocalization populates all unit cells for a sufficiently large interaction strength (\(J_{0}>10\Gamma\)) and, as a consequence, \(\Gamma_{\text{edge}}\neq 0\) for the dissipationless edge states.
In Figs. 6(c) and (d), we investigate the probability distributions of the edge states for a bare topological atom mirror, i.e., without coupling to the cavity QED system. We can see that the topological edge states localize at the left (right) boundary when \(0\leq\phi\leq\pi/2\) (\(\pi/2<\phi\leq\pi\)). The long-range hoppings between topological atoms induced by the waveguide destroy the exponential localization of the edge states: the probability decreases non-monotonically from the boundary and extends to the opposite boundary, exhibiting strong delocalization for a large interaction strength \(J_{0}\). The delocalization implies increased dissipation of the atom mirror and therefore, as we can see in Fig. 6(b), more atoms are required to achieve high reflection for a large \(J_{0}\). On the other hand, a large \(J_{0}\) produces a wide topological bandgap, which is beneficial for suppressing the dissipation from edge states to bulk states and achieving high reflection. The balance between the delocalization and the dissipation induced by bulk states yields an optimal \(J_{0}\) for the maximal reflection of the topological atom mirror with a fixed number of atoms.
We stress that for \(\gamma_{0}=0\), both topological and trivial atom mirrors have unity reflectivity, while the cavity polaritons of the latter are dissipative. Therefore, the formation of bound cavity polaritons cannot be understood as a result of classical destructive interference between the cavity field and the field reflected by the atom mirror.
## Appendix D Emission spectrum of QE and probability distributions of hybrid edge modes
Fig. 7(a) presents the logarithmic plot of the emission spectrum of QE versus \(\phi\). By comparing with the reflection spectrum shown in Fig. 3(f), we can see that they demonstrate similar pattern and features, such as the variation of topological bandgap and the distinct linewidth of cavity polaritons for the left and right edge states. However, for observing the linewidth narrowing of polaritonic states (Rabi peaks), the emission spectrum of QE is preferred since the signal of bulk states is weak. In addition, we find that the composed system shows a feature of SSH model with
Figure 6: (a) Decay rates of edge states with two atom spacings, \(d=\lambda_{0}/4\) and \(3\lambda_{0}/4\). The latter becomes dissipationless when \(J_{0}>\Gamma/2\). The horizontal dashed black line indicates the zero decay rate. (b) Enhancement of reflection versus intensity strength \(J_{0}\) for bare topological atom mirror with different number of atoms \(N\). \(R_{\text{TO}}\) and \(R_{\text{AM}}\) are the reflection of topological and trivial atom mirrors, respectively. \(R_{\text{AM}}\approx 0.68\). (c) and (d) Probability distributions of edge states for topological atom mirror with \(\phi=0.3\pi\) and \(0.7\pi\), respectively. Parameters not mentioned are the same as Fig. 2 in the main text, while \(\gamma_{0}=0\) in (a).
an even number of sites, where two hybrid edge modes with a finite gap can be seen in the region \(\pi/2<\phi\leq\pi\), a feature that is hard to recognize in the reflection spectrum. The probability distributions shown in Fig. 7(b) indicate the formation of hybrid edge modes with even and odd parities, distinct from the left or right edge states of the bare topological atom mirror, which populate either the odd or the even sites.
On the other hand, we find an inconspicuous anticrossing at \(\phi\sim 0.4\pi\) in Fig. 7(a), indicating the strong coupling between bulk states and polaritonic states. This anticrossing behaviour is more evident in the emission spectrum of QE versus \(J_{0}\), as Fig. 7(c) shows. In stark contrast to the conventional strong-coupling anticrossing that is observed by tuning the frequency detuning, here \(J_{0}\) plays the role of frequency detuning in the strong-coupling anticrossing. It is because the variation of \(J_{0}\) does not alter the energy of polaritonic states, but the width of topological bandgap linearly enlarges with increased \(J_{0}\). Therefore, \(J_{0}\) can change the energies of bulk states.
It is important to note that in the reflection spectrum of Fig. 3(f), the two dips corresponding to cavity polaritons vanish around \(\phi=0.33\pi\), while this phenomenon is not found in the emission spectrum of the QE, as Fig. 7(d) shows, where the black arrows indicate the polaritonic states. The results reveal that the formation of dark cavity polaritons is related to the specific excitation method. For the configuration of planewave excitation, these dark cavity polaritons stem from destructive interference between the incident and scattering photons.
## Appendix E Dissipation spectrum and radiating modes
With the matrix elements in the Lindblad operator Eq. (2), we can obtain a dissipation matrix \(\gamma\), which can be expressed as follows
\[\gamma=\sum_{m}\chi_{m}\ket{v_{m}}\bra{v_{m}} \tag{12}\]
Figure 7: (a) Emission spectrum of QE for cavity QED system with topological atom mirror versus \(\phi\) under the strong-coupling regime. (b) Probability distributions of hybrid edge modes versus atoms index for \(\phi=0.85\pi\) indicated by the dashed line in (a). Parameters are the same as Fig. 3(f). (c) A closeup of the emission spectrum of QE for the strong-coupling anticrossing between bulk states and polaritonic states, which are indicated by the black dashed line and the light gray lines, respectively. Parameters are the same as Fig. 4(b). (d) Comparison of the emission spectrum of QE and the reflection and transmission spectra for dark cavity polaritons in the planewave excitation, with \(\phi\) indicated by the dashed dotted line in (a).
where \(\chi_{m}\) is called the dissipation spectrum and \(\left|v_{m}\right\rangle\) is the corresponding wave function. Fig. 8(a) shows \(\chi_{m}\) for the composed system in the strong-coupling regime with 31 atoms in the mirror. We can see that the dissipation of modes \(m=1\) and 34 is \(\chi_{1,34}\sim\kappa/2\), thus they are related to the two cavity modes. Two radiating modes, indexed by \(m=2\) and 3, can be found in the dissipation spectrum, whose dissipation is much greater than that of the other modes. Fig. 8(b) plots the wave functions of the two radiating modes versus the atom index, where we can identify the odd and even polarization for \(m=2\) and 3, respectively. With the eigenstates \(\left|\psi_{n}\right\rangle\) of the Hamiltonian \(\mathrm{Re}\left[H_{\mathrm{eff}}\right]\), we can evaluate the dissipation rate \(\Gamma_{n}\) of the \(n\)th eigenstate to the environment
\[\Gamma_{n}=\left\langle\psi_{n}|\gamma|\psi_{n}\right\rangle=\sum_{m}\Gamma_{n }^{m} \tag{12}\]
with \(\Gamma_{n}^{m}\) being the contribution of the \(m\)th mode in dissipation spectrum
\[\Gamma_{n}^{m}=\chi_{m}\left\langle\psi_{n}\mid v_{m}\right\rangle^{2} \tag{13}\]
In particular, the eigenstates corresponding to the cavity polaritons are indicated by \(n=\pm\). Fig. 4(c) shows the contributions of the cavity modes (\(\Gamma_{\pm}^{1}\) and \(\Gamma_{\pm}^{34}\)) and the radiating modes (\(\Gamma_{\pm}^{2}\) and \(\Gamma_{\pm}^{3}\)) to the dissipation of the cavity polaritons.
|
2301.08146 | What's happening in your neighborhood? A Weakly Supervised Approach to
Detect Local News | Local news articles are a subset of news that impact users in a geographical
area, such as a city, county, or state. Detecting local news (Step 1) and
subsequently deciding its geographical location as well as radius of impact
(Step 2) are two important steps towards accurate local news recommendation.
Naive rule-based methods, such as detecting city names from the news title,
tend to give erroneous results due to lack of understanding of the news
content. Empowered by the latest development in natural language processing, we
develop an integrated pipeline that enables automatic local news detection and
content-based local news recommendations. In this paper, we focus on Step 1 of
the pipeline, which highlights: (1) a weakly supervised framework incorporated
with domain knowledge and auto data processing, and (2) scalability to
multi-lingual settings. Compared with Stanford CoreNLP NER model, our pipeline
has higher precision and recall evaluated on a real-world and human-labeled
dataset. This pipeline has the potential to deliver more precise local news to users, helps
local businesses get more exposure, and gives people more information about
their neighborhood safety. | Deven Santosh Shah, Shiying He, Gosuddin Kamaruddin Siddiqi, Radhika Bansal | 2023-01-15T03:20:18Z | http://arxiv.org/abs/2301.08146v3 | # What's happening in your neighborhood? A Weakly Supervised Approach to Detect Local News
###### Abstract
_Local news articles_ are a subset of news that impact users in a geographical area, such as a city, county, or state. Detecting local news (Step 1) and subsequently deciding its geographical location as well as radius of impact (Step 2) are two important steps towards accurate local news recommendation. Naive rule-based methods, such as detecting city names from the news title, tend to give erroneous results due to lack of understanding of the news content. Empowered by the latest development in natural language processing, we develop an integrated pipeline that enables automatic local news detection and content-based local news recommendations. In this paper, we focus on Step 1 of the pipeline, which highlights: (1) a weakly supervised framework incorporated with domain knowledge and auto data processing, and (2) scalability to multi-lingual settings. Compared with the Stanford CoreNLP NER model, our pipeline has higher precision and recall evaluated on a real-world and human-labeled dataset. This pipeline has the potential to deliver more precise local news to users, helps local businesses get more exposure, and gives people more information about their neighborhood safety.
## 1 Introduction
Local news has always been a constant source of interest because it is more relevant to individuals compared with national or international news. People love to remain informed about the news and events happening in and around their neighborhood and to find ways to connect with the community. Detecting this local news and showcasing it to the right audience will help them achieve this. Not only does it benefit local users and news publishers, but showcasing geolocation-specific news articles can also drive user engagement (Robindro et al., 2017) for digital news recommendation products. These local news articles could be of different types, like crime, events, food and drink, healthcare, politics, college-level sports, real estate, etc. Some examples of these types of local news articles are:
* Crime: "San Jose Police arrest 74-year-old Fresno man in connection to homicide"1 Footnote 1: [https://www.cbsnews.com/sanfrancisco/news/san-jose-police-arrest-74-year-old-fresno-man-in-connection-to-homicide](https://www.cbsnews.com/sanfrancisco/news/san-jose-police-arrest-74-year-old-fresno-man-in-connection-to-homicide)
* Food and restaurants: "Carmel's Much-Anticipated New Fine Dining Restaurant Chez Noir Opens Friday"2 Footnote 2: [https://sf.eater.com/2022/10/5/23389267/chez-noir-open-new-carmel-restaurant-jenny-black](https://sf.eater.com/2022/10/5/23389267/chez-noir-open-new-carmel-restaurant-jenny-black)
* Real estate: "See where home prices have been rising the fastest in Washington"3 Footnote 3: [https://www.msn.com/en-us/money/realestate/see-where-home-prices-have-been-rising-the-fastest-in-washington/ss-AA153W3H](https://www.msn.com/en-us/money/realestate/see-where-home-prices-have-been-rising-the-fastest-in-washington/ss-AA153W3H)
* Law: "Legislature must remake water laws for a drier California"4 Footnote 4: [https://calmatters.org/commentary/2022/10/legislature-must-step-up-for-water-rights-of-all-californians/](https://calmatters.org/commentary/2022/10/legislature-must-step-up-for-water-rights-of-all-californians/)
* Sports: "Samson Ebukam's strong second year continues in win over Rams"5
Footnote 5: [https://sports.yahoo.com/samson-ebukam-strong-second-continues-120006770.html](https://sports.yahoo.com/samson-ebukam-strong-second-continues-120006770.html)
Hence, keeping the users informed is a two-step process: _(1) detecting whether an article is a local news article, and (2) determining the geolocation and the impact radius of the local news article, so that we can serve the right news articles to the right audience._ In this paper, we will primarily focus on the former task. We define local news articles as those that impact a specific set of users at the city/county/state level.
Many papers have researched local news. These research papers rely on the geolocation mentioned in the article to serve the local news to their respective users (Tahmasebzadeh et al., 2021; Bell et al., 2015; Robindro et al., 2017; Sankaranarayanan
et al., 2009). However, having a geolocation mentioned in the article doesn't necessarily mean the article is local and impacts the local population. We came across multiple issues with relying on geolocation extraction to treat the article as a local news article. These are:
1. **Articles of National Importance:** We came across multiple news articles in which a geolocation is mentioned but whose impact goes beyond the local population of that geolocation. The relevance of these news articles is indifferent to the location name present in them. For instance: "Ryvid Anthem Launch Edition Electric Bike Preorders Are Now Open"; this article has Irvine, CA mentioned in its body, but the article impacts more than just the local population of Irvine, CA6. Footnote 6: [https://www.rideapart.com/news/604729/ryvid-anthem-launch-preorders-open/](https://www.rideapart.com/news/604729/ryvid-anthem-launch-preorders-open/)
2. **Articles reported from a location:** The geolocation is the location from which the news article is being reported, but the news is not about that location. For instance: "Laboratory to study dark matter opens 1km under Australian town"7 is a science and space related article with Melbourne and Australia in its body and title. "Prince Harry makes surprise visit to Mozambique ahead of trip back to UK" is apparently talking about a celebrity8. These articles will certainly raise interest from broader readers more than from residents of the localized areas mentioned in the context. Therefore, it would be limiting to showcase such news only to the local audience. Footnote 7: [https://www.pressreader.com/usa/the-guardian-us2/20220820/282248079355210](https://www.pressreader.com/usa/the-guardian-us2/20220820/282248079355210)
3. **Difficult to detect geolocation:** We also found multiple news articles in which the geolocation wasn't explicitly present but acronyms of those locations were, for instance: "WWU students receive racist emails encouraging violence against Black students"9, "SPD updates employee policies on tattoos, jewelry, hair styles, gender language"10, with WWU standing for Western Washington University and SPD for Seattle Police Department. Detecting these acronyms and mapping them to the correct location is a difficult task. Techniques (Tahmasebzadeh et al., 2021; Robindro et al., 2017; Bell et al., 2015; Sankaranarayanan et al., 2009) relying on geolocation in the article would fail to showcase these types of local news articles to the right audience. Footnote 9: [https://www.kiro.com/news/local/video-www-students-receive-racist-emails-encouraging-violence-against-black-students/ab9d52e5-75e1-4b13-b8b3-e60c352e404f](https://www.kiro.com/news/local/video-www-students-receive-racist-emails-encouraging-violence-against-black-students/ab9d52e5-75e1-4b13-b8b3-e60c352e404f)
Footnote 10: [https://komonews.com/news/local/spd-seattle-police-department-employee-policy-tattoo-jewelry-hair-style-beard-gender-language-recruiting-application-officer-king-county](https://komonews.com/news/local/spd-seattle-police-department-employee-policy-tattoo-jewelry-hair-style-beard-gender-language-recruiting-application-officer-king-county)
Detecting whether an article is a local news article is not merely an extraction of location names or certain keywords. It needs a comprehensive systematization of the contextual information, a summarization of the article, and then a prediction of whether the news document will attract interest from users with a particular location affinity. It is essential to develop an advanced algorithm to understand human language.
Thus, we propose a solution to train a local news classifier. Our major contributions include: (1) serving local news as a two-step process, (2) a weakly supervised framework to gather weakly supervised data along with click statistics of users to train a deep learning model, and (3) an approach to scale the model to non-English languages.
## 2 Related Work
While this is a first attempt at defining local news and detecting it by developing a deep learning model, other techniques exist that do not precisely differentiate between local and non-local news articles but primarily focus on showcasing news articles based on the geolocation of the user. Tahmasebzadeh et al. (2021); Bell et al. (2015); Robindro et al. (2017) rely on treating news articles with any geolocation information present in them as local news. As we mentioned in section 1, not all geolocation-mentioned news articles are local and have a local impact. These studies are theoretical and not in production to serve geolocation-specific local news. Tahmasebzadeh et al. (2021) proposed using the geolocation and structural type extracted from an image to showcase local news of that area. Formulating their geolocation extraction as a classification task makes it difficult to scale worldwide. Bell et al. (2015) focused on Automatic Speech Recognition (ASR) to convert the audio news from news broadcasting channels to
textual content to showcase to the users. Robindro et al. (2017) proposed a study where showcasing news articles belonging to the same geolocation as the user would drive user engagement.
Sankaranarayanan et al. (2009) proposed using tweets on the Twitter platform to gather breaking news in the area. Unlike Google News, Bing News, and Yahoo! News, they gather breaking news from User Generated Content (UGC). They also proposed an importance score defining a particular news article's importance to a neighborhood. Selecting the users manually wouldn't help scale the system worldwide. This technique was theoretical and not in production.
In the field of journalism, Goncalves et al. (2021); Vaataja et al. (2012) focused on the idea of participatory journalism to help crowdsource local news from the community. They rely on the geolocation extracted from the mobile application as the location of the post/news (Alt et al., 2010). Goncalves et al. (2021) focused on coping with the challenges local journalism is facing in Portugal. The neighborhood's people are encouraged to share photos, videos, and posts about the news, and local journalists pick them up if they seem to concern a critical issue like crime. These techniques require manual intervention to find the needle in a haystack of posts and to figure out the credibility of the posting users, and they would be difficult to scale worldwide.
Kliman-Silver et al. (2015) present a study showcasing the importance of user personalization using Google search results based on the user's geolocation. They show that queries that are indifferent to the location, for instance, "Joe Biden" or "abortion", only show a minor change in the search results for different user locations. However, the largest changes are observed for queries like "Starbucks" and "KFC". This can further be extended to showcase local news articles based on the precise location of the user (Alt et al., 2010).
Detecting local news would be the first step toward user geolocation-based personalization in the news domain, followed by geolocation detection and recognition from the article. In this paper, we primarily focus on the first step, i.e., detecting local news articles. If we fail to detect local news and do not show it to the right audience, the user experience will degrade.
## 3 Methodology
### Problem formulation
As we mentioned earlier, we define serving local news to the users as a two-step process, (1) detecting whether an article is a local news article, (2) determining the geolocation of the article to be served to the right audience. These two tasks form the basis for informing users about their city/county/state. _We define local news as news articles that impact a specific segment of users at the city/county/state level._ Our approach to determining local news is to develop a scalable multi-lingual local news classifier trained on a weakly supervised dataset. We will discuss various techniques used to curate the weakly supervised training dataset in Section 4.
### Model Overview
We trained a binary classification model using the XLM-RoBERTa (XLM-R) (Conneau et al., 2019) model as our base model. We chose XLM-R as it is a transformer-based multi-lingual model with pre-learned multi-lingual associations in 100 different languages, allowing us to quickly transfer the model learned on English-language data to other languages. These multi-lingual associations enable us to scale the classifier to different languages, instead of having a separate local classifier model per language, without losing much of the precision and recall on the English-language data. We fine-tuned XLM-R by attaching convolution filters that capture 2-, 3-, and 4-grams, which are eventually connected to a dense layer to calculate the probability of the article being local news. More details on the model's scalability to other languages can be found in section 4.
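A minimal sketch of this architecture is shown below. This is not the production implementation described in this paper: the number of filters, the max-pooling, and the sigmoid output are assumptions; only the XLM-R base and the 2/3/4-gram convolution filters feeding a dense layer follow the description above.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel, XLMRobertaTokenizerFast

class LocalNewsClassifier(nn.Module):
    """XLM-R encoder + 2/3/4-gram convolution filters + dense layer (binary output)."""

    def __init__(self, n_filters=128, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained("xlm-roberta-base")
        hidden = self.encoder.config.hidden_size            # 768 for the base model
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes)
        self.dense = nn.Linear(n_filters * len(kernel_sizes), 1)

    def forward(self, input_ids, attention_mask):
        # token embeddings: (batch, seq_len, hidden) -> (batch, hidden, seq_len)
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state.transpose(1, 2)
        # n-gram features: convolve, then max-pool over the sequence dimension
        feats = [torch.relu(conv(h)).max(dim=-1).values for conv in self.convs]
        logits = self.dense(torch.cat(feats, dim=-1))
        return torch.sigmoid(logits).squeeze(-1)             # P(article is local)

# usage sketch: topics, tagline, title and URL features concatenated into one input string
tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
batch = tokenizer(["seattle police crime | SPD updates employee policies ... | news local"],
                  padding=True, truncation=True, return_tensors="pt")
p_local = LocalNewsClassifier()(batch["input_ids"], batch["attention_mask"])
```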
### Prediction Algorithm / Features Used
Four different features were used in the binary classification model for training and inference. These are:
* **Topics:** To capture the topics of the article, we ran various off-the-shelf and in-house trained topic models, such as an important-keywords extractor, Term Frequency-Inverse Document Frequency (TF-IDF), and Latent Dirichlet Allocation (LDA).
* **Tag-line:** We use an in-house trained extractive summarization model to extract the tagline from the body of the article.
* **Title:** The title is extracted from the article.
* **URL Features:** Features extracted from the URL proved fruitful in gaining higher precision and recall for the classifier model. The URL is split by '/'. During initial model training and analysis, we found the model was biased towards publisher names instead of basing the decision on the article's content. Hence, we filtered out the publisher name (recognized as the domain name in the URL) and all the numbers present in the URL. Some URLs also contain the title of the article: if at least 80% of the words overlap between the last segment of the URL, which normally contains the title, and the extracted title of the article, we filter out the title segment of the URL (a sketch of this cleaning appears below).
These features are then concatenated and fed into the Binary Local News classification model for training and inference.
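For concreteness, the sketch below shows one way the URL cleaning described above could be implemented; dropping whole numeric path segments and measuring word overlap relative to the URL slug are our own simplifications of the description in the text, and the example URL is hypothetical.

```python
# Sketch of the URL feature cleaning: drop the domain, drop numeric segments,
# and drop the trailing slug when it largely duplicates the article title.
from urllib.parse import urlparse
import re

def url_features(url: str, title: str, overlap_threshold: float = 0.8) -> str:
    parsed = urlparse(url)
    # Split the path by '/' and discard the publisher domain and numeric parts.
    segments = [s for s in parsed.path.split("/") if s and not s.isdigit()]
    if segments:
        slug_words = set(re.split(r"[-_]", segments[-1].lower()))
        title_words = set(title.lower().split())
        if slug_words and len(slug_words & title_words) / len(slug_words) >= overlap_threshold:
            segments = segments[:-1]  # slug repeats the title, drop it
    return " ".join(segments)

# Example (hypothetical URL):
# url_features("https://example-news.com/local/2023/city-council-approves-budget",
#              "City council approves budget")
# -> "local"  (the numeric year and the title-like slug are removed)
```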
## 4 Dataset Preparation
To gather weakly supervised labeled training data, we used multiple weak supervision techniques and various knowledge constraints Shah et al. (2021). The weak supervision techniques are illustrated as follows:
* **Publisher marked local articles:** We used the publisher marked local articles as positives and non-local articles as negatives. However, these labels had a lot of noise. Below are the techniques used to refine the dataset further.
* **Publisher-to-location affinity:** We mined the user click logs and calculated the affinity of the publisher to a location. The steps to calculate the publisher-to-location affinity are (as shown in figure 1):
* Gather all the articles published by the publisher in a time frame.
* Calculate an aggregated click count across all articles published by a publisher, grouped by the cities where they were showcased.
* Pick only those cities which have a number of clicks > 50 and calculate their click distribution.
* Calculate the gap ratio between the values of the city with max distribution and every other city (discussed in Section 4.1).
* Choose cities with a gap ratio < 0.25 as the cities the publisher has an affinity for.
* Classify the publisher as a strong local publisher if the number of chosen cities in the same state is < 10. For instance, KOMO-TV Seattle is a local publisher with an affinity to the cities in King county, Washington, US.
* Classify the publisher as a strong non-local publisher if the chosen cities are across more than two states. For instance, FOX News is a national news agency having an affinity to various cities across different states like New York, California, Michigan, etc.
* Classify the remaining publishers as ambiguous, such as the Associated Press, because they broadcast both local and non-local news articles.
* Label the articles published by strong local and non-local publishers as local and non-local, respectively. After applying these techniques, the dataset will still have noise; for example, local publishers may also cover national news articles and vice versa. However, by combining all the knowledge constraints, the noise can be further reduced. The publisher-to-location affinity was used to train the binary classification model only on the English data, for bootstrapping the data of non-English languages (discussed in Section 4). We also introduced an article-to-location affinity, analogous to the publisher-to-location affinity, to correct the labels and generate an all-language weakly supervised dataset which is eventually used to train a multi-lingual local classifier model.
* **Distant Supervision:** From our raw data, we matched the canonical URL and title of the licensed articles with the URL and the title of the non-licensed articles, respectively.
Figure 1: _Publisher to Location Affinity_: Depicting steps to figure out the publisher to location affinity.
Matching aims to get similar content from non-licensed news and increase the dataset size and diversity, as the features extracted from the non-licensed content would differ.
* **Bootstrapping:** Bootstrapping helped us further reduce the noise in the non-English-language training dataset. We trained a binary local news classifier model on the English-language data; the precision-recall numbers of this model are presented in Table 3. We used the model trained on English to detect whether articles from other languages are local by translating them into English using GPT-3 Brown et al. (2020). If an article from a different language was already marked as local but the newly trained model predicted with a high probability that it is non-local (local probability < 0.2), we corrected the label to non-local, and vice versa.
* **Neural Machine Translations (NMT):** Due to the scarcity of news articles in other languages, we had to increase the dataset size to make the model understand the local characteristics of a particular language and to avoid label bias in the training dataset Shah et al. (2019). To increase the dataset size further for languages like German, Italian, Spanish, Japanese, French, Portuguese, Russian, Chinese, and Korean, we used GPT-3 Brown et al. (2020) for translations. Both front and back translations are used to generate the dataset (as shown in Figure 2):
* **Front translations:** Translate local content from English to the target languages. Front translations helped maintain the precision of the model trained on English-language data.
* **Back translations:** Translate the data from different languages to English and then back to their original language. This keeps the same semantics of the news articles, while the changed words allow the model to expand its vocabulary and improve precision and recall on non-English data.
The resulting noise-reduced weakly supervised dataset was used to train the proposed multi-lingual binary classification model.
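The front/back translation recipe can be summarized in a few lines; in the sketch below, `translate` is only a placeholder for whichever translation backend is used (the paper uses GPT-3), not a real API call.

```python
# Sketch of the two augmentation recipes. `translate` is a placeholder for the
# translation backend (the paper uses GPT-3); it is not a real library call.
def translate(text: str, src: str, tgt: str) -> str:
    raise NotImplementedError("plug in a translation service here")

def front_translate(english_local_articles, target_lang):
    # English local articles -> target language (adds positives for that language).
    return [translate(a, src="en", tgt=target_lang) for a in english_local_articles]

def back_translate(articles, lang):
    # lang -> English -> lang: same semantics, different surface wording.
    return [translate(translate(a, src=lang, tgt="en"), src="en", tgt=lang)
            for a in articles]
```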
### Test Dataset
We manually labeled articles and created a UHRS\({}^{11}\) hitapp to crowdsource the labeling of news articles in both English and non-English languages. The local-news distribution of the test set across different markets is shown in Table 1.
Footnote 11: [https://prod.uhrs.playms.com/UHRS/](https://prod.uhrs.playms.com/UHRS/)
**Gap Ratio:** Gap Ratio is the metric that we utilize to filter out outliers in the data based on their distributions. The steps to calculate the gap ratio are as follows:
1. Calculate the distribution of values for a key/column/index.
2. Calculate the gap between the distributions of a key with the key having the max distribution. We formulated it as: \[x_{gap}=\frac{x-x_{maxDistribution}}{x_{maxDistribution}}\] where \(x\) is the distribution of a key, \(x_{maxDistribution}\) is the distribution value of the key having the maximum distribution share, \(x_{gap}\) is the gap between the distribution of a particular key, and the key having the maximum distribution value.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Language** & **Local news distribution** \\ \hline English & 37.21\% \\ non-English & 46.56\% \\ \hline
**Market** & **Local news distribution** \\ \hline EN-AU & 23.31\% \\ EN-CA & 42.07\% \\ EN-GB & 38.49\% \\ EN-IN & 39.47\% \\ EN-US & 42.22\% \\ DE-DE & 74.46\% \\ ES-MX & 32.74\% \\ ES-US & 51.60\% \\ FR-CA & 79.61\% \\ FR-FR & 47.40\% \\ T-IT & 84.33\% \\ JA-JP & 38.89\% \\ \hline \end{tabular}
\end{table}
Table 1: Local news test set distribution aggregated and market breakdown.
Figure 2: _Neural Machine Translation;_ Introduce diversity in the style of writing news articles in the dataset.
3. Choose those keys whose gap ratio is < 0.25; increasing this threshold will result in capturing more outliers, while decreasing it will make the outlier selection stricter.
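Putting the pieces together, a minimal sketch of the affinity filter is shown below; the click cut-off of 50 and the 0.25 threshold follow the text, while taking the magnitude of the gap (so that the "< 0.25" rule selects cities close to the dominant one) and all naming are our own choices.

```python
# Sketch of the gap-ratio filter used for publisher-to-location affinity.
def affine_cities(city_clicks: dict, min_clicks: int = 50, gap_threshold: float = 0.25):
    # Keep only cities with enough clicks, then normalize to a distribution.
    kept = {c: n for c, n in city_clicks.items() if n > min_clicks}
    total = sum(kept.values())
    dist = {c: n / total for c, n in kept.items()}
    x_max = max(dist.values())
    # Gap ratio of each city relative to the city with the largest share.
    gap = {c: abs(x - x_max) / x_max for c, x in dist.items()}
    return [c for c, g in gap.items() if g < gap_threshold]

# Example: a publisher whose clicks are dominated by one metro area.
clicks = {"Seattle": 900, "Bellevue": 800, "Tacoma": 120, "New York": 60}
print(affine_cities(clicks))  # ['Seattle', 'Bellevue']
```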
## 5 Experiment Details
We trained the model on an Azure compute server with a 2.4GHz CPU and 1 Tesla V100 GPU. We extracted 5.2 million articles across 10 markets and 6 different languages and then split the dataset into 90%-10% as training and validation sets. Table 2 shows the training data distribution across different markets.
We did not perform hyperparameter tuning and used the default parameters to fine-tune the XLM-RoBERTa model Conneau et al. (2019). We used cross-entropy loss as our loss function with the Adam optimizer.
## 6 Results
The model performance on the test set is measured with precision and recall as the metrics. Since we want to improve user engagement in the local news segment, our primary focus is improving precision at the cost of recall. As the baseline, we chose Stanford's CoreNLP NER model Manning et al. (2014), which labels an article as local news whenever a geolocation is present in it. Tables 3 and 4 depict the precision-recall of the baseline and the trained local news classifier model. The performances reported are at a prediction score cut-off of 0.5. From the table, the multi-lingual local classifier model outperforms the NER model on precision. Interestingly, on the non-English segment, the NER model has a slightly better recall. This is explainable: local news articles are supposed to have a geolocation name present in the article, however the inverse isn't true. The presence of a geolocation in the article is therefore a necessary but not a sufficient condition for an article to be classified as local content, as we introduced in Section 1. Therefore, this local classifier model achieves strong performance on precision while keeping a similar recall to the NER model.
The detailed per-market performance is shown in Table 4. The local classifier model trained on the multi-lingual dataset with NMT maintains good performance on local news identification in most of the market segments.
## 7 Conclusion
In this paper, we propose an integrated pipeline for local news recommendation in two steps: (1) detecting local news and (2) determining the geolocation and the impact radius of the article. The first step is the primary focus of this paper. It is achieved by a multi-lingual model incorporating multiple weakly supervised methods. We conducted comprehensive experiments on a real-world dataset of more than 5 million news articles from 6 different languages. The results show that our model is able to detect local news precisely while scaling to multiple languages.
## 8 Limitations
There is good potential to improve the local news classifier for non-English languages. The lack of clean training data prevented us from achieving higher recall in non-English languages. We hope this paper encourages further research on improving the local news classifier in non-English languages to keep people informed.
|
2307.05934 | Sem-CS: Semantic CLIPStyler for Text-Based Image Style Transfer | CLIPStyler demonstrated image style transfer with realistic textures using
only a style text description (instead of requiring a reference style image).
However, the ground semantics of objects in the style transfer output is lost
due to style spill-over on salient and background objects (content mismatch) or
over-stylization. To solve this, we propose Semantic CLIPStyler (Sem-CS), that
performs semantic style transfer. Sem-CS first segments the content image into
salient and non-salient objects and then transfers artistic style based on a
given style text description. The semantic style transfer is achieved using
global foreground loss (for salient objects) and global background loss (for
non-salient objects). Our empirical results, including DISTS, NIMA and user
study scores, show that our proposed framework yields superior qualitative and
quantitative performance. Our code is available at
github.com/chandagrover/sem-cs. | Chanda Grover Kamra, Indra Deep Mastan, Debayan Gupta | 2023-07-12T05:59:42Z | http://arxiv.org/abs/2307.05934v1 | # SEM-CS: Semantic Clipstyler for Text-Based Image Style Transfer
###### Abstract
CLIPStyler demonstrated image style transfer with realistic textures using only a style text description (instead of requiring a reference style image). However, the ground semantics of objects in the style transfer output is lost due to style spill-over on salient and background objects (content mismatch) or over-stylization. To solve this, we propose Semantic CLIP-Styler (Sem-CS), that performs semantic style transfer.
Sem-CS first segments the content image into salient and non-salient objects and then transfers artistic style based on a given style text description. The semantic style transfer is achieved using global foreground loss (for salient objects) and global background loss (for non-salient objects). Our empirical results, including DISTS, NIMA and user study scores, show that our proposed framework yields superior qualitative and quantitative performance. Our code is available at github.com/chandagrover/sem-cs.
Chanda Grover Kamra\({}^{\star}\), Indra Deep Mastan\({}^{\dagger}\), Debayan Gupta\({}^{\star}\) \({}^{\star}\) Ashoka University, India. \({}^{\dagger}\) The LNM Institute of Information Technology (LNMIIT), India.
Object detection, Salient, CLIP, Style Transfer, Semantics
## 1 Introduction
Image style transfer [1, 2, 3, 4, 5, 6, 7] aims to synthesize new images by transferring style features such as colour and texture patterns to the content image. Image style transfer can be classified into photo-realistic style transfer [8, 9] and artistic style transfer [1, 10] based on the input content image and style image. One problem in image style transfer is the fact that a user needs to find a good reference image with the desired style.
Recently, CLIPStyler [11] proposed a novel artistic style transfer approach that uses a text condition to perform style transfer without a reference style image. However, it suffers from the over-styling problem, which results in the distortion of content features in the output image (Fig. 1-first row).
Another challenge in style transfer is when style spillover between dissimilar objects occurs, also known as the content mismatch problem [9] (Fig. 1-first-second row). Content mismatch reduces the visual quality of the style transfer output, and it is hard to avoid when the semantic objects in the style and the content features are of different types and numbers [13, 14]. A good style transfer approach minimizes both content mismatch and over-styling.
Generative Artisan (Gen-Art) [12] addresses the over-styling problem of CLIPStyler [11] through an FCN semantic segmentation network [15]. They control the degree of image style transfer in different semantic chunks. However, their supervised approach to extracting the semantic parts of the content image does not generalize well. _E.g._, their FCN semantic segmentation network only considers 21 classes, which is insufficient to represent real-world images. Also, they do not address content mismatch (see Fig. 1).
In this paper, we propose Semantic CLIPStyler (Sem-CS), which addresses the content mismatch and over-styling problems of text-condition-based style transfer. We use the deep spectral segmentation network [16], which extracts salient and non-salient objects of the content image in an unsupervised manner. As such, our method generalizes well to real-world images.
Sem-CS applies styles on salient or non-salient objects based on the text conditions. The key idea is to perform semantic style transfer using the proposed _global foreground_ and _background loss_. Sem-CS also achieves controllable generation in transferring texture information for multiple text conditions. Our major contributions are as follows:
Figure 1: The figure illustrates over-stylization and the effects of content mismatch on style transfer output. **Top row** CLIPStyler [11] and Gen-Art [12] over-stylize style features on salient objects and image background as the content features of the flower are lost. Sem-CS (ours) preserved the semantics of the flower. **Bottom row** CLIPStyler [11] and Generative Artisan [12] outputs suffer from content mismatch as the Desert Sand style is applied to both man and horse. Sem-CS (ours) performed style transfer while minimizing content mismatch and preserving semantics.
* We propose a novel framework (Sem-CS) to perform style transfer with a text condition (Algorithm 1).
* We propose global foreground and global background loss to supervise style features semantically on the output (Sec. 2).
* We provide a reference-based quality assessment using DISTS [17] as well as a no-reference quality assessment using NIMA [18] to show Sem-CS outperforms baselines (Table 1).
## 2 Our Method
This section describes our framework. It has two major phases: Salient Object Detection and Semantic Style Transfer. We illustrate Sem-CS in Fig. 2 and Algorithm 1 formally describes the proposed framework. The two phases of Sem-CS are described as follows.
_Salient Object Detection:_ In the first phase, we compute the masks for salient objects in the content image; see Algorithm 1, lines 2-4. (The mask for salient objects is computed in an unsupervised setting.) First, we compute the affinity matrix (W) of the content image \(I_{C}\) from the attention block of the last layers of the feature extractor \(\phi\). Second, we find the eigenvectors of the Laplacian of the affinity matrix. Finally, we extract the mask from the eigenvector \(y_{1}\).
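The sketch below illustrates this mask-extraction step; the cosine-similarity affinity, the unnormalized Laplacian, and the zero-thresholding of the second eigenvector are plausible choices following the deep spectral segmentation recipe of [16], not necessarily the exact implementation.

```python
# Sketch of the unsupervised salient-object mask: affinity from deep features,
# graph Laplacian, and a binary mask from the first non-trivial eigenvector.
import torch
import torch.nn.functional as F

def salient_mask(features: torch.Tensor) -> torch.Tensor:
    # features: (N, d) patch features taken from the last attention block of phi.
    feats = F.normalize(features, dim=1)
    W = (feats @ feats.t()).clamp(min=0)        # affinity matrix W
    D = torch.diag(W.sum(dim=1))
    L = D - W                                   # unnormalized graph Laplacian
    _, eigvecs = torch.linalg.eigh(L)           # eigenvalues in ascending order
    y1 = eigvecs[:, 1]                          # eigenvector y_1 (first non-trivial)
    mask = (y1 > 0).float()                     # split the patches into two groups
    if mask.mean() > 0.5:                       # heuristic: salient side is smaller
        mask = 1.0 - mask
    return mask                                 # (N,) mask; reshape to the patch grid
```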
_Semantic Style Transfer:_ In the second phase, we train Sem-StyleNet \(S\) to transfer style features to the salient objects and background objects based on the text conditions (Fig. 2). We use ResNet50 with softmax3d [19] for the image encoder to make the stylized output more robust. We propose _global foreground loss_ and _global background loss_ for style supervision on salient objects and the background of the output
Figure 2: The figure shows Semantic CLIPStyler (Sem-CS) framework. The two phases of Sem-CS are Salient Object Detection and Semantic Style Transfer are shown at the top & bottom. The proposed _global foreground_ and _background loss_ are illustrated in the middle-right.
image, respectively. These are:
_Global Foreground Loss_**.** This ensures that relevant style text applies to the salient objects present in the output. To maintain the diversity of generated stylized outputs, directional CLIP loss [20] is computed instead of global CLIP loss [21] by aligning the CLIP-space direction between the text-image pairs of input and output. Foreground text directional loss (\(\Delta fg_{T}\)) is defined to be the difference between source text embedding (\(t_{src}\)) and foreground style text embedding (\(t_{fg}\)) as described in Eq. 1.
\[\Delta fg_{T}=E_{T}(t_{fg})-E_{T}(t_{src}) \tag{1}\]
Here, \(E_{T}\) is the CLIP text-encoder and \(t_{src}\) is set to "Photo". Foreground image directional loss (\(\Delta fg_{I}\)) is computed between embeddings of salient objects and style transfer output. Given the content image \(I_{C}\) and \(Mask\), Hadamard product \(\odot\) is computed between \(Mask\) and \(S(I_{C})\) to extract features for salient objects as \(I_{fg}=Mask\odot S(I_{C})\). Next, \(\Delta fg_{I}\) is computed as described in Eq. 2.
\[\Delta fg_{I}=E_{I}(I_{fg})-E_{I}(I_{C}) \tag{2}\]
\(E_{I}\) is the CLIP image encoder. Finally, Global foreground loss (\(\mathcal{L}_{FGlob}\)) is computed by taking cosine similarity between CLIP-Space direction of the foreground of image and style texts (Eq. 3).
\[\mathcal{L}_{FGlob}=1-\frac{\Delta fg_{I}.\Delta fg_{T}}{|\Delta fg_{I}|| \Delta fg_{T}|} \tag{3}\]
Here, one minus the cosine similarity represents the distance between image and text directional loss. In other words, the global foreground loss minimizes the distance between the image direction loss and text direction loss for salient objects.
_Global Background Loss_**.** This is computed for style feature supervision of the output image background. Similar to global foreground loss, we compute background text directional loss (\(\Delta bg_{T}\)) for style background as given in Eq. 4.
\[\Delta bg_{T}=E_{T}(t_{bg})-E_{T}(t_{src}) \tag{4}\]
Here, \(t_{bg}\) is the style text condition for the background. Also, background image directional loss \(\Delta bg_{I}\) is computed as shown in Eq. 5. We take Hadamard product between the background mask and generated image \(I_{bg}=(1-Mask)\odot I_{O}\) to extract background features. Next, \(\Delta bg_{I}\) is computed as below in Eq. 5
\[\Delta bg_{I}=E_{I}(I_{bg})-E_{I}(I_{C}) \tag{5}\]
Finally, global background loss \(\mathcal{L}_{BGlob}\) is computed to minimize the distance between image and text directional losses for background objects as described in Eq. 6.
\[\mathcal{L}_{BGlob}=1-\frac{\Delta bg_{I}.\Delta bg_{T}}{|\Delta bg_{I}|| \Delta bg_{T}|} \tag{6}\]
Here, global background loss \(\mathcal{L}_{BGlob}\) helps to perform controllable style transfer for background objects in the style transfer outputs.
_Other Loss_**.** We also add content loss and a total variation regularization loss to our proposed loss for style transfer [11].
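As a sketch of how the two directional losses (Eqs. 1-6) could be assembled, the snippet below treats the CLIP image and text encoders as black-box callables; only the loss arithmetic follows the equations, and the interface is a schematic assumption rather than the paper's code.

```python
# Sketch of the global foreground/background losses (Eqs. 1-6). `encode_image`
# and `encode_text` are schematic stand-ins for a CLIP encoder pair.
import torch
import torch.nn.functional as F

def directional_loss(img_src, img_out, txt_src_emb, txt_style_emb, encode_image):
    delta_T = txt_style_emb - txt_src_emb                    # text direction
    delta_I = encode_image(img_out) - encode_image(img_src)  # image direction
    return 1.0 - F.cosine_similarity(delta_I, delta_T, dim=-1).mean()

def sem_cs_losses(content, output, mask, encode_image, encode_text,
                  fg_text, bg_text, src_text="Photo"):
    t_src = encode_text(src_text)
    fg = mask * output                 # salient-object pixels of the stylized image
    bg = (1.0 - mask) * output         # background pixels
    loss_fglob = directional_loss(content, fg, t_src, encode_text(fg_text), encode_image)
    loss_bglob = directional_loss(content, bg, t_src, encode_text(bg_text), encode_image)
    return loss_fglob, loss_bglob
```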
## 3 Experimental Results
Fig. 3 shows that Sem-CS preserves the semantics of objects in output images while minimizing over-stylization and content mismatch.
Figure 3: The figure shows the visual comparison of style transfer outputs with single text condition a) The text-based image style transfer outputs on the left shows that CLIPStyle[11] and Gen-Art[12] suffers from over-stylization. Sem-CS (ours) reduces the effects of over-stylization. b) Similarly, the outputs on the right shows that baseline methods suffers from content mismatch problem. Sem-CS (ours) reduces content mismatch problem (images are best viewed after zooming).
For example, consider the first row on the left side of Fig. 3. It can be observed that the CLIPStyler [11] and Generative Artisan [12] outputs are over-stylized (the "_Acrylic_" style spills both below the bridge and onto the sky), and the content features of the water are lost. Sem-CS (ours) preserves the semantics of the bridge. Similarly, in the first row on the right side, the CLIPStyler [11] and Generative Artisan [12] outputs suffer from content mismatch as the _"Snowy"_ style is applied to both the bicycle and the background. Sem-CS performs style transfer while minimizing the content mismatch effects of the _"Snowy"_ style feature.
We evaluated the Sem-CS framework with DISTS [17], NIMA [18], and a user study (Table 1). We describe the quantitative results as follows.
_DISTS [17] Scores._ DISTS [17] is a reference-based image quality assessment that measures how well object structure is preserved in the presence of texture transfer in the stylized output; since DISTS [17] may not capture all aspects of style transfer quality, such as semantic coherence, we add NIMA [18] scores to support it and also conduct a user study (Table 1).
_NIMA [18] Scores._ NIMA [18] is a no-reference-based image quality metric that predicts the quality of distribution ratings with a significant correlation to ground truth ratings. Table 1 reports the average scores of top-100 output images.
_User Study._ We conducted a user study to validate preserved semantics of objects while transferring the style texts onto the content image (Table 1). We randomly sampled 5 groups of 15 images from the outputs produced above, with ten images from single-style text and five images from double-style text stylized outputs. All 5 \(\times\) 15 stylized outputs were distributed anonymously and randomly to 40 participants. They were asked to observe the stylized results from different methods and vote for the image that looks better in quality and matches the style text. Table 1 shows the percentage vote for each method. Sem-CS outperforms baseline methods.
Overall, we find that Sem-CS scores are higher than those of the baseline methods CLIPStyler [11] and Generative Artisan [12]. This justifies that adding global foreground and background losses improves the image quality of the stylized output. Sem-CS minimizes content mismatch and prevents distortion of objects present in the output image when supervising style features.
_Ablation Studies._ Fig. 4 illustrates ablation studies for style transfer using the double style-text condition. Double style texts are challenging because style supervision is required for both the salient objects and the background of the image; therefore, they demand more controllable generation capabilities for style transfer. We evaluated the Sem-CS framework for double style texts with NIMA [18] and DISTS [17] scores on 100 stylized outputs. Table 2 shows that Sem-CS outperforms Generative Artisan [12]. Also, note that the user study scores of the style transfer outputs for the double text condition are higher for Sem-CS.
## 4 Conclusion
We proposed Semantic CLIPStyler (Sem-CS) to preserve the semantics of objects and prevent over-stylization when performing text-based image style transfer. We showed that style transfer can be done semantically by training the StyleNet with the proposed global background and foreground losses. Our quantitative and qualitative experimental results showed that Sem-CS achieves superior stylized output with text descriptions. The scope of future work extends to applying different text conditions to more than one object present in the content image. For this, we aim to improve the segmentation mask of the content image.
|
2301.10709 | The Clinical Trials Puzzle: How Network Effects Limit Drug Discovery | The depth of knowledge offered by post-genomic medicine has carried the
promise of new drugs, and cures for multiple diseases. To explore the degree to
which this capability has materialized, we extract meta-data from 356,403
clinical trials spanning four decades, aiming to offer mechanistic insights
into the innovation practices in drug discovery. We find that convention
dominates over innovation, as over 96% of the recorded trials focus on
previously tested drug targets, and the tested drugs target only 12% of the
human interactome. If current patterns persist, it would take 170 years to
target all druggable proteins. We uncover two network-based fundamental
mechanisms that currently limit target discovery: preferential attachment,
leading to the repeated exploration of previously targeted proteins; and local
network effects, limiting exploration to proteins interacting with highly
explored proteins. We build on these insights to develop a quantitative
network-based model of drug discovery. We demonstrate that the model is able to
accurately recreate the exploration patterns observed in clinical trials. Most
importantly, we show that a network-based search strategy can widen the scope
of drug discovery by guiding exploration to novel proteins that are part of
under explored regions in the human interactome. | Kishore Vasan, Deisy Gysi, Albert-Laszlo Barabasi | 2023-01-25T17:21:35Z | http://arxiv.org/abs/2301.10709v1 | # The Clinical Trials Puzzle: How Network Effects Limit Drug Discovery
###### Abstract
The depth of knowledge offered by post-genomic medicine has carried the promise of new drugs, and cures for multiple diseases. To explore the degree to which this capability has materialized, we extract meta-data from 356,403 clinical trials spanning four decades, aiming to offer mechanistic insights into the innovation practices in drug discovery. We find that convention dominates over innovation, as over 96% of the recorded trials focus on previously tested drug targets, and the tested drugs target only 12% of the human interactome. If current patterns persist, it would take 170 years to target all druggable proteins. We uncover two network-based fundamental mechanisms that currently limit target discovery: _preferential attachment_, leading to the repeated exploration of previously targeted proteins; and _local network effects_, limiting exploration to proteins interacting with highly explored proteins. We build on these insights to develop a quantitative network-based model of drug discovery. We demonstrate that the model is able to accurately recreate the exploration patterns observed in clinical trials. Most importantly, we show that a network-based search strategy can widen the scope of drug discovery by guiding exploration to novel proteins that are part of under explored regions in the human interactome.
## Introduction
Prior to receiving approval by the Food and Drug Administration (FDA), a new drug must complete multiple phases of clinical trials to prove its efficacy and safety. The complete clinical trials pipeline for a single drug, from early safety testing to trials on large populations, takes on average six years [1], and is estimated to cost about $1 billion USD [2]. In 2007, the FDA Act [3] required funders to publicly post clinical trial designs and results to an online repository managed by the National Library of Medicine (NLM), increasing transparency in the drug discovery process [4]. Despite well-documented compliance issues on reporting the results [5, 6, 7], the accumulated data offers a unique lens into the drug innovation practices [8], and has allowed researchers to conduct meta-analyses on disease specific trials [9, 10], obtain key insights into equity for patients with rare diseases [11, 12], and unveil systemic biases in patient demographics [13, 14].
The choices in clinical trials, from designing the trial protocol to selecting the patient population to testing drugs for specific diseases, have direct implications for the efficacy and equity of drugs that enter the market. While advances in genomics, machine learning [15, 16], network medicine [17, 18], and pharmacology [19] present novel opportunities for drug discovery, potentially reducing the cost and time of conducting exhaustive experimental testing [20], they may be inadequate if the discovered knowledge about drug candidates (_in-silico_) is not actively transferred to applied settings (_in-vitro_), and make their way into clinical practice. Therefore, understanding the drug exploration patterns documented by clinical trials is important to improve population health [21, 22].
In this work, we offer a large-scale temporal analysis of the trajectories of drugs and their targets through clinical trials by exploring the cumulative knowledge of the clinical trials database. By combining data from various sources, including investigational and approved drugs, rare and common diseases, and proteins and their disease associations, we aim to understand the factors driving the discovery and exploration of new drugs
and targets. We find that while the number of clinical trials continues to increase, the rate of novel drugs entering clinical trials has decreased since 2001, a puzzling effect potentially indicating a drug discovery winter. We also find that target selection is primarily driven by two distinct network-based mechanisms, preferential attachment and local network effects, leading to the over exploration of certain drugs and protein targets. Our results illustrate that we currently fail to utilize the complete therapeutic potential of the human genome, prompting us to offer a data-driven pathway to unlock its potential through the human interactome, which captures the physical interaction between targets. We build a quantitative model of drug discovery that helps unveil network effects capable of boosting the identification of novel targets.
## Results
### Curating clinical trials and drugs
We extracted the clinical trials data from the publicly available clinical trials portal ([https://clinicaltrials.gov](https://clinicaltrials.gov)), documenting 356,403 trials from 1975 to 2020. We observe a rapid growth in the number of reported drug trials before the 2007 activation date of the FDA amendment that required all funders to publicly disclose all active clinical trials by that year (Fig 1 A, vertical line), likely reflecting the sudden registration of all ongoing trials. Following 2007, an organic growth sets in, indicating compliance with public reporting of new trials.
We conducted a multi-step data standardization process to disambiguate drug names listed on trials (Supplementary Section 1), enabling the identification of 5,694 drugs used in 127,432 trials (89% of drug trials). A drug is designed to bind to specific proteins in the human interactome, known as primary drug targets, responsible for the desired therapeutic effect. In some cases, drugs can also indirectly bind to other proteins, referred to as secondary drug targets. Of the 5,694 identified drugs, 2,528 (44%) drugs have associations to 2,726 drug targets (both primary and secondary) and 1,442 (25%) drugs have associations
to 1,842 primary targets. We consider both primary and secondary targets, but we find that our results apply even when we limit our focus on primary targets only (Supplementary Section 1.2).
Clinical trials are divided into several phases[23]. The pre-clinical stage (Phase 0 or early Phase 1) involves small dosage of a drug on a few people for a short duration to measure treatment response, corresponding to 1,880 (1.5%) trials in our data. Phase 1 is the first full-scale human trial that includes close monitoring of treatment on a small number of patients, representing 26,207 trials (18%). Phase 2 requires 25 to 100 patients with a specific disease condition to test for drug efficacy, representing 37,784 (26%) trials. Phase 3 usually involves several hundred patients, where the experimental drugs are tested alongside other drugs to compare side effects and drug efficacy, representing 24,896 (17%) trials. Finally, Phase 4 often involves thousands of patients, aiming to gain additional knowledge on drug safety over time, interaction with various diseases, and consists of 21,632 (15%) trials. Some trials combined multiple phases such as Phase 1/ Phase 2, Phase 2/ Phase 3, together representing 11,381 (8%) trials in our database. Here, we focus only on drug trials in Phases 1 to 4, representing in total 110,519 (76%) trials (Fig 1 B highlighted), and disregard 19,718 (13%) trials without phase information (Fig 1 A, gray). Clinical trials can test multiple types of interventions, from drugs to medical devices to behavioral studies. Drugs, the most widely tested intervention, represent 40% of all trials, followed by medical devices (10%) and behavioral interventions (10%) (Fig 1 C).
**Drug discovery winter**
The Human Genome Project (HGP), lasting from 1990 to 2001, boosted innovation and drug exploration[24], as in this decade clinical trials tested 768 (30% of all) new drugs and 1,149 (42% of all) new targets (Fig 2 A shaded). Yet, beginning 2001, the exploration of new drugs has reduced. For example, between 2011 to 2020, clinical trials tested only 339 (13%) new drugs and 662 (24%) new targets (Fig 2 A, Supplementary Fig S8), which, on average, corresponds to 33 new drugs and 24 new targets yearly,
considerably lower than the 99 drugs and 113 targets tested yearly in the early 2000s. Further, of the 339 new drugs that entered clinical trials, only 88 (25%) drugs have novel targets, i.e., they target previously untargeted proteins (Fig 2 A, bottom). This indicates a drug discovery winter that started around 2001, characterized by a large number of clinical trials that focus mainly on drugs targeting proteins already targeted by other previously tested or approved drugs.
Throughout the history of clinical trials, 956 drugs (17% of all), involving 1,340 targets (49% of all), have been approved by the FDA (Fig 2 A inset). Yet, only 342 (35%) approved drugs test novel targets, indicating that drugs with established targets are more likely to receive approval [25]. Although 1,449 (70%) drugs and 2,076 (81%) targets have reached Phase 4, only 40% of those drugs and 51% of those targets in Phase 4 received approval (Fig 2 B). We also find that, on average, a drug experiences a 3-year lag between successfully completing Phase 3 clinical trials and receiving approval, capturing the slow approval period despite standard clinical development times [26] (Supplementary Fig S12). Taken together, we find that clinical trials have tested only 12% of all human proteins and 22% of all druggable proteins [27] (Fig 2 C). We estimate that if the current exploration patterns persist, exploration will likely reach 2,477 (13% of all) proteins by 2025, and at this rate it would take 170 years to test all 10,648 druggable proteins (Supplementary Section 2).
**Previously tested proteins are repeatedly selected for future trials**
Clinical trials tend to focus on a small number of previously tested proteins, leading to an uneven approach to drug discovery (Fig 2, Supplementary Fig S9, Supplementary Section 3). For example, we find that CYP3A4, ABCB1, ABCC2, and SLCO1A2, proteins associated with drug metabolism and transport [28], are involved in 72,884 (66% of all) trials, while EGFR, TNF, and TP53, proteins associated with autoimmune diseases and several neoplasms, are involved in 8,396 (8% of all) trials (Fig 2 D). Similarly, we find lidocaine and levomenthol, drugs that serve as anesthetics, to be over-represented in trials (Fig 2 E).
The COVID-19 pandemic also had a detectable impact on trial activity: hydroxychloroquine, a dormant drug with only a few clinical trials over the preceding decade, experienced a rapid increase in the number of trials in 2020 [29] (Fig 2 E).
A consequence of this uneven drug-target exploration is that only a small number of trials focus on new targets, new drugs, and new target combinations (Fig 3 A-C). The majority of the trials (50%) involve only previously approved drugs, while 11% of the trials test a combination of approved and experimental drugs (Fig 3 D; Supplementary Section 4). Seeking to find the patterns responsible for this over-exploration of previously targeted proteins, we measured to what degree targets that received more attention in the past are tested in subsequent years. We find that the number of drugs that target a specific protein, \(N_{drug}(t)\), is well approximated by a growth rate following \(N_{drug}(t)\propto N_{drug}^{\gamma}(t-1)\), where \(\gamma\) is a scaling exponent (Supplementary Fig S10; \(\gamma_{2000}=1.2\), \(\gamma_{2010}=1.1\), \(\gamma_{2020}=0.9\)). This pattern, known as preferential attachment, is responsible for the emergence of network hubs in network science[30, 31] and quantifies the degree to which previously tested proteins have a cumulative advantage over other proteins (Supplementary Section 5).
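Such a growth exponent can be estimated with a log-log least-squares fit; the sketch below assumes per-protein drug counts in consecutive years have already been tabulated, and the numbers shown are synthetic rather than taken from the study.

```python
# Sketch: estimate the preferential-attachment exponent gamma from
# N_drug(t) ~ N_drug(t-1)^gamma via a log-log least-squares fit.
import numpy as np

def fit_gamma(n_prev: np.ndarray, n_curr: np.ndarray) -> float:
    # Keep proteins with at least one drug in both years (log is defined).
    keep = (n_prev > 0) & (n_curr > 0)
    gamma, _intercept = np.polyfit(np.log(n_prev[keep]), np.log(n_curr[keep]), deg=1)
    return gamma

# Illustrative (synthetic) counts of drugs per protein in year t-1 and year t.
n_prev = np.array([1, 2, 3, 5, 8, 13, 21, 40])
n_curr = np.array([1, 2, 4, 6, 9, 15, 22, 45])
print(round(fit_gamma(n_prev, n_curr), 2))  # close to 1, i.e. near-linear attachment
```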
**The role of human interactome in drug exploration**
Some diseases can be treated by inhibiting the disease-associated proteins, but most often the effective drugs target proteins that are in the network vicinity of known disease proteins[32]. Indeed, most drugs act by perturbing the activity of the sub-cellular web known as the human interactome[33], captured by experimentally detected Protein-Protein Interactions (PPI) (Fig 4 A). This prompts us to ask: can we take advantage of the interactome to better understand the patterns characterizing target discovery and exploration? To answer this question, we first mapped the 2,726 drug targets explored in clinical trials onto the interactome, finding that 1,260 (92% of all) experimental drugs target at least one protein that has been previously targeted by another approved drug, in line with Figs 2 and 3. However, when focusing on the
proteins not targeted by previously approved drugs, we find that 891 (76%) of them interact with at least one protein that is targeted by an approved drug, while 274 (23%) are two steps away from the target of an approved drug. This local network-based clustering of experimental and approved drugs is absent if we randomly select the drug targets (Supplementary Section 6.1).
We also find that proteins located farther from approved and experimental targets are rarely selected as a drug-target (Fig 4 A), even if they have multiple disease associations and are known to be druggable. In other words, we find a strong preference for targeting proteins that are embedded in local network neighborhoods with multiple explored targets (Supplementary Fig S19). This means that a protein that interacts with other proteins that are the subject of multiple clinical trials for experimental or approved drugs is more likely to be selected as a new drug-target compared to a protein located in an unexplored network neighborhood. This suggests that the protein-protein interaction network captures and potentially drives drug discovery and exploration[34].
To unlock the impact of the observed network effects, we examine the likelihood of a protein to be selected as a drug-target in a future clinical trial using a Generalized Linear Mixed Model (GLMM). The GLMM model considers as input four features of each target: (1) disease associations, (2) number of approved drugs targeting it, (3) number of clinical trials it was involved in, and (4) number of experimental drugs targeting it. As an output, it offers several insights on the mechanisms governing new drug-target exploration (Fig 4 B; Supplementary Table S3):
1. Disease associated proteins are two times more likely to be in a clinical trial compared to proteins with no disease associations (OR: 2.2 [CI:1.6, 3.2], p<0.05).
2. Proteins experience increased likelihood of becoming the target of a new drug when they are already targeted by multiple approved drugs (OR: 3.7 [CI: 3.6, 3.9], p<0.01), multiple experimental drugs (OR: 2.7 [CI: 2.6, 2.8], p<0.01), or are the subject of multiple trials (OR: 1.47 [CI: 1.45, 1.49],
p<0.01).
3. Previously untargeted proteins are more likely to be selected if they interact with proteins associated with multiple approved drugs (OR: 1.01 [CI: 0.99, 1.04]), multiple trials (OR: 1.03 [CI: 1.01, 1.04], p<0.01), or multiple experimental drugs (OR: 1.05 [CI: 1.03, 1.07], p<0.01).
These findings establish two fundamental mechanisms that drive drug exploration:
(i) _Preferential attachment:_ The future attractiveness of a protein as a drug candidate increases as more drugs target it and more trials focus on it (increased clinical exposure). For example, for a protein that is already targeted by ten drugs, its odds of being the target of a new drug increases eight-fold, compared to a protein not targeted by a drug.
(ii) _Local network effects_: Previously untargeted proteins located in network neighborhoods with high exploration patterns (containing multiple drug targets and clinical trials) are more likely to be selected as new drug target compared to proteins located in network neighborhoods with fewer clinical trials and drugs.
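As a simplified illustration of how such odds ratios are obtained, the sketch below fits a plain fixed-effects logistic regression (ignoring the random-effect structure of the actual GLMM) to synthetic data; the feature names, coefficients, and data frame are illustrative only.

```python
# Sketch: odds ratios from a (fixed-effects) logistic regression, a simplified
# stand-in for the GLMM described above. The data frame here is synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "n_disease_assoc": rng.poisson(1.0, 500),
    "n_approved_drugs": rng.poisson(0.5, 500),
    "n_trials": rng.poisson(2.0, 500),
})
# Synthetic outcome: targeted next year, more likely for well-explored proteins.
logit = -2.0 + 0.8 * df["n_approved_drugs"] + 0.3 * df["n_trials"]
df["targeted_next_year"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["n_disease_assoc", "n_approved_drugs", "n_trials"]])
fit = sm.Logit(df["targeted_next_year"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratio per unit increase of each feature
```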
**Modeling choices in drug discovery**
We build on the insights (i) and (ii) to introduce a network model that aims to quantitatively recreate the observed patterns in drug exploration (Supplementary Section 7), and helps us understand how to accelerate drug discovery by exploring a wider set of druggable candidates. We begin by creating a timeline of drug discovery, accounting for the precise dates when targets became associated with drugs (Fig 5 A). Using the proteins (nodes) and its interactions (links) in the PPI network as the underlying space of possible exploration, we model drug discovery through two parameters: The parameter \(p\) represents the probability that a previously tested protein is selected again for clinical trials. Hence, for \(p=0\), we model the scenario where we always choose untargeted proteins, while for \(p=1\) we always select previously tested proteins as targets. The second parameter, \(q\), represents the probability that we choose an untargeted
protein that is part of an explored neighborhood, driven by local network search (Fig 5 B). Hence, for \(q=0\), we always select proteins from unexplored neighborhoods, while for \(q=1\) we select proteins from previously explored neighborhoods. Finally, to account for preferential attachment in target selection, a previously tested protein is selected again as a target proportionally to the number of drugs that have targeted it in the previous years, \(P(N_{drug}(t))\propto P(N_{drug}(t-1))\) (Supplementary Fig S10).
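A minimal simulation of this two-parameter search process might look like the following; the random graph standing in for the interactome, the per-step drug budget, and the bookkeeping are placeholders for illustration only.

```python
# Sketch of the (p, q) target-selection model on a PPI network. The random
# graph and the per-step budget are placeholders for the real interactome data.
import random
import networkx as nx

def simulate(ppi: nx.Graph, p: float, q: float, steps: int, seed: int = 0):
    rng = random.Random(seed)
    drugs_per_target = {}                       # protein -> number of drugs so far
    nodes = list(ppi.nodes)
    drugs_per_target[rng.choice(nodes)] = 1     # seed the process with one target
    for _ in range(steps):
        explored = list(drugs_per_target)
        if rng.random() < p:
            # Preferential attachment: re-target in proportion to past drugs.
            target = rng.choices(explored, weights=[drugs_per_target[v] for v in explored])[0]
        else:
            neighbors = {u for v in explored for u in ppi.neighbors(v)} - set(explored)
            untargeted = [v for v in nodes if v not in drugs_per_target]
            if rng.random() < q and neighbors:
                target = rng.choice(sorted(neighbors))   # local network search
            else:
                target = rng.choice(untargeted)          # jump to an unexplored region
        drugs_per_target[target] = drugs_per_target.get(target, 0) + 1
    return drugs_per_target

ppi = nx.barabasi_albert_graph(2000, 3)         # stand-in for the human interactome
counts = simulate(ppi, p=0.82, q=0.88, steps=1000)
print(max(counts.values()), len(counts))        # most-targeted protein, number of targets
```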
The advantage of the proposed model is that we can explicitly extract the parameters \(p\) and \(q\) from the clinical trials data (Fig 5 C). For example, in 2010, 295 proteins were tested in clinical trials, of which 244 (82%) were tested in previous clinical trials, and we find that of the 51 previously untargeted proteins, 45 (88%) interact with a previously tested protein, hence \(p=0.82\) and \(q=0.88\). We find that the empirically obtained (\(p\),\(q\)) parameters are remarkably stable over time, indicating that previously tested proteins are in each year preferred at high rates (\(p_{2010}=0.82\), \(p_{2015}=0.78\), \(p_{2020}=0.91\); Supplementary Fig S18). We also find that among the untargeted proteins, those interacting with other previously tested proteins are more likely to be selected (\(q_{2010}=0.88\), \(q_{2015}=0.92\), \(q_{2020}=0.87\)), allowing us to quantify the stable patterns characterizing drug discovery (Supplementary Fig S19). As Fig 5 C shows, the empirically observed patterns are stable in the high (\(p\), \(q\)) regime, with a slight shift over time to higher values of \(p\) and \(q\), confirming an increasing trend to explore previously tested targets.
We find that for the observed (\(p^{*}\), \(q^{*}\)) values, the network model accurately reproduces the distribution of number of drugs per target (Fig 5 D; KS-distance: 0.06; p<0.01). The model also allows us to test the relative importance of its building blocks. For example, if we remove the preferential selection of targets, the model fails to capture the drug exploration patterns (Supplementary Fig S23), confirming that preferential attachment (PA) is a key ingredient of the current drug exploration strategy. The model also unveils the imperfections of the current target selection patterns: the PA strategy, which redirects attention and resources to previously tested proteins, only tests 21 new targets yearly on average. As a consequence,
the same protein is explored as a target for a total of 175 (17% of all) drugs (\(GINI=0.65\)), acting as a hub of drug discovery (Supplementary Section 7.1). Overall, the current strategy, by repeatedly targeting previously tested targets, fails to take advantage of the broader potential of the interactome to unveil potential novel targets. To validate the model, we quantified its ability to predict drug candidates for three autoimmune diseases - Rheumatoid Arthritis (RA), Crohn's Disease (CD), and Asthma (Supplementary Section 7.2). We find that the model accurately predicted novel candidates for these diseases with 70% accuracy (Supplementary Fig S25). Further, we validated the predicted proteins through an extensive literature search, finding them to be biologically relevant (Supplementary Table S5). For example, the model identified protein _NLRP3_ as a potential drug candidate for RA, which has been shown to reduce RA-induced inflammation in animal models[35]. These results demonstrate that a network strategy can be a useful mechanism to drive exploration towards proteins in druggable parts of the network.
Finally, we want to exploit the predictive power of the network model to explore how to incentivize a wider exploration of the human interactome for potential targets. For this, we examine two alternative exploration strategies: (i) the Random (R) strategy, where the newly tested proteins are randomly selected (\(p=0.5\)); (ii) the Network Search (NS) strategy, where untargeted proteins interacting with previously targeted proteins are preferred (\(p=0.05\)). In each case we keep \(q=0.95\), as indicated by the empirical data.
We find that the random (R) strategy selects more drug targets than currently tested (as captured by the PA strategy) (2,655 vs 1,121), offering an opportunity to deviate from the current distribution of number of drugs per target (Fig 5 E, KS-distance: 0.22, p\(<\)0.01). Despite the randomness of the strategy, the same protein is selected as a target for 110 (11% of all) drugs (\(GINI=0.35\)), indicating that the R strategy also focuses repeatedly on a few network hubs, a pattern similar to the one observed in the PA strategy (175). Overall, the R strategy tests more targets than PA but still results in an over-exploration of a few proteins, and hence offers minimal improvements compared to PA (Supplementary Fig S24).
In contrast, we find that the network search (NS) strategy generates statistically different distribution of number of drugs per target (Fig 5 F; KS-distance: 0.37; p\(<\)0.01). Most importantly, the strategy selected 4,055 targets, a three-fold increase in the number of selected targets compared to the PA strategy (1,121). Of those 4,055, we find that 3,922 (96%) are new targets. Further, the NS strategy selects the same protein as a target for a maximum of 10 (1% of all) drugs (\(GINI:0.06\)), significantly lower compared to the R (110) or PA (175) strategies.
Overall, our results indicate that the current practice (PA) is inefficient in terms of exploring the human interactome, focusing most resources on a small number of highly explored protein targets. In contrast, a network search approach can improve the total number of tested targets by preventing the emergence of protein hubs in drug discovery and also attract attention to potential drug candidates, ultimately resulting in a wider exploration of the human interactome. These results suggest that policy changes, such as prioritizing the approval of drugs with novel targets or targeted funding from the National Institutes of Health (NIH) towards the exploration of novel targets, could significantly enhance drug discovery by re-focusing resources on a wider range of novel targets while maintaining accuracy.
## Discussion
A scientist's choice of an idea to pursue is influenced by a combination of the project's novelty and its potential research impact[36, 37]. Similarly, a pharmaceutical company's choice of a target for a new drug is influenced by its potential market value and the likelihood that the drug succeeds in clinical trials[38]. However, the high attrition rates of drugs in clinical trials[39], difficulties with patent licensing[40], and the growing cost of developing new molecules[41] have led to a risk-averse approach to drug discovery characterized by 'small bets, big wins' [25]. While this strategy, resulting in the creation of multiple drugs within the same therapeutic class[42], increases competition and reduces drug prices[43, 44], it takes away
resources from the exploration of novel drugs and targets [45], encouraging incremental innovation and hindering progress for population health.
Our analysis of clinical trials data shows that the highest growth in drug exploration was between 1990 and 2001, likely driven by the advent of the Human Genome Project (HGP). However, in the following two decades, there was a decrease in the incentive to test novel drugs, and a disproportionate focus on approved drugs (61% of all trials). This allocation of resources ultimately slows the discovery of novel therapies. Further, drug discovery in clinical trials often prioritize previously tested proteins (preferential attachment) and proteins connected to previously tested proteins (network effect), neglecting proteins in under-explored regions of the network, even if they have disease associations and are verified as druggable targets. To optimize target exploration in druggable regions of the network and improve the number of tested targets, it may be beneficial to reduce the emphasis on previously tested proteins and adopt a network-based search for drug candidates.
Our proposed modeling approach offers a framework for economists, policymakers, and medical researchers seeking to optimize choices in drug discovery, particularly in situations with limited resources. The introduced drug discovery model could be extended to incorporate the exploration benefits of both successful and failed trials, results that are currently not systematically reported by pharmaceutical companies [5, 6, 7]. Future work, ensuring data transparency, could incorporate multiple parameters on clinical trials, including de-identified information on trial participants to better inform drug discovery strategies. Optimizing the search strategy for drugs can help to maximize the potential of new drugs by targeting novel proteins within the human interactome. |
2307.00694 | Concentrating Dirac Operators and Generalized Seiberg-Witten Equations | This article studies a class of Dirac operators of the form $D_\varepsilon=
D+\varepsilon^{-1}\mathcal A$, where $\mathcal A$ is a zeroth order
perturbation vanishing on a subbundle. When $\mathcal A$ satisfies certain
additional assumptions, solutions of the Dirac equation have a concentration
property in the limit $\varepsilon\to 0$: components of the solution orthogonal
to $\ker(\mathcal A)$ decay exponentially away from the locus $\mathcal Z$
where the rank of $\ker(\mathcal A)$ jumps up. These results are extended to a
class of non-linear Dirac equations. This framework is then applied to study
the compactness properties of moduli spaces of solutions to generalized
Seiberg-Witten equations. In particular, it is shown that for sequences of
solutions which converge weakly to a $\mathbb Z_2$-harmonic spinor, certain
components of the solutions concentrate exponentially around the singular set
of the $\mathbb Z_2$-harmonic spinor. Using these results, the weak convergence
to $\mathbb Z_2$-harmonic spinors proved in existing convergence theorems is
improved to $C^\infty_{loc}$. | Gregory J. Parker | 2023-07-03T00:45:53Z | http://arxiv.org/abs/2307.00694v1 | # Concentrating Dirac operators and generalized Seiberg-Witten equations
###### Abstract.
This article studies a class of Dirac operators of the form \(D_{\varepsilon}=D+\varepsilon^{-1}\mathcal{A}\), where \(\mathcal{A}\) is a zeroth order perturbation vanishing on a subbundle. When \(\mathcal{A}\) satisfies certain additional assumptions, solutions of the Dirac equation have a concentration property in the limit \(\varepsilon\to 0\): components of the solution orthogonal to \(\ker(\mathcal{A})\) decay exponentially away from the locus \(\mathcal{Z}\) where the rank of \(\ker(\mathcal{A})\) jumps up. These results are extended to a class of non-linear Dirac equations.
This framework is then applied to study the compactness properties of moduli spaces of solutions to generalized Seiberg-Witten equations. In particular, it is shown that for sequences of solutions which converge weakly to a \(\mathbb{Z}_{2}\)-harmonic spinor, certain components of the solutions concentrate exponentially around the singular set of the \(\mathbb{Z}_{2}\)-harmonic spinor. Using these results, the weak convergence to \(\mathbb{Z}_{2}\)-harmonic spinors proved in existing convergence theorems ([16, 37, 38, 40, 46]) is improved to \(C^{\infty}_{loc}\).
###### Contents
* 1 Introduction
* 2 Concentrating Dirac Operators
* 3 Non-Linear Concentrating Dirac Equations
* 4 Generalized Seiberg-Witten Equations
* 5 Concentration Properties of Generalized Seiberg-Witten Equations
* 6 Bootstrapping
* A An Extension in \(n=3\) Dimensions
* B Estimates for the Green's Function
## 1. Introduction
Let \((Y,g)\) be a Riemannian manifold of dimension \(n\geqslant 3\), and \(D:\Gamma(E)\to\Gamma(E)\) a Dirac operator on sections of a Clifford module \(E\to Y\), i.e. a first-order elliptic operator whose principal symbol satisfies \(\sigma^{2}_{D}=-\mathrm{Id}\). A 1-parameter family of Dirac operators displaying a concentration property or, more succinctly, a **Concentrating Dirac Operator** (sometimes called a _localizing_ Dirac operator) is a parameterized perturbation
\[D_{\varepsilon}=D+\tfrac{1}{\varepsilon}\mathcal{A} \tag{1.1}\]
of \(D\) by positive scalings of a zeroth order operator \(\mathcal{A}\in\mathrm{End}(E)\) such that the support of solutions concentrates along a distinguished collection of submanifolds \(\mathcal{Z}\subset Y\) as \(\varepsilon\to 0\). Concentrating Dirac operators were introduced to the mathematical literature by Witten's celebrated work on Morse theory [47], although they were familiar to physicists for decades prior to this. Since then, concentrating Dirac operators have been employed to give geometric proofs of many results in index theory [6, 13, 18, 19, 23, 28, 51], and geometric quantization [9, 10, 11, 44].
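Although the precise assumptions on \(\mathcal{A}\) are deferred to the body of the paper, the formal mechanism behind the concentration is already visible from the purely algebraic expansion

\[D_{\varepsilon}^{2}\;=\;D^{2}\;+\;\tfrac{1}{\varepsilon}\left(D\mathcal{A}+\mathcal{A}D\right)\;+\;\tfrac{1}{\varepsilon^{2}}\,\mathcal{A}^{2}.\]

When \(\mathcal{A}\) is chosen so that the anticommutator term is of zeroth order, the \(\tfrac{1}{\varepsilon^{2}}|\mathcal{A}\psi|^{2}\) contribution dominates as \(\varepsilon\to 0\), heavily penalizing any component of a solution \(\psi\) lying outside \(\ker(\mathcal{A})\); this is the heuristic behind the exponential concentration along \(\mathcal{Z}\) described above.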
Concentrating Dirac operators have also played a significant role in Seiberg-Witten gauge theory. It is an observation due to C. Taubes that the linearization of the Seiberg-Witten equations behaves as a concentrating Dirac operator in certain limits. This perspective is central to Taubes's celebrated work "SW=Gr" relating the Seiberg-Witten and Gromov invariants of symplectic 4-manifolds [31, 32,
33, 34, 35], and to his resolution of the Weinstein Conjecture [36]. In these situations, the role of the perturbation \(\mathcal{A}\) is occupied by a large multiple of the symplectic or contact form, and the solutions concentrate along submanifolds of codimension 2 which Taubes proves are the pseudo-holomorphic curves enumerated by the Gromov invariant or the Reeb orbits whose existence was postulated by Weinstein in the two cases respectively.
Concentrating Dirac operators are also relevant in more recent work of Taubes and others on the compactness of moduli spaces for generalized Seiberg-Witten theories. For a specified compact Lie group \(G\), a system of **generalized Seiberg-Witten equations** on a 3 or 4-dimensional manifold is a system of first-order non-linear PDEs for a pair \((\Psi,A)\) of a spinor \(\Psi\) and a connection \(A\) on a principal \(G\)-bundle which has the schematic form
\[\not{D}_{A}\Psi = 0 \tag{1.2}\] \[\star F_{A} = -\tfrac{1}{2}\mu(\Psi,\Psi) \tag{1.3}\]
where \(\not{D}_{A}\) (resp. \(\not{D}_{A}^{+}\) on a 4-manifold) is the Dirac operator twisted by the connection \(A\), \(F_{A}\) (resp. \(F_{A}^{+}\)) is the curvature of \(A\) (resp. the self-dual component thereof), and \(\mu\) is a pointwise quadratic map 1. Examples include the Vafa-Witten equations [29, 30, 41], the Kapustin-Witten equations [21, 22, 42, 48], the \(\mathrm{SL}(2,\mathbb{C})\) anti-Self-Dual Yang-Mills equations [37, 38], the Seiberg-Witten equations with multiple spinors [16, 40], and the ADHM Seiberg-Witten Equations [46]. The reader is referred to [2, 5, 15, 45, 49] for discussions of conjectures relating these equations to the geometry of manifolds and to other gauge theories. The main barrier to progress on all of these conjectures is the lack of a well-understood compactification for the moduli space of solutions: unlike for the standard Seiberg-Witten equations, these more general theories do not admit a compact moduli space of solutions; instead there may be sequences of solutions for which the \(L^{2}\)-norm of the spinor diverges. For such sequences, a renormalized (sub)sequence must converge to a \(\mathbb{Z}_{2}\)**-harmonic spinor** or more general **Fueter section** (proved for each respective equation in the above references).
Footnote 1: Note that my convention for the sign of \(\mu\) differs from that used by many authors. That is to say, I denote by \(-\mu\) what others denote by \(\mu\); the equations (thus their relevant compactness properties) are the same.
A promising approach to constructing well-understood compactifications for these moduli spaces is to attach boundary strata consisting of \(\mathbb{Z}_{2}\)-harmonic spinors or Fueter sections. A necessary step in showing the suitability of any putative compactification formed in this way is to construct boundary charts for the moduli space. Constructing these charts requires gluing results (see [3, 25, 26, 27] for progress in this direction). Even after showing appropriate gluing results, however, the existence of the desired charts does not follow immediately since it is not _a priori_ clear that any sequence approaching a given boundary point necessarily arises from such a gluing--in other words, the gluing may only construct a subset of the desired chart. In order for the compactifications to be well-behaved, extraneous sequences not captured by gluing constructions must be ruled out: this problem is known as the _surjectivity of gluing_. The convergence to \(\mathbb{Z}_{2}\)-harmonic spinors and Fueter sections proved in extant convergence results is rather weak (see Section 4.2 for a precise statement); in particular it leaves open the possibility for the existence of sequences converging to a \(\mathbb{Z}_{2}\)-harmonic spinor or Fueter section in a space of low regularity that would necessarily elude gluing constructions (which automatically have convergence in higher-regularity spaces).
As explained in Section 6, attempts to bootstrap the convergence to \(\mathbb{Z}_{2}\)-harmonic spinors using standard methods are doomed to fail by the accumulation of powers of the spinor's diverging \(L^{2}\)-norm in the relevant estimates, and more robust techniques are required. The theory of concentrating Dirac operators supplies these techniques: for a sequence of solutions to (1.2-1.3) converging to a \(\mathbb{Z}_{2}\)-harmonic spinor, the linearization of the equations behaves as a particular type of concentrating Dirac operator, with the diverging \(L^{2}\)-norm of the spinor occupying the role of \(\varepsilon^{-1}\) in the expression (1.1). Although the perspective and philosophy of concentrating Dirac operators implicitly informs the approach of [16, 37, 38, 40, 41, 42, 46], there is more to be gained by making the connection precise.
This article is best viewed as consisting of three parts. First, Sections 2-3 extend results about the behavior of solutions to concentrating Dirac operators to a larger class of operators than previously studied, and to Dirac operators with certain types of non-linearities. Second, Sections 4-5 show that
the linearization of generalized Seiberg-Witten equations, by design, falls into this new class of operators. Finally, Section 6 uses these results to improve convergence results for sequences of solutions to generalized Seiberg-Witten equations in [16, 37, 38, 40, 46] to the \(C^{\infty}_{loc}\) topology. Although this bootstrapping is the main application given here, the exponential convergence results obtained from the results of Sections 2-3 are far stronger than is necessary simply for bootstrapping: they additionally provide a more precise geometric picture of how the convergence occurs. In particular, as discussed in Appendix A, they imply that there is an expected invariant length scale for the concentration of curvature along the singular set. The results and techniques developed here may therefore be helpful in addressing questions related to gluing and the surjectivity of gluing (in fact, the present work grew out of the necessity for the full strength of these results in the gluing construction of [25, 27]).
### Main Results
The key property that leads to concentration as \(\varepsilon\to 0\) is a commutation relation of \(\mathcal{A}\) and the principal symbol \(\sigma_{D}\) of the unperturbed operator \(D\). These are required to satisfy
\[\mathcal{A}^{\star}\sigma_{D}(\xi)=\sigma_{D}^{\star}(\xi)\mathcal{A}\]
for any \(\xi\in T^{\star}Y\). In previous work on concentrating Dirac operators, it has typically been assumed that \(\mathcal{A}\) is invertible on an open dense subset of \(Y\); here this assumption is weakened to include the case that \(\mathcal{A}\) vanishes identically along a subbundle of \(E\). More precisely, let \(r\) denote the maximal rank of \(\mathcal{A}\), and set \(\mathcal{Z}=\{y\mid\operatorname{rank}(\mathcal{A}(y))<r\}\subseteq Y\). Assume that there is a parallel decomposition \(E|_{Y-\mathcal{Z}}=\mathfrak{N}\oplus\mathfrak{H}\) of the Clifford module's restriction to \(Y-\mathcal{Z}\), in which \(\mathcal{A}\) takes the form
\[\mathcal{A}=\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}\end{pmatrix}. \tag{1.4}\]
Dirac operators satisfying these two assumptions will be referred to as _concentrating Dirac operators with fixed degeneracy_ (see Definition 2.1 for a more precise definition).
The first main result shows that the \(\mathfrak{H}=\ker(\mathcal{A})^{\perp}\)-components of a solution to the Dirac equation concentrate along \(\mathcal{Z}\) and decay exponentially away from it:
**Theorem 1.1**.: Suppose that \(D_{\varepsilon}\) is a concentrating Dirac operator with fixed degeneracy, and that \(\mathfrak{q}\in\Gamma(E)\) is a solution i.e.
\[(D+\tfrac{1}{\varepsilon}\mathcal{A})\mathfrak{q}=0.\]
For any compact subset \(K\Subset Y-\mathcal{Z}\), let \(R_{K}=\operatorname{dist}(K,\mathcal{Z})\). Then, there exists a compact subset \(K^{\prime}\) with \(K\Subset K^{\prime}\Subset Y-\mathcal{Z}\), and constants \(C,c\) independent of \(K\) such that for \(\varepsilon\) sufficiently small, the components of \(\mathfrak{q}\) in the subbundle \(\mathfrak{H}\) obey
\[\|\pi_{\mathfrak{H}}(\mathfrak{q})\|_{C^{0}(K)}\leq\frac{C}{R_{K}^{n/2}}\, \operatorname{Exp}\left(-\frac{c\Lambda_{K}}{\varepsilon}R_{K}\right)\| \mathfrak{q}\|_{L^{1,2}(K^{\prime})} \tag{1.5}\]
where \(\Lambda_{K}=\inf_{y\in K,\,v\in\mathfrak{H}_{y}}\frac{\|\mathcal{A}v\|}{\|v\|}\), i.e. \(\Lambda_{K}\) is the smallest fiberwise singular value of \(A_{\mathfrak{H}}\) over \(K\) (equivalently, the reciprocal of the maximal fiberwise norm of \(A_{\mathfrak{H}}^{-1}\)).
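To see the shape of the estimate (1.5) in the elementary model above (a sanity check, not part of the proof): there \(\Lambda(t)=|t|\), so for the compact set \(K=\{R\leqslant|t|\leqslant 2R\}\) one has \(R_{K}=R\) and \(\Lambda_{K}=R\), and the explicit Gaussian solution satisfies
\[|\mathfrak{q}(t)|=\sqrt{2}\,e^{-t^{2}/2\varepsilon}\leqslant\sqrt{2}\,\operatorname{Exp}\left(-\frac{\Lambda_{K}}{2\varepsilon}R_{K}\right)\qquad\text{on }K,\]
matching the exponential factor in (1.5) with \(c=\tfrac{1}{2}\).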
The proof of the above result occupies Section 2. It is worth remarking that the proof is easily adapted to show that the conclusion of Theorem 1.1 also holds if \(\mathfrak{q}\) is an eigenspinor of \(D_{\varepsilon}\) with eigenvalue \(\lambda<\frac{c_{K}}{\varepsilon}\) for a small constant \(c_{K}\) depending on the compact set \(K\), though this extension is not needed here and the details of the proof are omitted.
The arguments in the proof of Theorem 1.1 can be extended to the following class of non-linear concentrating Dirac operators. Suppose that \(Q:\Gamma(E)\to\Gamma(E)\) is a pointwise non-linear bundle map that takes the form \(Q(\mathfrak{q})=Q_{1}(\mathfrak{q})\pi_{\mathfrak{H}}(\mathfrak{q})\) where \(Q_{1}\) is again a pointwise non-linear bundle map and \(\pi_{\mathfrak{H}}\) the projection onto the subbundle \(\mathfrak{H}\). Said more simply, it is assumed that the non-linearity has at least a linear factor in the \(\mathfrak{H}\) components. Additionally, assume that \(Q_{1}(\mathfrak{q})\) obeys the same commutation relation as \(\mathcal{A}\): that is, \(Q_{1}(\mathfrak{q})^{\star}\sigma_{D}=\sigma_{D}^{\star}Q_{1}(\mathfrak{q})\). The second main result is the following corollary, whose proof occupies Section 3.
**Corollary 1.2**.: If \(\mathfrak{q}\) solves the non-linear equation
\[(D+\tfrac{1}{\varepsilon}\mathcal{A})\mathfrak{q}+Q(\mathfrak{q})=f \tag{1.6}\]
where \(D,\mathcal{A}\) are as in Theorem 1.1, \(f\in\Gamma(\mathfrak{H}^{\perp})\), and where \(Q(\mathfrak{q})=Q_{1}(\mathfrak{q})\pi_{\mathfrak{H}}(\mathfrak{q})\) is of the above form and satisfies

\[\varepsilon\|Q_{1}(\mathfrak{q})\|_{L^{1,n}(K^{\prime})}\to 0\qquad\qquad\varepsilon\|Q_{1}(\mathfrak{q})\|_{C^{0}(K^{\prime})}\to 0, \tag{1.7}\]

then the conclusion (1.5) of Theorem 1.1 holds.
The motivation for and application of the above results come from investigations of the compactness of moduli spaces of solutions to generalized Seiberg-Witten equations (see Sections 4.1-4.2 below). As explained in the introduction, it is known in many cases of interest that sequences of solutions to (1.2-1.3) lacking bounded \(L^{2}\) subsequences subconverge after renormalization to a type of \(\mathbb{Z}_{2}\)-harmonic spinor. There are four cases of interest:
**Case (I):** The two-spinor Seiberg-Witten equations on a compact 3-manifold \(Y\)[16]
**Case (II):** The equations for a flat \(\mathrm{SL}(2,\mathbb{C})\) connection on a compact 3-manifold \(Y\)[38, 46]
**Case (III):** The two-spinor Seiberg-Witten equations on a compact 4-manifold \(X\)[40]
**Case (IV):** The \(\mathrm{SL}(2,\mathbb{C})\) ASD equations on a compact 4-manifold \(X\)[37]
The precise statements of the known compactness results for these cases are amalgamated in Theorem 4.10 in Section 4.2. A precise definition of \(\mathbb{Z}_{2}\)-harmonic spinors is given in the subsequent Definition 4.11.
**Theorem 1.3**.: Suppose that a sequence \((\Phi_{i},A_{i},\varepsilon_{i})\) of re-normalized solutions to a generalized Seiberg-Witten equation converges to a \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) in the sense of Theorem 4.10 for one of Cases (I)-(IV) above. Then, in the limit \(\varepsilon_{i}\to 0\)
* The equations (1.2-1.3) behave as a non-linear concentrating Dirac operator satisfying the assumptions of Corollary 1.2.
* If, in addition to the conclusion of Theorem 4.10, the sequence \((\Phi_{i},A_{i})\) satisfies \(A_{i}\to A_{0}\) in \(L^{1,p}_{loc}\) and \(\Phi_{i}\to\Phi_{0}\) in \(L^{2,p}_{loc}\) for some \(p>2\), then the same conclusion holds.
Additionally, in these situations
\[\Phi_{i}\xrightarrow{C^{\infty}_{loc}}\Phi_{0}\qquad\qquad A_{i}\xrightarrow {C^{\infty}_{loc}}A_{0}\]
where local convergence means convergence on compact subsets \(K\subseteq Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\) in Cases (III)-(IV)). Moreover, in all cases the limiting connection satisfies \(F_{A_{0}}=0\) (resp. \(F^{+}_{A_{0}}=0\)).
**Remark 1.4**.: In Cases (I)-(III), Theorem 1.3 applies directly to strengthen the results of [16, 38, 40, 46]. In Case (IV), the additional requirement beyond the conclusions of [37] that \(A_{i}\to A_{0}\) in \(L^{1,p}_{loc}\) and \(\Phi_{i}\to\Phi_{0}\) in \(L^{2,p}_{loc}\) for some \(p>2\) is necessary. This is needed to overcome the quadratic non-linearity in \(F^{+}_{A}\) which is borderline for the relevant Sobolev embeddings in dimension 4 (this assumption is not necessary in Case (III), which is abelian). It is unclear if this slightly stronger convergence result than Theorem 4.10 can be proved by extending the techniques of [37, 41], or if a new approach is required to bridge the gap from the results therein to the point where Theorem 1.3 applies.
**Remark 1.5**.: [41] proves a version of Theorem 4.10 for the Vafa-Witten equations as well, though this case is notably absent from Theorem 1.3. Although the equations have a similar form to the equations of Cases (I)-(IV), it is not expected for the Vafa-Witten equations that the sequence of connections \(A_{i}\) should converge. In particular, for these equations there exist sequences of solutions such that the \(L^{2}\)-norm of the curvature over any compact set with non-empty interior diverges. Solutions with this behavior are the subject of forthcoming work of C. Taubes [43], and were known to E. Witten and others prior to the work [41]. For sequences along which the connections \(A_{i}\) do happen to converge in \(L^{1,p}_{loc}\) for some \(p>2\), the conclusions of Theorem 1.3 apply--the proof is trivially different from Case (IV) and is omitted.
**Remark 1.6**.: Also absent from the cases enumerated in Theorem 1.3 are the compactness theorems for the ADHM\({}_{1,2}\) Seiberg-Witten equations established in [46] and for Seiberg-Witten equations with \(r>2\) spinors from [16, 40]. The approach of Theorem 1.3 fails in these cases, because the \(\mathfrak{H}\)-components of the equations are too strongly coupled to the other components. This occurs in two slightly different ways: for the ADHM\({}_{1,2}\) Seiberg-Witten equations, the linearized equations properly fit into the framework of Theorem 1.1, but the non-linear terms do not satisfy the hypotheses of Corollary 1.2. For the Seiberg-Witten equations with \(r>2\) spinors, the reverse occurs: the non-linear terms have the desired form, but the splitting 1.4 fails to be parallel and the \(\mathfrak{H}\)-components are too strongly coupled to the other components for the linearized equations. It is an interesting task to investigate whether Theorem 1.1 may be extended to the case where the splitting is not parallel.
## Acknowledgements
This work grew out of the author's Ph.D. thesis. The author is grateful to his advisor Clifford Taubes for his support and suggestions. This work also benefitted from conversations with Rafe Mazzeo, Aleksander Doan, and Thomas Walpuski, and was supported by a National Science Foundation Graduate Research Fellowship and by National Science Foundation Grant No. 2105512.
## 2. Concentrating Dirac Operators
Let \((Y,g)\) be an \(n\)-dimensional Riemannian manifold (not necessarily compact), and \(E\to Y\) a real Clifford module. That is to say, \(E\) is a real vector bundle equipped with i) an inner product, denoted \(\langle-,-\rangle\), ii) a metric-compatible connection denoted \(\nabla\), and iii) a Clifford multiplication \(\sigma_{D}:T^{\star}Y\to\operatorname{End}(E)\) obeying the Clifford relation

\[\sigma_{D}(\xi)\sigma_{D}(\eta)+\sigma_{D}(\eta)\sigma_{D}(\xi)=-2\langle\xi,\eta\rangle. \tag{2.1}\]
If \(n\) is even, we assume that there is a splitting \(E=E^{+}\oplus E^{-}\) with respect to which \(\sigma_{D}\) is off-diagonal. Consider an \(\varepsilon\)-parameterized family of Dirac operators \(D_{\varepsilon}:\Gamma(E)\to\Gamma(E)\) of the form
\[D_{\varepsilon}\mathfrak{q}=\left(D+\frac{1}{\varepsilon}\mathcal{A}\right) \mathfrak{q} \tag{2.2}\]
where \(D\) is the Dirac operator \(\sigma_{D}\circ\nabla\), and \(\mathcal{A}:E\to E\) is a bundle map. In the case that \(n\) is even, we denote by the same symbol the Dirac operator \(D:\Gamma(E^{+})\to\Gamma(E^{-})\) and assume that \(\mathcal{A}:E^{+}\to E^{-}\). The operators \(D_{\varepsilon}\) and \(\mathcal{A}\) are only assumed to be \(\mathbb{R}\)-linear, even in the case that \(E\) possesses a complex structure.
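A standard example of such a Clifford module, included for concreteness: on an oriented Riemannian 3-manifold with local orthonormal coframe \(e^{1},e^{2},e^{3}\), one may take \(E\) with fiber \(\mathbb{H}\) and define \(\sigma_{D}(e^{1}),\sigma_{D}(e^{2}),\sigma_{D}(e^{3})\) to be left quaternionic multiplication by \(i,j,k\) respectively; the quaternion relations
\[i^{2}=j^{2}=k^{2}=-1,\qquad ij=-ji,\quad jk=-kj,\quad ki=-ik\]
give exactly (2.1). This quaternionic model underlies the constructions of Section 4.1.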
Let \(r\) denote the maximal rank of \(\mathcal{A}\), and define the **singular set** of \(\mathcal{A}\) by
\[\mathcal{Z}:=\{y\in Y\ |\ \operatorname{rank}(\mathcal{A}(y))<r\}.\]
That is, \(\mathcal{Z}\) is the set where the rank of \(\mathcal{A}\) is strictly lower than its maximal value. We assume that \(\mathcal{A}\) is continuous, hence \(\mathcal{Z}\) is closed. We additionally assume, for convenience, that \(\mathcal{Z}\) is non-empty (else all assertions are vacuous). In the case of the Seiberg-Witten equations linearized at a \(\mathbb{Z}_{2}\)-harmonic spinor \(\Phi_{0}\), this set coincides with the singular set \(\mathcal{Z}=|\Phi_{0}|^{-1}(0)\) of the \(\mathbb{Z}_{2}\)-harmonic spinor. We do NOT assume that \(\mathcal{A}\) is smooth (and indeed this is not the case in the setting of \(\mathbb{Z}_{2}\)-harmonic spinors), nor do we assume that \(\mathcal{Z}\) is a submanifold or a union of submanifolds.
The focus will be on the following class of operators.
**Definition 2.1**.: A perturbed Dirac operator
\[D_{\varepsilon}=D+\tfrac{1}{\varepsilon}\mathcal{A}\]
is said to be a **concentrating Dirac operator with fixed degeneracy** if it obeys the following two properties:
1. **(Concentration property)** The principal symbol \(\sigma_{D}\) of \(D\) obeys \[\mathcal{A}^{\star}\sigma_{D}(\xi)=\sigma_{D}(\xi)^{\star}\mathcal{A} \qquad\qquad\forall\ \xi\in T^{\star}Y\] (2.3)
2. **(Fixed Degeneracy)**: On \(Y-\mathcal{Z}\), there is a bundle splitting \(E|_{Y-\mathcal{Z}}=\mathfrak{N}\oplus\mathfrak{H}\) (resp. \(E^{\pm}|_{Y-\mathcal{Z}}=\mathfrak{N}^{\pm}\oplus\mathfrak{H}^{\pm}\) if \(n\) is even) which is parallel with respect to \(\nabla\) and preserved by \(\sigma_{D}\), and in this splitting \(\mathcal{A}\) takes the form \[\mathcal{A}=\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}\end{pmatrix}\] (2.4) where \(A_{\mathfrak{H}}:\mathfrak{H}\to\mathfrak{H}\) (resp. \(A_{\mathfrak{H}}:\mathfrak{H}^{+}\to\mathfrak{H}^{-}\)).
Note that item 2) implies that \(\mathfrak{N}=\ker(\mathcal{A})\). By taking \(\mathfrak{N}\) to be the zero bundle, this definition generalizes the class of operators previously considered in [18, 19, 28], where \(\mathcal{A}\) is invertible on an open dense set. In Section 4, it is shown that generalized Seiberg-Witten equations provide a rich class of examples for which the degeneracy is non-trivial.
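A minimal example with non-trivial degeneracy, in the spirit of the one-dimensional model from the introduction (again included only as an illustration): on \(Y=\mathbb{R}\) take \(E\) trivial with fiber \(\mathbb{R}^{2}\oplus\mathbb{R}^{2}\), let \(\sigma_{D}(dt)=J\oplus J\) with \(J\) the rotation by \(90^{\circ}\) as in that model, and set
\[\mathcal{A}(t)=\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}(t)\end{pmatrix},\qquad A_{\mathfrak{H}}(t)=t\begin{pmatrix}1&0\\ 0&-1\end{pmatrix},\]
so that \(\mathfrak{N}=\mathbb{R}^{2}\oplus 0\), \(\mathfrak{H}=0\oplus\mathbb{R}^{2}\), and \(\mathcal{Z}=\{0\}\). Bounded solutions of \(D_{\varepsilon}\mathfrak{q}=0\) have the form \(\mathfrak{q}=(q_{0},q_{1})\) with \(q_{0}\) a constant (no concentration) and \(q_{1}=e^{-t^{2}/2\varepsilon}(1,-1)\) a Gaussian concentrating at \(\mathcal{Z}\): only the \(\mathfrak{H}\)-component concentrates, which is precisely the content of Theorem 1.1.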
The remainder of this section establishes Theorem 1.1. To begin, we prove a general Weitzenbock formula for concentrating Dirac operators with fixed degeneracy.
**Lemma 2.2**.: A concentrating Dirac operator \(D_{\varepsilon}\) satisfies
\[D_{\varepsilon}^{\star}D_{\varepsilon}=D^{\star}D+\frac{1}{\varepsilon^{2}} \mathcal{A}^{\star}\mathcal{A}+\frac{1}{\varepsilon}\mathfrak{B}\]
where \(\mathfrak{B}\) is a zeroth order term.
Proof.: The key point is that the concentration property (2.3) ensures that the first-order cross terms cancel.
\[D_{\varepsilon}^{\star}D_{\varepsilon}\mathfrak{q} = D^{\star}D\mathfrak{q}+\frac{1}{\varepsilon^{2}}\mathcal{A}^{\star}\mathcal{A}\mathfrak{q}+\frac{1}{\varepsilon}\left(D^{\star}\mathcal{A}+\mathcal{A}^{\star}D\right)\mathfrak{q}\] \[= D^{\star}D\mathfrak{q}+\frac{1}{\varepsilon^{2}}\mathcal{A}^{\star}\mathcal{A}\mathfrak{q}+\frac{1}{\varepsilon}\left(\sigma_{D}(e^{j})^{\star}\nabla_{j}^{\star}(\mathcal{A}\mathfrak{q})+\mathcal{A}^{\star}\sigma_{D}(e^{j})\nabla_{j}\mathfrak{q}\right)\] \[= D^{\star}D\mathfrak{q}+\frac{1}{\varepsilon^{2}}\mathcal{A}^{\star}\mathcal{A}\mathfrak{q}+\frac{1}{\varepsilon}\left(-\sigma_{D}(e^{j})^{\star}(\nabla_{j}\mathcal{A})\mathfrak{q}\right)\] \[= D^{\star}D\mathfrak{q}+\frac{1}{\varepsilon^{2}}\mathcal{A}^{\star}\mathcal{A}\mathfrak{q}+\frac{1}{\varepsilon}\mathfrak{B}\mathfrak{q}\]
where the last line is taken as the definition of \(\mathfrak{B}\).
We now prove Theorem 1.1. The proof requires two brief lemmas. For these, we denote the components of a spinor \(\mathfrak{q}\) in the splitting of (2.4) by
\[\mathfrak{q}=(q_{0},q_{1})\in\mathfrak{N}\oplus\mathfrak{H}.\]
Additionally, we fix a compact subset \(K\Subset Y-\mathcal{Z}\). To avoid separating cases, it is understood, in the above expression and in the discussion that follows, that in the even-dimensional case the bundles \(\mathfrak{N}\) and \(\mathfrak{H}\) undecorated by superscripts refer to \(\mathfrak{N}^{+}\) and \(\mathfrak{H}^{+}\).
**Lemma 2.3**.: If \(D_{\varepsilon}\mathfrak{q}=0\), then for sufficiently small \(\varepsilon\) the scalar quantity \(|q_{1}|^{2}\) satisfies the differential inequality
\[d^{\star}d|q_{1}|^{2}+\frac{1}{\varepsilon^{2}}|\mathcal{A}q_{1}|^{2}\leq 0. \tag{2.5}\]
Proof.: For the \(\varepsilon\)-independent operator \(D\), one has a Weitzenbock formula
\[D^{\star}D=\nabla^{\star}\nabla+\mathcal{R} \tag{2.6}\]
where \(\mathcal{R}\) is Clifford multiplication by a curvature term which is bounded in \(C^{0}\) by a constant independent of \(\varepsilon\). The fixed degeneracy assumption (2.4) implies that \((0,q_{1})\in\ker(D_{\varepsilon})\). In a slight abuse of notation, we denote this pair simply by \(q_{1}\), so that
\[D_{\varepsilon}^{\star}D_{\varepsilon}q_{1}=0. \tag{2.7}\]
Using (2.6), (2.7), and Lemma 2.2, for \(\varepsilon\) sufficiently small we have
\[-\frac{1}{2}d^{\star}d|q_{1}|^{2} = \langle\nabla q_{1},\nabla q_{1}\rangle+\langle q_{1},-\nabla^{ \star}\nabla q_{1}\rangle\] \[= |\nabla q_{1}|^{2}+\langle q_{1},\mathcal{R}q_{1}\rangle+\langle q _{1},-D^{\star}Dq_{1}\rangle\] \[= |\nabla q_{1}|^{2}+\langle q_{1},\mathcal{R}q_{1}\rangle+\tfrac{1 }{\varepsilon^{2}}|\mathcal{A}q_{1}|^{2}+\tfrac{1}{\varepsilon}\langle q_{1}, \mathfrak{B}q_{1}\rangle\] \[\geqslant \tfrac{1}{2\varepsilon^{2}}|\mathcal{A}q_{1}|^{2}.\]
In the last line, we have used the fact that \(\mathcal{R}\) and \(\mathfrak{B}\) are bounded independent of \(\varepsilon\), so they can be absorbed for sufficiently small \(\varepsilon\): the injectivity of \(\mathcal{A}\) on \(\mathfrak{H}|_{Y-\mathcal{Z}}\) implies that \(|\mathcal{A}q_{1}|>c|q_{1}|\) holds for some constant \(c>0\) on the compact subset \(K\Subset Y-\mathcal{Z}\).
The next lemma is an abstract result about scalar functions satisfying differential inequalities of the form (2.5). The statement uses the following notation: for each \(y\in Y-\mathcal{Z}\), let \(\Lambda(y)=\inf_{|v|=1}\|Av\|\) where \(v\in\mathfrak{H}_{y}\). The fixed degeneracy hypothesis (2.4) ensures \(A_{\mathfrak{H}}\) is invertible on \(Y-\mathcal{Z}\), hence \(\Lambda(y)>0\). Then, for a fixed compact subset \(K\Subset Y-\mathcal{Z}\) as in the statement of Theorem 1.1, let \(\Lambda_{K}:=\inf_{y\in K}\Lambda(y)\) so that
\[|\mathcal{A}\mathfrak{q}(y)|^{2}\geqslant\Lambda(y)^{2}|q_{1}(y)|^{2} \geqslant\Lambda_{K}^{2}|q_{1}(y)|^{2}\]
holds for \(y\in K\). As in the statement of Theorem 1.1, \(R_{K}=\operatorname{dist}(K,\mathcal{Z})\). Additionally, in what follows, \(\Delta_{g}=d^{\star}d\) denotes the positive definite Laplacian defined by the Riemannian metric \(g\).
**Lemma 2.4**.: Suppose that \(u:K\to\mathbb{R}\) satisfies \(u\geqslant 0\) and
\[\Delta_{g}u+\frac{\Lambda(y)^{2}}{\varepsilon^{2}}u\leqslant 0. \tag{2.8}\]
Then there exist constants \(C,c>0\) such that at a point \(y_{0}\in K\) one has
\[u(y_{0})\leqslant\frac{C}{R_{K}^{n}}\mathrm{Exp}\left(-\frac{\Lambda(y_{0})c }{\varepsilon}R_{K}\right)\int_{K^{\prime}}|du|+|u|\ dV \tag{2.9}\]
where \(K^{\prime}\) is a compact set with \(K\Subset K^{\prime}\Subset Y-\mathcal{Z}\).
Proof.: Recall first that the Green's function of \(\Delta+m^{2}\) on \(\mathbb{R}^{n}\) is
\[G(x,x_{0})=\frac{C(n)}{|x-x_{0}|^{n-2}}\mathrm{Exp}\left(-m|x-x_{0}|\right) \tag{2.10}\]
where \(C(n)\) is a constant depending only on the dimension. Let \(K^{\prime}\Subset Y-\mathcal{Z}\) be a compact set whose interior contains \(K\), and take \(1>c_{0}>0\) to be a small number such that the following three conditions hold for \(R_{0}=c_{0}R_{K}\):
* \(y\in K\Rightarrow B_{R_{0}}(y)\subseteq K^{\prime}\)
* \(y\in B_{R_{0}}(y_{0})\Rightarrow\Lambda(y)^{2}\geqslant\frac{\Lambda_{0}^{2} }{2}\) where \(\Lambda_{0}=\Lambda(y_{0})\).
* The Green's function of \(\Delta_{g}+\frac{\Lambda_{0}^{2}}{2\varepsilon^{2}}\) on \(B_{2R_{0}}\) with Dirichlet boundary conditions satisfies \[G(y,y_{0}) \leqslant \frac{c_{1}}{|y-y_{0}|^{n-2}}\mathrm{Exp}\left(-\frac{\Lambda_{0}}{2\varepsilon}|y-y_{0}|\right)\] uniformly once \(\varepsilon\) is sufficiently small (depending on \(K\)), and where \(c_{1}\) depends only on the dimension.
The first of these is possible by the compactness of \(K\), the second by the fact that \(\mathcal{A}\) is \(C^{1}\). The third follows by using the comparison principle on \(B_{2R_{0}}\) comparing with the Green's function (2.10) on Euclidean space. The details are provided in Appendix B.
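For concreteness, in dimension \(n=3\) the comparison function in (2.10) can be taken to be the exact Yukawa potential: with \(\Delta=d^{\star}d\) the positive Laplacian on \(\mathbb{R}^{3}\),
\[G(x,x_{0})=\frac{e^{-m|x-x_{0}|}}{4\pi|x-x_{0}|},\qquad(\Delta+m^{2})\,G(\cdot,x_{0})=\delta_{x_{0}},\]
so \(C(3)=\tfrac{1}{4\pi}\), and in the application above the mass is of order \(m\sim\Lambda_{0}/\varepsilon\).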
Now let \(y_{0}\in K\) and consider the ball \(B_{0}=B_{R_{0}}(y_{0})\subseteq K\), where \(R_{0}\) is as above. Green's identity on \(B_{0}\) for functions \(\eta,\psi\) states that
\[\int_{B_{0}}\eta(-\Delta_{g}\psi)-\psi(-\Delta_{g}\eta)\ dV_{g}=\int_{\partial B _{0}}\eta\partial_{\nu}\psi-\psi\partial_{\nu}\eta\ dS_{g}\]
and is derived by integrating the quantity \(0=\int\langle d\eta,d\psi\rangle-\langle d\psi,d\eta\rangle\) by parts. Here, \(dV_{g},dS_{g}\) denote the volume forms arising from the Riemannian metric on the ball and the sphere respectively. Now set
\[M:=\frac{\Lambda_{0}}{\sqrt{2}\varepsilon}.\]
Adding and subtracting \(M^{2}\eta\psi\) on the left-hand side yields
\[\int_{B_{0}}\eta(-\Delta_{g}-M^{2})\psi+\psi(\Delta_{g}+M^{2})\eta\ dV_{g}=\int_{\partial B_{0}}\eta\partial_{\nu}\psi-\psi\partial_{\nu}\eta\ dS_{g}. \tag{2.11}\]
Next, we let \(\beta(y)\) denote a cutoff function equal to \(1\) on \(B_{R_{0}/2}(y_{0})\) and supported in the interior of \(B_{0}\) satisfying
\[0\leqslant\beta\leqslant 1\hskip 56.905512pt|\nabla\beta|\leqslant\frac{C}{R_{0} }\hskip 56.905512pt|\nabla^{2}\beta|\leqslant\frac{C}{R_{0}^{2}} \tag{2.12}\]
and apply the identity (2.11) with \(\eta=u(y)\beta(y)\) and \(\psi=G(y,y_{0})\). The boundary term on the right hand side vanishes by the choice of \(\beta\). The first term becomes \(-u(y_{0})\) since, by definition, the Green's function satisfies \((\Delta_{g}+M^{2})G=\delta_{y_{0}}\). Meanwhile for the second term, the assumption that \(u\) satisfies (2.8) implies
\[(\Delta_{g}+M^{2})u=\left(\Delta_{g}+\frac{\Lambda_{0}^{2}}{2 \varepsilon^{2}}\right)u\leqslant\left(\Delta_{g}+\frac{\Lambda(y)^{2}}{ \varepsilon^{2}}\right)u\leqslant 0 \tag{2.13}\]
hence
\[(\Delta_{g}+M^{2})\beta u=\beta(\Delta_{g}+M^{2})u-2\langle d\beta,du\rangle+ (\Delta_{g}\beta)u\leqslant C\left(\frac{1}{R_{0}}|du|+\frac{1}{R_{0}^{2}}|u| \right)\chi_{A}\]
where \(\chi_{A}\) is the characteristic function equal to \(1\) on the outer annulus \(A=\{R_{0}/2\leqslant r\leqslant R_{0}\}\) and vanishing elsewhere.
Thus the identity (2.11) becomes the inequality
\[u(y_{0}) \leqslant C\int_{A}G(y,y_{0})\left(\frac{1}{R_{0}}|du|+\frac{1}{R_{0}^{2}}|u|\right)\ dV_{g}\] \[\leqslant \frac{C}{R_{0}^{n-1}}\mathrm{Exp}\left(-\frac{\Lambda_{0}}{8\varepsilon}R_{0}\right)\int_{A}\left(|du|+\frac{|u|}{R_{0}}\right)dV_{g}.\]
Substituting the definition \(R_{0}=c_{0}R_{K}\) yields the bound (2.9).
Proof of Theorem 1.1.: Apply Lemma 2.4 to \(u=|q_{1}|^{2}\). Kato's inequality \(|d|q||\leqslant|\nabla q|\) shows that

\[|d|q_{1}|^{2}|\leqslant 2|q_{1}|\,|d|q_{1}||\leqslant 2|q_{1}||\nabla q_{1}|\leqslant|q_{1}|^{2}+|\nabla q_{1}|^{2}.\]
Applying this to the right side of (2.9) and recalling that \(R_{0}=c_{0}R_{K}\) yields
\[|q_{1}(y_{0})|^{2} \leqslant \frac{C}{c_{0}^{n}R_{K}^{n}}\mathrm{Exp}\left(-\frac{\Lambda_{0} c_{0}}{\varepsilon}R_{K}\right)\|q_{1}\|_{L^{1,2}(K^{\prime})}^{2}. \tag{2.14}\]
Taking the square root and decreasing constants by a factor of \(2\) to replace \(\Lambda_{0}\) by \(\Lambda_{K}\) shows the desired estimate (1.5).
## 3. Non-Linear Concentrating Dirac Equations
This section extends the proof of Theorem 1.1 to the case of the non-linear equation (1.6) of Corollary 1.2.
Let us clarify the subtlety in deducing the estimate (1.5) for a non-linear Dirac equation. Equation (1.6) can be rewritten as a concentrating Dirac equation as follows: let \(A_{\varepsilon}(q_{1})=\varepsilon Q_{1}(\mathfrak{q})q_{1}\) with \(Q_{1}\) as in the statement of Corollary 1.2. Then
\[D_{\varepsilon}\mathfrak{q}+Q(\mathfrak{q}) = 0 \tag{3.1}\] \[\left(D+\tfrac{1}{\varepsilon}\mathcal{A}+Q_{1}(\mathfrak{q}) \right)\mathfrak{q} = 0\] (3.2) \[\left(D+\tfrac{1}{\varepsilon}(A_{\mathfrak{H}}+A_{\varepsilon}( \mathfrak{q}))\right)\mathfrak{q} = 0 \tag{3.3}\]
i.e. \(\mathfrak{q}\) solves a concentrating Dirac equation, but the zeroth order perturbation now depends on \(\mathfrak{q}\). In this case, the proof of Theorem 1.1 fails in general because it is no longer clear that we can absorb the \(\mathfrak{B}=\nabla\mathcal{A}\) term. Indeed, this would require a \(C^{1}\)-bound on \(A_{\varepsilon}\). Naive bootstrapping, however, leads only to bounds with powers of \(\varepsilon^{-1}\) larger than can be absorbed (see also Section 6). The proof of Corollary 1.2 relies on stronger differential inequalities obtained by not discarding the \(|\nabla q_{1}|^{2}\) term in the proof of Lemma 2.3.
Corollary 1.2 is deduced from the following proposition by setting \(A_{\varepsilon}(-)=\varepsilon Q_{1}(\mathfrak{q},-)\). The additional assumption on \(\varepsilon Q_{1}\) now manifests as the requirement that the concentration property \(A_{\varepsilon}^{\star}\sigma_{D}=\sigma_{D}^{\star}A_{\varepsilon}\) is satisfied for this term as well.
**Proposition 3.1**.: The conclusion (1.5) of Theorem 1.1 continues to hold if the zeroth order term is given by
\[\mathcal{A}=\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}\end{pmatrix}+A_{\varepsilon}. \tag{3.4}\]
where \(A_{\mathfrak{H}}\) is smooth with all derivatives bounded independent of \(\varepsilon\) on \(K\), and \(\mathfrak{q},A_{\varepsilon}\) satisfy the relation \(A_{\varepsilon}^{\star}\sigma_{D}(\xi)=\sigma_{D}^{\star}(\xi)A_{\varepsilon}\) for \(\xi\in T^{\star}Y\) and
\[\|A_{\varepsilon}\|_{L^{1,n}(K^{\prime})}\to 0\qquad\qquad\qquad\|A_{ \varepsilon}\|_{C^{0}(K^{\prime})}\to 0.\]
More generally, the same conclusion holds for the inhomogeneous version of equation (3.3),
\[\left(D+\tfrac{1}{\varepsilon}(A_{\mathfrak{H}}+A_{\varepsilon}(\mathfrak{q}) )\right)\mathfrak{q}=f\]
where \(f\) satisfies \(\pi_{\mathfrak{H}}(f)=0\).
Proof.: Retaining the notation of Lemma 2.4, let \(\Lambda_{K}=\inf_{y\in K}\Lambda(y)\), where \(\Lambda(y)=\inf_{|v|=1}\|A_{\mathfrak{H}}v\|\) for \(v\in\mathfrak{H}_{y}\) is as before (thus it has no dependence on \(A_{\varepsilon}\)). Additionally, write
\[\mathfrak{B}=\nabla A_{\mathfrak{H}}+\nabla A_{\varepsilon}\]
where \(\nabla\) is shorthand for the operator \(\sigma_{D}(e^{j})^{\ast}\nabla_{j}\) appearing in the proof of Lemma 2.2.
Proceeding as in the proof of Lemma 2.3, we have the following differential inequality. Note here that taking the inner product with \(q_{1}\) and the assumption that \(\pi_{\mathfrak{H}}(f)=0\) imply that the cases of \(f=0\) and \(f\neq 0\) yield the same inequality, and the two cases therefore coincide for the remainder of the proof.
\[-\frac{1}{2}d^{\star}d|q_{1}|^{2} = |\nabla q_{1}|^{2}+\langle q_{1},\mathcal{R}q_{1}\rangle+\tfrac{1}{\varepsilon^{2}}|\mathcal{A}q_{1}|^{2}+\tfrac{1}{\varepsilon}\langle q_{1},\mathfrak{B}q_{1}\rangle\] \[= |\nabla q_{1}|^{2}+\langle q_{1},\mathcal{R}q_{1}\rangle+\tfrac{1}{\varepsilon^{2}}|(A_{\mathfrak{H}}+A_{\varepsilon})q_{1}|^{2}+\tfrac{1}{\varepsilon}\langle q_{1},(\nabla A_{\mathfrak{H}}+\nabla A_{\varepsilon})q_{1}\rangle\] \[\geqslant |\nabla q_{1}|^{2}+\tfrac{1}{2\varepsilon^{2}}|A_{\mathfrak{H}}q_{1}|^{2}+\tfrac{1}{\varepsilon}\langle q_{1},(\nabla A_{\varepsilon})q_{1}\rangle\]
where in addition to the absorptions done before, we have used that \(\|A_{\varepsilon}\|_{C^{0}(K^{\prime})}\to 0\) which implies that
\[|A_{\varepsilon}q_{1}|^{2}\leqslant\frac{\Lambda_{K}^{2}}{4}|q_{1}|^{2} \leqslant\frac{1}{4}|A_{\mathfrak{H}}q_{1}|^{2} \tag{3.5}\]
once \(\varepsilon\) is sufficiently small. Rearranging shows the differential inequality now dictates that
\[\Delta|q_{1}|^{2}+\frac{|A_{\mathfrak{H}}q_{1}|^{2}}{\varepsilon^{2}}\leq-2| \nabla q_{1}|^{2}-\tfrac{2}{\varepsilon}\langle q_{1},(\nabla A_{\varepsilon})q _{1}\rangle \tag{3.6}\]
on \(K^{\prime}\).
Proceeding as before (but with an additional factor of \(\sqrt{2}\)), set \(M:=\frac{\Lambda_{0}}{2\varepsilon}\). Taking \(u=|q_{1}|^{2}\), in this case (2.11) becomes
\[u(y_{0}) = -\int_{B_{0}}G(y,y_{0})(-\Delta_{g}-M^{2})(\beta u)\ dV_{g}\] \[\leq \int_{B_{0}}G(y,y_{0})\beta(\Delta_{g}+M^{2})u\ dV_{g}+C\int_{B_{0}}G(y,y_{0})\left(\frac{1}{R_{0}}|du|+\frac{1}{R_{0}^{2}}|u|\right)\ dV_{g}\]
and combining this with 3.6 yields
\[|q_{1}|^{2}(y_{0}) + \int_{B_{0}}G(y,y_{0})\beta\left(|\nabla q_{1}|^{2}+\frac{\Lambda_{0}^{2}}{8\varepsilon^{2}}|q_{1}|^{2}\right)\ dV_{g} \tag{3.7}\] \[\leq C\int_{B_{0}}G(y,y_{0})\left(\frac{1}{R_{0}}|d|q_{1}|^{2}|+\frac{1}{R_{0}^{2}}|q_{1}|^{2}\right)\ dV_{g}+\frac{1}{\varepsilon}\int_{B_{0}}G(y_{0},y)\beta|\langle q_{1},(\nabla A_{\varepsilon})q_{1}\rangle|dV_{g}. \tag{3.8}\]
Using Lemma 3.2 below to absorb the second term on the right of (3.8) into the two remaining terms of (3.7)-(3.8) leads to
\[|q_{1}|^{2}(y_{0}) \leq 2C\int_{B_{0}}G(y,y_{0})\left(\frac{1}{R_{0}}|d|q_{1}|^{2}|+\frac{1}{R_{0}^{2}}|q_{1}|^{2}\right)\ dV_{g} \tag{3.9}\]
after which the result follows identically to Theorem 1.1 by repeating the argument leading to (2.14).
**Lemma 3.2**.: For \(\varepsilon\) sufficiently small, the integral
\[I=\frac{1}{\varepsilon}\int_{B_{0}}G(y_{0},y)\beta|\langle q_{1},(\nabla A_{ \varepsilon})q_{1}\rangle|dV_{g}\]
satisfies
\[I \leq \int_{B_{0}}G(y,y_{0})\beta\left(|\nabla q_{1}|^{2}+\frac{\Lambda_{0}^{2}}{8\varepsilon^{2}}|q_{1}|^{2}\right)dV_{g}+C\int_{B_{0}}G(y,y_{0})\left(\frac{1}{R_{0}}|d|q_{1}|^{2}|+\frac{1}{R_{0}^{2}}|q_{1}|^{2}\right)dV_{g}. \tag{3.10}\]
Proof of Lemma 3.2.: This follows from a weighted interpolation inequality and a dyadic decomposition. Let
\[A_{n}=\left\{|y|\in[r_{n+1},r_{n}]\right\}\]
for \(n\geq 0\) be a sequence of disjoint annuli covering \(B_{0}\) where the radii \(r_{n}\) are defined inductively by
\[r_{0} = R_{0}\] \[r_{n+1} = r_{n}-\min\left\{\tfrac{r_{n}}{S},\tfrac{1}{M}\right\}.\]
Next, subdivide each \(A_{n}\) into a disjoint union of sectors \(A_{n\ell}\) such that \(\operatorname{diam}(A_{n\ell})\leq|r_{n+1}-r_{n}|\). It is now enough to prove (3.10) separately for each of the disjoint sectors \(A_{n\ell}\).
On these sectors, we have the following weighted interpolation inequality: If \(p\) satisfies
\[\frac{1}{p}=\frac{j}{n}+\alpha\left(\frac{1}{r}-\frac{m}{n}\right)+\frac{1- \alpha}{s}. \tag{3.11}\]
then the following inequality holds uniformly over the collection \(A_{n\ell}\) and uniformly in \(M\) (on which \(G=G(y,y_{0})\) depends).
\[\left(\int_{A_{n\ell}}G^{p/2}|\nabla^{j}v|^{p}\ dV\right)^{1/p} \leq C\left(\int_{A_{n\ell}}G^{r/2}|\nabla^{m}v|^{r}\ dV\right)^{\alpha/r} \left(\int_{A_{n\ell}}G^{s/2}|v|^{s}\ dV\right)^{(1-\alpha)/s} \tag{3.12}\] \[+C\left(\int_{A_{n\ell}}G|v|^{2}\ dV\right)^{1/2}.\]
To prove (3.12), notice that without the Green's function weight (i.e. setting \(G=1\)) this follows from the scale invariance of the standard interpolation inequalities (the \(L^{2}\)-term gets comparatively stronger on smaller balls). The version including \(G\) follows from this by invoking the Harnack inequality ([12], Theorem 8.20) on each \(A_{n\ell}\), which shows
\[\sup_{A_{n\ell}}|G(y_{0},y)|\leq C\inf_{A_{n\ell}}|G(y_{0},y)|.\]
Indeed, by construction, each \(A_{n\ell}\) is contained in a ball of radius \(R_{n}\) such that the concentric ball of radius \(4R_{n}\) is contained in \(B_{R_{0}}-\{y_{0}\}\). Since \(G(y_{0},y)\geqslant 0\) by the maximum principle and satisfies \((\Delta_{g}+M^{2})G(y_{0},y)=0\) on \(B_{R_{0}}-\bigcup_{k\geqslant n+3}A_{k}\), the Harnack inequality ([12], Theorem 8.20) applies. Moreover, by the scaling of the constant (see [12] Theorem 8.20 and the subsequent comments), the constant can be taken to be uniform since \(r_{n}-r_{n+1}\leqslant\frac{1}{M}\) by construction (so that in the notation of 8.20 one has \(\nu R=O(1)\)).
We conclude the lemma using (3.12). Without disrupting the bounds (2.12) we may assume that \(\beta=\chi^{2}\) is the square of another smooth cut-off function satisfying the same bounds up to universal constants. In the case of dimension \(4\), applying Holder's inequality with \(q^{*}=4\) and \(p^{*}=4/3\) yields
\[\frac{1}{\varepsilon}\int_{A_{n\ell}}G(y_{0},y)\beta|\langle q_{1},(\nabla A_{\varepsilon})q_{1}\rangle|dV_{g} \leq \frac{1}{\varepsilon}\|A_{\varepsilon}\|_{L^{1,4}(B_{0})}\left( \int_{A_{n\ell}}G^{p/2}|q_{1}\sqrt{\beta}|^{p}\ dV\right)^{2/p} \tag{3.13}\]
where \(p=8/3\). Next, we apply the interpolation inequality (3.12) to \(q_{1}\sqrt{\beta}\) with \(r=s=2\), and \(j=0\) and \(m=1\) in which case (3.11) shows \(\alpha=\frac{1}{2}\) hence (3.13) is bounded by
\[\leq \frac{C}{\varepsilon}\|A_{\varepsilon}\|_{L^{1,4}(B_{0})}\left[ \left(\int_{A_{n\ell}}G|\nabla(\chi q_{1})|^{2}\ dV\right)^{1/2}\left(\int_{A_ {n\ell}}G|\chi q_{1}|^{2}\ dV\right)^{1/2}+C\|G^{1/2}\chi q_{1}\|_{L^{2}(A_{n \ell})}^{2}\right]\] \[\leq \frac{C}{\varepsilon}\|A_{\varepsilon}\|_{L^{1,4}(B_{0})}\left( \varepsilon\frac{\|G^{1/2}\nabla(\chi q_{1})\|_{L^{2}(A_{n\ell})}^{2}}{2}+ \frac{\|G^{1/2}\chi q_{1}\|_{L^{2}(A_{n\ell})}^{2}}{2\varepsilon}+\|G^{1/2} \chi q_{1}\|_{L^{2}(A_{n\ell})}^{2}\right)\] \[\leq C\|A_{\varepsilon}\|_{L^{1,4}(B_{0})}\left(\int_{A_{n\ell}}G(y,y_ {0})\beta\left(|\nabla q_{1}|^{2}+\frac{1}{2\varepsilon^{2}}|q_{1}|^{2}\right) \ dV+\int_{A_{n\ell}}G(y_{0},y)\frac{1}{R_{0}^{2}}|q_{1}|^{2}\ dV\right)\]
where we have used the bounds (2.12) on \(d\chi\) to obtain the last term. The assumption that \(\|A_{\varepsilon}\|_{L^{1,4}}\to 0\) allows us to choose \(\varepsilon\) sufficiently small that \(C\left(1+\Lambda_{0}^{-2}\right)\|A_{\varepsilon}\|_{L^{1,4}}\leqslant 1\), giving (3.10). The case of dimension \(n\neq 4\) differs only in the arithmetic choices of exponents in Holder's inequality.
## 4. Generalized Seiberg-Witten Equations
This section introduces generalized Seiberg-Witten equations; the subsequent section shows that in the relevant cases these fit into the framework of Sections 2-3.
### Seiberg-Witten Data
Generalized Seiberg-Witten equations are systems of coupled non-linear first-order PDEs on manifolds of dimension \(3\) and \(4\). There is a system of generalized Seiberg-Witten equations associated to each quaternionic representation of a compact Lie group \(G\). Rather than working in the most general setting, we will here opt for an abridged exposition which suffices for our purposes. For a more general introduction, see [1, 45, 46].
We first focus on the three-dimensional case. Let \((Y,g_{0})\) denote an oriented Riemannian \(3\)-manifold, and fix a spin structure \(\mathfrak{s}\to Y\) considered as a principal \(\operatorname{Sp}(1)\simeq\operatorname{Spin}(3)\)-bundle. We may write
\[T^{\star}Y\simeq\mathfrak{s}\times_{Ad}\operatorname{Im}(\mathbb{H}) \tag{4.1}\]
where \(Ad:\operatorname{Sp}(1)\to\operatorname{Im}(\mathbb{H})=\mathfrak{sp}(1)\) is the adjoint representation of \(\operatorname{Spin}(3)\). Next, let \(G\) be a compact Lie group, and \(V\) a quaternionic vector space with a real inner-product denoted \(\langle-,-\rangle\) carrying a quaternionic representation
\[\rho:G\to\operatorname{GL}_{\mathbb{H}}(V)\]
respecting the inner product.
Since \(\rho\) is quaternionic, it extends to a map (denoted by the same symbol) \(\rho:\operatorname{Sp}(1)\times G\to\operatorname{GL}_{\mathbb{H}}(V)\) given by \((q,g)\mapsto q\cdot\rho(g)\). Let
\[\gamma:\operatorname{Im}(\mathbb{H})\otimes_{\mathbb{R}}\mathfrak{g}\to\operatorname{End}_{\mathbb{H}}(V) \tag{4.2}\]
be the induced linearized action. Additionally, for \(\Psi\in V\) define \(\mu:V\to\operatorname{Im}(\mathbb{H})\otimes_{\mathbb{R}}\mathfrak{g}\) by
\[\frac{1}{2}\mu(\Psi,\Psi)=\frac{1}{2}\underset{\alpha,j}{\sum}\langle\gamma(I_{j}\otimes\mathfrak{t}_{\alpha})\Psi,\Psi\rangle\;I_{j}\otimes\mathfrak{t}_{\alpha} \tag{4.3}\]
where \(I_{j}\) for \(j=1,2,3\) is a basis of \(\operatorname{Im}(\mathbb{H})\), and \(\{\mathfrak{t}_{\alpha}\}\) are a basis of \(\mathfrak{g}\). It is straightforward to check that \(\frac{1}{2}\mu\) is the **hyperkahler moment map** for the action of \(G\) on \(V\) by \(\rho\).
Working globally on \(Y\) now, let \(P\to Y\) be a principal \(G\)-bundle.
**Definition 4.1**.: The **Spinor Bundle** associated to \(\rho\) is the vector bundle
\[S=(\mathfrak{s}\times_{Y}P)\times_{\operatorname{Sp}(1)\times G}V.\]
It is endowed with a Clifford multiplication and a moment map denoted (respectively) by
\[\gamma:\Omega^{1}(\mathfrak{g}_{P})\to\operatorname{End}(S)\qquad\qquad\qquad \mu:S\to\Omega^{1}(\mathfrak{g}_{P})\]
given fiberwise by the maps (4.2) and (4.3). Here, \(\Omega^{1}(\mathfrak{g}_{P})\) is the space of \(1\)-forms valued in the adjoint bundle of \(P\), viewed as the associated bundle via (4.1) and the adjoint representation of \(G\).
Each set of data \((G,V,\rho)\) gives rise to a system of generalized Seiberg-Witten equations on \((Y,g_{0})\). These are the following PDEs for a pair \((\Psi,A)\in\Gamma(S)\times\mathscr{A}(P)\) consisting of a spinor \(\Psi\) and a connection \(A\) on the principal bundle \(P\) respectively. Such a pair is called a configuration.
**Definition 4.2**.: The **generalized Seiberg-Witten equations** determined by \((P,S,\gamma,\mu)\) are
\[\not{D}_{A}\Psi = 0 \tag{4.4}\] \[\star F_{A}+\tfrac{1}{2}\mu(\Psi,\Psi) = 0 \tag{4.5}\]
where \(\not{D}_{A}\) denotes the Dirac operator on \(S\) determined by the spin connection and \(A\) using the Clifford multiplication \(\gamma\), and \(F_{A}\) is the curvature of \(A\). These equations are invariant under the gauge group
\[\mathcal{G}=\Gamma(P\times_{Ad}G).\]
The discussion above carries over to the case of an oriented Riemannian \(4\)-manifold \((X,g_{0})\) with only two minor modifications.
1. As not all \(4\)-manifolds are spin, we impose the additional requirement that there exists a central element \(-1\in Z(G)\) so that \(\rho(-1)=-\mathrm{Id}\), and consider principal bundles \(P,Q\) with structure groups \(G\) and \(\operatorname{Spin}^{G}(4)=(\operatorname{Spin}(4)\times G)/\mathbb{Z}_{2}\) respectively. Here, \(\mathbb{Z}_{2}\) acts by \((1,1)\mapsto(-1,-1)\).
2. In (4.1), \(\Lambda^{1}(Y)\) is replaced by \(\Lambda^{2}_{+}(X)=Q\times_{\sigma}\operatorname{Im}(\mathbb{H})\) where \(\sigma:\operatorname{Spin}^{G}(4)\to SO(\operatorname{Im}(\mathbb{H}))\) is the composition of projection to \(SO(4)\) via \(\operatorname{Spin}(4)\) and the standard \(3\)-dimensional representation of \(SO(4)\). Similarly in (4.4-4.5), \(\star F_{A}\) is replaced by \(F^{+}_{A}\) and \(\not{D}_{A}\) by \(\not{D}^{+}_{A}\).
See [45] for a more complete discussion of the \(4\)-dimensional case.
Most equations of interest in mathematical gauge theory arise as generalized Seiberg-Witten equations for particular choices of the data \((G,V,\rho)\).
**Example 4.3**.: The Anti-Self-Dual (ASD) Yang-Mills equations and standard Seiberg-Witten equations (and their dimensional reductions) are generalized Seiberg-Witten equations obtained from the following data.
* The data \(G=SU(2)\) and \(V=\{0\}\) with \(\rho\) being the trivial representation gives the ASD Yang-Mills equations.
* The data \(G=U(1)\) and \(V=\mathbb{H}\) with \(\rho\) being multiplication by \(e^{i\theta}\in U(1)\) on the right reproduces the standard Seiberg-Witten equations.
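Spelling out the first of these (an elementary check): when \(V=\{0\}\) the spinor vanishes identically and \(\mu\equiv 0\), so (4.4) is vacuous and (4.5) reduces to
\[\star F_{A}=0\ \text{ in dimension }3,\qquad\qquad F^{+}_{A}=0\ \text{ in dimension }4,\]
i.e. flat connections in dimension \(3\) and the ASD Yang-Mills equations in dimension \(4\).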
We now distinguish four cases, **(I)**-**(IV)** as in the statement of Theorem 1.3. These cases will be referred to repeatedly throughout the proof of Theorem 1.3.
**Example 4.4** (**Case (I))**.: The data \(G=U(1)\) and \(V=\mathbb{H}\otimes_{\mathbb{C}}\mathbb{C}^{r}\) with \(\rho\) being multiplication by \(e^{i\theta}\in U(1)\) on the right on the first factor leads to the Seiberg-Witten equations with \(r\) spinors.
More concretely, in this case \(S=W\otimes E\) where \(W\to Y\) is the spinor bundle of a spin\({}^{c}\) structure and \(E\to Y\) an auxiliary bundle of (complex) rank \(r\) with trivial determinant. Then the equations are
\[D\!\!\!/\,_{A}\Psi = 0\] \[\star F_{A}+\tfrac{1}{2}\mu(\Psi,\Psi) = 0\]
where \(D\!\!\!/\,_{A}\) is the Dirac operator on \(S\) and the moment map \(\mu\) can be described as follows. Let \(e^{j}\) denote a local frame of \(T^{\star}Y\); in the local frame of \(W\) arising from the \(\pm i\) eigenspaces of Clifford multiplication by \(e^{1}\), a spinor can be written \(\Psi=(\alpha,\beta)\) where \(\alpha,\beta\in\Gamma(E)\). Then the moment map is
\[\frac{1}{2}\mu(\Psi,\Psi)=\frac{1}{2}\sum_{j=1}^{3}\langle ie^{j}.\Psi,\Psi\rangle\, ie^{j}=\frac{i}{2}\Big((|\beta|^{2}-|\alpha|^{2})e^{1}+\mathrm{Re}(-\overline{\alpha}\beta)e^{2}+\mathrm{Im}(-\overline{\alpha}\beta)e^{3}\Big). \tag{4.6}\]
In particular \(\mu\) is the sum of the standard Seiberg-Witten moment map over the \(r\)-spinors. See [4, 25] for further details.
**Example 4.5** (**Case (III))**.: On a 4-manifold \(X\) the equations and moment map are analogously related to the standard 4-dimensional Seiberg-Witten equations. To be precise, they take the form
\[D\!\!\!/\,_{A}^{+}\Psi = 0\] \[F_{A}^{+}+\tfrac{1}{2}\mu(\Psi,\Psi) = 0\]
where \(\mu\) is defined analogously to (4.6) using an orthonormal basis \(\omega^{j}\) of \(\Lambda^{2}_{+}(i\mathbb{R})\) in place of the basis \(e^{j}\).
**Example 4.6**.: (**Case (II)**). The data \(G=SU(2)\) and \(V=\mathfrak{su}(2)\otimes_{\mathbb{R}}\mathbb{H}\) with \(\rho\) being the quaternionification of the adjoint representation gives rise to the equations for a flat \(\mathrm{SL}(2,\mathbb{C})\) connection in dimension 3. In this case,
\[S=(\Lambda^{0}\oplus\Lambda^{1})(\mathfrak{g}_{P})\qquad\qquad\qquad D\!\!\!/\,_{A}\Psi=\mathbf{d}_{A}\Psi=\begin{pmatrix}0&-d_{A}^{\star}\\ -d_{A}&\star d_{A}\end{pmatrix}\begin{pmatrix}\Psi_{0}\\ \Psi_{1}\end{pmatrix}\]
and the moment map is \(\frac{1}{2}\mu(\Psi,\Psi)=-\frac{1}{2}\star[\Psi\wedge\Psi].\) The equations therefore become
\[\mathbf{d}_{A}\Psi = 0\] \[\star F_{A}-\tfrac{1}{2}\star[\Psi\wedge\Psi] = 0\]
**Example 4.7**.: (**Case (IV)**). In dimension 4, the data of Example 4.6 gives rise to the complex ASD equations. Analogously to the 3-dimensional case, in dimension 4 one has
\[S^{+}=\Lambda^{1}(\mathfrak{g}_{P})\hskip 28.452756ptS^{-}=(\Lambda^{0}\oplus \Lambda^{2}_{-})(\mathfrak{g}_{P})\hskip 56.905512pt\mathbf{d}^{+}_{A}=(d^{ \star}_{A},d^{-}_{A})\]
and \(\frac{1}{2}\mu(\Psi,\Psi)=-\frac{1}{2}[\Psi\wedge\Psi]^{+}.\) The corresponding equations are
\[\mathbf{d}^{+}_{A}\Psi = 0\] \[F^{+}_{A}-\frac{1}{2}[\Psi\wedge\Psi]^{+} = 0.\]
**Example 4.8**.: **(Vafa-Witten Equations)** The same data as in the previous example can also give rise to the Vafa-Witten equations on a four-manifold \(X^{4}\) (depending on a choice of auxiliary data omitted from the discussion here--see [45], Examples 2.31 and 2.36). In this case, one has
\[S^{+}=(\Lambda^{0}\oplus\Lambda^{2}_{+})(\mathfrak{g}_{P})\hskip 28.452756ptS^{-}= \Lambda^{1}(\mathfrak{g}_{P})\hskip 56.905512pt\not{D}_{A}(C,B)=d_{A}C+d^{ \star}_{A}B\]
and if \(\Psi=(C,B)\in(\Omega^{0}\oplus\Omega^{2}_{+})(\mathfrak{g}_{P})\) then the moment map is
\[\frac{1}{2}\mu(\Psi,\Psi)=[C,B]+\frac{1}{2}[B\times B]\]
where \([\_\times\_]\) is the product induced by viewing \(\Lambda^{2}_{+}(TX)\) as the bundle of Lie algebras arising as the associated bundle of \(SO(4)\)-frames on \(X\) via the positive irreducible component of the adjoint representation on \(\mathfrak{so}(4)\). Explicitly, in an orthonormal frame \(\omega_{i}\) of \(\Lambda^{2}_{+}\) it is given on a self-dual 2-form \(B=\omega_{i}\otimes B_{i}\) by \([B\times B]=\epsilon_{ijk}[B_{i},B_{j}]\omega_{k}\) where summation over repeated indices is implicit.
**Example 4.9**.: **(ADHM\({}_{r,k}\) Seiberg-Witten Equations)** For \(G=U(k)\) and
\[V_{r,k}=\operatorname{Hom}_{\mathbb{C}}(\mathbb{C}^{r},\mathbb{H}\otimes_{ \mathbb{C}}\mathbb{C}^{k})\oplus(\mathbb{H}^{\vee}\otimes_{\mathbb{R}}\mathfrak{ u}(k))\]
where \(\rho\) acts on the \(\mathbb{C}^{k}\) factor via the standard representation and the \(\mathfrak{u}(k)\) factor via the adjoint gives rise to the ADHM\({}_{r,k}\) Seiberg-Witten Equations. Here \(\mathbb{H}^{\vee}\) denotes the dual space. The zero-locus \(\mu^{-1}(0)\) of the moment map in this situation coincides with the ADHM construction of the moduli space of ASD \(SU(r)\)-instantons of charge \(k\) on \(\mathbb{R}^{4}\). These equations are conjectured to connect Yang-Mills theory on manifolds with special holonomy to Seiberg-Witten theory on calibrated submanifolds (see [2, 5, 15]).
When \(k=1\) these equations coincide with Example 4.5. For \((r,k)=(1,2)\) the spinor bundle is (effectively) the sum of those in Case (I) and Case (II), with the moment map being the sum of the \(U(2)\)-analogues of the moment maps for those. This particular case is studied in detail in [46].
### \(\mathbb{Z}_{2}\)-Harmonic Spinors and Compactness
This subsection summarizes known compactness results for the moduli space of solutions to generalized Seiberg-Witten equations; these were established in [16, 37, 38, 40, 41, 46]. See also [3, 16, 25, 46] for additional details and exposition. It is well-known that the moduli space of solutions to the standard Seiberg-Witten equations is compact [17, 24]. This is a consequence of the inequality
\[\langle\gamma(\mu(\Psi,\Psi))\Psi,\Psi\rangle\geqslant\frac{1}{4}|\Psi|^{4}\]
which leads to an _a priori_ bound on the \(L^{2}\)-norm of the spinor for a solution of (4.4-4.5). For general Seiberg-Witten data, however, no such inequality can hold because \(\mu^{-1}(0)\neq\{0\}\) in general. Consequently, there may be sequences of solutions \((\Psi_{i},A_{i})\) with \(\|\Psi_{i}\|_{L^{2}}\to\infty\), which have no convergent subsequences and therefore lead to a loss of compactness of the moduli space.
In this situation, one may attempt to compactify the moduli space by "blowing up" the configuration space, i.e. extending it to include a boundary stratum at infinity. More specifically, we re-parameterize the subset of the configuration space with \(\|\Psi\|_{L^{2}}>0\) by replacing \(\Psi\) by a pair \((\Phi,\varepsilon)\) where \(\|\Psi\|_{L^{2}}=\frac{1}{\varepsilon}\) and \(\Phi=\varepsilon\Psi\) is a spinor with unit \(L^{2}\)-norm. After including the boundary stratum consisting of configurations with \(\varepsilon=0\), the blown-up configuration space consists of triples \((\Phi,A,\varepsilon)\in\mathbb{S}(\Gamma(S))\times\mathscr{A}(P)\times[0, \infty]\)
where \(\mathbb{S}(\Gamma(S))\) denotes the sphere of spinors with unit \(L^{2}\)-norm. The corresponding **blown-up Seiberg-Witten equation** is
\[\not{D}_{A}\Phi = 0 \tag{4.7}\] \[\star\varepsilon^{2}F_{A}+\tfrac{1}{2}\mu(\Phi,\Phi) = 0\] (4.8) \[\|\Phi\|_{L^{2}} = 1. \tag{4.9}\]
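Concretely, the blown-up equations arise by direct substitution: if \((\Psi,A)\) solves (4.4-4.5) with \(\|\Psi\|_{L^{2}}=\varepsilon^{-1}\) and \(\Phi=\varepsilon\Psi\), then linearity of \(\not{D}_{A}\) in the spinor and the quadratic scaling \(\mu(\Phi,\Phi)=\varepsilon^{2}\mu(\Psi,\Psi)\) give
\[\not{D}_{A}\Phi=\varepsilon\not{D}_{A}\Psi=0,\qquad\star\varepsilon^{2}F_{A}+\tfrac{1}{2}\mu(\Phi,\Phi)=\varepsilon^{2}\left(\star F_{A}+\tfrac{1}{2}\mu(\Psi,\Psi)\right)=0,\qquad\|\Phi\|_{L^{2}}=1,\]
so (4.7-4.9) is (4.4-4.5) rewritten in the variables \((\Phi,A,\varepsilon)\).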
Intuitively, one expects that a sequence of solutions to the original equations with diverging \(L^{2}\)-norm should converge to a solution of the \(\varepsilon=0\) version of the blown-up equations, and that including these solutions as boundary strata would result in a compact moduli space. The upcoming Theorem 4.10 establishes a version of this statement, but there are several important caveats:
1. the \(\varepsilon=0\) version of (4.8) demands that \(\Phi\) lies in the set \(\mu^{-1}(0)\) in each fiber; the latter is a closed subset of each fiber of \(S\), but is not a manifold because \(0\) is a cone point in each fiber.
2. The energy density \(|F_{A}|^{2}\) of the curvature may concentrate along subsets of \(Y\) (resp. \(X\)) in the limit \(\varepsilon\to 0\).
Because of these complications, the limit of a sequence of solutions only satisfies the \(\varepsilon=0\) version of (4.8) away from a closed subset denoted \(\mathcal{Z}\) called the **singular set**. In the case of a non-abelian gauge group on a \(4\)-manifold, the bubbling locus arising from Uhlenbeck compactness is also included in \(\mathcal{Z}\).
The following theorem combines compactness results for several generalized Seiberg-Witten equations which were proved independently by multiple authors. It unifies results on Case (I) the Seiberg-Witten equations with \(r=2\) spinors in \(3\) dimensions [16], Case (II) the equations for a flat \(\operatorname{SL}(2,\mathbb{C})\)-connection on a \(3\)-manifold [38, 46], Case (III) the Seiberg-Witten equations with \(r=2\) spinors in \(4\) dimensions [40], and Case (IV) the complex ASD equation in \(4\) dimensions [37]. In addition, it includes regularity statements for the singular set proved in [39, 50]. In each case, if a sequence of solutions has a subsequence on which the \(L^{2}\)-norm remains bounded (i.e. if \(\limsup\varepsilon>0\)) then standard compactness arguments apply to show a subsequence converges; thus we state the theorem only in the case where \(\varepsilon\to 0\).
**Theorem 4.10**.: ([16, 37, 38, 39, 40, 50]) Suppose that \(Y\) is a closed, oriented \(3\)-manifold (respectively, \(X\) a \(4\)-manifold) and \((P,G,\rho,\mu)\) generalized Seiberg-Witten data corresponding to Cases (I), (II) from the statement of Theorem 1.3 (resp. Cases (III), (IV)). Given a sequence \((\Phi_{i},A_{i},\varepsilon_{i})\) of blown-up configurations satisfying (4.7-4.9), i.e.
\[\not{D}_{A_{i}}\Phi_{i}=0 \star\varepsilon_{i}^{2}F_{A_{i}}+\frac{1}{2}\mu(\Phi_{i}, \Phi_{i})=0 \|\Phi_{i}\|_{L^{2}}=1\]
(resp. \(\not{D}_{A_{i}}^{+}\) and \(F_{A_{i}}^{+}\)) with respect to a sequence of metrics \(g_{i}\to g_{0}\) on \(Y\) (resp. \(X\)), such that \(\varepsilon_{i}\to 0\).
Then, there exists a triple \((\mathcal{Z}_{0},\Phi_{0},A_{0})\) where
* \(\mathcal{Z}_{0}\subseteq Y\) (resp. \(X\)) is a closed rectifiable subset of Hausdorff codimension at least \(2\).
* \(\Phi_{0}\) is a spinor on \(Y-\mathcal{Z}_{0}\) such that \(|\Phi_{0}|\) extends as a continuous function to \(Y\) (resp. \(X\)) with \(\mathcal{Z}_{0}=|\Phi_{0}|^{-1}(0)\).
* \(A_{0}\) is a connection on \(P|_{Y-\mathcal{Z}_{0}}\) (resp. \(P|_{X-\mathcal{Z}_{0}}\)),
such that \((\Phi_{0},A_{0})\) satisfies the \(\varepsilon=0\) version of (4.7-4.9) on \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)) with respect to the metric \(g_{0}\). Furthermore, there is an \(\alpha>0\) such that, after passing to a subsequence and up to gauge transformations defined on \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)),
\[\Phi_{i}\stackrel{L^{2,2}_{loc}}{\rightharpoonup}\Phi_{0}\qquad\qquad A_{i}\stackrel{L^{1,2}_{loc}}{\rightharpoonup}A_{0}\qquad\qquad|\Phi_{i}|\stackrel{C^{0,\alpha}}{\longrightarrow}|\Phi_{0}|. \tag{4.10}\]
Here, local convergence means convergence on compact subsets of \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)), and the half-arrows in the first two statements denote convergence in the weak topology. In Case (III) the convergence \(\Phi_{i}\rightarrow\Phi_{0}\) is (strongly) \(L^{2,2}_{loc}\) and in Case (IV) it is only (strongly) \(L^{1,2}_{loc}\).
Finally, in Case (II) and Case (IV) the limiting connection \(A_{0}\) has harmonic curvature, i.e. it satisfies \(d_{A_{0}}F_{A_{0}}=d^{\star}_{A_{0}}F_{A_{0}}=0\) where \(d_{A_{0}}\) is the exterior covariant derivative.
The limiting configuration \((\mathcal{Z}_{0},\Phi_{0},A_{0})\) satisfies the \(\varepsilon=0\) version of (4.7-4.9), which reads
\[\not{D}_{A_{0}}\Phi_{0}=0\qquad\qquad\mu(\Phi_{0},\Phi_{0})=0\qquad\qquad\|\Phi_ {0}\|_{L^{2}}=1, \tag{4.11}\]
and is considered up to the action of gauge transformations on \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)). This equation is not elliptic, even modulo gauge, as the symbol of \(\mathbf{d}_{A}\) degenerates in the limit \(\varepsilon\to 0\). The **Haydys Correspondence** (see [25] Section 2, [3]), however, shows that by exploiting gauge invariance in a different way than gauge-fixing, (4.11) can be recast as an elliptic equation whose symbol degenerates only along \(\mathcal{Z}_{0}\).
The Haydys Correspondence may be paraphrased as follows (see [3] Section 4 and [14] for details). Let \(\pi:\mu^{-1}(0)\to\mathfrak{X}=\mu^{-1}(0)/G\) be the projection map to the fiberwise quotient by the action of \(G\); \(\mathfrak{X}\) is a bundle whose fibers are hyperkahler orbifolds isometric to the hyperkahler quotient \(V///G\).
Given a solution \((\mathcal{Z}_{0},\Phi_{0},A_{0})\) of (4.11), the projection \(s=\pi(\Phi_{0})\in\Gamma(\mathfrak{X})\) is a solution of a different PDE, the _Fueter equation_ \(\mathfrak{F}(s)=0\). The content of the Haydys correspondence is that one can recover the triple \((\mathcal{Z}_{0},\Phi_{0},A_{0})\) from \(s\) even though \(s\) retains no information about the connection \(A_{0}\) or the action of \(G\). This is done by choosing a local lift \(\Phi_{0}\) of \(s\) to \(\mu^{-1}(0)\subset\Gamma(S)\) on \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)). Such a lift determines a splitting of \(S\) along the image of \(\Phi_{0}\) as follows. First, decompose \(S=T(\mu^{-1}(0))\oplus T(\mu^{-1}(0))^{\perp}\); the first factor further decomposes as \(T(\mu^{-1}(0))=S^{\mathrm{Re}}_{\Phi_{0}}\oplus\mathfrak{g}\Phi_{0}\) (see Footnote 2), where \(\mathfrak{g}\Phi_{0}=\{v\Phi_{0}\ |\ v\in\mathfrak{g}_{P}\}\). The decomposition determined by \(\Phi_{0}\) is then
Footnote 2: The notation of the superscripts is chosen to agree with the case of the Seiberg-Witten equations with \(r=2\) spinors where the spinor bundle admits a real structure \(\tau\) with \(\tau^{2}=Id\), see Sections 2–3 of [25]. Other authors [3] denote these \(\mathfrak{H}_{\Phi_{0}},\mathfrak{H}_{\Phi_{0}}\), while we reserve the latter notation for the splitting in (2.4)
\[S|_{Y-\mathcal{Z}_{0}}=S^{\mathrm{Re}}_{\Phi_{0}}\oplus S^{\mathrm{Im}}_{\Phi _{0}} \tag{4.12}\]
where \(S^{\mathrm{Im}}_{\Phi_{0}}=\mathfrak{g}\Phi_{0}\oplus T(\mu^{-1}(0))^{\perp}\). It can then be shown that the condition that \(\nabla_{A_{0}}\Phi_{0}\in\Gamma(S^{\mathrm{Re}}_{\Phi_{0}})\) determines \(A_{0}\) uniquely, in which case \(\mathfrak{F}(s)=0\) is equivalent to (4.11). Notice that on either side of the Haydys correspondence the singular behavior along \(\mathcal{Z}_{0}\) cannot be eliminated.
In Cases (I)-(IV) of Theorem 1.3 (and Examples 4.8, 4.9), the Haydys Correspondence and the Fueter equation admit a simplification due to the following additional structure: there exists a linear subspace \(E\subset\mu^{-1}(0)\subset V\) such that every \(G\)-orbit intersects \(E\) in exactly 2 points. In this situation, for a local lift valued in \(E\), one has \(S^{\mathrm{Re}}_{\Phi_{0}}=E\) and the Fueter equation is simply the Dirac equation on spinors considered up to sign. The data of a solution of (4.11) is then equivalent to the following data.
**Definition 4.11**.: A \(\mathbb{Z}_{2}\)**-Harmonic Spinor** valued in a (real) Clifford module \(E\to Y\) (resp. \(X\)) is a triple \((\mathcal{Z}_{0},\ell_{0},\Phi_{0})\), where
1. \(\mathcal{Z}_{0}\subset Y\) (resp. \(X\)) is a closed, rectifiable subset of Hausdorff codimension 2,
2. \(\ell_{0}\to Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)) is a real line bundle, and
3. \(\Phi_{0}\in\Gamma(E\otimes_{\mathbb{R}}\ell_{0})\) is a spinor with \(\nabla\Phi_{0}\in L^{2}\) and whose norm \(|\Phi_{0}|\) extends to \(Y\) (resp. \(X\)) as a Hölder continuous function,
such that
\[\|\Phi_{0}\|_{L^{2}}=1\qquad\qquad\not{D}_{\mathcal{Z}_{0}}\Phi_{0}=0\qquad \qquad|\Phi_{0}|^{-1}(0)=\mathcal{Z}_{0}. \tag{4.13}\]
\(\mathbb{Z}_{2}\)-harmonic spinors are always considered up to the equivalence \(\Phi_{0}\mapsto-\Phi_{0}\).
In (4.13), the Dirac operator is formed using the connection arising from the connection on \(E\) and the unique flat connection with holonomy in \(\mathbb{Z}_{2}\) on the line bundle \(\ell_{0}\). Under the Haydys correspondence, this unique flat connection of \(\ell_{0}\) is equivalent to the connection arising from \(A_{0}\) in the conclusion of Theorem 4.10 (see [25] Section 3) and the spin connection. For all the cases of Theorem 1.3, the \(\mathbb{Z}_{2}\)-harmonic spinors that arise are sections of Clifford modules of real rank 4. In Case (II), and Case (IV) the \(\mathbb{Z}_{2}\)-harmonic spinors arising from Theorem 4.10 are also called \(\mathbb{Z}_{2}\)-harmonic 1-forms, i.e. spinors for the Clifford modules \((\Omega^{0}\oplus\Omega^{1})(\mathbb{R})\) or \(\Omega^{1}(\mathbb{R})\) in 3 and 4 dimensions respectively.
## 5. Concentration Properties of Generalized Seiberg-Witten Equations
As explained in the introduction, it is desirable to improve the convergence statements in Theorem 4.10 to \(C^{\infty}_{loc}\). Although the equations are elliptic for \(\varepsilon\neq 0\), naive attempts to bootstrap convergence are foiled by increasingly large powers of \(\varepsilon\) entering the elliptic estimates (see Section 6). The abstract framework introduced in Section 2-3 can be applied to overcome this problem. Let \((\Phi_{i},A_{i},\varepsilon_{i})\) be a sequence of solutions to (4.7-4.9) converging to a \(\mathbb{Z}_{2}\)-harmonic spinor \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) in the sense of Theorem 4.10. Then the un-renormalized configuration may be written as a perturbation of the limit \((\frac{\Phi_{i}}{\varepsilon_{i}},A_{i})=(\frac{\Phi_{0}}{\varepsilon_{i}},A_{ 0})+(\varphi_{i},a_{i})\), where \((\varphi_{i},a_{i})\) solve the equation
\[\mathcal{L}_{(\frac{\Phi_{0}}{\varepsilon_{i}},A_{0})}(\varphi_{i},a_{i})+Q( \varphi_{i},a_{i})=-E_{0} \tag{5.1}\]
Here, \(\mathcal{L}_{(\Psi,A)}\) denotes the linearized Seiberg-Witten equations linearized at the configuration \((\Psi,A)\), \(Q(-,-)\) is a quadratic term, and \(E_{0}=SW(\Phi_{0},A_{0})\) is the error by which the limiting configuration fails to solve the Seiberg-Witten equations. The next two subsections show that (5.1) behaves as a concentrating Dirac operator with fixed degeneracy as \(\varepsilon\to 0\), with the singular set \(\mathcal{Z}_{0}\) occupying the role of the set denoted by the same symbol in Sections 2-3.
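To spell out where (5.1) comes from: the Seiberg-Witten map is at most quadratic in the configuration, so the expansion about the limiting configuration terminates at second order,

\[0=SW\Big{(}\tfrac{\Phi_{0}}{\varepsilon_{i}}+\varphi_{i},\,A_{0}+a_{i}\Big{)}=\underbrace{SW\Big{(}\tfrac{\Phi_{0}}{\varepsilon_{i}},A_{0}\Big{)}}_{E_{0}}+\mathcal{L}_{(\frac{\Phi_{0}}{\varepsilon_{i}},A_{0})}(\varphi_{i},a_{i})+Q(\varphi_{i},a_{i}),\]

where \(SW\) denotes the equations (4.4-4.5), which the un-renormalized configurations solve, and \(Q\) collects the terms quadratic in \((\varphi_{i},a_{i})\); rearranging gives (5.1).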
### The Concentration Property
In this subsection, it is shown that the linearization of the generalized Seiberg-Witten equations associated to any data satisfies the concentration property of Definition 2.3. In Subsection 5.2, it is shown that the non-linearity of the equations satisfies the analogous criteria necessary for Corollary 1.2 to apply.
On a 3-manifold, the linearization of the generalized Seiberg-Witten equations (4.4-4.5) is as follows. Let \((\frac{\Phi}{\varepsilon},A)\) denote a configuration, where \(\|\Phi\|_{L^{2}}=1\) and \(\varepsilon>0\). The linearization at \((\frac{\Phi}{\varepsilon},A)\) acting on a variation \((\varphi,a)\) is
\[\frac{d}{ds}\Big{|}_{s=0}SW\left(\tfrac{\Phi}{\varepsilon}+s\varphi,A+sa\right)=\begin{pmatrix}\not{D}_{A}\varphi+\gamma(a)\frac{\Phi}{\varepsilon}\\ \frac{\mu(\varphi,\Phi)}{\varepsilon}+\star d_{A}a\end{pmatrix}.\]
where \(\mu(\varphi,\psi)\) denotes the polarization of the moment map. To make this into an elliptic system, we supplement the configuration \((\frac{\Phi}{\varepsilon},A)\) with an auxiliary 0-form \(a_{0}\in\Omega^{0}(\mathfrak{g}_{P})\) and impose the gauge-fixing condition
\[-d_{A}^{\star}a+\frac{\mu_{0}(\varphi,\Phi)}{\varepsilon}=0 \tag{5.2}\]
where we have updated our notation so that \(a=(a_{0},a_{1})\in(\Omega^{0}\oplus\Omega^{1})(\mathfrak{g}_{P})\) and \(\mu=(\mu_{0},\mu_{1})\in(\Omega^{0}\oplus\Omega^{1})(\mathfrak{g}_{P})\). Here, \(\mu_{0}:S\otimes S\rightarrow\mathfrak{g}_{P}\) is defined in a local trivialization \(\{\mathfrak{t}_{\alpha}\}\) of \(\mathfrak{g}_{P}\) by \(\mu_{0}(\varphi,\psi):=\sum_{\alpha}\langle\mathfrak{t}_{\alpha}\varphi, \psi\rangle\mathfrak{t}_{\alpha}\). The equations may then be written in the suggestive form:
\[\mathcal{L}_{(\Phi,A,\varepsilon)}\begin{pmatrix}\varphi\\ a\end{pmatrix}=\left(D+\frac{1}{\varepsilon}\mathcal{A}\right)\begin{pmatrix} \varphi\\ a\end{pmatrix} \tag{5.3}\]
where
\[D=\begin{pmatrix}\not{D}_{A}&0\\ 0&\mathbf{d}_{A}\end{pmatrix}\qquad\qquad\qquad\qquad\mathcal{A}=\begin{pmatrix} 0&\gamma(_{-})\Phi\\ \mu(_{-},\Phi)&0\end{pmatrix}, \tag{5.4}\]
and \(\mathbf{d}_{A}=\begin{pmatrix}0&-d_{A}^{\star}\\ -d_{A}&\star d_{A}\end{pmatrix}\begin{pmatrix}a_{0}\\ a_{1}\end{pmatrix}.\) On a 4-manifold \(X\), the auxiliary form \(a_{0}\) is not necessary for ellipticity, and the formula (5.3) for \(\mathcal{L}_{(\Phi,A,\varepsilon)}\) is the same after replacing \(\not{D}_{A}\) by \(\not{D}_{A}^{+}\) and \(\mathbf{d}_{A}\) by \(\mathbf{d}_{A}^{+}=(-d_{A}^{\star},d_{A}^{+})\).
The symbol of \(D\) in (5.4) is
\[\sigma_{D}(\xi)=\begin{pmatrix}\rho(\xi)&0\\ 0&\mathbf{cl}(\xi)\end{pmatrix}\qquad\quad\text{for}\qquad\quad\xi\in T^{\star}Y.\]
where \(\rho,\mathbf{cl}\) are the symbols of \(\not{D}_{A},\mathbf{d}_{A}\) respectively (or \(\not{D}_{A}^{+},\mathbf{d}_{A}^{+}\) in dimension 4). The next lemma is a particular instance of a more general result concerning commuting Clifford pairs discussed in [18, 19].
**Lemma 5.1**.: The linearization of the generalized Seiberg-Witten equations written in the form (5.3)-(5.4) obeys the Concentration Property of Definition 2.3, i.e. \(\sigma_{D}\) and \(\mathcal{A}\) satisfy
\[\mathcal{A}^{\star}\sigma_{D}(\xi)=\sigma_{D}(\xi)^{\star}\mathcal{A} \tag{5.5}\]
for all \(\xi\in T^{\star}Y\).
Proof.: For each \(\xi\in\Omega^{1}(\mathbb{R})\), (5.5) is equivalent to the following equalities for all \(a\in(\Omega^{0}\oplus\Omega^{1})(\mathfrak{g}_{P})\), and \(\varphi,\psi\in\Gamma(S)\):
\[\rho(\xi)\gamma(a) = -\gamma(\mathbf{cl}(\xi)a) \tag{5.6}\] \[\mathbf{cl}(\xi)\mu(\varphi,\psi) = -\mu(\rho(\xi)\varphi,\psi). \tag{5.7}\]
In (5.6), \(\mathbf{cl}\) is extended to act on \(\mathfrak{g}_{P}\)-valued forms via the form components. The expressions (5.6-5.7) are easily verified in an oriented orthonormal frame of \(T^{\star}Y\) (resp. \(T^{\star}X\)) and \(\mathfrak{g}_{P}\).
### Fixed Degeneracies
This subsection shows that the fixed degeneracy assumption of Definition 2.3 is satisfied for the linearized Seiberg-Witten equations, and that the non-linear terms have the form necessary for Corollary 1.2 to apply.
To begin, there is the following alternative description of the subspaces \(S^{\mathrm{Re}},S^{\mathrm{Im}}\) from (4.12). Since \(\frac{1}{2}\mu(\Phi,\Phi)\) is quadratic in \(\Phi\), its linearization at a spinor \(\Phi_{0}\) is given by \(\mu(-,\Phi_{0})\), where \(\mu\) is now interpreted as the bilinear form arising as the polarization of the original quadratic map. On \(Y-\mathcal{Z}_{0}\) where \(\Phi_{0}\) is non-vanishing, one then has,
\[S^{\mathrm{Re}}=\ker(\mu(-,\Phi_{0})) S^{\mathrm{Im}}=(S^{\mathrm{Re}})^{\perp} \tag{5.8}\]
(see [1], Proposition 2.1.5 and [25], Section 2). The next proposition is proved on a case by case basis for the different equations to which Theorem 1.3 applies. The result could be shown using a more abstract framework, but we find it instructive to give explicit descriptions as the splitting described by the proposition provides a novel way of writing many of these equations which may be useful elsewhere.
**Proposition 5.2**.: The linearization of the generalized Seiberg-Witten equations written in the form (5.3)-(5.4) obeys the fixed degeneracy assumption of Definition 2.3, i.e. in dimension \(n=3\) there is a splitting of vector bundles
\[S\oplus(\Omega^{0}\oplus\Omega^{1})(\mathfrak{g}_{P})=\mathfrak{N}\oplus \mathfrak{H}\]
which respects Clifford multiplication \(\gamma\) and is parallel with respect to \(\nabla_{A_{0}}\) such that the map \(\mathcal{A}\) of (5.4) has the block diagonal form (2.4).
In dimension \(n=4\) there are splittings
\[S^{+}\oplus\Omega^{1}(\mathfrak{g}_{P})=\mathfrak{N}^{+}\oplus\mathfrak{H}^{ +}\qquad\quad\text{and}\qquad\quad S^{-}\oplus(\Omega^{0}\oplus\Omega^{2}_{+} )(\mathfrak{g}_{P})=\mathfrak{N}^{-}\oplus\mathfrak{H}^{-}\]
for which the same conclusions hold.
Proof.: The proposition is proved separately for Cases (I)-(IV) of Theorem 1.3.
**Case (I):** (Two spinor Seiberg-Witten on \(Y^{3}\)). In this case, the \(\mathbb{Z}_{2}\)-harmonic spinors are as in Definition 4.11 with \(E\) the spinor bundle of a spin structure on \(Y\). A limiting configuration \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) satisfying (4.11) gives rise to such a \(\mathbb{Z}_{2}\)-harmonic spinor as follows: the Haydys Correspondence gives an isomorphism \(S^{\mathrm{Re}}_{\Phi_{0}}\simeq E\otimes\ell_{0}\), and under this isomorphism the connection induced on \(S^{\mathrm{Re}}\) by \(A_{0}\) is intertwined with the connection formed from the spin connection on \(E\) and the unique flat connection on \(\ell_{0}\) with holonomy in \(\mathbb{Z}_{2}\). Thus in this case, the limiting connection \(A_{0}\) on \(Y-\mathcal{Z}_{0}\) is itself flat with holonomy in \(\mathbb{Z}_{2}\) (see also [16] Appendix I and Sections 2-3 of [25]). We tacitly also refer to the triple \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) as a \(\mathbb{Z}_{2}\)-harmonic spinor.
Let \(S^{\mathrm{Re}}\to Y-\mathcal{Z}_{0}\) be the bundle defined in (5.8). Then define
\[\mathfrak{N}:=S^{\mathrm{Re}}\qquad\qquad\qquad\mathfrak{H}:=S^{\mathrm{Im}} \oplus(\Omega^{0}\oplus\Omega^{1})(i\mathbb{R}). \tag{5.9}\]
The splitting (5.9) is respected by Clifford multiplication by forms \(\alpha\in\Omega^{0}(\mathbb{R})\oplus\Omega^{1}(\mathbb{R})\). Indeed, the linearized moment map (cf. (4.6)) now takes the form
\[\mu(\varphi,\Phi_{0})=\sum_{j=0}^{3}\langle ie^{j}\varphi,\Phi_{0}\rangle e^{j} \otimes i \tag{5.10}\]
where \(\{e^{0}=1,e^{1},e^{2},e^{3}\}\) is an orthonormal frame of \((\Omega^{0}\oplus\Omega^{1})(\mathbb{R})\) and \(i=\sqrt{-1}\) is the basis element of the Lie algebra of \(U(1)\). To show Clifford multiplication respects the splitting, it suffices to show that if \(\varphi\in S^{\mathrm{Re}}\), i.e. \(\mu(\varphi,\Phi_{0})=0\), then \(\mu(e^{k}.\varphi,\Phi_{0})=0\) as well. This follows from the observation that replacing \(e^{j}\) by \(e^{j}.e^{k}\) in (5.10) simply results in a permutation of the frame \(\{e^{0},e^{1},e^{2},e^{3}\}\). Since Clifford multiplication respects \(S^{\mathrm{Re}}\), it also respects the orthogonal complement \(S^{\mathrm{Im}}\).
Next, we show that the connection \(\nabla_{A_{0}}\) respects the splitting as well, i.e. \(\nabla_{A_{0}}\varphi^{\mathrm{Re}}\in S^{\mathrm{Re}}\) and likewise for \(S^{\mathrm{Im}}\). Indeed, the preceding paragraph implies that \(S^{\mathrm{Re}}=\{b.\Phi_{0}\mid b\in\Omega^{0}(\mathbb{R})\oplus\Omega^{1}( \mathbb{R})\}\). Similarly, \(S^{\mathrm{Im}}=\{(ia).\Phi_{0}\mid ia\in\Omega^{0}(\mathbb{R})\oplus\Omega^{ 1}(i\mathbb{R})\}\). Using this description, let \(\varphi^{\mathrm{Re}}=b.\Phi_{0}\in\Gamma(S^{\mathrm{Re}})\) be a spinor in \(S^{\mathrm{Re}}\). Then
\[\nabla_{A_{0}}\varphi^{\mathrm{Re}}=\nabla_{A_{0}}(b.\Phi_{0})=db.\Phi_{0}+b. \nabla_{A_{0}}\Phi_{0}\in\Gamma(S^{\mathrm{Re}})\]
since \(\nabla_{A_{0}}\Phi_{0}\in S^{\mathrm{Re}}\) by the Haydys Correspondence, and \(db.\Phi_{0}=-(\star db).\Phi_{0}\in S^{\mathrm{Re}}\) by the preceding paragraph. An identical argument applies to show that \(S^{\mathrm{Im}}\) is preserved as well. The decomposition (5.9) therefore satisfies both hypotheses of Definition 2.1.
Writing a configuration \((\varphi^{\mathrm{Re}},\varphi^{\mathrm{Im}},a)\) in this decomposition, the linearized Seiberg-Witten equations at \((\frac{\Phi_{0}}{\varepsilon},A_{0})\) take the form (5.3) where
\[D=\begin{pmatrix}\not\!\!D^{\rm Re}_{A_{0}}&0&0\\ 0&\not\!\!D^{\rm Im}_{A_{0}}&0\\ 0&0&\mathbf{d}\end{pmatrix}\qquad\qquad\mathcal{A}=\begin{pmatrix}0&0&0\\ 0&0&\gamma(\_)\Phi_{0}\\ 0&\mu(\_,\Phi_{0})&0\end{pmatrix}, \tag{5.11}\]

so that \(\mathcal{A}\) has the block-diagonal form (2.4) with vanishing upper block, as required.

**Case (II):** (Flat \({\rm SL}(2,\mathbb{C})\) Connections on \(Y^{3}\)). Here the splitting also involves the \(\mathfrak{g}_{P}\)-valued forms: let \(\Omega^{\rm Re}\subseteq(\Omega^{0}\oplus\Omega^{1})(\mathfrak{g}_{P})\) denote the forms whose Lie algebra component is parallel to \(\Phi_{0}\), and \(\Omega^{\rm Im}\) its orthogonal complement. Set

\[\mathfrak{N}:=S^{\rm Re}\oplus\Omega^{\rm Re}\qquad\qquad\qquad\mathfrak{H}:=S^{\rm Im}\oplus\Omega^{\rm Im}. \tag{5.12}\]

The verification that this splitting is parallel with respect to \(\nabla_{A_{0}}\) and respected by Clifford multiplication proceeds as in Case (I). Writing a configuration as \((\varphi^{\rm Re},a^{\rm Re},\varphi^{\rm Im},a^{\rm Im})\) in this decomposition, the linearized equations again take the form (5.3) with

\[D=\begin{pmatrix}\not\!\!D^{\rm Re}_{A_{0}}&0&0&0\\ 0&\mathbf{d}^{\rm Re}_{A_{0}}&0&0\\ 0&0&\not\!\!D^{\rm Im}_{A_{0}}&0\\ 0&0&0&\mathbf{d}^{\rm Im}_{A_{0}}\end{pmatrix}\qquad\qquad\mathcal{A}=\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}\end{pmatrix},\quad A_{\mathfrak{H}}=\begin{pmatrix}0&\gamma(\_)\Phi_{0}\\ \mu(\_,\Phi_{0})&0\end{pmatrix}, \tag{5.13}\]

so that \(\mathcal{A}\) again has the block-diagonal form (2.4).
**Case (III):** (Two spinor Seiberg-Witten on \(X^{4}\)). The four-dimensional case for the two-spinor Seiberg-Witten equations is virtually identical to the three-dimensional case: let \((S^{+})^{\rm Re}=\ker(\mu(-,\Phi_{0}))\) as in (5.8). Set
\[\mathfrak{N}^{+}=(S^{+})^{\rm Re} \mathfrak{H}^{+}:=(S^{+})^{\rm Im}\oplus\Omega^{1}(i\mathbb{R}). \tag{5.14}\]
Then, let \((S^{-})^{\rm Re}=\{\alpha.\varphi\ |\ \alpha\in\Omega^{1}(\mathbb{R}),\ \varphi\in(S^{+})^{\rm Re}\}\) and likewise for \((S^{-})^{\rm Im}\), and define
\[\mathfrak{N}^{-}=(S^{-})^{\rm Re} \qquad\qquad \mathfrak{H}^{-}:=(S^{-})^{\rm Im}\oplus(\Omega^ {0}\oplus\Omega^{2}_{+})(i\mathbb{R}). \tag{5.15}\]
The proof that the fixed degeneracy hypothesis is satisfied carries over _mutatis mutandis_ from the 3-dimensional case, and the expressions (5.11) are identical regarded as matrices for the splittings (5.14) and (5.15).
**Case (IV):** (Complex \(ASD\) Equations on \(X^{4}\)). This case is the four-dimensional version of Case (II) in the same way that Case (III) is the four-dimensional version of Case (I).
**Remark 5.3**.: To elaborate on Remark 1.6, the requirement that the splitting be parallel is the barrier to extending Theorem 1.3 to other generalized Seiberg-Witten equations (e.g. the Seiberg-Witten equations with \(r>2\) spinors). In such cases, \(\mu^{-1}(0)\) is not simply the quotient of a vector space by a finite group, and the proof that the splitting is parallel breaks down. (In such cases, the limiting Fueter equation has no interpretation as a linear Dirac equation.) Analytically, the failure of the splitting to be parallel causes the assertion (2.7) to fail, as it would in general involve terms including the covariant derivative of \(q_{0}\) (cf [3] Section 6.2).
The next lemma verifies that the first two hypotheses of Corollary 1.2 apply in the case of the generalized Seiberg-Witten equations. The third hypothesis, i.e. the estimates (1.7), is the subject of the upcoming Lemmas 6.2 and 6.3 in the next section.
**Lemma 5.4**.: The generalized Seiberg-Witten equations in Cases (I)-(IV) can be written in the form
\[(D+\tfrac{1}{\varepsilon}\mathcal{A})\mathfrak{q}+Q(\mathfrak{q})=f \tag{5.16}\]
where \(\mathfrak{q}=(\varphi,a)\) and \(f\) and \(Q\) satisfy the following:
* \(f\in\Gamma(\mathfrak{N})\)
* \(Q(\mathfrak{q})=Q_{1}(\mathfrak{q})\pi_{\mathfrak{H}}(\mathfrak{q})\) for a linear operator \(Q_{1}\)
* \(Q_{1}^{*}\sigma_{D}(\xi)=\sigma_{D}(\xi)^{*}Q_{1}\) for all \(\xi\in T^{\star}Y\) (resp. \(T^{\star}X\)).
Proof.: The (un-renormalized) \(\mathbb{Z}_{2}\)-harmonic spinor satisfies the equations
\[\not{D}_{A_{0}}\left(\tfrac{\Phi_{0}}{\varepsilon}\right)=0 \tfrac{1}{2}\tfrac{\mu(\Phi_{0},\Phi_{0})}{\varepsilon^{2}}=0\]
on \(Y-\mathcal{Z}_{0}\) (resp. \(F^{+}_{A_{0}}\), \(\not{D}^{+}_{A_{0}}\) on \(X-\mathcal{Z}_{0}\)). In particular, \((\tfrac{\Phi_{0}}{\varepsilon},A_{0})\) fails to solve the Seiberg-Witten equations on \(Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)) by \(\star F_{A_{0}}\) (resp. \(F^{+}_{A_{0}}\)). Thus \((\tfrac{\Phi_{0}}{\varepsilon},A_{0})+(\varphi,a)\) satisfies the Seiberg-Witten equations if and only if \((\varphi,a)\) satisfies the deformation equation (cf. (5.1))
\[\Big{(}\mathcal{L}_{\Big{(}\tfrac{\Phi_{0}}{\varepsilon},A_{0}\Big{)}}+Q\Big{)} (\varphi,a)=-E_{0}. \tag{5.17}\]
where \(Q(\varphi,a)=(\gamma(a)\varphi,\ \mu(\varphi,\varphi))\), and \(E_{0}=-\star F_{A_{0}}\) (resp. \(-F^{+}_{A_{0}}\)). The Haydys Correspondence implies that \(F_{A_{0}}\in\Omega^{\rm Re}\) (see Appendix C of [2]), hence \(E_{0}\in\Gamma(\mathfrak{N})\) and statement (i) is satisfied. The proof now proceeds in each of the cases individually.
**Case (I):** (Two Spinor Seiberg-Witten on \(Y^{3}\)). As in Case (I) in the proof of Proposition 5.2, Clifford multiplication by \(\mathbb{R}\)-valued forms preserves the splitting \(S=S^{\rm Re}\oplus S^{\rm Im}\) while Clifford multiplication by \(\mathfrak{g}_{P}=i\mathbb{R}\)-valued forms reverses it. Moreover, in this case, the Haydys Correspondence implies that \(F_{A_{0}}=0\). Thus in the splitting (5.9) of Proposition 5.2 the non-linear deformation equation (5.17) on triples \((\varphi^{\rm Re},\varphi^{\rm Im},a)\) takes the form
\[\underbrace{\begin{pmatrix}\not\!\!D^{\rm Re}_{A_{0}}\varphi^{\rm Re}\\ \not\!\!D^{\rm Im}_{A_{0}}\varphi^{\rm Im}+\gamma(a)\frac{\Phi_{0}}{\varepsilon} \\ \mathbf{d}a\ +\ \frac{\mu(\varphi^{\rm Im},\Phi_{0})}{\varepsilon}\end{pmatrix}}_{ \mathcal{L}_{(\Phi_{0},A_{0})}}\ +\ \underbrace{\begin{pmatrix}\gamma(a)\varphi^{\rm Im}\\ \gamma(a)\varphi^{\rm Re}\\ 2\mu(\varphi^{\rm Im},\varphi^{\rm Re})\end{pmatrix}}_{Q(\varphi,a)}\ =\ \ \begin{pmatrix}0\\ 0\\ 0\end{pmatrix}. \tag{5.18}\]
Note that each term of \(Q(\varphi,a)\) contains at least one linear factor in \((\varphi^{\rm Im},a)\), which is the assertion of statement (ii) of the lemma. The equations can therefore be written as:
\[\left(\begin{pmatrix}\not\!\!D^{\rm Re}_{A_{0}}&0&0\\ 0&\not\!\!D^{\rm Im}_{A_{0}}&0\\ 0&0&\mathbf{d}\end{pmatrix}+\frac{1}{\varepsilon}\begin{pmatrix}0&0&\gamma( \_)\varepsilon\varphi^{\rm Im}\\ 0&0&\gamma(\_)(\Phi_{0}+\varepsilon\varphi^{\rm Re})\\ 0&\mu(\_,\Phi_{0}+\varepsilon\varphi^{\rm Re})&0\end{pmatrix}\right) \begin{pmatrix}\varphi^{\rm Re}\\ \varphi^{\rm Im}\\ a\end{pmatrix}=0. \tag{5.19}\]
which has the form (5.16). Equivalently, the equation has been recast as a concentrating Dirac operator with \(\mathcal{A}\) in the form (3.4) with
\[\begin{pmatrix}0&0\\ 0&A_{\mathfrak{H}}\end{pmatrix}=\begin{pmatrix}0&0&0\\ 0&0&\gamma(\_)\Phi_{0}\\ 0&\mu(\_,\Phi_{0})&0\end{pmatrix}\qquad\qquad\qquad A_{\varepsilon}=\begin{pmatrix} 0&0&\gamma(\_)\varepsilon\varphi^{\rm Im}\\ 0&0&\gamma(\_)\varepsilon\varphi^{\rm Re}\\ \mu(\_,\varepsilon\varphi^{\rm Im})&\mu(\_,\varepsilon\varphi^{\rm Im})&0 \end{pmatrix}. \tag{5.20}\]
where \(A_{\mathfrak{H}}\) is the lower \(2\times 2\) block. In this form, it is now obvious that item (iii) holds by Lemma 5.1 (as the proof of the commutation relation applies to any spinor).
**Case (II):** (Flat \({\rm SL}(2,\mathbb{C})\) Connections on \(Y^{3}\)) The only salient difference between this case and the previous one is that there is an additional non-linear term \(a\wedge a\) arising from the non-abelian gauge group. In this case, item (ii) of the lemma follows from the following observation: \(a\in\Omega^{\rm Re}\) and \(\varphi\in S^{\rm Re}\) mean that \(a,\varphi\) are \(\mathfrak{g}_{P}\)-valued 1-forms whose Lie algebra component is parallel to \(\Phi_{0}\), hence the commutator \(\gamma(a^{\rm Re})\varphi^{\rm Re}=[a^{\rm Re}\wedge\varphi^{\rm Re}]=0\). Likewise, \(\mu(\varphi^{\rm Re},\varphi^{\rm Re})=[\varphi^{\rm Re}\wedge\varphi^{\rm Re }]=0\) and \(a^{\rm Re}\wedge a^{\rm Re}=0\). Consequently, (5.17) in this case takes the form
\[\begin{aligned}\not\!\!D^{\rm Re}_{A_{0}}\varphi^{\rm Re}\ &+\ \Pi^{\rm Re}\big{(}\gamma(a^{\rm Re})\varphi^{\rm Im}\ +\ \gamma(a^{\rm Im})\varphi^{\rm Re}\ +\ \gamma(a^{\rm Im})\varphi^{\rm Im}\big{)}\ =\ 0\\ \mathbf{d}^{\rm Re}_{A_{0}}a^{\rm Re}\ &+\ \Pi^{\rm Re}\big{(}2\mu(\varphi^{\rm Re},\varphi^{\rm Im})\ +\ \mu(\varphi^{\rm Im},\varphi^{\rm Im})\ +\ (a^{\rm Re}+a^{\rm Im})\wedge a^{\rm Im}\big{)}\ =\ -\star F_{A_{0}}\\ \not\!\!D^{\rm Im}_{A_{0}}\varphi^{\rm Im}+\gamma(a)\tfrac{\Phi_{0}}{\varepsilon}\ &+\ \Pi^{\rm Im}\big{(}\gamma(a^{\rm Re})\varphi^{\rm Im}\ +\ \gamma(a^{\rm Im})\varphi^{\rm Re}\ +\ \gamma(a^{\rm Im})\varphi^{\rm Im}\big{)}\ =\ 0\\ \underbrace{\mathbf{d}^{\rm Im}_{A_{0}}a^{\rm Im}+\tfrac{\mu(\varphi^{\rm Im},\Phi_{0})}{\varepsilon}}_{\mathcal{L}_{(\Phi_{0},A_{0})}}\ &+\ \underbrace{\Pi^{\rm Im}\big{(}2\mu(\varphi^{\rm Re},\varphi^{\rm Im})\ +\ \mu(\varphi^{\rm Im},\varphi^{\rm Im})\ +\ (a^{\rm Re}+a^{\rm Im})\wedge a^{\rm Im}\big{)}}_{Q(\varphi,a)}\ =\ 0.\end{aligned} \tag{5.21}\]
Thus, with \(A_{\mathfrak{H}}\) being the lower block of (5.13) and
\[A_{\varepsilon}=\begin{pmatrix}0&\Pi^{\rm Re}(\gamma(\_)\varepsilon\varphi^{ \rm Im})&0&\Pi^{\rm Re}(\gamma(\_)\varepsilon\varphi)\\ \Pi^{\rm Re}(\mu(\_,\varepsilon\varphi^{\rm Im}))&\Pi^{\rm Re}(\_\ \wedge\ \varepsilon a)&\Pi^{\rm Re}(\mu(\_,\varepsilon\varphi))&0\\ 0&\Pi^{\rm Im}(\gamma(\_)\varepsilon\varphi^{\rm Im})&0&\Pi^{\rm Im}(\gamma(\_ )\varepsilon\varphi)\\ \Pi^{\rm Im}(\mu(\_,\varepsilon\varphi^{\rm Im}))&0&\Pi^{\rm Im}(\mu(\_, \varepsilon\varphi))&\Pi^{\rm Im}(\_\ \wedge\ \varepsilon a)\end{pmatrix}\begin{pmatrix}\varphi^{\rm Re}\\ a^{\rm Re}\\ \varphi^{\rm Im}\\ a^{\rm Im}\end{pmatrix} \tag{5.22}\]
the operator has the desired form. Item (iii) of the lemma follows identically to the previous case.
**Cases (III) - (IV):** These cases are analogous to Cases (I) and (II) in the same way as in Proposition 5.2.
## 6. Bootstrapping
The convergence \(\Phi_{i}\stackrel{{ L^{2,2}}}{{\rightharpoonup}}\Phi_{0}\) and \(A_{i}\stackrel{{ L^{1,2}}}{{\rightharpoonup}}A_{0}\) obtained in Theorem 4.10 cannot be naively bootstrapped to higher Sobolev spaces using elliptic estimates in the standard way. Indeed, suppose it is known that \(A_{i}\to A_{0},\Phi_{i}\to\Phi_{0}\) in \(L^{k,2}\) on a compact subset \(K^{\prime}\Subset Y-\mathcal{Z}_{0}\), or equivalently, using the notation of (5.1), that \(a_{i},\ \varepsilon_{i}\varphi_{i}\to 0\) in \(L^{k,2}\). Recalling that Theorem 4.10 asserts convergence for the renormalized spinor \(\Phi_{i}=\Phi_{0}+\varepsilon_{i}\varphi_{i}\), the elliptic estimate on a compact subset \(K\Subset K^{\prime}\Subset Y-\mathcal{Z}_{0}\) reads:
\[\|a_{i}\|_{L^{k+1,2}(K)}\leq C_{k}\left(\frac{1}{\varepsilon_{i}^{2}}\|\mu( \Phi_{0},\varepsilon_{i}\varphi_{i})+\mu(\varepsilon_{i}\varphi_{i}, \varepsilon_{i}\varphi_{i})\|_{L^{k,2}(K^{\prime})}+\|a\|_{L^{k,2}(K^{\prime} )}\right).\]
Because of the factor of \(\varepsilon_{i}^{-2}\) on the right hand side, convergence \(a_{i}\to 0\) in \(L^{k+1,2}\) does not follow. Concluding convergence in this way requires knowing that \(\varphi_{i}\to 0\) in \(L^{k,2}\) at least as fast as \(\varepsilon_{i}^{2}\). The exponential convergence furnished by Corollary 1.2 overcomes this issue and allows the bootstrapping to proceed.
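The mechanism is elementary: an exponentially small quantity absorbs any fixed power of \(\varepsilon^{-1}\). For every \(N\geq 0\) and \(c>0\),

\[\lim_{\varepsilon\to 0}\varepsilon^{-N}\,\mathrm{Exp}\Big{(}-\frac{c}{\varepsilon}\Big{)}=0,\]

so bounds of the form \(C\varepsilon^{-N}\mathrm{Exp}(-c/\varepsilon)\), as furnished by the exponential convergence results below, still tend to zero after multiplication by the factors of \(\varepsilon^{-2}\) appearing in the elliptic estimates.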
Lemma 5.1 and Proposition 5.2 show that the assumptions of Theorem 1.1 are satisfied for the linearized Seiberg-Witten equations in all Cases (I)-(IV). The next two lemmas show that the non-linear terms satisfy the final assumption (1.7) necessary to apply Corollary 1.2 in the 3-dimensional and 4-dimensional cases respectively. First, we note the following fact:
**Lemma 6.1**.: Suppose \((\Phi_{i},A_{i})\to(\mathcal{Z}_{0},\Phi_{0},A_{0})\) is a sequence of generalized Seiberg-Witten solutions converging to a \(\mathbb{Z}_{2}\)-harmonic spinor in the sense of Theorem 4.10. The limiting connection \(A_{0}\) in dimensions \(n=3\) and \(n=4\) satisfies, on every compact subset \(K^{\prime}\Subset Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)),
\[\|F_{A_{0}}\|_{C^{2}(K^{\prime})}<\infty.\]
Proof.: In Cases (I) and (III), the Haydys Correspondence implies that \(F_{A_{0}}=0\), and the result is immediate.
In Cases (II) and (IV), the limiting curvature \(F_{A_{0}}\) in Theorem 4.10 is harmonic, i.e.
\[(d_{A_{0}}+d_{A_{0}}^{\star})F_{A_{0}}=0 \tag{6.1}\]
(in fact, Remark 1.37 of [46] shows \(F_{A_{0}}\) vanishes in Case (II)). Because \(A_{0}\) is only known to be \(L^{1,2}\), the coefficients of (6.1) are too rough to initiate a standard bootstrapping argument. However, there is more information in this case: the Haydys Correspondence with stabilizers (see Appendix C of [2]) shows that \(F_{A_{0}}\in\Omega^{\rm Re}\). Since \(A_{0}\) respects the splitting \(\Omega^{\rm Re}\oplus\Omega^{\rm Im}\) by Proposition 5.2, (6.1) restricts to an equation on the smooth bundle \(\Omega^{\rm Re}\to Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)).
As in the proof of Proposition 5.2, one has \(\Omega^{\rm Re}\simeq S^{\rm Re}\simeq E\otimes_{\mathbb{R}}\ell_{0}\), where \(E=\Lambda^{1}(\mathbb{R})\) is a rank 4 real Clifford module on \(Y\) (resp. \(X\)), and \(\ell_{0}\to Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)) is a real line bundle. Moreover (cf. the discussion after Definition 4.11), the restriction of \(A_{0}\) viewed as a connection on \(E\otimes_{\mathbb{R}}\ell_{0}\) via this isomorphism is the connection arising from a smooth spin connection on \(E\) and the unique flat connection with holonomy in \(\mathbb{Z}_{2}\) on \(\ell_{0}\). Thus (6.1) becomes
\[(d_{\Gamma}+d_{\Gamma}^{\star})F_{A_{0}}^{\rm Re}=0\]
where \(d_{\Gamma}\) is (up to isomorphism), the exterior covariant derivative on \(\Lambda^{\star}(\mathbb{R})\otimes\ell_{0}\). By elliptic regularity, and since \(F_{A_{0}}=F_{A_{0}}^{\rm Re}\), one has \(\|F_{A_{0}}\|_{L^{k,2}}\leq C_{k}\|F_{A_{0}}\|_{L^{2}}<\infty\) for every \(k\), and the conclusion follows from the Sobolev embedding theorem.
**Lemma 6.2**.: Let \(Y\) be a closed, oriented 3-manifold and suppose that \((\Phi_{i},A_{i},\varepsilon_{i})\to(\mathcal{Z}_{0},A_{0},\Phi_{0})\) is a sequence of solutions to (4.7-4.9) converging to a \(\mathbb{Z}_{2}\)-harmonic spinor in the sense of Theorem 4.10 in Case (I) or Case (II). Let \(\Phi_{i}=\Phi_{0}+\varepsilon_{i}\varphi_{i}\) and \(A_{i}=A_{0}+a_{i}\). Then
\[\|A_{\varepsilon}\|_{L^{1,3}(K)}\to 0\qquad\text{ and }\qquad\|A_{ \varepsilon}\|_{C^{0}(K)}\to 0 \tag{6.2}\]
on compact subsets \(K\Subset Y-\mathcal{Z}_{0}\), where \(A_{\varepsilon}\) is the matrix (5.20, 5.22) from the proof of Lemma 5.4.
Proof.: In Case (I), the matrix \(A_{\varepsilon}\) of (5.20) contains only terms involving \(\varepsilon_{i}\varphi_{i}\). In this case, the conclusion is immediate from Theorem 4.10 and the Sobolev embedding; indeed, the conclusion of Theorem 4.10 shows that \(\varepsilon_{i}\varphi_{i}=\Phi_{i}-\Phi_{0}\to 0\) in \(L^{2,2}\). In particular, \(\varepsilon_{i}\varphi_{i}\to 0\) in \(L^{1,6}\), thus _a fortiori_ in \(L^{1,3}\), and in \(C^{0}\) by the embedding \(L^{1,6}\hookrightarrow C^{0,\alpha}\). In Case (II), the matrix \(A_{\varepsilon}\) from (5.22) includes entries involving \(\varepsilon_{i}\varphi_{i}\), which converge as above, and also entries of the form \(\varepsilon_{i}a_{i}\). To prove the lemma, it therefore suffices to show that \(\|\varepsilon_{i}a_{i}\|_{L^{1,p}}\to 0\) for some \(p>3\).
For the remainder of the proof, the subscript \(i\) is kept implicit in the notation. Letting \(\psi=\varepsilon\varphi\) be the re-normalized deformation of the spinor, the curvature equation for the deformation \(a\) reads:
\[\star F_{A_{0}}+{\bf d}_{A_{0}}a=\frac{1}{\varepsilon^{2}}\mu(\Phi_{0}+\psi, \Phi_{0}+\psi)-a\wedge a. \tag{6.3}\]
where \({\bf d}_{A_{0}}\) is the Dirac operator from Example 4.6. The elliptic estimate for \({\bf d}_{A_{0}}\) applied to \(a\) yields
\[\|a\|_{L^{1,p}} \leq C_{k,p}\left(\frac{1}{\varepsilon^{2}}\|\mu(\Phi_{0},\psi)\|_{L^ {p}}+\frac{1}{\varepsilon^{2}}\|\mu(\psi,\psi)\|_{L^{p}}+\|a\wedge a\|_{L^{p} }+\|F_{A_{0}}\|_{L^{p}}+\|a\|_{L^{2}}\right) \tag{6.4}\]
holds for \(p\geqslant 2\). Differentiating (6.3) and commuting covariant derivatives yields the estimate
\[\|\nabla_{A_{0}}a\|_{L^{1,p}} \leq C_{k,p}\Big{(}\frac{1}{\varepsilon^{2}}\|\nabla_{A_{0}}\mu(\Phi_{ 0},\psi)\|_{L^{p}}+\frac{1}{\varepsilon^{2}}\|\nabla_{A_{0}}\mu(\psi,\psi)\|_{ L^{p}}+\|\nabla_{A_{0}}a\wedge a\|_{L^{p}} \tag{6.5}\] \[\qquad+\ \|\nabla_{A_{0}}F_{A_{0}}\|_{L^{p}}\ +\ \|\ [F_{A_{0}} \wedge a]\ \|_{L^{p}}+\|a\|_{L^{p}}\Big{)}.\]
for the covariant derivative (commuting covariant derivatives gives rise to the curvature term \([F_{A_{0}}\wedge a]\) along with a bounded Riemannian curvature term which has been absorbed into the final term of (6.5)).
Now we bootstrap. To begin, we know that \(\Phi_{0}\in L^{2,2}\), \(\psi=\varepsilon\varphi\to 0\) in \(L^{2,2}\), and \(a\to 0\) in \(L^{1,2}\).
_Step 0:_: By the Sobolev embeddings \(L^{1,2}\hookrightarrow L^{6}\) and \(L^{2,2}\hookrightarrow L^{1,6}\), the following quantities are bounded uniformly in \(\varepsilon\): \(\|a\|_{L^{6}}\), \(\|\mu(\Phi_{0},\psi)\|_{L^{3}}\), \(\|\mu(\psi,\psi)\|_{L^{3}}\). Additionally, \(\|F_{A_{0}}\|_{L^{3}}\) is bounded by Lemma 6.1.
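For the moment map terms this is Hölder's inequality applied to the pointwise bilinear map \(\mu\): since \(|\mu(\varphi,\psi)|\leq C|\varphi||\psi|\) pointwise,

\[\|\mu(\Phi_{0},\psi)\|_{L^{3}}\leq C\|\Phi_{0}\|_{L^{6}}\|\psi\|_{L^{6}}\qquad\qquad\|\mu(\psi,\psi)\|_{L^{3}}\leq C\|\psi\|_{L^{6}}^{2},\]

and the \(L^{6}\)-norms on the right are controlled by the \(L^{2,2}\)-bounds on \(\Phi_{0}\) and \(\psi\).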
_Step 1:_: Apply the elliptic estimate (6.4) with \(p=3\) to conclude that \(\|\varepsilon^{2}a\|_{L^{1,3}}\) is bounded.
_Step 2:_: By Hölder's inequality with \(p=3/2\) and \(q=3\) (spelled out below),
\[\|\nabla_{A_{0}}a\wedge a\|_{L^{2}}\leqslant\|a\|_{L^{1,3}}\|a\|_{L^{6}}\]
is bounded. Likewise for \(\|\mu(\nabla\Phi_{0},\psi)\|_{L^{2}},\|\mu(\Phi_{0},\nabla\psi)\|_{L^{2}},\| \mu(\nabla\psi,\psi)\|_{L^{2}}\) using the bounds on \(\Phi_{0},\psi\) from Theorem 4.10.
_Step 3:_: Using Lemma 6.1 to bound the terms involving \(F_{A_{0}}\), apply the elliptic estimate (6.5) with \(p=2\) to conclude that \(\|\varepsilon^{2}a\|_{L^{2,2}}\) is bounded.
_Step 4:_: By Cauchy-Schwarz,
\[\|\nabla_{A_{0}}a\wedge a\|_{L^{5/2}}\leqslant\|a\|_{L^{1,5}}\|a\|_{L^{5}}\]
and likewise for \(\|\mu(\nabla\Phi_{0},\psi)\|_{L^{5/2}},\|\mu(\Phi_{0},\nabla\psi)\|_{L^{5/2}}, \|\mu(\nabla\psi,\psi)\|_{L^{5/2}}\) using the bounds on \(\Phi_{0},\psi\) from Theorem 4.10.
_Step 5:_: Using Lemma 6.1 to bound the terms involving \(F_{A_{0}}\), apply the elliptic estimate (6.5) with \(p=5/2\) to conclude that \(\|\varepsilon^{2}a\|_{L^{2,5/2}}\) is bounded.
_Step 6:_: Applying the interpolation inequality with \(\delta<<1\),
\[\|a\|_{L^{1,p}}\leqslant C_{K}\left(\|a\|_{L^{2,5/2}}^{1/2}\|a\|_{L^{6-\delta}} ^{1/2}+\|a\|_{L^{2}}\right)\]
where \(\frac{1}{p}=\frac{1}{3}+\frac{1}{2}\left(\frac{1}{5/2}-\frac{2}{3}\right)+ \frac{1-1/2}{6-\delta}\) to conclude \(\|\varepsilon a\|_{L^{1,p}}\to 0\) for \(p=\frac{60}{17}-\delta^{\prime}>3\).
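Here, at \(\delta=0\), the exponent arithmetic reads

\[\frac{1}{p}=\frac{1}{3}+\frac{1}{2}\Big{(}\frac{2}{5}-\frac{2}{3}\Big{)}+\frac{1}{12}=\frac{20-8+5}{60}=\frac{17}{60},\]

so \(p=\frac{60}{17}>3\), and taking \(\delta>0\) small gives the stated \(p=\frac{60}{17}-\delta^{\prime}\).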
It follows that \(\varepsilon_{i}a_{i}\to 0\) in \(L^{1,3}\) and \(C^{0}\) by the Sobolev embedding since \(K\) is compact.
**Lemma 6.3**.: Let \(X\) be a closed, oriented 4-manifold and suppose that \((\Phi_{i},A_{i},\varepsilon_{i})\to(\mathcal{Z}_{0},A_{0},\Phi_{0})\) is a sequence of solutions to (4.7-4.9) converging to a \(\mathbb{Z}_{2}\)-harmonic spinor in the sense of Theorem 4.10 in Case (III) or Case (IV). Let \((\varphi_{i},a_{i})\) denote the difference from \((\Phi_{0},A_{0})\) as in (5.1). Then
\[\|A_{\varepsilon}\|_{L^{1,4}(K)}\to 0\qquad\text{ and }\qquad\|A_{\varepsilon}\|_{C^{0}(K)}\to 0 \tag{6.6}\]
on compact subsets \(K\Subset X-\mathcal{Z}_{0}\), where \(A_{\varepsilon}\) is the four-dimensional analogue of the matrices (5.20, 5.22) from the proof of Lemma 5.4 in Cases (III) and (IV) respectively.
Proof.: In Case (III) the conclusion again follows directly from Theorem 4.10 and the Sobolev embedding. Case (IV) follows from a similar bootstrapping procedure as in the previous lemma, but repeatedly applying Steps 1-5 until convergence in \(L^{2,4+\gamma}\) of \(\varepsilon^{2}a_{i}\) is obtained for \(0<\gamma<<1\). Then, interpolating as in Step 6 of Lemma 6.2 establishes that \(\varepsilon a_{i}\to 0\) in \(L^{1,4+\gamma}\). The second bound of (6.6) follows from the Sobolev embedding \(L^{1,4+\gamma}\hookrightarrow C^{0,\alpha}\).
The convergence to a \(\mathbb{Z}_{2}\)-harmonic spinor now, by design, fits into the abstract framework of Sections 2-3. The following proposition retains the setting of the previous lemmas, and additionally uses the notation of \(S^{\mathrm{Im}},\Omega^{\mathrm{Im}}\) from Section 5.2.
**Proposition 6.4**.: Suppose that \((\Phi_{i},A_{i},\varepsilon_{i})\to(\mathcal{Z}_{0},A_{0},\Phi_{0})\) is a sequence of solutions to (4.7-4.9) converging in the sense of Theorem 4.10 in Cases (I)-(III); in Case (IV) further assume that \(A_{i}\to A_{0}\) in \(L^{1,p}_{loc}\) and \(\Phi_{i}\to\Phi_{0}\) in \(L^{2,p}_{loc}\) for \(p>2\). Let \((\varphi_{i},a_{i})=(\varphi^{\mathrm{Re}},\varphi^{\mathrm{Im}},a^{\mathrm{Re}},a^{\mathrm{Im}})\) (see Footnote 3) denote the perturbations from the limiting data which satisfy (5.1). Then there exist constants \(C,c\) depending only on \(\Phi_{0}\) and background data such that
Footnote 3: In Cases (I) and (III), \(\Omega^{\mathrm{Re}}\) is empty hence \(a=a^{\mathrm{Im}}\).
\[\|(\varphi_{i}^{\mathrm{Im}},a_{i}^{\mathrm{Im}})\|_{C^{0}(K)}\leqslant\frac{ C}{R_{K}^{n/2}}\mathrm{Exp}\left(-\frac{cR_{K}}{\varepsilon_{i}}\right)\|( \varphi_{i},a_{i})\|_{L^{1,2}(K^{\prime})} \tag{6.7}\]
on compact subsets \(K\Subset K^{\prime}\Subset Y-\mathcal{Z}_{0}\) (resp. \(X-\mathcal{Z}_{0}\)), where \(n=3,4\) is the dimension in the respective cases, and \(R_{K}=\mathrm{dist}(K,\mathcal{Z}_{0})\).
Proof.: The regularity on \(A_{\varepsilon}\) established in Lemmas 6.2-6.3 is sufficient to apply Proposition 6.1 of [3] to show that there is a gauge transformation putting the deformation \((\varphi_{i},a_{i})\) in the gauge (5.2) (note that [3], Proposition 6.1 applies identically in \(4\) dimensions for the equivalent Sobolev range). As in Lemma 6.1, the limiting curvature satisfies \(F_{A_{0}}\in\Omega^{\mathrm{Re}}\), so the limiting configuration solves the Seiberg-Witten equations in the \(S^{\mathrm{Im}},\Omega^{\mathrm{Im}}\) components (i.e. the error term \(E_{0}\) of (5.1) lies in \(\Omega^{\mathrm{Re}}\)). Thus, in this gauge, Lemma 5.1, Proposition 5.2, and Lemmas 6.2-6.3 show that the assumptions of Corollary 1.2 hold in the respective cases. The conclusion then follows directly.
Using the exponential convergence of Proposition 6.4, we may now conclude the proof of Theorem 1.3:
Proof of Theorem 1.3.: Let \((\Phi_{0}+\psi_{i},a_{i})=\left(\varepsilon_{i}\left(\frac{\Phi_{0}}{ \varepsilon_{i}}+\varphi_{i}\right),a_{i}\right)\) denote the re-normalized sequence. For the remainder of the proof, the subscript is kept implicit. Let \(K\) denote a compact subset, and let \(K_{0}\Supset K_{1}\Supset K_{2}\ldots\) be a nested sequence of compact subsets of \(Y-\mathcal{Z}_{0}\) so that \(K\subseteq\bigcap K_{j}\). By the assumption that the sequence converges in the sense of Theorem 4.10, Proposition 6.4 applies to show
\[\|(\varphi^{\mathrm{Im}},a)\|_{C^{0}(K_{0})}\leqslant C_{K}\mathrm{Exp} \left(-\frac{c}{\varepsilon}\right)\|(\varphi^{\mathrm{Re}},\varphi^{\mathrm{ Im}},a)\|_{L^{1,2}(Y-\mathcal{Z}_{0})}\leqslant\frac{C}{\varepsilon}\mathrm{Exp} \left(-\frac{c}{\varepsilon}\right).\]
It now follows from the curvature equation
\[F_{A_{i}}+\frac{1}{2\varepsilon}\mu\left(\varphi_{i}^{\mathrm{Im}},\Phi_{0} \right)+\frac{1}{2}\mu(\varphi_{i}^{\mathrm{Re}},\varphi_{i}^{\mathrm{Im}})=0\]
that \(F_{A_{0}}=0\) (resp. \(F_{A_{0}}^{+}=0\)) for the limiting connection.
Bootstrapping now proceeds in the standard way using the Seiberg-Witten equations (4.4-4.5) to show
\[\|(\psi_{i}^{\mathrm{Im}},a_{i}^{\mathrm{Im}})\|_{L^{k,2}(K_{k})} \leqslant \frac{C_{k}}{\varepsilon^{2k+1}}\mathrm{Exp}\left(-\frac{c}{ \varepsilon}\right)\to 0\] \[\|(\psi_{i}^{\mathrm{Re}},a_{i}^{\mathrm{Re}})\|_{L^{k,2}(K_{k})} \to 0\]
for every \(k\geqslant 3\) (thus in particular on \(K\subseteq K_{k}\)), and the conclusion follows.
To spell out the first few steps for \(n=3\) in Case (I), first apply the elliptic estimate for the operator \({\bf d}\) of (5.4), which in this case is independent of \(A\), with \((k,p)=(1,6)\). The second equation (4.5) shows
\[\|a_{i}\|_{L^{1,6}(K_{1})} \leqslant C_{1,6}\left(\frac{1}{\varepsilon^{2}}\|\mu(\Phi_{0},\psi_{i}^{ \rm Im})+\mu(\psi_{i}^{\rm Re},\psi_{i}^{\rm Im})\|_{L^{6}(K_{0})}+\|a_{i}\|_{L ^{6}(K_{0})}\right)\] \[\leqslant \frac{C}{\varepsilon^{3}}\,\mathop{\rm Exp}\nolimits\left(-\frac{ c}{\varepsilon}\right)\to 0.\]
This in turn implies \(A_{0}\in L^{1,6}\). Likewise, for the spinor, applying the elliptic estimate for \(\not{D}_{\widetilde{A}}\) (for a fixed smooth background connection \(\widetilde{A}\)) to the first equation (4.4) yields
\[\|\psi_{i}^{\rm Im}\|_{L^{1,6}(K_{1})} \leqslant C_{1,6}\left(\|(\widetilde{A}-A_{0})\psi_{i}^{\rm Im}\|_{L^{6} (K_{0})}+\|\gamma(a_{i})\psi_{i}^{\rm Re}\|_{L^{6}(K_{0})}+\|\psi_{i}^{\rm Im }\|_{L^{6}(K_{0})}\right)\] \[\leqslant \frac{C}{\varepsilon^{3}}{\rm Exp}(-\frac{c}{\varepsilon})\to 0\] \[\|\psi_{i}^{\rm Re}\|_{L^{1,6}(K_{1})} \leqslant C_{1,6}\left(\|(\widetilde{A}-A_{0})\psi_{i}^{\rm Re}\|_{L^{6} (K_{0})}+\|\gamma(a_{i})\psi_{i}^{\rm Im}\|_{L^{6}(K_{0})}+\|\psi_{i}^{\rm Re} \|_{L^{2}(K_{0})}\right)\to 0\]
Repeating the elliptic estimate for the connection now with \((k,p)=(2,2)\) and using the fact that multiplication induces a bounded map \(L^{1,6}\times L^{1,6}\to L^{1,2}\) one has
\[\|a_{i}\|_{L^{2,2}(K_{2})} \leqslant C_{1,6}\left(\frac{1}{\varepsilon^{2}}\|\mu(\Phi_{0},\psi_{i}^{ \rm Im})+\mu(\psi_{i}^{\rm Re},\psi_{i}^{\rm Im})\|_{L^{1,2}(K_{1})}+\|a_{i}\| _{L^{6}(K_{1})}\right)\] \[\leqslant \frac{C_{2,2}}{\varepsilon^{2}}\left(\|\Phi_{0}\|_{L^{1,6}(K_{1} )}\|\psi_{i}^{\rm Im}\|_{L^{1,6}(K_{1})}+\|\psi_{i}^{\rm Re}\|_{L^{1,6}(K_{1} )}\|\psi_{i}^{\rm Im}\|_{L^{1,6}(K_{1})}+\|a_{i}\|_{L^{1,2}(K_{1})}\right)\] \[\leqslant \frac{C_{2}}{\varepsilon^{5}}{\rm Exp}\left(-\frac{c}{\varepsilon }\right)\to 0.\]
The bootstrapping continues in this fashion to obtain the desired bounds in \(L^{k,2}\) for every \(k>2\).
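Schematically, each further derivative obtained from the curvature equation costs one factor of \(\varepsilon^{-2}\), so the \(k\)-th step produces a bound of the form

\[\frac{C}{\varepsilon}\,\mathrm{Exp}\Big{(}-\frac{c}{\varepsilon}\Big{)}\ \longrightarrow\ \frac{C}{\varepsilon^{3}}\,\mathrm{Exp}\Big{(}-\frac{c}{\varepsilon}\Big{)}\ \longrightarrow\ \cdots\ \longrightarrow\ \frac{C_{k}}{\varepsilon^{2k+1}}\,\mathrm{Exp}\Big{(}-\frac{c}{\varepsilon}\Big{)}\to 0,\]

which is the source of the exponent \(2k+1\) in the display above; the exponential factor dominates every fixed power of \(\varepsilon^{-1}\).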
**Remark 6.5**.: Theorem 1.3 does not apply in the case of the ADHM\({}_{1,2}\) Seiberg-Witten equations described in Example 4.9. As discussed in Remark 1.6, the form of the non-linear terms in this case does not satisfy the hypotheses of Corollary 1.2. Specifically, in this case one need not have that \(\mu(\Psi^{\rm Re},\Psi^{\rm Re})=0\), thus the differential inequality (3.6) contains an additional term of the form \(\langle q_{1},f(q_{0},q_{0})\rangle\). It seems likely to the author that in this situation, the techniques of [4] (Section 5) could be extended to bootstrap convergence to \(C_{loc}^{\infty}\) (though without the stronger exponential convergence statement of Proposition 6.4).
In addition, for the ADHM\({}_{1,2}\) Seiberg-Witten equations, the arguments of Section 3 may be extended to partially deal with the term \(\langle q_{1},f(q_{0},q_{0})\rangle\) in the following way. Using Young's inequality and the fact that the Green's function (2.10) is \(L^{2}\)-integrable in dimension 3, Proposition 3.1 can be adapted to prove that diverging spinors have the form
\[\Phi_{i}=\Phi_{0}+\varphi_{i}\]
where \(\varphi_{i}=O(\varepsilon)\) in \(C_{loc}^{0}\). This provides a step in confirming the asymptotic expansions postulated in [2] (Section 5.3).
## Appendix A An Extension in \(n=3\) Dimensions
This appendix provides a minor strengthening of Corollary 1.2 and Proposition 6.4 to include the case of a family of compact sets \(K_{\varepsilon}\) parameterized by \(\varepsilon\). These results are employed in Corollary 1.3 and Appendix A of [25] and in [27].
Let \(K_{\varepsilon}\Subset Y-\mathcal{Z}_{0}\) be an \(\varepsilon\)-parameterized family of compact subsets in the complement of \(\mathcal{Z}_{0}\). For a small constant \(c_{1}\) to be chosen momentarily, set \(R_{\varepsilon}=c_{1}\cdot{\rm dist}(K_{\varepsilon},\mathcal{Z})\). We restrict to the case that \(R_{\varepsilon}\to 0\), else we are in the previous situation with \(K=\bigcup_{\varepsilon}K_{\varepsilon}\). Let \(K_{\varepsilon}^{\prime}\) denote a family of slightly larger compact subsets such that for some \(\varepsilon_{0}>0\), the following conditions are met for all \(\varepsilon<\varepsilon_{0}\):
1. For some constant \(\kappa_{1}<1\), one has \[\operatorname{dist}(K_{\varepsilon},Y-K_{\varepsilon}^{\prime})\geq\kappa_{1}R_{ \varepsilon}.\]
2. If \(y_{0}\in K_{\varepsilon}\) and \(y\in B_{R_{\varepsilon}}(y_{0})\) then \[|\Lambda(y)|^{2}\geq\frac{|\Lambda(y_{0})|^{2}}{2}.\]
3. The bounds \[\frac{\|A_{\varepsilon}\|_{L^{1,3}(K_{\varepsilon}^{\prime})}}{\inf_{K_{ \varepsilon}}|\Lambda(y)|}\to 0 \|A_{\varepsilon}\|_{C^{0}(K_{\varepsilon}^{\prime})}\leq c_{2}\inf_ {K_{\varepsilon}}|\Lambda(y)|\] (A.1) are satisfied for all \(\varepsilon<\varepsilon_{0}\) where \(c_{2}<1/8\).
When assumptions (1)-(3) above are satisfied, and \(c_{1}\) is chosen sufficiently small, the following extension of Theorem 1.1 and Corollary 1.2 holds.
**Corollary A.1**.: Let \(D_{\varepsilon}\) be a concentrating Dirac operator with fixed degeneracy, and \(Q\) a non-linear operator satisfying the hypotheses of Corollary 1.2. Suppose that \(K_{\varepsilon}\Subset K_{\varepsilon}^{\prime}\Subset Y-\mathcal{Z}\) is a nested family of compact subsets of \(Y-\mathcal{Z}_{0}\) satisfying assumptions (1)-(3) above. Then the conclusion of Theorem 1.1 continues to hold, i.e. there exist constants \(C,c\) independent of \(\varepsilon\) and an \(\varepsilon_{0}>0\) such that for \(\varepsilon<\varepsilon_{0}\), any solution
\[(D+\tfrac{1}{\varepsilon}\mathcal{A})\mathfrak{q}+Q(\mathfrak{q})=0\]
satisfies
\[\|\pi_{\mathfrak{H}}q\|_{C^{0}(K_{\varepsilon})}\leq\frac{C}{|\operatorname{ dist}(K_{\varepsilon},\mathcal{Z})|^{3/2}}\mathrm{Exp}\left(-\frac{\Lambda_{K_{ \varepsilon}}c}{\varepsilon}\mathrm{dist}(K_{\varepsilon},\mathcal{Z})\right) \|\mathfrak{q}\|_{L^{1,2}(K_{\varepsilon}^{\prime})}.\]
Proof.: The proof is identical to Corollary 1.2, except that now for each point \(y_{0}\) the radius of the ball used in Lemma 2.4 depends on \(\varepsilon\). In addition, the second bound of (A.1) is used to obtain (3.5) in this case, and the first is used to absorb the terms of Lemma 3.2.
To conclude, we note a particular case of interest. Suppose that in Case (I), \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) is a \(\mathbb{Z}_{2}\)-harmonic spinor which is **non-degenerate** in the sense that there exists a \(c_{2}>0\) such that
\[|\Phi_{0}|(x)\geq c_{2}\sqrt{\operatorname{dist}(x,\mathcal{Z}_{0})}.\]
Then one has \(\Lambda_{K_{\varepsilon}}\geq c_{2}\sqrt{R_{\varepsilon}}=c_{2}\sqrt{c_{1}\cdot\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})}\). In this case, Corollary A.1 implies:
**Corollary A.2**.: Suppose \((\mathcal{Z}_{0},A_{0},\Phi_{0})\) is a non-degenerate \(\mathbb{Z}_{2}\)-harmonic spinor, and \(K_{\varepsilon}=\{y\mid\operatorname{dist}(y,\mathcal{Z}_{0})\geq c_{1} \varepsilon^{2/3}\}\) is the family of compact subsets above. If the deformation \((\varphi_{\varepsilon},a_{\varepsilon})\) satisfies (A.3), then
\[\|(\varphi^{\mathrm{Im}},a)\|_{C^{0}(K_{\varepsilon})}\leq\frac{C}{| \operatorname{dist}(K_{\varepsilon},\mathcal{Z})|^{3/2}}\mathrm{Exp}\left(- \frac{c_{1}}{\varepsilon}\mathrm{dist}(K_{\varepsilon},\mathcal{Z}_{0})^{3/2} \right)\|(\varphi,a)\|_{L^{1,2}(K_{\varepsilon}^{\prime})}\] (A.2)
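Indeed, inserting the lower bound \(\Lambda_{K_{\varepsilon}}\geq c_{2}\sqrt{c_{1}\cdot\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})}\) into the exponent of Corollary A.1 gives, after relabeling constants,

\[\frac{\Lambda_{K_{\varepsilon}}c}{\varepsilon}\operatorname{dist}(K_{\varepsilon},\mathcal{Z})\ \geq\ \frac{c\,c_{2}\sqrt{c_{1}}}{\varepsilon}\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})^{3/2},\]

which is the exponent appearing in (A.2).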
Corollary A.2 implies the conclusions asserted in Corollary 1.3 and Appendix B of [25]. For the situation described there, one has \(K_{\varepsilon}=\{y\mid\operatorname{dist}(y,\mathcal{Z}_{0})\geq c_{1} \varepsilon^{2/3-\gamma_{1}}\}\) for \(\gamma_{1}<<1\), and the hypotheses (1)-(2) are easily seen to be satisfied. Hypothesis (3) follows from the slightly stronger assertion that
\[\|\nabla\varphi_{\varepsilon}\|_{L^{3}(K_{\varepsilon})}\leq\frac{C}{ \varepsilon^{2/3}}\qquad\text{ and }\qquad\|\varphi_{\varepsilon}\|_{C^{0}(K_{\varepsilon})}\leq\frac{C}{ \varepsilon^{2/3}}.\] (A.3)
The results of Appendix B of [25] show that the bounds (A.3) are satisfied for the model solutions used in the gluing construction of [25, 27]. Thus hypotheses (1)-(3) hold, and Corollary A.2 implies the assertions in Corollary 1.3 and Appendix B of [25].
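To indicate how (A.3) implies hypothesis (3), here is a sketch, using that the entries of \(A_{\varepsilon}\) in (5.20) are linear in \(\varepsilon\varphi_{\varepsilon}\) and that non-degeneracy gives \(\inf_{K_{\varepsilon}}|\Lambda|\geq c\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})^{1/2}\):

\[\|A_{\varepsilon}\|_{L^{1,3}(K_{\varepsilon})}+\|A_{\varepsilon}\|_{C^{0}(K_{\varepsilon})}\leq C\varepsilon\Big{(}\|\nabla\varphi_{\varepsilon}\|_{L^{3}(K_{\varepsilon})}+\|\varphi_{\varepsilon}\|_{C^{0}(K_{\varepsilon})}\Big{)}\leq C\varepsilon^{1/3},\]

while \(\inf_{K_{\varepsilon}}|\Lambda|\geq c\,\varepsilon^{(2/3-\gamma_{1})/2}\), so both conditions of (A.1) hold once \(\varepsilon\) is sufficiently small.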
Corollary A.2 also implies a characteristic length scale that is not obvious from the convergence statements of Theorem 4.10. In particular, the exponential decay result applies on families of compact subsets with \(\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})\sim\varepsilon^{2/3}\). For smaller compact subsets, the exponential factor approaches \(1\) and the conclusion of (A.2) becomes trivial. This suggests that \(r=O(\varepsilon^{2/3})\) is a characteristic length scale
for the convergence to a \(\mathbb{Z}_{2}\)-harmonic spinor in the case of the two-spinor Seiberg-Witten equations. Indeed, this length scale naturally appears in the construction of the model solutions used in the gluing construction of [25, 27]. It is also the same length scale that appears in the equivalent problem for Hitchin's equations [7, 8, 20]. A promising approach to the surjectivity of the gluing problem is to attempt to dilate this characteristic length to be unit size to extract limiting profiles of the sequence \((\varphi_{\varepsilon},a_{\varepsilon})\) along the singular set \(\mathcal{Z}_{0}\) and show these necessarily arise from gluing data as in [25, 27].
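To record the balance behind this scale: the exponent in (A.2) is of order one precisely when

\[\frac{1}{\varepsilon}\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})^{3/2}\asymp 1\qquad\Longleftrightarrow\qquad\operatorname{dist}(K_{\varepsilon},\mathcal{Z}_{0})\asymp\varepsilon^{2/3},\]

below which (A.2) provides no decay and above which the decay is exponentially strong.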
## Appendix B Estimates for the Green's Function
This appendix proves assertion (iii) in the proof of Lemma 2.4. This is a consequence of the following:
**Lemma B.1**.: Let \((X,g)\) be a Riemannian manifold of dimension \(3\) or \(4\) with bounded geometry, and take \(R_{0}\) less than the injectivity radius of \(X\). Let \(M>0\), and for a point \(x_{0}\in X\), denote by \(G(x_{0},x)\) the Green's function with Dirichlet boundary condition on \(B_{R_{0}}(x_{0})\), so that
\[\begin{cases}\left(-\Delta_{g}-\frac{M^{2}}{\varepsilon^{2}}\right)G(x_{0},x )=\delta_{x_{0}}&x\in B_{R_{0}}\\ G(x_{0},x)=0&x\in\partial B_{R_{0}}\end{cases}\]
where \(\Delta_{g}\) denotes the (positive-definite) Laplacian of the Riemannian metric \(g\). Then, there exists an \(\varepsilon_{0}>0\) depending only on the geometry of \((X,g)\) and a constant \(C_{n}\) depending only on the dimension \(n\) such that the bound
\[|G(x_{0},x)|\leqslant\frac{C_{n}}{|x-x_{0}|^{n-2}}\,\operatorname{Exp}\left(- \frac{M}{2\varepsilon}|x-x_{0}|\right)\] (B.1)
holds for \(\varepsilon<\varepsilon_{0}\).
Proof.: This follows from a comparison principle argument using the Green's function on Euclidean space. We prove the lemma in dimension \(n=3\); the general case is the same using the appropriate power of \(|x-x_{0}|\).
Let \(\Delta_{0}\) denote the Laplacian on \(\mathbb{R}^{3}\) with the Euclidean metric \(g_{0}\), and \(G_{0}^{M/2}\) denote the Green's function
\[\left(-\Delta_{0}-\frac{M^{2}}{4\varepsilon^{2}}\right)G_{0}^{M/2}=\delta_{0}\]
on all of \(\mathbb{R}^{3}\). It is a standard fact that
\[G_{0}^{M/2}=\frac{C_{3}}{|x-x_{0}|}\text{Exp}\left(-\frac{M}{2\varepsilon}|x- x_{0}|\right).\]
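Away from the origin this can be checked directly from the radial form of \(\sum_{i}\partial_{i}^{2}\) on \(\mathbb{R}^{3}\): writing \(m=\frac{M}{2\varepsilon}\) and \(f(r)=\frac{e^{-mr}}{r}\),

\[\Big{(}\partial_{r}^{2}+\frac{2}{r}\partial_{r}\Big{)}f=\Big{(}\frac{2}{r^{3}}+\frac{2m}{r^{2}}+\frac{m^{2}}{r}\Big{)}e^{-mr}-\frac{2}{r}\Big{(}\frac{1}{r^{2}}+\frac{m}{r}\Big{)}e^{-mr}=m^{2}f,\]

so \(f\) solves the homogeneous equation away from \(x_{0}\), while the \(\delta\)-function arises from the \(\frac{1}{r}\) singularity exactly as for the ordinary Newtonian Green's function.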
Using geodesic normal coordinates on \(B_{R_{0}}(x_{0})\), define \(\varphi=G-G_{0}^{M/2}\) where \(G=G(x_{0},x)\). Then
\[(-\Delta_{g}-\frac{M^{2}}{\varepsilon^{2}})\varphi = \delta_{x_{0}}+(\Delta_{0}-\Delta_{g})G_{0}^{M/2}-(\Delta_{0}+ \frac{M^{2}}{4\varepsilon^{2}})G_{0}^{M/2}-\frac{3M^{2}}{4\varepsilon^{2}}G_ {0}^{M/2}\] \[= 0+(\Delta_{0}-\Delta_{g})G_{0}^{M/2}-\frac{3M^{2}}{4\varepsilon^ {2}}G_{0}^{M/2}.\]
Since \(g=g_{0}+O(r^{2})\) in geodesic normal coordinates, for a radially symmetric function \(f(r)\) one has
\[(\Delta_{0}-\Delta_{g})f(r)=O(r^{2})\left[\partial_{r}^{2}+\frac{1}{r} \partial_{r}\right]f(r)+O(r)\partial_{r}f(r)=O(r^{2})\partial_{r}^{2}+O(r) \partial_{r}\]
where \(O(r^{k})\) denotes a quantity bounded by \(Cr^{k}\) for a constant \(C\) depending only on the geometry of \((X,g)\). Differentiating,
\[\partial_{r}G_{0}^{M/2} = C\left(-\frac{1}{r^{2}}-\frac{M}{2\varepsilon r}\right)e^{- \frac{M}{2\varepsilon}r}\] \[\partial_{r}^{2}G_{0}^{M/2} = C\left(\frac{2}{r^{3}}+\frac{M}{\varepsilon r^{2}}+\frac{M^{2}}{4 \varepsilon^{2}r}\right)e^{-\frac{M}{2\varepsilon}r}\]
hence
\[|(\Delta_{0}-\Delta_{g})G_{0}^{M/2}|\leqslant C\left(\frac{1}{r}+\frac{M}{2\varepsilon}+\frac{M^{2}}{4\varepsilon^{2}}r\right)e^{-\frac{M}{2\varepsilon}r}\leqslant\frac{3M^{2}}{4\varepsilon^{2}}G_{0}^{M/2}\]
once \(\varepsilon\) is sufficiently small depending only on \(R_{0}\) and the geometry of \((X,g)\). It follows that \(\varphi\) satisfies
\[(\Delta_{g}+\tfrac{M^{2}}{\varepsilon^{2}})\varphi \leq 0\qquad\quad\text{on}\qquad\quad B_{R_{0}}(x_{0})\] \[\varphi \leq 0\qquad\quad\text{on}\qquad\quad\partial B_{R_{0}}(x_{0})\]
where the last line follows since \(G=0\) on \(\partial B_{R_{0}}\) by definition, and \(-G_{0}^{M/2}<0\) everywhere. The maximum principle implies that \(\varphi\leq 0\) on \(B_{R_{0}}\) once \(\varepsilon\) is sufficiently small, which yields the desired estimate (B.1).
|
2301.13200 | Liouville conformal field theory and the quantum zipper | Sheffield showed that conformally welding a $\gamma$-Liouville quantum
gravity (LQG) surface to itself gives a Schramm-Loewner evolution (SLE) curve
with parameter $\kappa = \gamma^2$ as the interface, and
Duplantier-Miller-Sheffield proved similar results for $\kappa =
\frac{16}{\gamma^2}$ for $\gamma$-LQG surfaces with boundaries decorated by
looptrees of disks or by continuum random trees. We study these dynamics for
LQG surfaces coming from Liouville conformal field theory (LCFT). At stopping
times depending only on the curve, we give an explicit description of the
surface and curve in terms of LCFT and SLE. This has applications to both LCFT
and SLE. We prove the boundary BPZ equations for LCFT, a crucial input for
subsequent work with Remy, Sun and Zhu deriving the structure constants of
boundary LCFT. With Yu we prove the reversibility of whole-plane SLE$_\kappa$
for $\kappa > 8$ via a novel radial mating-of-trees, and will show the space of
LCFT surfaces is closed under conformal welding. | Morris Ang | 2023-01-30T18:59:55Z | http://arxiv.org/abs/2301.13200v3 | # Liouville conformal field theory and the quantum zipper
###### Abstract
Sheffield showed that conformally welding a \(\gamma\)-Liouville quantum gravity (LQG) surface to itself gives a Schramm-Loewner evolution (SLE) curve with parameter \(\kappa=\gamma^{2}\) as the interface, and Duplantier-Miller-Sheffield proved similar stories for \(\kappa=\frac{16}{\gamma^{2}}\) for \(\gamma\)-LQG surfaces with boundaries decorated by looptrees of disks or by continuum random trees. We study these dynamics for LQG surfaces coming from Liouville conformal field theory (LCFT). At stopping times depending only on the curve, we give an explicit description of the surface and curve in terms of LCFT and SLE. This has applications to both LCFT and SLE. We prove the boundary BPZ equation for LCFT, which is crucial to solving boundary LCFT. With Yu we will prove the reversibility of whole-plane SLE\({}_{\kappa}\) for \(\kappa\geq 8\) via a novel radial mating-of-trees, and show the space of LCFT surfaces is closed under conformal welding.
## 1 Introduction
Polyakov introduced a canonical one-parameter family of random surfaces called _Liouville quantum gravity (LQG)_ to make sense of summation over surfaces [15]. The _mating-of-trees_ framework studies LQG through its coupling with random curves called _Schramm-Loewner evolution (SLE)_. Let \(\kappa>0\) and \(\gamma=\min(\sqrt{\kappa},\frac{4}{\sqrt{\kappa}})\). SLE\({}_{\kappa}\) is a simple curve when \(\kappa\leq 4\), self-intersecting when \(\kappa\in(4,8)\), and space-filling when \(\kappa\geq 8\). When \(\kappa\in(0,4]\) there is an infinite-volume \(\gamma\)-LQG surface which, when decorated by an independent SLE\({}_{\kappa}\) curve, is invariant in law under the operation of conformally welding the two boundary arcs according to their random length measures; this is called the _quantum zipper_[11, 12, 13]. Similar stories hold for other ranges of \(\kappa\) when the boundary of the LQG surface is modified to have non-trivial topology [14].
Starting with these stationary quantum zippers, the mating-of-trees approach develops a theory of conformal welding of special LQG surfaces, culminating in landmark results such as the equivalence of the Brownian map and LQG [15] and the convergence of random planar maps to LQG [16, 17]. A recent program [1, 18, 19] extends the conformal welding theory to a larger class of LQG surfaces which arise from _Liouville conformal field theory_ (LCFT). In these conformal weldings, whole boundary arcs are glued at once, in contrast to the quantum zipper where the gluing is incremental.
We study the quantum zipper dynamics of [11, 12] applied to random surfaces arising from LCFT, applying a different zipping mechanism for each parameter range of \(\kappa\). For \(\kappa\in(0,4]\), we conformally weld the left and right boundaries of the LQG surface. For \(\kappa\in(4,8)\), we add a Poisonnian collection of looptrees of LQG disks to the boundary, then mate the forested boundaries. For \(\kappa\geq 8\), we attach a pair of correlated continuum random trees to the boundary arcs of the LQG surface, then mate the continuum random trees. This gives an LQG surface with an interface curve, see figure below. In all three regimes, when the process is run until a stopping time depending only on the curve, we give an explicit description of the joint law of the field and curve. Roughly
speaking, we show the curve is described by reverse SLE\({}_{\kappa}\) and the field is described by LCFT; see Theorems 1.1, 1.2, 1.6 and 1.8 for details.
We then give an application of the LCFT quantum zipper. Belavin, Polyakov and Zamolodchikov (BPZ) proposed differential equations for conformal field theories [1], which were rigorously proved for LCFT in [10] and used in the landmark computation of the LCFT three-point function [10]. There are substantial conceptual and technical difficulties in adapting the argument of [10] to boundary LCFT. We instead prove the boundary LCFT BPZ equations via SLE martingales from the quantum zipper. These equations require a non-trivial coupling of cosmological constants (1.3) which was conjectured from special cases [11]. To the best of our knowledge, our argument gives the first conceptual explanation of (1.3), even at the physics level of rigor.
Our results have consequences for both LCFT and SLE. In [1] the BPZ equations will be used to prove the boundary three-point LCFT function equals the Ponsot-Teschner formula [12]; this is a fundamental input for the boundary LCFT conformal bootstrap. With Pu Yu we will establish a radial mating-of-trees via the LCFT quantum zipper, and use it to prove the reversibility of whole-plane SLE\({}_{\kappa}\) for \(\kappa\geq 8\). Whole-plane reversibility was shown for \(\kappa\in(0,4]\) and \(\kappa\in(4,8]\) by [14] and [15] respectively, and our result will resolve the remaining case. This answers two conjectures of [13]. Finally, with Pu Yu we will prove that the conformal welding of LCFT surfaces of arbitrary genus gives a curve-decorated LCFT surface, extending the program of [1, 1, 1, 1].
### Liouville quantum gravity and Schramm-Loewner evolution
We first briefly recall some preliminaries; a detailed introduction is given in Section 2. The free boundary Gaussian free field (GFF) on a simply-connected domain \(D\subset\mathbb{C}\) is the Gaussian process on \(D\) whose covariance kernel is the Green function; it can be understood as a random generalized function \(h\) [13]. For \(\gamma\in(0,2]\) and \(h\) a variant of the GFF, the \(\gamma\)-LQG area measure \(\mathcal{A}_{h}\) on \(D\) and boundary measure \(\mathcal{L}_{h}\) on \(\partial D\) are heuristically defined by \(\mathcal{A}_{h}(dz)=e^{\gamma h}dz\) and \(\mathcal{L}_{h}(dx)=e^{\frac{\gamma}{2}h}dx\). These definitions are made rigorous via regularization and renormalization [12, 13].
Suppose \(g:D\to\widetilde{D}\) is a conformal map and \(h\) a generalized function on \(D\). We define the generalized function \(g\bullet_{\gamma}h\) on \(\widetilde{D}\) by
\[g\bullet_{\gamma}h:=h\circ g^{-1}+Q\log|(g^{-1})^{\prime}|,\qquad Q=\frac{ \gamma}{2}+\frac{2}{\gamma}.\]
A _quantum surface_ is an equivalence class of pairs \((D,h)\) where \((D,h)\sim_{\gamma}(\widetilde{D},\widetilde{h})\) if there exists a conformal map \(g:D\to\widetilde{D}\) such that \(\widetilde{h}=g\bullet_{\gamma}h\)[13]. The pair \((D,h)\) is called an embedding of the quantum surface. The \(\gamma\)-LQG area and boundary measure are intrinsic to the quantum surface: If \(h\) is a variant of the GFF on \(D\), then writing \(g_{*}\) for the pushforward under \(g\), we have \(g_{*}\mathcal{A}_{h}=\mathcal{A}_{\widetilde{h}}\) and \(g_{*}\mathcal{L}_{h}=\mathcal{L}_{\widetilde{h}}\) where \(\widetilde{h}=g\bullet_{\gamma}h\).
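It is a routine consequence of the chain rule (recorded here for convenience) that \(\bullet_{\gamma}\) is compatible with composition, which is what makes \(\sim_{\gamma}\) an equivalence relation: for conformal maps \(g_{1}:D\to\widetilde{D}\) and \(g_{2}:\widetilde{D}\to\widehat{D}\),

\[g_{2}\bullet_{\gamma}(g_{1}\bullet_{\gamma}h)=h\circ(g_{2}\circ g_{1})^{-1}+Q\log|(g_{1}^{-1})^{\prime}\circ g_{2}^{-1}|+Q\log|(g_{2}^{-1})^{\prime}|=(g_{2}\circ g_{1})\bullet_{\gamma}h,\]

and in particular \(g^{-1}\bullet_{\gamma}(g\bullet_{\gamma}h)=h\).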
Schramm-Loewner evolution (SLE) [11] is a canonical random planar curve which describes the scaling limits of many critical 2D statistical physics models, e.g. [14, 15, 16, CDCH\({}^{+}\)14]. The parameter \(\kappa>0\) describes the "roughness" of the SLE\({}_{\kappa}\) curve: the curve is simple when \(\kappa\in(0,4]\), self-intersecting (but not self-crossing) when \(\kappa\in(4,8)\), and space-filling when \(\kappa\geq 8\). We will work with the _reverse_ variant of SLE\({}_{\kappa}\), where the curve grows from its base.
Liouville conformal field theory (LCFT) is the quantum field theory arising from the Liouville action introduced by the physicist Polyakov in his work on quantum gravity and string theory [11]. LCFT was rigorously constructed on the sphere in [13] by making sense of the path integral for the Liouville action, and since has been extended to other surfaces [14, 15, 16, 17]. In a series of recent breakthroughs, the _correlation functions_ of LCFT on closed surfaces were rigorously computed, by first solving for the three-point correlation function [15], and then implementing the conformal bootstrap program to recursively obtain all higher order correlation functions [16, 17].
In this paper we focus on LCFT on the disk, parametrized by the upper half-plane \(\mathds{H}\). There is an infinite measure \(\mathrm{LF}_{\mathrm{H}}\) on the space of generalized functions on \(\mathds{H}\) obtained by an additive perturbation of the GFF, which we call the _Liouville field_; see Definition 2.1. For \(\delta>0\) and finitely many \((\alpha_{j},z_{j})\in\mathds{R}\times\overline{\mathds{H}}\) we can make sense of the measure \(\mathrm{LF}_{\mathrm{H}}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}=\prod_{j}e^{\alpha_{j}\phi(z_{j})}e^{\delta\phi(\infty)}\mathrm{LF}_{\mathrm{H}}(d\phi)\) via regularization and renormalization. This is the Liouville field with _insertions_ of size \(\alpha_{j}\) at \(z_{j}\) and an insertion of size \(\delta\) at \(\infty\). The correlation functions of LCFT (sometimes written as \(\langle\prod_{j} e^{\alpha_{j}\phi(z_{j})}e^{\delta\phi(\infty)}\rangle\)) are defined as functionals of \(\mathrm{LF}_{\mathrm{H}}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}\), see for instance (1.5).
### The \(\kappa\leq 4\) LCFT quantum zipper
Let \(\kappa\in(0,4]\) and \(\gamma=\sqrt{\kappa}\). See Figure 1 for a brief summary of the LCFT zipper in this regime.
Let \(n\geq 0\), let \((\alpha_{j},z_{j})\in\mathds{R}\times\overline{\mathds{H}}\) such that \(z_{1},\ldots,z_{n}\) are distinct, and let \(\delta\in\mathds{R}\). Sample a field \(\phi_{0}\sim\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j },z_{j})_{j},(\delta,\infty)}\). Let \(s>0\) satisfy \(\mathcal{L}_{\phi_{0}}(-\infty,0),\mathcal{L}_{\phi_{0}}(0,\infty)>s\). For each \(u\in(0,s]\) let \(p_{u}\in(0,\infty)\) and \(q_{u}\in(-\infty,0)\) be the points such that \(\mathcal{L}_{\phi_{0}}([0,p_{u}])=\mathcal{L}_{\phi_{0}}([q_{u},0])=u\). We want to glue the boundary arcs \([q_{s},0]\) and \([0,p_{s}]\) of \(\mathds{H}\) together, identifying \(q_{u}\) with \(p_{u}\) for \(u\in(0,s]\). Almost surely there is a simple curve \(\hat{\eta}_{s}:[0,s]\to\overline{\mathds{H}}\) such that \(\hat{\eta}_{s}\cap\mathds{R}=\hat{\eta}_{s}(s)\) and a conformal map \(\hat{g}_{s}:\mathds{H}\to\mathds{H}\backslash\hat{\eta}_{s}\) fixing \(\infty\) such that \(\hat{g}_{s}(p_{u})=\hat{g}_{s}(q_{u})=\hat{\eta}_{s}(u)\) for all \(u\leq s\); this is called a _conformal welding_. The pair \((\hat{\eta}_{s},\hat{g}_{s})\) is unique modulo conformal automorphisms of \(\mathds{H}\), so specifying the _hydrodynamic normalization_\(\lim_{z\to\infty}\hat{g}_{s}(z)-z=0\) uniquely defines \((\hat{\eta}_{s},\hat{g}_{s})\). The existence and uniqueness of \((\hat{\eta}_{s},\hat{g}_{s})\) was shown in [18] for \(\gamma<2\); for \(\gamma=2\) existence was established by [14] and uniqueness by [11] (see also [19]).
Thus, for \(\phi_{0}\sim\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j },z_{j})_{j},(\delta,\infty)}\), we can define a process \((\hat{\eta}_{s},\hat{g}_{s})\) for \(s<\min(\mathcal{L}_{\phi_{0}}(-\infty,0),\mathcal{L}_{\phi_{0}}(0,\infty))\). The _half-plane capacity_ of \(\hat{\eta}_{s}\) is \(\mathrm{hcap}(\hat{\eta}_{s}):=\lim_{z\to\infty}z(\hat{g}_{s}(z)-z)\). We reparametrize time to get a process \((\eta_{t},g_{t})\) such that \(\mathrm{hcap}(\eta_{t})=2t\). Define \(\phi_{t}=g_{t}\bullet_{\gamma}\phi_{0}\) and \(W_{t}=\eta_{t}\cap\mathds{R}\). See Figure 1.
We first give a description of the law of the field and curve when the process is run until a stopping time before any marked points are zipped into the curve.
**Theorem 1.1**.: _In the setting immediately above, assume \(z_{1},\ldots,z_{n}\neq 0\) and let \(\tau\) be a stopping time with respect to the filtration \(\mathcal{F}_{t}=\sigma(\eta_{t})\) such that a.s. \(g_{\tau}(z_{j})\not\in\eta_{\tau}\) for all \(j\). Let \(I=\{i\::\:z_{i}\in\mathds{H}\}\) and \(B=\{b\::\:z_{b}\in\mathds{R}\}\). Then the law of \((\phi_{\tau},\eta_{\tau})\) is_
\[\prod_{i\in I}|g_{\tau}^{\prime}(z_{i})|^{2\Delta_{\alpha_{i}}}\prod_{b\in B}| g_{\tau}^{\prime}(z_{b})|^{\Delta_{2\alpha_{b}}}\mathrm{LF}_{\mathrm{H}}^{(- \frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j},g_{\tau}(z_{j}))_{j},(\delta, \infty)}(d\phi)\,\mathrm{rSLE}_{\kappa}^{\tau}(d\eta),\]
_where \(\Delta_{\alpha}:=\frac{\alpha}{2}(Q-\frac{\alpha}{2})\) and \(\mathrm{rSLE}_{\kappa}^{\tau}\) is the law of reverse \(\mathrm{SLE}_{\kappa}\) run until the stopping time \(\tau\)._
Informally, when the Liouville field is zipped up until a stopping time \(\tau\) that depends only on the zipping interface, the curve \(\eta_{\tau}\) is described by reverse SLE\({}_{\kappa}\), and given \(\eta_{\tau}\) the resulting field is described by a Liouville field with insertions at locations determined by \(\eta_{\tau}\).
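For instance, specializing Theorem 1.1 to \(n=0\) (no insertions besides the one at \(0\) and the one at \(\infty\)), the products over \(I\) and \(B\) are empty and the law of \((\phi_{\tau},\eta_{\tau})\) is simply \(\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},W_{\tau}),(\delta,\infty)}(d\phi)\,\mathrm{rSLE}_{\kappa}^{\tau}(d\eta)\); that is, the curve is reverse SLE\({}_{\kappa}\) and, given the curve, the field is again a Liouville field whose degenerate insertion now sits at the tip location \(W_{\tau}\).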
In Theorem 1.1 the condition that \(g_{\tau}(z_{j})\not\in\eta_{\tau}\) for all \(j\) is necessary, since otherwise the law of the curve would be singular with respect to reverse SLE\({}_{\kappa}\). In Theorem 1.2, we allow boundary marked points to be zipped into the bulk, by using an SLE\({}_{\kappa}\) variant called _reverse_ SLE\({}_{\kappa}\)_with force points_ (see Section 2.3). Regardless, the zipping procedure can only be run until the _continuation threshold_, defined as the first time \(t\leq\infty\) that any neighborhood of \(W_{t}\) in \(\mathds{R}\) has infinite quantum length, i.e. \(\mathcal{L}_{\phi_{t}}((W_{t}-\varepsilon,W_{t}+\varepsilon))=\infty\) for all \(\varepsilon>0\). Once the continuation threshold is hit, there is no canonical way to continue the conformal welding.
For finitely many \((a_{j},p_{j})\in\mathds{R}\times\overline{\mathds{H}}\) such that the points \(p_{j}\) are distinct, we define
\[\mathcal{Z}((a_{j},p_{j})_{j})=\prod_{i\in I}(2\operatorname{Im}p_{i})^{-a_{i }^{2}/2}\prod_{j<k}e^{a_{j}a_{k}G(p_{j},p_{k})},\quad I=\{i:p_{i}\in\mathbb{H }\}, \tag{1.1}\]
where \(G(p,q)=-\log|p-q|-\log|p-\overline{q}|\). If the \(p_{j}\) are not distinct, we combine all pairs \((a,p)\) with the same \(p\) by summing their \(a\)'s to get a collection \((a_{j}^{\prime},p_{j}^{\prime})\) with \(p_{j}^{\prime}\) distinct, and define \(\mathcal{Z}((a_{j},p_{j})_{j}):=\mathcal{Z}((a_{j}^{\prime},p_{j}^{\prime})_{ j})\).
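For concreteness, here is a direct evaluation of (1.1) in the simplest mixed case: for a single bulk point \((a_{1},p)\) with \(p\in\mathbb{H}\) and a single boundary point \((a_{2},x)\) with \(x\in\mathds{R}\), we have \(G(p,x)=-2\log|p-x|\) and the boundary point carries no self-interaction term, so

\[\mathcal{Z}((a_{1},p),(a_{2},x))=(2\operatorname{Im}p)^{-a_{1}^{2}/2}\,|p-x|^{-2a_{1}a_{2}}.\]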
**Theorem 1.2**.: _In the setting above Theorem 1.1, let \(\tau\) be a stopping time with respect to the filtration \(\mathcal{F}_{t}=\sigma(\eta_{t})\) which is not beyond the continuation threshold. Then the law of \((\phi_{\tau},\eta_{\tau})\) is_
\[\frac{\mathcal{Z}((-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j})_{j})}{ \mathcal{Z}((-\frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j},g_{\tau}(z_{j}))_ {j})}\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j },g_{\tau}(z_{j}))_{j},(\delta,\infty)}(d\phi)\,\mathrm{rSLE}_{\kappa,\rho}^{ \tau}(d\eta) \tag{1.2}\]
_where \(\mathrm{rSLE}_{\kappa,\rho}^{\tau}\) denotes the law of reverse SLE\({}_{\kappa,\rho}\) with a force point at \(z_{j}\) of weight \(\rho_{j}=2\sqrt{\kappa}\alpha_{j}\) for each \(j\), run until the stopping time \(\tau\)._
We emphasize that while we study the same Liouville field as e.g. [1, 1, 1, 2], we use slightly different notation for boundary insertions, see Remark 2.4. The present choice of notation simplifies the statement of Theorem 1.2 since boundary insertions zipped into the bulk maintain the same value of \(\alpha\).
**Remark 1.3**.: _Let \(R_{t}:=\mathcal{Z}((-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j})_{j})/\mathcal{ Z}((-\frac{1}{\sqrt{\kappa}},W_{t}),(\alpha_{j},g_{t}(z_{j}))_{j})\). If \(\tau\) is a time such that \(g_{\tau}(z_{j})=W_{\tau}\) for some \(j\), then \(\lim_{t\uparrow\tau}R_{t}\in\{0,\infty\}\) whereas \(R_{\tau}\in(0,\infty)\). This apparent discontinuity is only cosmetic: the definition of \(\mathrm{LF}_{\mathds{H}}^{(-\frac{1}{\sqrt{\kappa}},W_{t}),(\alpha_{j},g_{t}( z_{j}))_{j},(\delta,\infty)}\) includes a factor of \(C_{\kappa}^{(-\frac{1}{\sqrt{\kappa}},W_{t}),(\alpha_{j},g_{t}(z_{j}))_{j},( \delta,\infty)}\) (defined in (2.2)), and \(C_{\kappa}^{(-\frac{1}{\sqrt{\kappa}},W_{t}),(\alpha_{j},g_{t}(z_{j}))_{j},( \delta,\infty)}R_{t}\) is continuous in \(t\)._
### The \(\kappa\in(4,8)\) LCFT quantum zipper
We now need to work with _beaded quantum surfaces_ which can have nontrivial topology. Suppose \((D,h)\) are such that \(D\subset\mathbb{C}\) is a closed set such that each connected component of the interior of \(D\) and its prime-end boundary is homeomorphic to the closed disk, and \(h\) is a generalized function defined only on the interior of \(D\). We say that \((D,h)\sim_{\gamma}(\widetilde{D},\widetilde{h})\) if there is a homeomorphism \(g:D\to\widetilde{D}\) which is conformal on each component of the interior of \(D\) and which satisfies \(\widetilde{h}=g\bullet_{\gamma}h\). A _beaded quantum surface_ is an equivalence class of pairs \((D,h)\) under \(\sim_{\gamma}\).
Let \(\kappa\in(4,8)\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). A _forested line_ is a beaded quantum surface defined as a forest of looptrees of LQG disks whose structure is determined by a stable Levy process, see [14, Section 1.4.2] for details. We instead give an equivalent definition here. A quantum wedge is a canonical scale-invariant \(\gamma\)-LQG surface with a _weight_ parameter \(W>0\). When \(W\geq\frac{\gamma^{2}}{2}\) the quantum wedge is called _thick_ and has the half-plane topology, and when \(W<\frac{\gamma^{2}}{2}\) the quantum wedge is called _thin_ and is the concatenation of a chain of countably many "beads". See Section 2.4 for precise definitions.
**Definition 1.4** (Forested line).: _Sample a (thin) weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge and let \(\eta\) be a concatenation of independent \(\mathrm{SLE}_{\kappa}(\frac{\kappa}{2}-4;\frac{\kappa}{2}-4)\) curves in each bead. The forested line is the beaded quantum surface lying to the left (or right) of \(\eta\)._
The forested line has the quantum length measure on its non-forested boundary, and a _quantum forested length_ measure on its forested boundary which is defined via the Levy process construction. In our setup, the easiest description of the quantum forested length measure is that it agrees with the quantum natural parametrization of \(\eta\) on the quantum wedge from Definition 1.4.
Proposition 1.5 below states that there is a way to mate a pair of independent forested lines according to quantum forested length to obtain a curve-decorated quantum surface.
**Proposition 1.5** ([14, Theorem 1.15]).: _Sample a weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge and let \(\eta\) be a concatenation of independent \(\mathrm{SLE}_{\kappa}(\frac{\kappa}{2}-4;\frac{\kappa}{2}-4)\) curves in each bead. Then the beaded quantum surfaces to the left and right of \(\eta\) are independent forested lines with forested boundaries identified according to quantum forested length, and moreover the curve-decorated quantum wedge is measurable with respect to the pair of forested lines._
[14, Theorem 1.15] uses the Levy process definition of forested line. Consequently that definition is equivalent to Definition 1.4.
We now explain a quantum zipper for the Liouville field with forested boundary, see Figure 2. Let \(\mathrm{FL}_{\kappa}\) be the law of the pair of forested lines in Proposition 1.5 (so \(\mathrm{FL}_{\kappa}\) is a probability measure). Let \(n\geq 0\), let \((\alpha_{j},z_{j})\in\mathds{R}\times\overline{\mathds{H}}\) such that \(z_{1},\ldots,z_{n}\) are distinct, and let \(\delta\in\mathds{R}\). Sample \((\phi_{0},(F_{L},F_{R}))\sim\mathrm{LF}_{\mathds{H}}^{(-\frac{1}{\sqrt{\kappa} },0),(\alpha_{j},z_{j})_{j},(\delta,\infty)}\times\mathrm{FL}_{\kappa}\) and glue \(F_{L}\) (resp. \(F_{R}\)) to the boundary of \((\mathds{H},\phi_{0})\) according to quantum length, starting at \(0\) and gluing to the left (resp. right). If \(\mathcal{L}_{\phi_{0}}((-\infty,0))<\infty\) we only glue the initial segment of \(F_{L}\) to \((-\infty,0)\), and cut and discard the remainder of the forested line, and proceed likewise for the right boundary. This gives a beaded quantum surface. For \(s>0\)
such that the quantum forested lengths to the left and right of \(0\) are at least \(s\), let \(p_{s}\) and \(q_{s}\) be the forested boundary points to the left and right of \(0\) having quantum forested length \(s\) from \(0\). We mate the forested lines until we have identified \(p_{s}\) and \(q_{s}\), obtaining a curve-decorated forested quantum surface (with the curve being the interface). We embed the curve-containing connected component via the hydrodynamic normalization, to get \((\mathds{H},\hat{\phi}_{s},\hat{\eta}_{s})\). That is, if \(\hat{g}_{s}\) is the conformal map from \(\mathds{H}\) to the unbounded connected component of \(\mathds{H}\backslash\hat{\eta}_{s}\) satisfying \(\lim_{z\to\infty}\hat{g}_{s}(z)-z=0\), then \(\hat{g}_{s}^{-1}\bullet_{\gamma}\hat{\phi}_{s}=\phi_{0}\).
We reparametrize the process \((\hat{\phi}_{s},\hat{\eta}_{s},\hat{g}_{s})\) according to half-plane capacity to get \((\phi_{t},\eta_{t},g_{t})\) such that \(\operatorname{hcap}(\eta_{t})=2t\). Let \(W_{t}\) be the endpoint of \(\eta_{t}\) lying in \(\mathds{R}\). The _continuation threshold_ is the first time \(t\leq\infty\) that any neighborhood of \(W_{t}\) has infinite quantum forested length.
**Theorem 1.6**.: _For the \(\kappa\in(4,8)\) setting immediately above, the conclusions of Theorems 1.1 and 1.2 hold._
### The \(\kappa\geq 8\) LCFT quantum zipper
In this section, we describe a mating of correlated continuum random trees. Let \((X_{t})_{t\geq 0}\) and \((Y_{t})_{t\geq 0}\) be correlated Brownian motions with \(\operatorname{Var}(X_{t})=\operatorname{Var}(Y_{t})=\operatorname{a}^{2}t\) and \(\operatorname{Cov}(X_{t},Y_{t})=-\operatorname{a}^{2}\cos(\frac{4\pi}{\kappa})t\) where \(\operatorname{a}^{2}=2/\sin(\frac{4\pi}{\kappa})\). Informally, we can construct a _continuum random tree_ from \((X_{t})_{t\geq 0}\) by plotting the graph of \(t\mapsto X_{t}\), letting \(S\subset\mathds{R}^{2}\) be the set of points lying on or below the graph, and identifying points in \(S\) which lie on a horizontal chord which stays below the graph; see Figure 3. In the same way we may construct a continuum random tree from \((Y_{t})_{t\geq 0}\), so \((X_{t},Y_{t})_{t\geq 0}\) describes a pair of correlated continuum random trees. We write \(\operatorname{CRT}_{\kappa}\) for the law of \((X_{t},Y_{t})_{t\geq 0}\).
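As an aside (not from the original text), the covariance structure above is easy to sample numerically; for instance at \(\kappa=8\) one has \(\cos(\frac{4\pi}{\kappa})=0\), so the two coordinates are independent with variance \(2t\), while as \(\kappa\to\infty\) the correlation of the increments tends to \(-1\). A minimal sketch in Python (the function name and the discretization are ours, purely for illustration):

```python
import numpy as np

def sample_crt_bm(kappa, T=1.0, n=10_000, seed=0):
    """Sample the correlated Brownian pair (X_t, Y_t), 0 <= t <= T, defining CRT_kappa (kappa > 4)."""
    rng = np.random.default_rng(seed)
    a2 = 2.0 / np.sin(4 * np.pi / kappa)   # variance rate a^2 = 2 / sin(4*pi/kappa)
    rho = -np.cos(4 * np.pi / kappa)       # correlation coefficient of the increments
    dt = T / n
    cov = a2 * dt * np.array([[1.0, rho], [rho, 1.0]])
    steps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    path = np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])
    return path[:, 0], path[:, 1]          # X, Y on the time grid k*dt, k = 0,...,n
```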
Recall that when \(\kappa\geq 8\), the \(\operatorname{SLE}_{\kappa}\) curve is space-filling. In this regime, the seminal mating-of-trees theorem\({}^{1}\) of Duplantier, Miller and Sheffield can be stated as follows. See Figure 4.
Footnote 1: There is a mating-of-trees theorem for \(\kappa\in(4,8)\) but with more complicated topology, see Proposition 7.6.
**Proposition 1.7**.: _Let \(\kappa\geq 8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Let \((\mathds{H},\phi,0,\infty,\eta)\) be an embedding of a weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge decorated by an independent \(\operatorname{SLE}_{\kappa}\) curve \(\eta\). Parametrize \(\eta\) by quantum area, so \(A_{\phi}(\eta([0,t]))=t\). On the counterclockwise (resp. clockwise) boundary arc of \(\eta([0,t])\) from \(0\) to \(\eta(t)\), let \(X_{t}^{-}\) and \(X_{t}^{+}\) (resp. \(Y_{t}^{-}\) and \(Y_{t}^{+}\)) be the quantum lengths of the boundary segments in \(\mathds{R}\) and \(\mathds{H}\) respectively. Then the law of \((X_{t},Y_{t}):=(X_{t}^{+}-X_{t}^{-},Y_{t}^{+}-Y_{t}^{-})\) is \(\operatorname{CRT}_{\kappa}\). Moreover, the curve-decorated quantum surface \((\mathds{H},\phi,0,\infty,\eta)/{\sim_{\gamma}}\) is measurable with respect to \((X_{t},Y_{t})_{t\geq 0}\)._
Figure 2: Let \(\kappa\in(4,8)\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). **Left:** We sample a Liouville field \(\phi_{0}\) and attach independent forested lines to the left and right boundary arcs. Marked points are not depicted. **Right:** Mating the forested boundary arcs corresponds to “zipping up the quantum zipper”. The curve \(\eta_{t}\) is the interface in \(\mathds{H}\) between the purple and green forests. The conformal map \(g_{t}\) sends the pink region on the left to that on the right and satisfies \(\lim_{z\to\infty}g_{t}(z)-z=0\).
The measurability statement and the fact that \((X_{t},Y_{t})\) evolve as correlated Brownian motion was shown in [14], the factor \(\cos(\frac{\pi\gamma^{2}}{4})\) was obtained in [11], and the value of \(\mathrm{a}^{2}\) was identified in [1]. The mating-of-trees theorem stated here only covers the range \(\kappa\geq 8\) for topological simplicity; the range \(\kappa\in(4,8)\) is stated as Proposition 7.6.
We can now state the quantum zipper for the Liouville field with correlated CRTs glued to the boundary, see Figure 5. Suppose \(\kappa\geq 8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample \((\phi_{0},(X_{t},Y_{t})_{t\geq 0})\sim\mathrm{LF}_{\mathds{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j})_{j},(\delta,\infty)}\times\mathrm{CRT}_{\kappa}\) and let \(X_{s}^{-}=-\inf_{u\leq s}X_{u}\) and \(Y_{s}^{-}=-\inf_{u\leq s}Y_{u}\). For \(s>0\) such that \(\mathcal{L}_{\phi_{0}}((-\infty,0))>X_{s}^{-}\) and \(\mathcal{L}_{\phi_{0}}((0,\infty))>Y_{s}^{-}\), let \(a_{s}\in(-\infty,0)\) and \(b_{s}\in(0,\infty)\) satisfy \(\mathcal{L}_{\phi_{0}}((a_{s},0))=X_{s}^{-}\) and \(\mathcal{L}_{\phi_{0}}((0,b_{s}))=Y_{s}^{-}\). Mate the pair of continuum random trees for \(s\) units of quantum area to get a quantum surface with boundary arcs of lengths \(X_{s}^{-},X_{s}^{+},Y_{s}^{-},Y_{s}^{+}\), and conformally weld the boundary arcs of lengths \(X_{s}^{-}\) and \(Y_{s}^{-}\) to the boundary arcs \((a_{s},0)\) and \((0,b_{s})\) of \((\mathbb{H},\phi_{0})\) respectively. Embed the resulting curve-decorated quantum surface via the hydrodynamic normalization to get \((\mathbb{H},\hat{\phi}_{s},\hat{\eta}_{s})\). That is, if \(\hat{g}_{s}\) is the conformal map from \(\mathbb{H}\) to the unbounded connected component of \(\mathbb{H}\backslash\hat{\eta}_{s}\) satisfying \(\lim_{z\to\infty}\hat{g}_{s}(z)-z=0\), then \(\hat{g}_{s}^{-1}\bullet_{\gamma}\hat{\phi}_{s}=\phi_{0}\).
We reparametrize the process \((\hat{\phi}_{s},\hat{\eta}_{s},\hat{g}_{s})\) according to half-plane capacity to get \((\phi_{t},\eta_{t},g_{t})\) such that \(\mathrm{hcap}(\eta_{t})=2t\). Let \(W_{t}\) be the endpoint of \(\eta_{t}\) lying in \(\mathbb{R}\). The continuation threshold is the first time \(t\) that any neighborhood of \(W_{t}\) in \(\mathbb{R}\) has infinite quantum length.
**Theorem 1.8**.: _For the \(\kappa\geq 8\) setting immediately above, the conclusions of Theorems 1.1 and 1.2 hold._
**Remark 1.9**.: _An analogous result for \(\kappa\in(4,8)\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\in(\sqrt{2},2)\) can be obtained by comparing Theorem 1.6 to the mating-of-trees theorem Proposition 7.6. The difference is that for typical \(s\)
Figure 4: **Left:** A pair of correlated continuum random trees. **Right:** There is a way of mating the pair of CRTs to get a weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge decorated by an independent (space-filling) \(\mathrm{SLE}_{\kappa}\) curve. **Middle:** Running the mating procedure until the quantum area is \(t\), we write \(X_{t}^{-},X_{t}^{+},Y_{t}^{-},Y_{t}^{+}\) for the quantum lengths of the red, orange, blue and green boundary arcs respectively, and set \((X_{t},Y_{t})=(X_{t}^{+}-X_{t}^{-},Y_{t}^{+}-Y_{t}^{-})\).
Figure 3: **Left:** Plot the graphs of the correlated Brownian motions \(X_{t}\) and \(Y_{t}\) (orange and green) and draw horizontal segments below the graphs (black). **Right:** Identifying points on the same horizontal segment gives a pair of correlated continuum random trees.
the quantum surface obtained by mating \((X_{\cdot},Y_{\cdot})_{[0,s]}\) and gluing to \(\phi_{0}\) is_ beaded _(see Figure 11 (middle)). If we restrict the process to the times when the quantum surface is simply-connected, and replace the space-filling interface with a non-space-filling curve measurable with respect to it, the result is the process described by Theorem 1.6. This is explained and used in Section 7.3._
### BPZ equation for Liouville conformal field theory
We now give an application of the quantum zipper to LCFT, by proving a novel Belavin-Polyakov-Zamolodchikov (BPZ) equation for correlation functions with a degenerate boundary insertion.
To give a statement that matches the literature, in this section we will use a different notation for bulk and boundary insertions. Let \(m,n\geq 0\). Let \((\alpha_{j},z_{j})\in\mathds{R}\times\mathds{H}\) for \(j\leq m\) and assume the \(z_{j}\) are distinct. Let \(-\infty=x_{0}<x_{1}<\cdots<x_{n}<x_{n+1}=+\infty\) be boundary points, let \(\beta_{1},\ldots,\beta_{n}\in\mathds{R}\), and let \(\delta\in\mathds{R}\). Let \(\beta_{*}\in\{-\frac{\gamma}{2},-\frac{2}{\gamma}\}\).
For \(k=0,\ldots,n\) let \(I_{k}=(x_{k},x_{k+1})\). Distinguish an index \(k_{*}\in\{0,\ldots,n\}\) and let \(w\in I_{k_{*}}\). Let \(I_{L}=(x_{k_{*}},w)\) and \(I_{R}=(w,x_{k_{*}+1})\). See Figure 6. For \(k\neq k_{*}\) let \(\mu_{k}\in\mathbb{C}\) satisfy \(\operatorname{Re}\mu_{k}\geq 0\), and assume \(\mu_{L},\mu_{R}\in\mathbb{C}\) satisfy \(\operatorname{Re}\mu_{L},\operatorname{Re}\mu_{R}\geq 0\) and are defined in terms of \(\sigma_{L},\sigma_{R}\in\mathbb{C}\) as follows:
\[\mu_{L}=g(\sigma_{L}),\ \mu_{R}=g(\sigma_{R})\quad\text{ where }g(\sigma)= \frac{\cos(\pi\gamma(\sigma-\frac{Q}{2}))}{\sqrt{\sin(\pi\gamma^{2}/4)}}\text{ and }\sigma_{L}-\sigma_{R}=\pm\frac{\beta_{*}}{2}. \tag{1.3}\]
Suppose the _Seiberg bounds_ hold:
\[\sum_{j}\alpha_{j}+\sum_{k}\frac{\beta_{k}}{2}+\frac{\delta}{2}+\frac{\beta_{ *}}{2}>Q,\qquad\alpha_{j},\beta_{k}<Q\text{ for all }j,k,\quad\delta<Q. \tag{1.4}\]
Figure 5: Let \(\kappa\geq 8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). **Left:** We sample a Liouville field \(\phi_{0}\) and attach correlated continuum random trees to the left and right boundaries. Marked points are not depicted. **Right:** Mating the trees corresponds to “zipping up the quantum zipper”. The curve \(\eta_{t}\) is the interface in \(\mathds{H}\) between the orange and green trees. The conformal map \(g_{t}\) sends the pink region on the left to that on the right.
Figure 6: Marked boundary points on \(\partial\mathbb{H}\) and the intervals between them. The bulk insertions are \((\alpha_{j},z_{j})_{j}\) (not depicted), the boundary insertions are \((\frac{\beta_{*}}{2},w),(\frac{\delta}{2},\infty)\) and \((\frac{\beta_{k}}{2},x_{k})\) for \(1\leq k\leq n\). Each boundary interval \(I_{\bullet}\) has an associated boundary cosmological constant \(\mu_{\bullet}\).
We define the LCFT correlation function \(F_{\beta_{*}}(w,(z_{j})_{j},(x_{k})_{k})\) to be
\[\operatorname{LF}_{\operatorname{H}}^{(\frac{\beta_{*}}{2},w),(\alpha_{j},z_{j})_{j},(\frac{\beta_{k}}{2},x_{k})_{k},(\frac{\delta}{2},\infty)}[\exp(-\mathcal{A}_{\phi}(\mathbb{H})-\sum_{\begin{subarray}{c}0\leq k\leq n\\ k\neq k_{*}\end{subarray}}\mu_{k}\mathcal{L}_{\phi}(I_{k})-\mu_{L}\mathcal{L}_{\phi}(I_{L})-\mu_{R}\mathcal{L}_{\phi}(I_{R}))]. \tag{1.5}\]
Because the Seiberg bounds hold, the integral (1.5) converges absolutely [14, Theorem 3.1].
**Theorem 1.10**.: _The correlation functions \(F_{\beta_{*}}\) are smooth and satisfy the BPZ equation_
\[\Bigg{(}\frac{1}{\beta_{*}^{2}}\partial_{ww}+\sum_{j}(\frac{1}{w-z_{j}} \partial_{z_{j}}+\frac{1}{w-\overline{z}_{j}}\partial_{\overline{z}_{j}})+ \sum_{k}\frac{1}{w-x_{k}}\partial_{x_{k}}+\sum_{j}\operatorname{Re}\frac{2 \Delta_{\alpha_{j}}}{(w-z_{j})^{2}}+\sum_{k}\frac{\Delta_{\beta_{k}}}{(w-x_{k} )^{2}}\Bigg{)}F_{\beta_{*}}=0.\]
We sketch the proof under the assumption that \(F_{\beta_{*}}\) is smooth. We set \(\kappa=\frac{4}{\beta_{*}^{2}}\), and interpret the quantum zipper as describing reverse \(\operatorname{SLE}_{\kappa}\) whose law is weighted by a term closely related to \(F_{\beta_{*}}(W_{t},g_{t}(z_{j})_{j},g_{t}(x_{k})_{k})\). Let \(A_{t},L_{t},R_{t}\) be the quantum area and left and right quantum boundary lengths at time \(t\) of the quantum zipper. For \(\beta_{*}=-\frac{2}{\gamma}\) and \(\kappa\leq 4\), the coupling of \(\sigma_{L}\) and \(\sigma_{R}\) gives \(\mu_{L}+\mu_{R}=0\), so \(\mu_{L}L_{t}+\mu_{R}R_{t}=\mu_{L}(L_{t}-R_{t})\). This quantity is invariant under the conformal welding zipper process, giving rise to an \(\operatorname{SLE}_{\kappa}\) martingale from which the BPZ equation is immediate. For \(\beta_{*}=-\frac{\gamma}{2}\) and \(\kappa>4\), although \(e^{-A_{t}-\mu_{L}L_{t}-\mu_{R}R_{t}}\) is not invariant, it evolves as a martingale (Lemma 7.3); the proof uses the mating-of-trees Brownian motion and requires the coupling (1.3). This again gives an \(\operatorname{SLE}_{\kappa}\) martingale and thus the BPZ equation. A hypoellipticity argument following [15, 16] is used to sidestep the smoothness assumption.
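For the reader's convenience, here is the elementary computation behind the cancellation used above in the case \(\beta_{*}=-\frac{2}{\gamma}\) (a direct check from (1.3)): if \(\sigma_{L}-\sigma_{R}=\pm\frac{\beta_{*}}{2}=\mp\frac{1}{\gamma}\), then \(\pi\gamma(\sigma_{L}-\frac{Q}{2})=\pi\gamma(\sigma_{R}-\frac{Q}{2})\mp\pi\), so

\[\mu_{L}=g(\sigma_{L})=\frac{\cos(\pi\gamma(\sigma_{R}-\frac{Q}{2})\mp\pi)}{\sqrt{\sin(\pi\gamma^{2}/4)}}=-g(\sigma_{R})=-\mu_{R},\]

and hence \(\mu_{L}L_{t}+\mu_{R}R_{t}=\mu_{L}(L_{t}-R_{t})\). No such cancellation is available for \(\beta_{*}=-\frac{\gamma}{2}\), which is why that case requires the martingale argument via mating-of-trees.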
In their pioneering work, Belavin, Polyakov and Zamolodchikov used representation theoretic methods to derive BPZ equations for the sphere [13]. These were recently mathematically proved for LCFT [15] using a rather subtle argument involving cancellations of not absolutely convergent integrals. See also [12, 14, 15] for BPZ equations on the disk with bulk cosmological constant zero, that is, in (1.5) the term \(-\mathcal{A}_{\phi}(\mathbb{H})\) is removed.
When the bulk cosmological constant is nonzero, the BPZ equation does not hold unless there is a coupling of the cosmological constants \(\mu_{L}\) and \(\mu_{R}\) via (1.3). This was proposed by [13] after examining special cases. As far as we know, there was no prior conceptual explanation of this constraint, even in the physics literature. Our martingale argument explains (1.3).
**Remark 1.11**.: _The statement of Theorem 1.10 was chosen for simplicity. Our argument is quite robust, and many of the conditions can be loosened._
* _We can choose_ \(\beta_{k}\geq Q\) _if the boundary cosmological constants for the intervals adjacent to_ \(x_{k}\) _are zero. We can choose_ \(\delta\geq Q\) _if_ \(\mu_{0}=\mu_{n}=0\)_._
* _The condition that_ \(\operatorname{Re}\mu_{k},\operatorname{Re}\mu_{L},\operatorname{Re}\mu_{R}\geq 0\) _can be relaxed so long as (_1.5_) converges absolutely._
* _The bound \(\sum_{j}\alpha_{j}+\sum_{k}\frac{\beta_{k}}{2}+\frac{\delta}{2}+\frac{\beta_{*}}{2}>Q\) is needed to ensure convergence of (1.5). There are ways to relax this condition to regimes where (1.5) is nonconvergent, by introducing_ truncations _where the exponential in (1.5) is replaced by \(e^{z}-1\) or \(e^{z}-1-z\) for instance. See e.g. [1, Proposition 3.4], [15, (1.7)] and [13, Theorem B]._
We note that the smoothness result of Theorem 1.10 is itself rather nontrivial. [15] proved the bulk correlation functions are \(C^{2}\) using approximations of the GFF, and this was extended to \(C^{\infty}\) by [16]. It is not clear to us whether these arguments can be adapted for boundary LCFT.
### Outlook
We state here a few applications of the LCFT quantum zipper, and mention some future directions and open questions.
#### 1.6.1 Integrability of boundary LCFT
The basic objects of conformal field theories are their correlation functions, and solving a conformal field theory means obtaining exact formulae for them. For the case of LCFT on surfaces without boundary, this was carried out in a series of landmark works that proved the three-point structure constant equals the DOZZ formula proposed in physics [13], then made rigorous the conformal bootstrap program of physicists to recursively solve for all correlation functions on all surfaces [14, 15].
A similar program is currently being carried out for LCFT on surfaces with boundary. In [16] we computed the one-point bulk structure constant using mating-of-trees. Taking the BPZ equation Theorem 1.10 as input, in future work with Remy, Sun and Zhu [16] we compute the boundary three-point structure constant. The boundary conformal bootstrap program has been initiated, see [20].
#### 1.6.2 Reversibility of whole plane \(\mathrm{SLE}_{\kappa}\) when \(\kappa\geq 8\)
For \(\kappa\in(0,8]\), chordal SLE is reversible in the following sense. Let \(f:\mathbb{H}\to\mathbb{H}\) be a conformal automorphism with \(f(0)=\infty\) and \(f(\infty)=0\). If \(\eta\) is \(\mathrm{SLE}_{\kappa}\) in \(\mathbb{H}\) from \(0\) to \(\infty\), then the time-reversal of \(f\circ\eta\) has the same law as \(\eta\) up to time-reparametrization [17, 18]. However, reversibility fails for \(\kappa>8\) chordal \(\mathrm{SLE}_{\kappa}\)[18].
_Whole-plane SLE_ is a variant of SLE in \(\mathbb{C}\) that starts at \(\infty\) and targets \(0\). Its reversibility was established by [17] for \(\kappa\leq 4\) and [18] for \(\kappa\in(4,8]\). For \(\kappa>8\) the reversibility of whole-plane \(\mathrm{SLE}_{\kappa}\) was conjectured in [20] via reversibility of the \(\kappa\to\infty\) large deviations rate function, which they obtained from a field-foliation coupling they interpret as describing a "\(\kappa\to\infty\) radial mating-of-trees". Inspired by [20], in future work with Pu Yu we establish a radial mating-of-trees using the LCFT quantum zipper, then exploit mating-of-trees reversibility and LCFT reversibility to show whole-plane \(\mathrm{SLE}_{\kappa}\) reversibility for \(\kappa\geq 8\).
#### 1.6.3 A general theory of conformal welding in LCFT
It is known in many cases that conformally welding quantum surfaces described by LCFT produces a quantum surface also described by LCFT. This was first demonstrated for quantum wedges [19], and many more cases followed.
are conformally welded, it can be shown that the result is a four-pointed LCFT surface decorated by multiple SLE. With Xin Sun and Pu Yu, we will investigate the resulting random cross-ratio via the LCFT quantum zipper and relate it to the partition functions of LCFT and multiple SLE. Our dynamical approach is orthogonal to that of [1], which obtained laws of the moduli of natural random annuli by comparing random planar map observables with LCFT observables.
#### 1.6.5 Other BPZ equations in random conformal geometry
Our argument uses the mating-of-trees framework to prove the boundary BPZ equation for LCFT on the disk. The BPZ equation for LCFT on the sphere has already been shown [14], but can something similar be done to give an alternative proof?
The _conformal loop ensemble (CLE)_[15, 16] is a canonical conformally invariant collection of loops that locally look like SLE [15], and arises as the scaling limit of the collection of interfaces of statistical physics models. It is expected that CLE is described by a CFT - indeed a suitably-defined CLE three-point function agrees with the _generalized minimal model (GMM)_ CFT structure constant [1] - so CLE multipoint functions should satisfy the BPZ equations. In the present work we obtained the boundary LCFT BPZ equation from a BPZ equation for SLE (in the sense that using Ito calculus on the martingale of Lemma 3.1 gives a second-order differential equation resembling BPZ), using mating-of-trees. One might hope to turn this argument around: could a BPZ equation for CLE be obtained from the BPZ equation for LCFT [14] using mating-of-trees?
**Organization of the paper.** In Section 2 we recall some preliminaries about LQG, LCFT, SLE and the mating-of-trees framework. In Section 3 we explain that reverse SLE\({}_{\kappa,\rho}\) is reverse SLE\({}_{\kappa}\) weighted by a GFF partition function. In Section 4 we adapt a Neumann GFF quantum zipper of [13] to obtain the \(\kappa\leq 4\) LCFT quantum zipper. In Section 5 we prove the \(\kappa\geq 8\) LCFT quantum zipper, and in Section 6 the \(\kappa\in(4,8)\) LCFT quantum zipper. Finally, we establish the LCFT boundary BPZ equations in Section 7.
**Acknowledgements.** We thank Guillaume Remy, Xin Sun and Tunan Zhu for earlier discussions on alternative approaches towards proving the boundary LCFT BPZ equations. We thank Xin Sun and Pu Yu for helpful discussions. The author was supported by the Simons Foundation as a Junior Fellow at the Simons Society of Fellows.
## 2 Preliminaries
### The Gaussian free field and Liouville quantum gravity
Let \(m\) be the uniform probability measure on the unit half-circle \(\{z\;:\;z\in\mathbb{H},|z|=1\}\). The Dirichlet inner product is defined by \(\langle f,g\rangle_{\nabla}=(2\pi)^{-1}\int_{\mathbb{H}}\nabla f\cdot\nabla g\). Consider the collection of smooth functions \(f\) with \(\langle f,f\rangle_{\nabla}<\infty\) and \(\int f(z)m(dz)=0\). Let \(H\) be its Hilbert space closure with respect to the inner product \(\langle\cdot,\cdot\rangle_{\nabla}\). Let \((f_{n})\) be an orthonormal basis of \(H\) and let \((\alpha_{n})\) be a collection of independent standard Gaussians. The summation
\[h=\sum_{n}\alpha_{n}f_{n}\]
a.s. converges in the space of distributions. Then \(h\) is the _Gaussian free field on \(\mathbb{H}\) normalized so \(\int h(z)m(dz)=0\)_ [1, Section 4.1.4].
Write \(|z|_{+}:=\max(|z|,1)\). For \(z,w\in\overline{\mathbb{H}}\) we define
\[\begin{split} G_{\rm H}(z,w)=-\log|z-w|-\log|z-\overline{w}|+2\log|z |_{+}+2\log|w|_{+},\\ G_{\rm H}(z,\infty)=\lim_{w\to\infty}G_{\rm H}(z,w)=2\log|z|_{+}. \end{split} \tag{2.1}\]
The GFF \(h\) is the centered Gaussian field with covariance structure formally given by \(\mathbb{E}[h(z)h(w)]=G_{\rm H}(z,w)\). This is formal because \(h\) is a distribution and so does not admit pointwise values, but for smooth compactly supported test functions \(f,g\) on \(\mathbb{H}\) we have \(\mathbb{E}[(h,f)(h,g)]=\iint G_{\rm H}(z,w)f(z)g(w)\,dz\,dw\).
Suppose \(\phi=h+g\) where \(g\) is a (random) function on \(\mathbb{H}\) which is continuous at all but finitely many points. Let \(\phi_{\varepsilon}(z)\) denote the average of \(\phi\) on \(\partial B_{\varepsilon}(z)\cap\mathbb{H}\). For \(\gamma\in(0,2)\), the \(\gamma\)-LQG area measure \(\mathcal{A}_{\phi}\) on \(\mathbb{H}\) can be defined by the almost sure weak limit \(\mathcal{A}_{\phi}(dz)=\lim_{\varepsilon\to 0}\varepsilon^{\gamma^{2}/2}e^{\gamma\phi_{\varepsilon}(z)}dz\) [4]. Similarly, the \(\gamma\)-LQG boundary length measure \(\mathcal{L}_{\phi}\) on \(\mathbb{R}\) can be defined by \(\mathcal{L}_{\phi}(dx):=\lim_{\varepsilon\to 0}\varepsilon^{\gamma^{2}/4}e^{\frac{\gamma}{2}\phi_{\varepsilon}(x)}dx\). For the critical parameter \(\gamma=2\) a correction is needed to make the measure nonzero; we set \(\mathcal{A}_{\phi}(dz)=\lim_{\varepsilon\to 0}(\log(1/\varepsilon)-\phi_{\varepsilon}(z))\varepsilon^{2}e^{2\phi_{\varepsilon}(z)}dz\) and \(\mathcal{L}_{\phi}(dx)=\lim_{\varepsilon\to 0}(\log(1/\varepsilon)-\frac{1}{2}\phi_{\varepsilon}(x))\varepsilon e^{\phi_{\varepsilon}(x)}dx\) [10]. See e.g. [11] for more details.
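A quick heuristic for the normalizing powers of \(\varepsilon\) (standard, and stated here only for orientation): for a centered Gaussian \(N\) with variance \(\sigma^{2}\) one has \(\mathbb{E}[e^{\lambda N}]=e^{\lambda^{2}\sigma^{2}/2}\), and for the free boundary GFF the circle average \(h_{\varepsilon}(z)\) around an interior point has variance \(\log(1/\varepsilon)+O(1)\) while the semicircle average at a boundary point has variance \(2\log(1/\varepsilon)+O(1)\). Consequently \(\mathbb{E}[\varepsilon^{\gamma^{2}/2}e^{\gamma h_{\varepsilon}(z)}]\) and \(\mathbb{E}[\varepsilon^{\gamma^{2}/4}e^{\frac{\gamma}{2}h_{\varepsilon}(x)}]\) remain bounded as \(\varepsilon\to 0\), which is exactly what makes the limits above nontrivial.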
### The Liouville field
Let \(\gamma\in(0,2]\) be the LQG parameter, and \(Q=\frac{\gamma}{2}+\frac{2}{\gamma}\). Write \(|z|_{+}:=\max(|z|,1)\).
**Definition 2.1**.: _Let \((h,\mathbf{c})\) be sampled from \(P_{\rm H}\times[e^{-Qc}\,dc]\) and let \(\phi(z)=h(z)-2Q\log|z|_{+}+\mathbf{c}\). We call \(\phi\) the Liouville field on \(\mathbb{H}\) and denote its law by \({\rm LF}_{\rm H}\)._
Define
\[C_{\gamma}^{(\alpha,z)}=\left\{\begin{array}{ll}|z|_{+}^{-2\alpha(Q-\alpha)} (2\operatorname{Im}z)^{-\alpha^{2}/2}&\text{if }z\in\mathbb{H}\\ |z|_{+}^{-2\alpha(Q-\alpha)}&\text{if }z\in\mathbb{R}\end{array}\right..\]
For \(\delta\in\mathbb{R}\), \(n\geq 0\) and \((\alpha_{j},z_{j})\in\mathbb{R}\times\overline{\mathbb{H}}\) for \(j\leq n\) such that the \(z_{j}\) are distinct, let
\[C_{\gamma}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}=\prod_{j}C_{\gamma}^{( \alpha_{j},z_{j})}e^{\alpha_{j}\delta G_{\mathbb{H}}(z_{j},\infty)}\times\prod _{1\leq j<k\leq n}e^{\alpha_{j}\alpha_{k}G_{\mathbb{H}}(z_{j},z_{k})}. \tag{2.2}\]
More generally, if the \((\alpha_{j},z_{j})\) are such that the \(z_{j}\) are not distinct, we combine all pairs \((\alpha,z)\) with the same \(z\) by summing their \(\alpha\)'s to get a collection \((\alpha_{j}^{\prime},z_{j}^{\prime})\) where the \(z_{j}^{\prime}\) are distinct, and define \(C_{\gamma}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}=C_{\gamma}^{(\alpha_{j}^{ \prime},z_{j}^{\prime})_{j},(\delta,\infty)}\).
**Definition 2.2**.: _For \(\delta\in\mathbb{R}\), \(n\geq 0\) and \((\alpha_{j},z_{j})\in\mathbb{R}\times\overline{\mathbb{H}}\cup\{\infty\}\), let \((h,\mathbf{c})\) be sampled from \(C_{\gamma}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}P_{\rm H}\times[e^{(\sum_{j} \alpha_{j}+\delta-Q)c}\,dc]\), and set_
\[\phi(z)=h(z)+\sum_{j}\alpha_{j}G_{\rm H}(z,z_{j})+(\delta-Q)G_{\rm H}(z,\infty )+\mathbf{c}.\]
_We call \(\phi\) the Liouville field with insertions \((\alpha_{j},z_{j})_{j}\), \((\delta,\infty)\) and we write \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}\) for its law._
Note that \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}\) depends implicitly on the parameter \(\gamma\). When \(\delta=0\) there is no insertion at \(\infty\), so we write \(C_{\gamma}^{(\alpha_{j},z_{j})_{j}}\) and \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j}}\) rather than \(C_{\gamma}^{(\alpha_{j},z_{j})_{j},(0,\infty)}\) and \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(0,\infty)}\).
**Remark 2.3**.: _Suppose \(\sum_{j}\alpha_{j}+\delta=Q\), then Definition 2.2 has the following simplification. Sample \((h,{\bf c})\sim C_{\gamma}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}P_{\rm H} \times dc\), and let_
\[\phi(z)=h(z)+\sum_{j}\alpha_{j}G(z,z_{j})+{\bf c} \tag{2.3}\]
_where \(G(z,w)=-\log|z-w|-\log|z-\overline{w}|\). Then the law of \(\phi\) is \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}\). Moreover, writing \(I=\{i\,:\,z_{i}\in{\rm I\!H}\}\), we have the simplification \(C_{\gamma}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}={\cal Z}((\alpha_{j},z_{j} )_{j})\) with \({\cal Z}\) defined in (1.1)._
**Remark 2.4**.: _Let \(m,n\geq 0\), let \((\alpha_{j},z_{j})\in{\rm R}\times{\rm I\!H}\) for \(j\leq m\), let \((\beta_{k},x_{k})\in{\rm R}\times{\rm I\!R}\) for \(k\leq n\), and let \(\delta\in{\rm R}\). The measure that we call \({\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\frac{1}{2}\beta_{k},x_{k})_{k},( \frac{1}{2}\delta,\infty)}\) is instead called \(\ {}^{*}{\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\beta_{k},x_{k})_{k},( \delta,\infty)}\), in [1]; there, the boundary insertions are described by their log-singularities (near \(x_{k}\) the field blows up as \(-\beta_{k}\log|\cdot-x_{k}|\)), whereas our notation instead uses the Green function coefficient (near \(x_{k}\) the field blows up like \(\frac{\beta_{k}}{2}G_{\rm H}(\cdot,x_{k})\)). For us, our notation is more convenient since the Green function coefficient is invariant when a boundary point is zipped into the bulk._
Finally, we will need the following rooted measure statement: sampling a point from the quantum area measure of a Liouville field is equivalent to adding a \(\gamma\)-insertion to the field.
**Proposition 2.5**.: _Suppose \(\gamma\in(0,2)\). Let \(n\geq 0\) and \((\alpha_{j},z_{j})\in{\rm R}\times\overline{\rm I\!H}\) for \(j\leq n\), and let \(\delta\in{\rm I\!R}\). Then_
\[{\cal A}_{\phi}(dz)\,{\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\delta,\infty) }(d\phi)={\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\gamma,z),(\delta,\infty)} (d\phi)\,dz.\]
Formally, Proposition 2.5 holds because "\({\cal A}_{\phi}(dz)=e^{\gamma\phi(z)}dz\)" and "\(e^{\gamma\phi(z)}{\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\delta,\infty)}(d \phi)={\rm LF}_{\rm H}^{(\alpha_{j},z_{j})_{j},(\gamma,z),(\delta,\infty)}(d \phi)\)". A rigorous proof of the analogous statement for the sphere is given in [1, Lemma 2.31]; the proof of Proposition 2.5 is identical so we omit it, but see [BP, Section 2.4] for a good exposition of rooted GMC.
### Reverse Schramm-Loewner evolution
In this section we recall some basic properties of SLE, see [16] for a more detailed introduction. Let \(\kappa>0\). Let \(\rho_{1},\ldots,\rho_{n}\in{\rm R}\) and let \(z_{1},\ldots,z_{n}\in\overline{\rm I\!H}\) be distinct points. With \((B_{t})_{t\geq 0}\) a standard Brownian motion, the driving function \((W_{t})_{t\geq 0}\) for reverse SLE\({}_{\kappa,\rho}\) is defined by the stochastic differential equations
\[\begin{split} W_{0}&=0,\quad dW_{t}=\sum_{j=1}^{n}{ \rm Re}\!\left(\frac{-\rho_{j}}{Z_{t}^{j}-W_{t}}\right)dt+\sqrt{\kappa}\,dB_{t },\\ Z_{0}^{j}&=z_{j},\quad dZ_{t}^{j}=-\frac{2}{Z_{t}^ {j}-W_{t}}\,dt\quad\mbox{for $j=1,\ldots,n$.}\end{split} \tag{2.4}\]
For each \(z\in{\rm I\!H}\) and \(s\geq 0\), let \((g_{s,t}(z))_{t\geq s}\) be the solution to \(g_{s,s}(z)=z\), \(\frac{d}{dt}g_{s,t}(z)=-\frac{2}{g_{s,t}(z)-W_{t}}\). This defines a family of conformal maps \(g_{s,t}\); we also write \(g_{t}:=g_{0,t}\). For each \(t>0\) we can define a curve \(\eta_{t}:[0,t]\to\overline{\rm I\!H}\) by \(\eta_{t}(u):=\lim_{z\to W_{t-u}}g_{t-u,t}(z)\). We call the family of curves \((\eta_{t})_{t\geq 0}\)_reverse_ SLE\({}_{\kappa}\)_with force points at \(z_{j}\) of weight \(\rho_{j}\). Note that \(g_{t}\) is the unique conformal map from \({\rm I\!H}\) to the unbounded connected component of \({\rm I\!H}\backslash\eta_{t}\) such that \(\lim_{z\to\infty}g_{t}(z)-z=0\), and that reverse SLE is parametrized by half-plane capacity in the sense that \({\rm hcap}(\eta_{t}):=\lim_{z\to\infty}z(g_{t}(z)-z)\) equals \(2t\). The curve \(\eta_{t}\) is simple when \(\kappa\leq 4\), self-intersecting (but not self-crossing) when \(\kappa\in(4,8)\), and space-filling when \(\kappa\geq 8\).
In the case of no force points (\(n=0\)), this process is well-defined for all time. For \(n\geq 0\), by absolute continuity with respect to the \(n=0\) case, the SDE (2.4) can be run until the time \(\tau\) that \(g_{t}(z_{j})=W_{t}\) for some \(j\). Let \(J_{\tau}=\{j\;:\;g_{\tau}(z_{j})=W_{\tau}\}\). If \(\sum_{j\in J_{\tau}}\rho_{j}<\frac{\kappa}{2}+4\), then there is a unique way of continuing the process such that \(g_{t}(z_{j})\in\mathbb{H}\) for all \(t>\tau\)[1, Proposition 3.8]. Applying this continuation procedure to each time a force point hits \(W_{t}\), we see that reverse \(\mathrm{SLE}_{\kappa,\rho}\) is well-defined until the _continuation threshold_: the first time \(\tau\leq\infty\) that the sum of the weights of force points hitting \(W_{\tau}\) is at least \(\frac{\kappa}{2}+4\).
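To fix ideas, here is a rough numerical sketch of the flow (2.4) by a naive Euler scheme (the function name, step size, and discretization are ours, purely for illustration; it makes no attempt to handle force points colliding with \(W_{t}\) or the continuation procedure described above):

```python
import numpy as np

def reverse_sle(kappa, z, T=1.0, n=100_000, force=(), seed=0):
    """Naive Euler scheme for the reverse Loewner flow (2.4); returns (W_T, g_T(z)).

    z     : points of the closed upper half-plane (away from 0) to flow under g_t.
    force : iterable of (rho_j, z_j) pairs; empty gives plain reverse SLE_kappa.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    W = 0.0
    g = np.asarray(z, dtype=complex).copy()
    rho = np.array([r for r, _ in force], dtype=float)
    Z = np.array([p for _, p in force], dtype=complex)
    for _ in range(n):
        drift = np.sum(np.real(-rho / (Z - W))) if Z.size else 0.0
        W += drift * dt + np.sqrt(kappa * dt) * rng.standard_normal()
        g += -2.0 * dt / (g - W)        # d g_t(z) = -2/(g_t(z) - W_t) dt
        if Z.size:
            Z += -2.0 * dt / (Z - W)    # d Z_t^j = -2/(Z_t^j - W_t) dt
    return W, g
```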
### Quantum wedges and quantum cells
In this section we define the _quantum wedges_ introduced in [16, 17], and the _quantum cell_ which arises from partially mating correlated continuum random trees.
We first define quantum wedges. For symmetry reasons it is easier to work in the strip \(\mathcal{S}:=\mathbb{R}\times(0,\pi)\) than in \(\mathbb{H}\). Let \(m\) be the uniform probability measure on the segment \(\{0\}\times(0,\pi)\). As in Section 2.1, consider the space of smooth functions \(f\) with finite Dirichlet energy and \(\int f(z)m(dz)=0\), and let \(H\) be the Hilbert space closure with respect to the Dirichlet inner product. As before, let \(h=\sum_{j}\alpha_{j}f_{j}\) where the \(\alpha_{j}\) are i.i.d. standard Gaussians and \(\{f_{j}\}\) is an orthonormal basis for \(H\). We call \(h\) the _GFF on \(\mathcal{S}\) normalized so \(\int h(z)m(dz)=0\)_, and denote its law by \(P_{\mathcal{S}}\).
We can decompose \(H=H_{\mathrm{av}}\oplus H_{\mathrm{lat}}\) where \(H_{\mathrm{av}}\) (resp. \(H_{\mathrm{lat}}\)) is the subspace of functions which are constant (resp. have mean zero) on \(\{t\}\times(0,\pi)\) for all \(t\in\mathbb{R}\); this gives a decomposition \(h=h_{\mathrm{av}}+h_{\mathrm{lat}}\) of \(h\) into independent components.
We now define thick quantum wedges. These are \(\gamma\)-LQG surfaces with the half-plane topology, and have a _weight_ parameter \(W\geq\frac{\gamma^{2}}{2}\).
**Definition 2.6** (Thick quantum wedge).: _For \(W\geq\frac{\gamma^{2}}{2}\), let \(\alpha=\frac{1}{2}(Q+\frac{\gamma}{2}-\frac{W}{\gamma})\). Let_
\[Y_{t}=\left\{\begin{array}{ll}B_{2t}+(Q-2\alpha)t&\text{if }t\geq 0\\ \widetilde{B}_{-2t}+(Q-2\alpha)t&\text{if }t<0\end{array}\right.\]
_where \((B_{s})_{s\geq 0}\) is standard Brownian motion, and \((\widetilde{B}_{s})_{s\geq 0}\) is standard Brownian motion conditioned on \(\widetilde{B}_{2s}-(Q-2\alpha)s<0\) for all \(s>0\). Let \(\phi_{\mathrm{av}}(z)=Y_{\mathrm{Re}\,z}\), let \(\phi_{\mathrm{lat}}\) be the projection of an independent GFF onto \(H_{\mathrm{lat}}\), and let \(\phi_{0}=\phi_{\mathrm{av}}+\phi_{\mathrm{lat}}\). We call the quantum surface \((\mathcal{S},\phi_{0},-\infty,+\infty)/{\sim_{\gamma}}\) a weight \(W\) quantum wedge and write \(\mathcal{M}^{\mathrm{wed}}(W)\) for its law._
Thin quantum wedges are an ordered collection of two-pointed quantum surfaces.
**Definition 2.7** (Thin quantum wedge).: _For \(W\in(0,\frac{\gamma^{2}}{2})\), sample a Bessel process of dimension \(\delta=1+\frac{2W}{\gamma^{2}}\). It decomposes as a countable ordered collection of Bessel excursions; for each excursion
\(e\) we construct a two-pointed quantum surface \(\mathcal{B}_{e}=(\mathcal{S},\phi^{e},-\infty,\infty)\) as follows. Let \((Y^{e}_{t})_{t\in\mathds{R}}\) be any time-reparametrization of \(\frac{2}{\gamma}\log e\) which has quadratic variation \(2dt\). Let \(\phi^{e}_{\rm av}(z)=Y^{e}_{{\rm Re}\,z}\), let \(\phi^{e}_{\rm lat}\) be the projection of an independent GFF onto \(H_{\rm lat}\), and set \(\phi^{e}=\phi^{e}_{\rm av}+\phi^{e}_{\rm lat}\). Then the weight \(W\) quantum wedge is the ordered collection \((\mathcal{B}_{e})\)._
Now, to avoid topological complications, assume \(\gamma\in(0,\sqrt{2}]\) and \(\kappa=\frac{16}{\gamma^{2}}\). Let \((\mathds{H},h,0,\infty)\) be an embedding of a sample from \(\mathcal{M}^{\rm wed}(2-\frac{\gamma^{2}}{2})\). Let \(\eta\) be an independent SLE\({}_{\kappa}\) from \(0\) to \(\infty\); parametrize \(\eta\) by quantum area. For each \(a>0\) let \(D_{a}=\eta([0,a])\) and let \(p=\eta(0)\) and \(q=\eta(a)\). Let \(x_{L}\) (resp. \(x_{R}\)) be the last point on the left (resp. right) boundary arc of \(D_{a}\) hit by \(\eta\).
**Definition 2.8**.: _We call the SLE\({}_{\kappa}\)-decorated quantum surface \(\mathcal{C}_{a}=(D_{a},h,\eta|_{[0,a]},p,q,x_{L},x_{R})/{\sim_{\gamma}}\) an area \(a\) quantum cell. We denote its law by \(P_{a}\)._
Recall that there is a way of mating correlated continuum random trees to obtain LQG decorated by SLE, see Proposition 1.7. By its definition, the area \(a\) quantum cell is the decorated quantum surface arising from mating a pair of correlated Brownian motions \((X_{t},Y_{t})_{t\geq 0}\sim{\rm CRT}_{\kappa}\), stopping when the quantum area is \(a\).
**Remark 2.9**.: _In fact, \(\mathcal{C}_{a}\) is measurable with respect to \((D_{a},h,\eta|_{[0,a]})/{\sim_{\gamma}}\). First, we can recover two of the marked points \(p=\eta(0)\) and \(q=\eta(a)\). Let \(\psi:D_{a}\to\mathbb{H}\) be a conformal map with \(\psi(p)=0\) and \(\psi(q)=\infty\). From the imaginary geometry construction of space-filling SLE\({}_{\kappa}\)[11], modulo time-parametrization the law of \(\psi\circ\eta\) given \(\psi(x_{L})\) and \(\psi(x_{R})\) is SLE\({}_{\kappa,\rho}\) with force points of size \(\kappa/2-4\) at \(\psi(x_{L})\) and \(\psi(x_{R})\). Consequently, \(\psi(x_{L})\) and \(\psi(x_{R})\) are measurable with respect to \(\psi\circ\eta\), so we can recover \(x_{L}\) and \(x_{R}\) also. Thus, we will often omit the marked points of \(\mathcal{C}_{a}\) for notational simplicity._
Since Brownian motion is reversible, it is unsurprising that the area \(a\) quantum cell is reversible:
**Lemma 2.10**.: \(\mathcal{C}_{a}\) _is symmetric in law in the following sense. If \(\widetilde{\eta}\) is the time-reversal of \(\eta|_{[0,a]}\), then \((D_{a},h,\eta|_{[0,a]},p,q,x_{L},x_{R})/{\sim_{\gamma}}\) and \((D_{a},h,\widetilde{\eta},q,p,x_{R},x_{L})/{\sim_{\gamma}}\) have the same law._
Proof.: [14, Theorem 1.9] states that if a \(\gamma\)_-quantum cone_\((\mathbb{C},h,0,\infty)/{\sim_{\gamma}}\) is decorated by an independent SLE\({}_{\kappa}\) curve \(\eta\) from \(\infty\) to \(\infty\), then parametrizing \(\eta:\mathds{R}\to\mathbb{C}\) by quantum area and fixing \(\eta(0)=0\), the quantum surface \((\eta((0,\infty)),h,0,\infty)/{\sim_{\gamma}}\) has law \(\mathcal{M}^{\rm wed}(2-\frac{\gamma^{2}}{2})\). Thus \((\eta([0,a]),h,\eta|_{[0,a]})/{\sim_{\gamma}}\) has law \(P_{a}\). Let \(z=\eta(a)\), then [14, Theorem 1.9] states that \((\mathbb{C},h(\cdot+z),\eta(\cdot+a)-z)/{\sim_{\gamma}}\) also has the law of a \(\gamma\)-quantum cone decorated by an independent SLE\({}_{\kappa}\) from \(\infty\) to \(\infty\), and by the reversibility of this SLE\({}_{\kappa}\) curve, the decorated quantum surface \((\mathbb{C},h(\cdot+z),\eta(a-\cdot)-z)/{\sim_{\gamma}}\) also has this law. Thus, \((\eta([0,a]),h,\tilde{\eta})\) also has law \(P_{a}\), where \(\tilde{\eta}:[0,a]\to\mathbb{C}\) is given by \(\tilde{\eta}(t)=\eta(a-t)\).
## 3 Reverse SLE\({}_{\kappa,\rho}\) and the GFF partition function
Let \(n\geq 0\) and let \((\alpha_{j},z_{j})\in\mathds{R}\times\overline{\mathds{H}}\) for \(j\leq n\), where the \(z_{j}\) are distinct. Let \(w\in\mathds{R}\). Let \(\kappa>0\), \(\gamma=\min(\sqrt{\kappa},\frac{4}{\sqrt{\kappa}})\). The GFF partition function \(\mathcal{Z}\) defined in (1.1), times a factor arising from uniformizing in \(\mathds{H}\), is a martingale observable for reverse SLE.
**Lemma 3.1**.: _Let \(\kappa>0\). Let \(n\geq 0\), and let \((\alpha_{j},z_{j})\in\mathds{R}\times\overline{\mathds{H}}\) for \(j\leq n\). Sample reverse SLE\({}_{\kappa}\) with no force points (as in (2.4)) and define_
\[M_{t}=\prod_{i\in I}|g^{\prime}_{t}(z_{i})|^{2\Delta_{\alpha_{i}}}\prod_{b\in B }|g^{\prime}_{t}(z_{b})|^{\Delta_{2\alpha_{b}}}\mathcal{Z}\big{(}(-\frac{1}{ \sqrt{\kappa}},W_{t}),(\alpha_{j},g_{t}(z_{j}))_{j}\big{)}\]
_where \(\Delta_{\alpha}:=\frac{\alpha}{2}(Q-\frac{\alpha}{2})\), \(Q=\frac{\gamma}{2}+\frac{2}{\gamma}\), and \(I=\{j\leq n:z_{j}\in\mathds{H}\}\), \(B=\{j\leq n:z_{j}\in\mathds{R}\}\) index the bulk and boundary insertions respectively. Let \(\tau\) be a stopping time such that almost surely \(g_{t}(z_{j})\neq W_{t}\) for all \(t\leq\tau\). Then \(M_{t}\) is a martingale up until time \(\tau\)._
Proof.: To lighten notation, we assume there are only bulk insertions (i.e. \(z_{j}\in\mathbb{H}\) for all \(j\)); the general case follows from the same arguments. We write \(Z_{j}=g_{t}(z_{j})\) and write \(W=W_{t}\), leaving the \(t\)-dependence implicit to simplify notation. Our goal is to compute \(d\log M_{t}\). We have
\[\log M_{t}=\sum_{j}\Big(2\Delta_{\alpha_{j}}\log|g_{t}^{\prime}(z_{j})|-\frac{\alpha_{j}^{2}}{2}\log(2\operatorname{Im}Z_{j})+\frac{2\alpha_{j}}{\sqrt{\kappa}}\log|Z_{j}-W|\Big)+\sum_{1\leq j<k\leq n}\alpha_{j}\alpha_{k}G(Z_{j},Z_{k}). \tag{3.1}\]
By the definition of the reverse Loewner flow and \(Z_{j}=g_{t}(z_{j})\), we have
\[dW=\sqrt{\kappa}dB_{t},\;\;d\langle W\rangle=\kappa dt,\;\;dZ_{j}=-\frac{2dt}{ Z_{j}-W},\;\;d\overline{Z}_{j}=-\frac{2dt}{\overline{Z}_{j}-W},\;\;d\log|g_{t}^{ \prime}(z_{j})|=\operatorname{Re}\frac{2dt}{(Z_{j}-W)^{2}}.\]
The last identity holds since \(d(g_{t}(z))=-\frac{2}{g_{t}(z)-W}dt\) implies \(d(g_{t}^{\prime}(z))=\frac{2}{(g_{t}(z)-W)^{2}}g_{t}^{\prime}(z)dt\). Using these we have \(d\log(2\operatorname{Im}Z_{j})=d\log(Z_{j}-\overline{Z}_{j})=\frac{1}{Z_{j}-\overline{Z}_{j}}dZ_{j}+\frac{1}{\overline{Z}_{j}-Z_{j}}d\overline{Z}_{j}=\frac{2}{|Z_{j}-W|^{2}}dt\) and, since \(\log|Z_{j}-W|=\operatorname{Re}\log(Z_{j}-W)\), we have \(d\log|Z_{j}-W|=\operatorname{Re}\frac{\sqrt{\kappa}}{W-Z_{j}}dB_{t}-\operatorname{Re}\frac{\sqrt{\kappa}Q}{(W-Z_{j})^{2}}dt\). Thus
\[d\Bigg{(}2\Delta_{\alpha_{j}}\log|g_{t}^{\prime}(z_{j})|-\frac{ \alpha_{j}^{2}}{2}\log(2\operatorname{Im}Z_{j})+\frac{2\alpha_{j}}{\sqrt{ \kappa}}\log|Z_{j}-W|\Bigg{)} \tag{3.2}\] \[=\operatorname{Re}\frac{2\alpha_{j}}{W-Z_{j}}dB_{t}-(\operatorname {Re}\frac{1}{(Z_{j}-W)^{2}}+\frac{1}{|Z_{j}-W|^{2}})\alpha_{j}^{2}dt= \operatorname{Re}\frac{2\alpha_{j}}{W-Z_{j}}dB_{t}-2\alpha_{j}^{2}\bigg{(} \operatorname{Re}\frac{1}{Z_{j}-W}\bigg{)}^{2}dt.\]
In the last equality, we used the identity \(\operatorname{Re}(z^{2})+|z|^{2}=2(\operatorname{Re}z)^{2}\). Next, since \(G(Z_{j},z)=-\operatorname{Re}(\log(Z_{j}-z)+\log(Z_{j}-\overline{z}))\), we have \(dG(Z_{j},z)=\operatorname{Re}\Bigl{(}(\frac{1}{Z_{j}-z}+\frac{1}{Z_{j}- \overline{z}})\frac{2}{Z_{j}-W}\Bigr{)}dt\), so
\[dG(Z_{j},Z_{k})=\operatorname{Re}\biggl{(}(\frac{1}{Z_{j}-Z_{k} }+\frac{1}{Z_{j}-\overline{Z}_{k}})\frac{2}{Z_{j}-W}+(\frac{1}{Z_{k}-Z_{j}}+ \frac{1}{Z_{k}-\overline{Z}_{j}})\frac{2}{Z_{k}-W}\bigg{)}dt\\ =\operatorname{Re}\biggl{(}-\frac{2}{(Z_{j}-W)(Z_{k}-W)}-\frac{2} {(Z_{j}-W)(\overline{Z}_{k}-W)}\biggr{)}dt=-4\operatorname{Re}\biggl{(}\frac{ 1}{Z_{k}-W}\biggr{)}\operatorname{Re}\biggl{(}\frac{1}{Z_{j}-W}\biggr{)}dt.\]
Combining this with (3.2), we can compute \(d\log M_{t}\) from (3.1):
\[d\log M_{t}=(\sum_{j}\operatorname{Re}\frac{2\alpha_{j}}{W-Z_{j}})dB_{t}-\frac {1}{2}(\sum_{j}\operatorname{Re}\frac{2\alpha_{j}}{W-Z_{j}})^{2}dt.\]
Thus, \(M_{t}\) is a martingale, completing the proof for the case of only bulk insertions. The general case is essentially the same: we get the same expression for \(d\log M_{t}\) so \(M_{t}\) is a martingale.
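As a sanity check, the martingale property of Lemma 3.1 can be tested by simulation. The sketch below discretizes the reverse Loewner flow for a single bulk insertion; the values of \(\kappa\), \(\alpha\), \(z_{0}\) and the time horizon are arbitrary test choices, and the Euler scheme introduces a small bias, so one only expects \(\mathbb{E}[M_{t}]\approx M_{0}\) up to Monte Carlo and discretization error.

```python
import numpy as np

kappa = 3.0                              # test value, any kappa > 0
gamma = min(np.sqrt(kappa), 4/np.sqrt(kappa))
Q = gamma/2 + 2/gamma
alpha, z0 = 0.3, 1.0 + 2.0j              # one bulk insertion (test values)
Delta = alpha/2*(Q - alpha/2)

def observable(W, Z, log_abs_gprime):
    # single-insertion version of log M_t, cf. (3.1)
    logM = 2*Delta*log_abs_gprime \
        - (alpha**2/2)*np.log(2*Z.imag) \
        + (2*alpha/np.sqrt(kappa))*np.log(abs(Z - W))
    return np.exp(logM)

T, n_steps, n_samples = 0.2, 500, 3000
dt = T/n_steps
rng = np.random.default_rng(1)
samples = []
for _ in range(n_samples):
    W, Z, log_gp = 0.0, z0, 0.0
    for _ in range(n_steps):
        # reverse Loewner flow: dZ = -2 dt/(Z - W), d log|g'| = Re(2 dt/(Z - W)^2)
        log_gp += (2/(Z - W)**2).real*dt
        Z = Z - 2*dt/(Z - W)
        W = W + np.sqrt(kappa*dt)*rng.standard_normal()
    samples.append(observable(W, Z, log_gp))

print(np.mean(samples), observable(0.0, z0, 0.0))  # should roughly agree
```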
**Proposition 3.2**.: _In the setting of Lemma 3.1, the law of reverse \(\operatorname{SLE}_{\kappa}\) run until \(\tau\) and weighted by \(\frac{M_{\tau}}{M_{0}}\) is precisely that of \(\operatorname{SLE}_{\kappa,\rho}\) run until the stopping time \(\tau\), where at each \(z_{j}\) there is a force point of weight \(\rho_{j}=2\sqrt{\kappa}\alpha_{j}\)._
Proof.: We use the notation \(\mathcal{Z}_{\kappa}^{\alpha}(w,(z_{j})_{j}):=\mathcal{Z}((-\frac{1}{\sqrt{ \kappa}},w),(\alpha_{j},z_{j})_{j})\). By Girsanov's theorem, under the reweighted law, the driving function satisfies the SDE
\[dW_{t}=\sqrt{\kappa}\,dB_{t}+\kappa\partial_{w}\log\mathcal{Z}_{\kappa}^{ \alpha}(W_{t},(g_{t}(z_{j}))_{j})dt.\]
We have \(\partial_{w}\log\mathcal{Z}_{\kappa}^{\alpha}(w,(z_{j})_{j})=\sum_{j}\frac{2}{\sqrt{\kappa}}\alpha_{j}\partial_{w}\operatorname{Re}\log(z_{j}-w)=\sum_{j}\operatorname{Re}\Bigl(\frac{-2\alpha_{j}/\sqrt{\kappa}}{z_{j}-w}\Bigr)\), so the SDE agrees with (2.4) as desired.
The observation that reverse SLE with force points is intimately related to the Coulomb gas partition function is not original. For instance [14, Section 1.4] and the reverse SLE variant of [14, Section 1.2] give Proposition 3.2. Similarly, [13, 15] note that multiple reverse SLE is driven by the Coulomb gas partition function \(\mathcal{Z}\) where the boundary weights all satisfy \(\alpha_{j}=-\frac{1}{\gamma}\). This perspective arose first in the setting of forward SLE, see e.g. [1, 1, 1].
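Proposition 3.2 can likewise be probed by simulation: weighting plain reverse \(\mathrm{SLE}_{\kappa}\) paths by \(M_{\tau}/M_{0}\) should reproduce statistics of the tilted driving process. In the sketch below the drift of the tilted SDE is taken directly from the Girsanov computation in the proof above (the display (2.4) itself is not reproduced in this section), there is a single bulk force point, and all parameter values are arbitrary test choices.

```python
import numpy as np

kappa, alpha, z0 = 3.0, 0.3, 1.0 + 2.0j      # test values; rho = 2*sqrt(kappa)*alpha
gamma = min(np.sqrt(kappa), 4/np.sqrt(kappa))
Q = gamma/2 + 2/gamma
Delta = alpha/2*(Q - alpha/2)
T, n_steps, n_paths = 0.2, 500, 3000
dt = T/n_steps
rng = np.random.default_rng(2)

def logM(W, Z, log_gp):
    # log of the observable of Lemma 3.1 for one bulk insertion
    return 2*Delta*log_gp - (alpha**2/2)*np.log(2*Z.imag) \
        + (2*alpha/np.sqrt(kappa))*np.log(abs(Z - W))

def run(tilted):
    # Euler scheme for reverse SLE; if tilted, add the Girsanov drift kappa * d_w log Z
    W, Z, log_gp = 0.0, z0, 0.0
    for _ in range(n_steps):
        drift = (-2*np.sqrt(kappa)*alpha/(Z - W)).real if tilted else 0.0
        dW = drift*dt + np.sqrt(kappa*dt)*rng.standard_normal()
        log_gp += (2/(Z - W)**2).real*dt
        Z = Z - 2*dt/(Z - W)
        W = W + dW
    return W, Z, log_gp

plain = [run(False) for _ in range(n_paths)]
weights = np.array([np.exp(logM(W, Z, lg) - logM(0.0, z0, 0.0)) for (W, Z, lg) in plain])
reweighted = np.mean(weights*np.array([W for (W, _, _) in plain]))
direct = np.mean([run(True)[0] for _ in range(n_paths)])
print(np.mean(weights), reweighted, direct)   # expect roughly 1, and two similar values
```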
## 4 Sheffield's coupling and the \(\kappa\leq 4\) LCFT zipper
In this section we prove Theorems 1.1 and 1.2. These results are fairly straightforward consequences of Sheffield's coupling of LQG and SLE.
A version of the following was first proved in [11], and the full version where force points may be zipped into the bulk was proved in [12]. Recall \(G(z,w)=-\log|z-w|-\log|z-\overline{w}|\).
**Proposition 4.1** ([12, Theorem 5.1]).: _Let \(\kappa>0\), \(\gamma=\min(\sqrt{\kappa},\frac{4}{\sqrt{\kappa}})\), let \(n\geq 0\) and \((\rho_{j},z_{j})\in\mathbbm{R}\times\overline{\mathbb{H}}\) for \(j\leq n\). Suppose that \((W_{t})_{t\geq 0}\) is the driving function for reverse \(\mathrm{SLE}_{\kappa,\rho}\) with force points at \(z_{j}\) of size \(\rho_{j}\), and let \(g_{t}\) be the Loewner map. For each \(t\geq 0\) let_
\[\mathfrak{h}_{t}(z)=-\frac{1}{\sqrt{\kappa}}G(g_{t}(z),W_{t})+\frac{1}{2\sqrt{\kappa}}\sum_{j=1}^{n}\rho_{j}G(g_{t}(z),g_{t}(z_{j}))+Q\log|g_{t}^{\prime}(z)|.\]
_Let \(h\) be an independent Neumann GFF modulo additive constant on \(\mathbbm{H}\). Suppose \(\tau\) is an a.s. finite stopping time such that \(\tau\) occurs before or at the continuation threshold for \(W_{t}\). Then, as distributions modulo additive constant,_
\[\mathfrak{h}_{0}+h\stackrel{{ d}}{{=}}\mathfrak{h}_{\tau}+h \circ g_{\tau}. \tag{4.1}\]
Note that if there is a force point at \(0\) with weight \(\rho\geq\frac{\kappa}{2}+4\) then the reverse \(\mathrm{SLE}_{\kappa}\) immediately hits the continuation threshold, i.e. \(\tau=0\).
As we see next, adding a constant chosen from Lebesgue measure to the field essentially gives Theorem 1.2 when \(\delta=Q-\sum_{j}\alpha_{j}+\frac{1}{\sqrt{\kappa}}\).
**Proposition 4.2**.: _Let \(\kappa>0\) and \(\gamma=\min(\sqrt{\kappa},\frac{4}{\sqrt{\kappa}})\). Let \(n\geq 0\) and let \((\alpha_{j},z_{j})\in\mathbbm{R}\times\overline{\mathbb{H}}\) for \(j\leq n\). Let \(\delta=Q-\sum_{j}\alpha_{j}+\frac{1}{\sqrt{\kappa}}\). Let \((\eta_{t})\) be reverse \(\mathrm{SLE}_{\kappa,\rho}\) with force points at \(z_{j}\) of size \(\rho_{j}=2\sqrt{\kappa}\alpha_{j}\), run until a stopping time \(\tau\) which a.s. occurs before or at the continuation threshold. Let \(g_{\tau}\) be the conformal map from \(\mathbbm{H}\) to the unbounded connected component of \(\mathbbm{H}\backslash\eta_{\tau}\) such that \(\lim_{z\to\infty}g_{\tau}(z)-z=0\). Then for any nonnegative measurable function \(F\),_
\[\frac{\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j })_{j},(\delta,\infty)}[F(\phi)]}{\mathcal{Z}((-\frac{1}{\sqrt{\kappa}},0),( \alpha_{j},z_{j})_{j})}=\mathbb{E}\Bigg{[}\frac{\mathrm{LF}_{\mathrm{H}}^{(- \frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j},g_{\tau}(z_{j}))_{j},(\delta, \infty)}}{\mathcal{Z}((-\frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j},g_{ \tau}(z_{j}))_{j})}\Bigg{]}. \tag{4.2}\]
Proof.: We will use the Liouville field description from Remark 2.3. Write \(\mathfrak{f}_{\tau}(z)=-\frac{1}{\sqrt{\kappa}}G(z,W_{\tau})+\sum_{j}\alpha_{j}G(z,g_{\tau}(z_{j}))\). Writing \(\mathbbm{E}\) to denote expectation with respect to \(\eta_{\tau}\) and independently \(h\sim P_{\mathrm{H}}\), the right hand side of (4.2) equals
\[\mathbbm{E}[\int_{\mathds{R}}F(g_{\tau}^{-1}\bullet_{\gamma}(h+\mathfrak{f}_{ \tau}+c))\,dc]=\mathbbm{E}[\int_{\mathds{R}}F(h\circ g_{\tau}+\mathfrak{h}_{ \tau}+c)\,dc].\]
By Proposition 4.1 this equals \(\mathbbm{E}[\int_{\mathds{R}}F(h+\mathfrak{h}_{0}+c)\,dc]\), which is the left hand side of (4.2).
To remove the constraint that \(\delta=Q-\sum_{j}\alpha_{j}+\frac{1}{\sqrt{\kappa}}\), in Proposition 4.5 we will weight by the average of the field on very large semicircles to change the value of \(\delta\), see Lemma 4.4. As an intermediate step we need the following collection of identities. Recall \(G_{\mathrm{H}}\) from (2.1) and \(G(z,w)=-\log|z-w|-\log|z-\overline{w}|\).
**Lemma 4.3**.: _Suppose \(K\subset\overline{\mathbb{H}}\) is compact, \(\mathbb{H}\backslash K\) is simply connected and there is a conformal map \(g:\mathbb{H}\to\mathbb{H}\backslash K\) such that \(\lim_{|z|\to\infty}g(z)-z=0\). Let \(\varepsilon>0\) and let \(\theta_{\varepsilon,\infty}\) be the uniform probability measure on \(\{z\in\mathbb{H}:|z|=1/\varepsilon\}\). Suppose \(g(-1/\varepsilon),g(1/\varepsilon)\in\mathbb{R}\) and \(u\in g(B_{1/\varepsilon}(0)\cap\mathbb{H})\). Then \(\int G(u,v)(g_{*}\theta_{\varepsilon,\infty})(dv)=2\log\varepsilon\). Moreover, \(\int\log|g^{\prime}(v)|\,\theta_{\varepsilon,\infty}(dv)=0\). Finally, if \(|g(z)|>1\) for \(|z|=1/\varepsilon\), then \(\int G_{\mathrm{H}}(u,v)(g_{*}\theta_{\varepsilon,\infty})(dv)=G_{\mathrm{H}}(u,\infty)\) and \(\int G_{\mathrm{H}}(\infty,v)(g_{*}\theta_{\varepsilon,\infty})(dv)=-2\log\varepsilon\)._
Proof.: Extend \(g\) by Schwarz reflection to a map with image \(\mathbb{C}\backslash(K\cup\overline{K})\). For \(v\in B_{\varepsilon}(0)\backslash\{0\}\) define \(f(v)=(u-g(\frac{1}{v}))^{-2}v^{-2}\); this is a holomorphic map which extends continuously to \(f(0)=1\), hence the extended map \(f:B_{\varepsilon}(0)\to\mathbb{C}\) is holomorphic. Let \(\theta_{\varepsilon,0}\) be the uniform probability measure on the full circle \(\{|v|=\varepsilon\}\). Since \(\log|f|\) is harmonic we have \((\log|f|,\theta_{\varepsilon,0})=\log|f(0)|=0\), and rephrasing this gives \(\int-2\log|u-g(\frac{1}{v})|\theta_{\varepsilon,0}(dv)=2\log\varepsilon\). Now the change of variables \(v^{\prime}=\frac{1}{v}\), together with the reflection symmetry \(g(\overline{z})=\overline{g(z)}\) which converts the full-circle average into the semicircle average of \(G(u,g(\cdot))\), gives the assertion. The second assertion is proved similarly. The third assertion follows from \(G_{\mathrm{H}}(u,v)=G(u,v)-G(0,v)+G_{\mathrm{H}}(u,\infty)\) and \(G_{\mathrm{H}}(\infty,v)=-G(0,v)\) for \(|v|>1\).
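Since Lemma 4.3 is invoked repeatedly below, here is a small numerical illustration of its first identity. The slit map \(g(z)=z\sqrt{1-4t/z^{2}}=\sqrt{z^{2}-4t}\) sends \(\mathbb{H}\) onto \(\mathbb{H}\) minus a vertical slit and satisfies \(\lim_{|z|\to\infty}g(z)-z=0\); this particular choice of \(g\), the value of \(t\), and the quadrature parameters are only illustrative assumptions.

```python
import numpy as np

t = 0.7                                    # slit parameter (test value)
def g(z):                                  # maps H onto H minus the slit [0, 2i*sqrt(t)]
    z = np.asarray(z, dtype=complex)
    return z*np.sqrt(1 - 4*t/z**2)

def G(u, v):                               # G(u,v) = -log|u-v| - log|u-conj(v)|
    return -np.log(np.abs(u - v)) - np.log(np.abs(u - np.conj(v)))

eps = 0.01                                 # note 1/eps > 2*sqrt(t), so g(+-1/eps) is real
u = g(0.5 + 0.8j)                          # a point of g(B_{1/eps}(0) intersected with H)
theta = (np.arange(4000) + 0.5)*np.pi/4000 # midpoint rule on the semicircle of radius 1/eps
v = g(np.exp(1j*theta)/eps)                # pushforward of the uniform measure by g
print(np.mean(G(u, v)), 2*np.log(eps))     # the two values should agree closely
```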
Weighting the Liouville field by the field average near \(\infty\) changes the value of \(\delta\):
**Lemma 4.4**.: _Let \(\gamma\in(0,2]\). In the setting of Lemma 4.3, suppose \((\alpha_{j},w_{j})\in\mathbb{R}\times\overline{\mathbb{H}}\) and \(\delta\in\mathbb{R}\). Let \(\mathbb{H}_{\varepsilon}=B_{1/\varepsilon}(0)\cap\mathbb{H}\). Then for any function \(F\) such that \(F(\phi)\) depends only on \(\phi|_{g(\mathbb{H}_{\varepsilon})}\),_
\[\mathrm{LF}^{(\alpha_{j},w_{j})_{j},(\delta,\infty)}_{\mathrm{H}}[\varepsilon^{(\delta^{\prime}-\delta)(\delta^{\prime}+\delta-2Q)}e^{(\delta^{\prime}-\delta)(g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon,\infty})}F(\phi)]=\mathrm{LF}^{(\alpha_{j},w_{j})_{j},(\delta^{\prime},\infty)}_{\mathrm{H}}[F(\phi)].\]
Proof.: Write \(\theta_{\varepsilon}=(\delta^{\prime}-\delta)\theta_{\varepsilon,\infty}\). Let \(Z_{\varepsilon}=\mathbb{E}[e^{(h,g_{*}\theta_{\varepsilon})}]\) where \(h\sim P_{\mathrm{H}}\), then Lemma 4.3 gives \(Z_{\varepsilon}=\varepsilon^{-(\delta^{\prime}-\delta)^{2}}\). With \(\mathfrak{f}(z):=\sum_{j}\alpha_{j}G_{\mathrm{H}}(z,w_{j})+(\delta-Q)G_{ \mathrm{H}}(z,\infty)\), Lemma 4.3 also gives
\[(\mathfrak{f},g_{*}\theta_{\varepsilon})=(\delta^{\prime}-\delta)\sum_{j}2\alpha_{j}\log|w_{j}|_{+}-2(\delta^{\prime}-\delta)(\delta-Q)\log\varepsilon\] \[=\log\frac{C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta^{\prime},\infty)}}{C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta,\infty)}}-2(\delta^{\prime}-\delta)(\delta-Q)\log\varepsilon.\]
Writing \(\phi=h+\mathfrak{f}+c\), Lemma 4.3 gives \((g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon})=(\phi,g_{*}\theta_{ \varepsilon})+Q(\log|g^{\prime}|,\theta_{\varepsilon})=(\phi,g_{*}\theta_{ \varepsilon})\), so
\[e^{(g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon})}=\varepsilon^{-2(\delta^{\prime}-\delta)(\delta-Q)}e^{(\delta^{\prime}-\delta)c}\frac{C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta^{\prime},\infty)}}{C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta,\infty)}}e^{(h,g_{*}\theta_{\varepsilon})}.\]
Thus, writing \(\widetilde{F}(\phi)=\varepsilon^{(\delta^{\prime}-\delta)(\delta^{\prime}+ \delta-2Q)}e^{(\delta^{\prime}-\delta)(g^{-1}\bullet_{\gamma}\phi,\theta_{ \varepsilon,\infty})}F(\phi)\), we have
\[\mathrm{LF}^{(\alpha_{j},w_{j})_{j},(\delta,\infty)}_{\mathrm{H}}[\widetilde{F}(\phi)]=C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta,\infty)}\int_{-\infty}^{\infty}e^{(\sum_{j}\alpha_{j}+\delta-Q)c}\mathbb{E}[\widetilde{F}(h+\mathfrak{f}+c)]\,dc\] \[=C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta^{\prime},\infty)}\int_{-\infty}^{\infty}e^{(\sum_{j}\alpha_{j}+\delta^{\prime}-Q)c}\mathbb{E}[\frac{e^{(h,g_{*}\theta_{\varepsilon})}}{Z_{\varepsilon}}F(h+\mathfrak{f}+c)]\,dc\] \[=C_{\gamma}^{(\alpha_{j},w_{j})_{j},(\delta^{\prime},\infty)}\int_{-\infty}^{\infty}e^{(\sum_{j}\alpha_{j}+\delta^{\prime}-Q)c}\mathbb{E}[F(h+\mathfrak{f}+\int G_{\mathrm{H}}(\cdot,v)g_{*}\theta_{\varepsilon}(dv)+c)]\,dc.\]
where in the last line we use Girsanov's theorem. Lemma 4.3 gives for any \(u\in g(\mathbb{H}_{\varepsilon})\) that \(\int G_{\mathrm{H}}(u,v)g_{*}\theta_{\varepsilon}(dv)=(\delta^{\prime}-\delta)G_{\mathrm{H}}(u,\infty)\), so since \(F\) only depends on the field on \(g(\mathbb{H}_{\varepsilon})\), the last line equals the right hand side of the desired identity.
Now we are able to remove the constraint on the sum of insertions.
**Proposition 4.5**.: _Suppose any of Theorems 1.1, 1.2, 1.6 or 1.8 holds for insertions \((\alpha_{j},z_{j})_{j}\) and \((\delta,\infty)\). Then the same theorem holds for insertions \((\alpha_{j},z_{j})\) and \((\delta^{\prime},\infty)\) where \(\delta^{\prime}\) is arbitrary._
Proof.: We discuss the cases of Theorems 1.1 and 1.2 (\(\kappa\leq 4\)) in detail; the \(\kappa>4\) results are obtained identically. Proposition 3.2 implies that Theorem 1.1 is equivalent to Theorem 1.2 when \(\tau\) a.s. satisfies \(g_{\tau}(z_{j})\not\in\eta_{\tau}\) for all \(j\). Thus it suffices to discuss only Theorem 1.2.
Assume Theorem 1.2 for insertions \((\alpha_{j},z_{j})_{j}\) and \((\delta,\infty)\): starting with a field \(\phi_{0}\) sampled from \(\operatorname{LF}_{\operatorname{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j },z_{j})_{j},(\delta,\infty)}\), zipping up gives \((\phi_{\tau},\eta_{\tau})\) whose law is given by (1.2).
Let \(\varepsilon>0\) and \(\operatorname{H}_{\varepsilon}=B_{1/\varepsilon}(0)\cap\operatorname{H}\). Assume that \(\tau\) satisfies \(g_{\tau}(-1/\varepsilon),g_{\tau}(1/\varepsilon)\in\mathbb{R}\) and \(|g_{\tau}(z)|>1\) for \(z\in\operatorname{H}_{\varepsilon}\). Weight by \(\varepsilon^{(\delta^{\prime}-\delta)(\delta^{\prime}+\delta-2Q)}e^{(\delta^{\prime}-\delta)(\phi_{0},\theta_{\varepsilon,\infty})}\). By Lemma 4.4 with \(K=\emptyset\), the weighted law of \(\phi_{0}|_{\operatorname{H}_{\varepsilon}}\) agrees with that of \(\phi|_{\operatorname{H}_{\varepsilon}}\) where \(\phi\sim\operatorname{LF}_{\operatorname{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j})_{j},(\delta^{\prime},\infty)}\). By Lemma 4.4 with \(K\) the hull of \(\eta_{\tau}\), the weighted law of \((\phi_{\tau}|_{g_{\tau}(\operatorname{H}_{\varepsilon})},\eta_{\tau})\) agrees with that of \((\phi|_{g_{\tau}(\operatorname{H}_{\varepsilon})},\eta)\) where \((\phi,\eta)\) has law given by (1.2) with \(\delta\) replaced by \(\delta^{\prime}\). Our assumptions on \(\tau\) imply that the zipping up procedure until time \(\tau\) only depends on \(\phi_{0}|_{\operatorname{H}_{\varepsilon}}\). Thus sending \(\varepsilon\to 0\) gives Theorem 1.2 for insertions \((\alpha_{j},z_{j})_{j},(\delta^{\prime},\infty)\).
Combining the above, we now prove the \(\kappa\leq 4\) theorems.
Proof of Theorem 1.2.: Suppose \(-\frac{1}{\sqrt{\kappa}}+\sum_{j}\alpha_{j}+\delta=Q\). Proposition 4.2 implies that if we sample \((\phi,\eta)\) from (1.2) and let \(g:\operatorname{H}\to\operatorname{H}\setminus\eta\) be the conformal map such that \(\lim_{z\to\infty}g(z)-z=0\), then the law of \(\phi_{0}=g^{-1}\bullet_{\gamma}\phi\) is \(\operatorname{LF}_{\operatorname{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j },z_{j})_{j},(\delta,\infty)}\). Moreover, \((\phi,\eta)\) is obtained from \(\phi_{0}\) by conformal welding; this is proved for \(\kappa<4\) in [11] and for \(\kappa=4\) in [10, 12]. Thus Theorem 1.2 holds when \(-\frac{1}{\sqrt{\kappa}}+\sum_{j}\alpha_{j}+\delta=Q\). Proposition 4.5 then removes this constraint.
Proof of Theorem 1.1.: This is immediate from Theorem 1.2 and Proposition 3.2.
## 5 The \(\kappa\geq 8\) LCFT zipper
In this section we prove Theorem 1.8. There are two key steps. First, we need to produce the quantum cell in the setting of LCFT. In Section 5.1 we prove that the uniform embedding of the quantum wedge is a Liouville field, and in Section 5.2 we use this to transfer a statement about cutting a quantum cell from a quantum wedge to one about cutting a quantum cell from a Liouville field. Second, we must show the quantum cell arises in the time-evolution described by Sheffield's coupling (see Proposition 4.2). This is accomplished for a special case in Section 5.3 by a technical limiting argument. In Section 5.4 we bootstrap the special case to the full result.
### Uniform embedding of quantum wedge
We show that when a quantum wedge is embedded in the upper half-plane uniformly at random, the resulting field is a Liouville field.
**Proposition 5.1**.: _Let \(W>\frac{\gamma^{2}}{2}\) and \(\alpha=\frac{1}{2}(Q+\frac{\gamma}{2}-\frac{W}{\gamma})\). Sample \((\mathcal{W},T)\sim\mathcal{M}^{\operatorname{wed}}(W)\times dT\) and let \((\operatorname{H},\phi_{0},0,\infty)\) be an embedding of \(\mathcal{W}\) chosen in a way not depending on \(T\). Let \(f_{T}(z)=e^{T}z\). Then the law of \(\phi=f_{T}\bullet_{\gamma}\phi_{0}\) is \(\frac{1}{Q-2\alpha}\operatorname{LF}_{\operatorname{H}}^{(\alpha,0),(Q-\alpha, \infty)}\)._
Proposition 5.1 is analogous to [1, Theorem 2.13] which proves a similar statement for quantum disks. The proof hinges on the following Brownian motion identity.
**Lemma 5.2**.: _The random processes \(X_{1},X_{2}\) defined below have the same law:_
* _Let_ \(P\) _be the law of_ \(Y_{t}\) _from Definition_ 2.6_. Sample_ \((Y,T)\sim P\times dT\) _and let_ \(X_{1}(t)=Y_{t-T}\)_._
* _Let_ \(P^{\prime}\) _be the law of standard two-sided Brownian motion_ \((B_{t})_{t\in\mathds{R}}\) _(with_ \(B_{0}=0\)_). Sample_ \(((B_{t})_{t\in\mathds{R}},\mathbf{c})\) _from_ \((Q-2\alpha)^{-1}P^{\prime}\times dc\)_, and let_ \(X_{2}(t)=B_{2t}+(Q-2\alpha)t+\mathbf{c}\)_._
Proof.: We claim that for any \(y\in\mathds{R}\) we have \(X_{1}+y\stackrel{d}{=}X_{1}\). Indeed, the process \(Y_{t}\) is a variance 2 Brownian motion with drift \((Q-2\alpha)\) started at \(-\infty\) and run until it reaches \(+\infty\), so setting \(\widetilde{Y}_{t}=Y_{t}+y\) and \(\widetilde{\tau}=\inf\{t:\widetilde{Y}_{t}\geq 0\}\), we have \((\widetilde{Y}_{t+\widetilde{\tau}})_{t\in\mathds{R}}\stackrel{d}{=}(Y_{t})_{t\in\mathds{R}}\). Now the translation invariance of Lebesgue measure \(dT\) yields \(X_{1}+y\stackrel{d}{=}X_{1}\). We call this fact _translation symmetry_.
We first show that the law of \(X_{1}(0)\) is \((Q-2\alpha)^{-1}dx\). By Fubini's theorem, the measure of \(\{X_{1}(0)\in(0,b)\}\) is \(\mathbb{E}[\int_{\mathds{R}}1_{Y_{t}\in(0,b)}\,dt]\). This expectation is \(((Q-2\alpha)^{-1}+o(1))b\) as \(b\to\infty\); indeed the drift term of \(Y_{t}=B_{2t}+(Q-2\alpha)t\) dominates the Gaussian fluctuations of \(B_{2t}\) for large \(t\). Translation symmetry implies the law of \(X_{1}(0)\) is \(c\,dx\) for some constant \(c\), and comparing with the above gives \(c=1/(Q-2\alpha)\).
Now, let \(I\) be a finite interval. We show that the conditional law of \(X_{1}|_{[0,\infty)}\) given \(X_{1}|_{(-\infty,0]}\) and \(X_{1}(0)\in I\) is \(X_{1}(t)=B_{2t}+(Q-2\alpha)t+X_{1}(0)\). By translation symmetry we may assume \(I\subset(0,\infty)\), then we may restrict to the event \(\{T>0\}\) since \(Y_{t}<0\) for \(t<0\). By the Markov property of Brownian motion, given \((Y_{t})_{(-\infty,T]}\) and \(Y_{T}\in I\), the conditional law of \((Y_{t})_{[T,\infty)}\) is \(B_{2t}+(Q-2\alpha)t+Y_{T}\); rephrasing in terms of \(X_{t}\) gives the desired statement.
Finally, a similar argument gives that the conditional law of \(X_{1}|_{(-\infty,0]}\) given \(X_{1}|_{[0,\infty)}\) and \(X_{1}(0)\in I\) is \(X_{1}(t)=B_{-2t}+(Q-2\alpha)t+X_{1}(0)\). Since \(X_{1}(0)\stackrel{{ d}}{{=}}X_{2}(0)\), and the conditional law of \(X_{1}\) given \(X_{1}(0)=x\) agrees with the conditional law of \(X_{2}\) given \(X_{2}(0)=x\) for all \(x\), we conclude \(X_{1}\stackrel{{ d}}{{=}}X_{2}\).
Proof of Proposition 5.1.: Let \(\log:\mathds{H}\to\mathcal{S}\) be the conformal map sending \((0,1,\infty)\) to \((-\infty,0,+\infty)\), and let \(\phi_{0}^{\mathcal{S}}=\log\bullet_{\gamma}\phi_{0}\). By definition there is a random \(x\in\mathds{R}\) such that \((\phi_{0}^{\mathcal{S}},T)\stackrel{d}{=}(\phi_{1}(\cdot+x)+\phi_{2}(\cdot+x),T^{\prime})\) where the law of \(T^{\prime}\) is Lebesgue measure on \(\mathds{R}\) and \((\phi_{1},\phi_{2})\) are independently sampled as in Definition 2.6. By the translation invariance of Lebesgue measure on \(\mathds{R}\), and the translation invariance in law of \(\phi_{2}\), Lemma 5.2 implies \(\phi_{1}(\cdot+x-T^{\prime})+\phi_{2}(\cdot+x-T^{\prime})\stackrel{d}{=}X_{2}(\operatorname{Re}\cdot)+h_{2}\) where \(X_{2}\) is as in Lemma 5.2 and \(h_{2}\) is the projection of an independent GFF on \(\mathcal{S}\) to \(H_{2}(\mathcal{S})\). Since the projection of a GFF to \(H_{1}(\mathcal{S})\) has the law of \((B_{2t})_{t\in\mathds{R}}\) where \(B_{t}\) is standard two-sided Brownian motion, we conclude that \(\log\bullet_{\gamma}(f_{T}\bullet_{\gamma}\phi_{0})\) agrees in law with \(h+(Q-2\alpha)\,\mathrm{Re}(\cdot)+\mathbf{c}\) where \((h,\mathbf{c})\sim\frac{1}{Q-2\alpha}P_{\mathcal{S}}\times dc\). But by definition \(\log^{-1}\bullet_{\gamma}(h+(Q-2\alpha)\,\mathrm{Re}(\cdot)+\mathbf{c})\) has law \(\frac{1}{Q-2\alpha}\mathrm{LF}_{\mathrm{H}}^{(\alpha,0),(Q-\alpha,\infty)}\), concluding the proof.
### Cutting a quantum cell from the Liouville field
The goal of this section is to prove the following. We write \(\mathrm{SLE}_{\kappa}\) for the law of forward \(\mathrm{SLE}_{\kappa}\) in \((\mathds{H},0,\infty)\). Recall that \(P_{a}\) is the law of the area \(a\) quantum cell.
**Proposition 5.3**.: _Suppose \(\kappa\geq 8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample a triple \((\psi,z,\eta)\) from the measure_
\[\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty),( \gamma,z)}(d\psi)\,dz\times\mathrm{SLE}_{\kappa}(d\eta). \tag{5.1}\]
_Parametrize \(\eta\) by quantum area and let \(A\) be the time \(\eta\) first hits \(z\). Let \(g:\mathds{H}\to\mathds{H}\backslash\eta([0,A])\) be the unique conformal map with \(g(0)=z\) and \(g(w)=w+O(1)\) as \(w\to\infty\), and let \(\phi=\psi\circ g+Q\log|g^{\prime}|\)._
_Let \(\mathcal{C}=(\eta([0,A]),\psi,\eta|_{[0,A]})/{\sim_{\gamma}}\). Then the law of \((\phi,\mathcal{C},A)\) is_
\[\operatorname{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4}, \infty)}(d\phi)P_{A}(d\mathcal{C})1_{A>0}dA. \tag{5.2}\]
To that end, we prove a similar statement for the quantum wedge (Lemma 5.5), then transfer to the Liouville field using uniform embedding (Proposition 5.1 and Lemma 5.4).
For the rest of this section, let \(P\) be the law of a field \(\psi_{0}\) such that \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\) is the law of \((\mathbb{H},\psi_{0},0,\infty)/{\sim_{\gamma}}\). For instance \(P\) could be the law of the field of Definition 2.6 after parametrizing in \(\mathbb{H}\) via \(z\mapsto e^{z}\).
**Lemma 5.4**.: _Sample \((z_{0},\psi_{0},T)\sim\mathcal{A}_{\psi_{0}}(dz_{0})\,P(d\psi_{0})\,dT\). Let \(f_{T}(z)=e^{T}z\). Then the law of \((\psi,z):=(f_{T}\bullet_{\gamma}\psi_{0},f_{T}(z_{0}))\) is \((Q-\frac{3\gamma}{2})^{-1}\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),( \gamma,z),(Q-\frac{3\gamma}{4},\infty)}(d\psi)\,dz\)._
Proof.: Sampling a point then uniformly embedding is the same as uniformly embedding then sampling a point. Also, Proposition 2.5 gives \(\mathcal{A}_{\psi}(dz)\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(Q- \frac{3\gamma}{4},\infty)}(d\psi)=\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4 },0),(\gamma,z),(Q-\frac{3\gamma}{4},\infty)}(d\psi)\,dz\). Thus Proposition 5.1 yields the result.
**Lemma 5.5**.: _Sample \((z_{0},\psi_{0},\eta_{0})\sim\mathcal{A}_{\psi_{0}}(dz_{0})\,P(d\psi_{0})\, \mathrm{SLE}_{\kappa}(d\eta_{0})\). Parametrize \(\eta_{0}\) by quantum area and let \(A\) be the time it hits \(z_{0}\). Then the marginal law of \(A\) is the Lebesgue measure \(\mathrm{Leb}_{(0,\infty)}\), and the conditional law of the pair of quantum surfaces \((\mathbb{H}\backslash\eta_{0}([0,A]),\psi_{0},z_{0},\infty)/{\sim_{\gamma}}\) and \((\eta_{0}([0,A]),\psi_{0},\eta_{0}|_{[0,A]})/{\sim_{\gamma}}\) given \(A\) is \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\times P_{A}\)._
Proof.: Fix \(a>0\) and sample \((\psi_{0},\eta_{0})\sim P(d\psi_{0})\,\mathrm{SLE}_{\kappa}(d\eta_{0})\). Since \((\mathbb{H},\psi_{0},0,\infty)/{\sim_{\gamma}}\) has law \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\), by definition, the law of \(\mathcal{C}_{a}:=(\eta_{0}([0,a]),\psi_{0},\eta_{0}|_{[0,a]})/{\sim_{\gamma}}\) is \(P_{a}\). Moreover, by the last two claims of [16, Theorem 1.9], \((\eta_{0}([a,\infty)),\psi_{0},\eta_{0}(a),\infty)/{\sim_{\gamma}}\) has law \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\) and is independent of \(\mathcal{C}_{a}\).
Now we will construct the same random objects \((z_{0},\psi_{0},\eta_{0})\) in a different way. Sample \((A,\psi_{0},\eta_{0})\sim\mathrm{Leb}_{(0,\infty)}(dA)\,P(d\psi_{0})\,\mathrm{SLE}_{\kappa}(d\eta_{0})\). Let \(z_{0}=\eta_{0}(A)\). Since \(\mathcal{A}_{\psi_{0}}=(\eta_{0})_{*}\mathrm{Leb}_{(0,\infty)}\), the law of \((z_{0},\psi_{0},\eta_{0})\) is \(\mathcal{A}_{\psi_{0}}(dz_{0})P(d\psi_{0})\,\mathrm{SLE}_{\kappa}(d\eta_{0})\). By our construction the marginal law of \(A\) is \(\mathrm{Leb}_{(0,\infty)}\), and by the previous paragraph the conditional law of \((\mathbb{H}\backslash\eta_{0}([0,A]),\psi_{0},z_{0},\infty)/{\sim_{\gamma}}\) and \((\eta_{0}([0,A]),\psi_{0},\eta_{0}|_{[0,A]})/{\sim_{\gamma}}\) given \(A\) is \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\times P_{A}\).
We are now ready to prove Proposition 5.3.
Proof of Proposition 5.3.: Sample \((z_{0},\psi_{0},\eta_{0},T)\sim\mathcal{A}_{\psi_{0}}(dz_{0})\,P(d\psi_{0})\, \mathrm{SLE}_{\kappa}(d\eta_{0})\,dT\). Let \(f_{T}(z)=e^{T}z\), and let \((z,\psi,\eta)=(f_{T}(z_{0}),f_{T}\bullet_{\gamma}\psi_{0},f_{T}\circ\eta_{0})\). By Lemma 5.4 and the scale-invariance of \(\mathrm{SLE}_{\kappa}\) the law of \((z,\psi,\eta)\) is (5.1) times \((Q-\frac{3\gamma}{2})^{-1}\).
Let \(A\) be the time that \(\eta_{0}\) hits \(z_{0}\). Let \(g_{0}:\mathbb{H}\to\mathbb{H}\backslash\eta_{0}([0,A])\) be the conformal map with \(g_{0}(0)=z_{0}\) and \(g_{0}(w)=w+O(1)\) as \(w\to\infty\), and let \(\phi_{0}=\psi_{0}\circ g_{0}+Q\log|g_{0}^{\prime}|\). Let \(g:\mathbb{H}\to\mathbb{H}\backslash\eta([0,A])\) be the conformal map with \(g(0)=z\) and \(\lim_{w\to\infty}g(w)-w=0\), and let \(\phi=\psi\circ g+Q\log|g^{\prime}|\). Let \(\mathcal{C}=(\eta([0,A]),\psi,\eta|_{[0,A]})/{\sim_{\gamma}}=(\eta_{0}([0,A]),\psi_{0},\eta_{0}|_{[0,A]})/{\sim_{\gamma}}\).
By Lemma 5.5 the law of \(((\mathbb{H},\phi_{0},0,\infty)/{\sim_{\gamma}},\mathcal{C},A,T)\) is \(\mathcal{M}^{\mathrm{wed}}(2-\frac{\gamma^{2}}{2})\,P_{A}(d\mathcal{C})\,1_{A>0} dA\,dT\). It is easy to check that \(f_{T}\circ g_{0}=g\circ f_{T}\), so \(f_{T}\bullet_{\gamma}\phi_{0}=\phi\). Thus by Proposition 5.1 the law of \((\phi,\mathcal{C},A)\) is (5.2) times \((Q-\frac{3\gamma}{2})^{-1}\).
### The \(n=0\) case of Theorem 1.8
In this section, we prove the following.
**Proposition 5.6**.: _Theorem 1.8 holds for the special case where \(n=0\)._
In Lemma 5.7 we construct a process \((\phi_{t},\eta_{t})_{t\geq 0}\) on field-curve pairs where the evolution is given by Sheffield's coupling. In Proposition 5.8 we run this process until the time \(\tau\) that the quantum area has increased by a Lebesgue-typical amount, and identify the law of \((\phi_{\tau},\eta_{\tau})\). In Proposition 5.9 we use Propositions 5.3 and 5.8 to show that \((\phi_{\tau},\eta_{\tau})\) comes from conformally welding \(\phi_{0}\) with a quantum cell. Proposition 5.6 then follows quickly.
We write \(\mathrm{SLE}^{t}_{\kappa}\) to denote the law of forward \(\mathrm{SLE}_{\kappa}\) in \(\mathbbm{H}\) from \(0\) to \(\infty\) run for time \(t\) (so its half-plane capacity is \(2t\)), and write \(\mathrm{rSLE}^{t}_{\kappa}\) for reverse \(\mathrm{SLE}\) in \((\mathbbm{H},0,\infty)\) run for time \(t\). Let \(\mathfrak{F}\) be the space of distributions on \(\mathbbm{H}\) and let \(\mathfrak{C}\) be the space of bounded curves in \(\overline{\mathbbm{H}}\) equipped with the metric \(d(\eta_{1},\eta_{2})=\inf\sup_{0\leq t\leq 1}|\tilde{\eta}_{1}(t)-\tilde{ \eta}_{2}(t)|\) where the infimum is taken over all parametrizations \(\tilde{\eta}_{j}:[0,1]\to\overline{\mathbbm{H}}\) of \(\eta_{j}\).
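The metric on \(\mathfrak{C}\) is a Fréchet-type distance, obtained by optimizing the sup distance over parametrizations. For concreteness, the sketch below computes the standard discrete surrogate of this distance for curves represented by finite point sequences; the two sample curves are toy examples, not objects from this paper.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves.

    P, Q: arrays of complex points sampled along the curves; this is a standard
    discrete surrogate for d(eta_1, eta_2) = inf sup |eta_1 - eta_2| over
    parametrizations (assuming the curves are given as finite point sequences)."""
    n, m = len(P), len(Q)
    D = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                D[i, j] = d
            else:
                prev = min(D[i-1, j] if i > 0 else np.inf,
                           D[i, j-1] if j > 0 else np.inf,
                           D[i-1, j-1] if i > 0 and j > 0 else np.inf)
                D[i, j] = max(prev, d)
    return D[-1, -1]

# toy example: two nearby curves in the closed upper half-plane
s = np.linspace(0, 1, 200)
eta1 = s + 1j*np.sin(np.pi*s)
eta2 = s + 1j*(np.sin(np.pi*s) + 0.05*s)
print(discrete_frechet(eta1, eta2))   # about 0.05
```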
**Lemma 5.7**.: _For any \(\kappa>0\) and \(\gamma=\min(\sqrt{\kappa},\frac{4}{\sqrt{\kappa}})\), there is an infinite measure \(M\) on \(C([0,\infty),\mathfrak{F}\times\mathfrak{C})\) such that, for \((\phi_{t},\eta_{t})_{t\geq 0}\) sampled from \(M\),_
* \(\eta_{t}\) _satisfies_ \(\mathrm{hcap}(\eta_{t})=2t\) _and is parametrized by half-plane capacity (i.e._ \(\mathrm{hcap}(\eta_{t}([0,s]))=2s\)_);_
* _for_ \(\tau\) _an a.s. finite stopping time for the filtration_ \(\mathcal{F}_{t}=\sigma(\eta_{t})\)_, the law of_ \((\phi_{\tau},\eta_{\tau})\) _is_ \(\mathrm{LF}^{(-\frac{\gamma}{4},\eta_{\tau}(0)),(Q-\frac{3\gamma}{4},\infty)} _{\mathrm{H}}\mathrm{SLE}^{\tau}_{\kappa}\)_;_
* _for_ \(t_{1}<t_{2}\)_, let_ \(g_{t_{1},t_{2}}:\mathbbm{H}\to\mathbbm{H}\backslash\eta_{t_{2}}([0,t_{2}-t_{ 1}])\) _be the conformal map with_ \(g_{t_{1},t_{2}}(\eta_{t_{1}}(0))=\eta_{t_{2}}(t_{2}-t_{1})\) _and_ \(\lim_{z\to\infty}g_{t_{1},t_{2}}(z)-z=0\)_. Then_ \(\phi_{t_{1}}=g_{t_{1},t_{2}}^{-1}\bullet_{\gamma}\phi_{t_{2}}\)_._
Proof.: For fixed \(T\), it is immediate from Proposition 4.2 with \(n=0\) and \(\delta=Q-\frac{3\gamma}{4}\) that there is a measure \(M_{T}\) on \(C([0,T],\mathfrak{F}\times\mathfrak{C})\) which satisfies the above two conditions for \(t_{1},t_{2}\leq T\). The Kolmogorov extension theorem then gives the existence of \(M\).
For \((\phi_{t},\eta_{t})_{t\geq 0}\sim M\), for each \(t\) let \(W_{t}:=\eta_{t}(0)\in\mathds{R}\). We note that the field-curve pair \((\phi_{t}(\cdot+W_{t}),\eta_{t}-W_{t})\) is the translation of \((\phi_{t},\eta_{t})\) sending \(\eta_{t}(0)\) to \(0\).
**Proposition 5.8**.: _Let \(\kappa\geq 8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A)\) from \(M\times 1_{A>0}\,dA\). Let \(\tau>0\) be the time \(t\) that \(\mathcal{A}_{\phi_{t}}(\eta_{t}([0,t]))=A\). Then the law of \((\phi_{\tau}(\cdot+W_{\tau}),\eta_{\tau}-W_{\tau},\tau)\) is \(C\cdot\mathrm{LF}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}_{ \mathrm{H}}\mathrm{SLE}^{t}_{\kappa}\,1_{t>0}dt\) for some constant \(C\in(0,\infty)\)._
At a high level, the proof of Proposition 5.8 goes as follows. First, if we instead sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A,T)\sim\delta^{-1}1_{T\in[\tau,\tau+\delta]}M\times 1_{A>0}\,dA\times 1_{T>0}dT\) then the marginal law of \((\phi_{\tau},\eta_{\tau},\tau)\) is the same as in Proposition 5.8. Secondly, by the definition of \(\tau\) the point \(z=\eta_{T}(T-\tau)\) is quantum typical and thus \(\phi_{T}\) has a singularity \(\gamma G(\cdot,z)\) (Proposition 2.5). As \(\delta\to 0\) we have \(T-\tau\to 0\) so \(z\to W_{T}\), so in the limit the field \(\phi_{T}\) has the singularity \(\gamma G(\cdot,W_{T})-\frac{\gamma}{4}G(\cdot,W_{T})\) at \(W_{T}\). Finally, since \(T\to\tau\) as \(\delta\to 0\), we have \((\phi_{T}(\cdot+W_{T}),\eta_{T}-W_{T},T)\to(\phi_{\tau}(\cdot+W_{\tau}),\eta_{\tau}-W_{\tau},\tau)\). The primary complication is in taking limits of infinite measures; below we truncate on events to work with finite measures.
Proof.: To simplify notation, let \((\phi^{1},\eta^{1}):=(\phi_{\tau}(\cdot+W_{\tau}),\eta_{\tau}-W_{\tau})\). Let \(\rho\) be the uniform probability measure on \(B_{1/2}(i)\) (the exact choice of \(\rho\) is unimportant). Let \(N>0\) and define \(E_{N}:=\{\tau,|(\phi^{1},\rho)|,|(f^{-1}\bullet_{\gamma}\phi^{1},\rho)|<N\}\) where \(f:\mathbbm{H}\to\mathbbm{H}\backslash\eta^{1}([0,\tau])\) is the conformal map with \(f(0)=\eta^{1}(\tau)\) and \(f(z)=z+O(1)\) as \(z\to\infty\). Let \(P_{N}\) denote the conditional law of \((\phi^{1},\eta^{1},\tau)\) given \(E_{N}\).
This is well defined because the measure of \(E_{N}\) is finite, since \(\phi_{0}=f^{-1}\bullet_{\gamma}\phi^{1}\) and \(M[|(\phi_{0},\rho)|<N]=\operatorname{LF}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}[|(\phi,\rho)|<N]<\infty\). Likewise let \(\widetilde{P}_{N}\) denote the conditional law of \((\phi,\eta,t)\sim\operatorname{LF}_{\operatorname{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}\operatorname{SLE}_{\kappa}^{t}1_{t>0}dt\) given \(\{t,|(\phi,\rho)|,|(g^{-1}\bullet_{\gamma}\phi,\rho)|<N\}\) with \(g:\mathbb{H}\to\mathbb{H}\backslash\eta\) the conformal map with \(g(0)=\eta(t)\) and \(g(z)=z+O(1)\) as \(z\to\infty\). We will show that \(P_{N}=\widetilde{P}_{N}\) for all \(N\); sending \(N\to\infty\) concludes the proof.
Sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A,T)\) from \(1_{T>\tau}M\times 1_{A>0}dA\times 1_{T>0}dT\) where as before \(\tau\) is the time \(t\) that \(\mathcal{A}_{\phi_{t}}(\eta_{t}([0,t]))=A\). As before let \((\phi^{1},\eta^{1}):=(\phi_{\tau}(\cdot+W_{\tau}),\eta_{\tau}-W_{\tau})\). Let \(F_{\delta}=\{T-\tau<\delta\}\). Let \((\phi^{2},\eta^{2})=(\phi_{T}(\cdot+W_{T}),\eta_{T}-W_{T})\) and let \(G_{\varepsilon}=\{\eta^{2}([0,T-\tau])\subset B_{\varepsilon}(0)\}\). See Figure 8.
**Step 1: Law of \((\phi^{1},\eta^{1},\tau)\) given \(E_{N}\cap F_{\delta}\cap G_{\varepsilon}\) converges to \(P_{N}\) as \(\delta\to 0\) then \(\varepsilon\to 0\).**
Let \(x=T-\tau\), then the conditional law of \((\phi^{1},\eta^{1},\tau,x)\) given \(E_{N}\cap F_{\delta}\) is \(P_{N}\times\delta^{-1}1_{x\in[0,\delta]}\,dx\). As \(\delta\to 0\), we have \(x\to 0\) in probability, so by continuity of the Loewner chain the diameter of \(\eta_{\tau+x}([0,x])\) shrinks to \(0\) in probability. Hence the conditional probability of \(G_{\varepsilon}\) is \(1-o_{\delta}(1)\). Thus, as \(\delta\to 0\) then \(\varepsilon\to 0\), the conditional law of \((\phi^{1},\eta^{1})\) given \(E_{N}\cap F_{\delta}\cap G_{\varepsilon}\) converges to \(P_{N}\) in total variation.
**Step 2: Law of \((\phi^{2},\eta^{2},T)\) given \(E_{N}\cap F_{\delta}\cap G_{\varepsilon}\) converges to \(\widetilde{P}_{N}\) as \(\delta\to 0\) then \(\varepsilon\to 0\).**
By the second property of \(M\) in Lemma 5.7, and the fact that \(\operatorname{SLE}_{\kappa}^{t}\) and the centered \(\operatorname{rSLE}_{\kappa}^{t}\) trace have the same law, the unconditioned law of \((\phi^{2},\eta^{2},A,T)\) is
\[1_{A\in(0,\mathcal{A}_{\phi^{2}}(\eta^{2}([0,T])))}dA\operatorname{LF}_{\operatorname{H}}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}(d\phi^{2})\operatorname{SLE}_{\kappa}^{T}(d\eta^{2})1_{T>0}dT.\]
Let \(z=\eta^{2}(T-\tau)\). The conditional law of \(z\) given \((\phi^{2},\eta^{2},T)\) is the probability measure proportional to \(\mathcal{A}_{\phi^{2}}|_{\eta^{2}([0,T])}\). Thus, using Proposition 2.5, the unconditioned law of \((\phi^{2},\eta^{2},z,T)\) is
\[1_{z\in\eta^{2}([0,T])}\mathcal{A}_{\phi^{2}}(dz)\operatorname{LF }_{\operatorname{H}}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}(d \phi^{2})\operatorname{SLE}_{\kappa}^{T}(d\eta^{2})1_{T>0}dT\] \[=\operatorname{LF}_{\operatorname{H}}^{(-\frac{\gamma}{4},0),( \gamma,z),(Q-\frac{3\gamma}{4},\infty)}(d\phi^{2})1_{z\in\eta^{2}([0,T])}dz \operatorname{SLE}_{\kappa}^{T}(d\eta^{2})1_{T>0}dT. \tag{5.3}\]
Now we express the event \(E_{N}\cap F_{\delta}\cap G_{\varepsilon}\) in terms of \((\phi^{2},\eta^{2},z,T)\). Let \(\tau_{z}\) be the time \(\eta^{2}\) hits \(z\), let \(f_{1}:\mathbb{H}\to\mathbb{H}\backslash\eta^{2}([0,\tau_{z}])\) (resp. \(f_{0}:\mathbb{H}\to\mathbb{H}\backslash\eta^{2}([0,T])\)) be the conformal map sending \(0\) to \(\eta^{2}(\tau_{z})\) (resp. \(\eta^{2}(T)\)) and with asymptotic behavior \(w\mapsto w+O(1)\) as \(w\to\infty\). Then \(E_{N}=\{T-\tau_{z},|(f_{1}^{-1}\bullet_{\gamma}\phi^{2},\rho)|,|(f_{0}^{-1}\bullet_{\gamma}\phi^{2},\rho)|<N\}\), \(F_{\delta}=\{\tau_{z}<\delta\}\) and \(G_{\varepsilon}=\{\eta^{2}([0,\tau_{z}])\subset B_{\varepsilon}(0)\}\).
We now have the description (5.3) of the unconditioned law of \((\phi^{2},\eta^{2},z,T)\), and the previous paragraph's description of \(E_{N},F_{\delta},G_{\varepsilon}\). Since \(z\to 0\) by the definition of \(G_{\varepsilon}\) and since \(\lim_{\varepsilon\to 0}G_{\operatorname{H}}(\cdot,z)=G_{\operatorname{H}}(\cdot,0)\) in \(\mathfrak{F}\), the law of \((\phi^{2},\eta^{2},T)\) conditioned on \(E_{N}\cap F_{\delta}\cap G_{\varepsilon}\) converges to \(\widetilde{P}_{N}\) as \(\delta\to 0\) then \(\varepsilon\to 0\).
**Conclusion.** By the definition of \(G_{\varepsilon}\) the conformal map \(f_{1}\) converges to the identity map, so combining Steps 1 and 2 gives \(P_{N}=\widetilde{P}_{N}\).
**Proposition 5.9**.: _In the setting of Proposition 5.8, let \(\mathcal{C}=(\eta_{\tau}([0,\tau]),\phi_{\tau},\eta_{\tau})/{\sim_{\gamma}}\). Then the law of \((\phi_{0},\mathcal{C},A)\) is_
\[\operatorname{LF}_{\operatorname{H}}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}(d\phi_{0})\,P_{A}(d\mathcal{C})\,1_{A>0}dA.\]
Proof.: Sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A_{1},A_{2})\) from \(M\times 1_{A_{1},A_{2}>0}dA_{1}dA_{2}\). Let \(\tau^{1}\) (resp. \(\tau^{2}\)) be the time \(t\) that \(\mathcal{A}_{\phi_{t}}(\eta_{t}([0,t]))\) equals \(A_{1}\) (resp. \(A_{1}+A_{2}\)). Let \(\mathcal{C}_{2}=(\eta_{\tau^{2}}([0,\tau^{2}-\tau^{1}]),\phi_{\tau^{2}},\eta_{ \tau^{2}}|_{[0,\tau^{2}-\tau^{1}]})/{\sim_{\gamma}}\). We will show that the law of \((\phi_{0},\mathcal{C}_{2},A_{1},A_{2})\) is
\[\operatorname{LF}_{\operatorname{H}}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma}{4}, \infty)}(d\phi^{0})P_{A_{2}}(d\mathcal{C}_{2})1_{A_{1},A_{2}>0}dA_{1}dA_{2}. \tag{5.4}\]
To simplify notation, let \((\phi^{j},\eta^{j})=(\phi_{\tau^{j}}(\cdot+W_{\tau^{j}}),\eta_{\tau^{j}}-W_{\tau^{j}})\) for \(j=1,2\) and let \(\eta^{12}=\eta^{2}|_{[0,\tau^{2}-\tau^{1}]}\). Let \(z=\eta^{2}(\tau^{2}-\tau^{1})\). See Figure 9.
Let \(S=A_{1}+A_{2}\), then by a change of variables the law of \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A_{1},S)\) is \(M\,1_{A_{1}\in[0,S]}dA_{1}\,1_{S>0}dS\). By Proposition 5.8 the law of \((A_{1},\phi^{2},\eta^{2},\tau^{2})\) is
\[C\cdot 1_{A_{1}\in[0,S]}dA_{1}\,\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0), (Q-\frac{3\gamma}{4},\infty)}(d\phi^{2})\,\mathrm{SLE}_{\kappa}^{\tau^{2}}(d \eta^{2})1_{\tau^{2}>0}d\tau^{2},\quad S:=\mathcal{A}_{\phi^{2}}(\eta^{2}([0, \tau^{2}])).\]
Since \(z\) is the point such that \(\eta^{2}\) covers \(S-A_{1}\) units of quantum area before hitting \(z\), we conclude the law of \((z,\phi^{2},\eta^{2},\tau^{2})\) is
\[C\cdot 1_{z\in\eta^{2}([0,\tau^{2}])}\mathcal{A}_{\phi^{2}}(dz)\mathrm{LF}_{ \mathrm{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}(d\phi^{2})\, \mathrm{SLE}_{\kappa}^{\tau^{2}}(d\eta^{2})1_{\tau^{2}>0}d\tau^{2}.\]
Proposition 2.5 gives \(\mathcal{A}_{\phi^{2}}(dz)\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(Q- \frac{3\gamma}{4},\infty)}(d\phi^{2})=\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma }{4},0),(\gamma,z),(Q-\frac{3\gamma}{4},\infty)}(d\phi^{2})dz\). Combining this with Lemma 5.10 stated below, the law of \((\phi^{2},\eta^{1},\eta^{12},z,\tau^{1})\) is
\[C\cdot\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(\gamma,z),(Q-\frac{3 \gamma}{4},\infty)}(d\phi^{2})\,\mathrm{SLE}_{\kappa}^{\tau^{1}}(d\eta^{1}) \,\mathrm{SLE}_{\kappa}^{z}(d\eta^{12})\,dz\,1_{\tau^{1}>0}d\tau^{1}\]
where \(\mathrm{SLE}_{\kappa}^{z}\) denotes forward \(\mathrm{SLE}_{\kappa}\) run until the time it hits \(z\). By Proposition 5.3, the law of \((\phi^{1},\eta^{1},\tau^{1},\mathcal{C}_{2},A_{2})\) is
\[C\cdot\mathrm{LF}_{\mathrm{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4}, \infty)}(d\phi^{1})\,\mathrm{SLE}_{\kappa}^{\tau^{1}}(d\eta^{1})1_{\tau^{1}>0} d\tau^{1}P_{A_{2}}(d\mathcal{C}_{2})1_{A_{2}>0}dA_{2}.\]
Figure 8: Figure for Proposition 5.8. Sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A,T)\) from \(1_{T>\tau}M\times 1_{A>0}dA\times 1_{T>0}dT\) and let \(\tau\) be the time the curve (grey) has quantum area \(A\). Roughly speaking, \(E_{N}\) is the event that \(\tau\) and certain field observables of \(\phi_{0}\) and \(\phi^{1}\) are not too large, \(F_{\delta}=\{T-\tau<\delta\}\), and \(G_{\varepsilon}\) is the event that the dark grey region is small in a Euclidean sense. As \(\delta\to 0\) then \(\varepsilon\to 0\), \((\phi^{2},\eta^{2})\) approximates \((\phi^{1},\eta^{1})\). We use this to show \(P_{N}=\widetilde{P}_{N}\).
Figure 9: Figure for Proposition 5.9. The pair \((\phi^{j},\eta^{j})\) corresponds to \((\phi_{\tau_{j}},\eta_{\tau_{j}})\) translated so that the curve starts from \(0\) rather than \(W_{\tau_{j}}\). The curve \(\eta^{12}\) is the initial segment of \(\eta^{2}\).
By Proposition 5.8 applied to the measure \(C\cdot\operatorname{LF}_{\operatorname{H}}^{(\frac{3\gamma}{4},0),(Q-\frac{3\gamma}{4},\infty)}(d\phi^{1})\operatorname{SLE}_{\kappa}^{\tau^{1}}(d\eta^{1})1_{\tau^{1}>0}d\tau^{1}\), we conclude that the law of \((\phi_{0},\mathcal{C}_{2},A_{1},A_{2})\) is (5.4).
As a consequence of (5.4), if we sample \((\{(\phi_{t},\eta_{t})\}_{t\geq 0},A_{1},A_{2})\) from \(M\times\operatorname{Unif}_{[0,\varepsilon]}(dA_{1})1_{A_{2}>0}dA_{2}\) where \(\operatorname{Unif}_{[0,\varepsilon]}\) is the uniform probability measure on \([0,\varepsilon]\), then the law of \((\phi_{0},\mathcal{C}_{2},A_{1},A_{2})\) is \(\operatorname{LF}_{\operatorname{H}}^{(-\frac{\gamma}{4},0),(Q-\frac{3\gamma} {4},\infty)}(d\phi_{0})P_{A_{2}}(d\mathcal{C}_{2})\operatorname{Unif}_{[0, \varepsilon]}(dA_{1})1_{A_{2}>0}dA_{2}\). Sending \(\varepsilon\to 0\) yields the desired result.
**Lemma 5.10**.: _Let \(z\in\mathbbm{H}\) and consider a pair \((\eta,T)\sim 1_{E}\operatorname{SLE}_{\kappa}^{t}1_{t>0}dt\) where \(E\) is the event that \(z\) lies in the range of the curve. Let \(\tau_{z}\) be the time \(\eta\) hits \(z\), let \(\eta^{12}=\eta|_{[0,\tau_{z}]}\), let \(T^{1}=T-\tau_{z}\) and let \(\eta^{1}=f^{-1}\circ\eta(\cdot+\tau_{z})|_{[0,T^{1}]}\) where \(f:\mathbbm{H}\to\mathbbm{H}\backslash\eta([0,\tau_{z}])\) is the conformal map with \(f(0)=z\) and \(f(w)=w+O(1)\) as \(w\to\infty\). Then the law of \((\eta^{12},\eta^{1},T^{1})\) is \(\operatorname{SLE}_{\kappa}^{z}\operatorname{SLE}_{\kappa}^{t^{1}}1_{t^{1}>0 }dt^{1}\), where \(\operatorname{SLE}_{\kappa}^{z}\) is the law of \(\operatorname{SLE}_{\kappa}\) run until it hits \(z\)._
Proof.: From the change of variables \(T^{1}=T-\tau_{z}\), the law of \(T^{1}\) is \(1_{t^{1}>0}dt^{1}\). Conditioned on \(T^{1}\) the conditional law of \(\eta^{12}\) is \(\operatorname{SLE}_{\kappa}^{z}\), and by the domain Markov property, conditioned on \(T^{1}\) and \(\eta^{12}\) the conditional law of \(\eta^{1}\) is \(\operatorname{SLE}_{\kappa}^{T^{1}}\).
Proof of Proposition 5.6.: By Proposition 1.7 and Definition 2.8, the area \(a\) quantum cell is the quantum surface obtained by mating \((X_{t},Y_{t})_{t\geq 0}\sim\operatorname{CRT}_{\kappa}\) for quantum area \(a\).
Recall \(M\) from Lemma 5.7. Proposition 5.9 implies that the time-evolution of \(M\) for quantum area \(a\) arises from conformally welding an area \(a\) quantum cell to \(\phi_{0}\), and so corresponds to mating trees sampled from \(\operatorname{CRT}_{\kappa}\) for quantum area \(a\). Sending \(a\to\infty\), we conclude that \(M\) is the process of Theorem 1.8 when \(n=0\) and \(\delta=Q-\frac{3\gamma}{4}\). The second property of \(M\) in Lemma 5.7 is the desired description of \((\phi_{\tau},\eta_{\tau})\), so we obtain Theorem 1.8 for \(n=0,\delta=Q-\frac{3\gamma}{4}\). Finally, we extend to arbitrary \(\delta\in\mathbbm{R}\) by Proposition 4.5.
### Proof of Theorem 1.8
In this section, we prove Theorem 1.8. The first step is to extend Proposition 5.6 to the setting where insertions are allowed but the process stops before any insertions hit the curve:
**Proposition 5.11**.: _Theorem 1.8 holds for any stopping time \(\tau\) such that a.s. \(g_{\tau}(z_{j})\not\in\eta_{\tau}\) for all \(j\)._
Given this, we can complete the proof of Theorem 1.8.
Proof of Theorem 1.8.: Proposition 4.2 gives us a process \((\widetilde{\phi}_{t},\widetilde{\eta}_{t})_{0\leq t\leq\tau}\) such that \(\widetilde{\phi}_{0}\) has law \(\operatorname{LF}_{\operatorname{H}}^{(-\frac{1}{\sqrt{\kappa}},0),(\alpha_{j},z_{j})_{j},(\delta,\infty)}\) and \((\widetilde{\phi}_{\tau},\widetilde{\eta}_{\tau})\) has law (1.2), where \(\tau\) is a stopping time for \(\mathcal{F}_{t}=\sigma(\widetilde{\eta}_{t})\). Let \(S=\{t:g_{t}(z_{j})=W_{t}\text{ for some }j\}\). By Proposition 5.11, if \(\sigma,\sigma^{\prime}\) are stopping times for \(\mathcal{F}_{t}\) such that \([\sigma,\sigma^{\prime}]\cap S=\emptyset\), then on \([\sigma,\sigma^{\prime}]\) this process can alternatively be described by mating continuum random trees until a stopping time. Since \(S\) is a finite set, by continuity we conclude that the _whole_ process \((\widetilde{\phi}_{t},\widetilde{\eta}_{t})_{0\leq t\leq\tau}\) arises from the mating-of-trees procedure described in Theorem 1.8. This completes the proof.
The proof of Proposition 5.11 is identical to that of Proposition 4.5, but with different calculations which we detail below. We will show how to weight the Liouville field to introduce insertions in Lemma 5.13, then apply this to the special case of Theorem 1.8 to obtain the more general statement. We need Lemma 4.3 and the following Green function facts.
**Lemma 5.12**.: _Suppose \(K\subset\overline{\mathbb{H}}\) is compact, \(\mathbb{H}\backslash K\) is simply connected and there is a conformal map \(g:\mathbb{H}\to\mathbb{H}\backslash K\) such that \(\lim_{|z|\to\infty}g(z)-z=0\). Let \(\varepsilon>0\). Then for \(z_{0}\in\mathbb{H}\) such that \(\operatorname{Im}z_{0}>\varepsilon\) and \(u\not\in g(B_{\varepsilon}(z_{0}))\) we have \(\int G(u,v)(g_{*}\theta_{\varepsilon,z_{0}})(dv)=G(u,g(z_{0}))\), where \(\theta_{\varepsilon,z_{0}}\) is the uniform probability measure on \(\partial B_{\varepsilon}(z_{0})\). For \(x_{0}\in\partial\mathbb{H}\) such that \(g([x_{0}-\varepsilon,x_{0}+\varepsilon])\subset\mathbb{R}\) and \(u\not\in g(B_{\varepsilon}(x_{0})\cap\mathbb{H})\) we have \(\int G(u,v)(g_{*}\theta_{\varepsilon,x_{0}})(dv)=G(u,g(x_{0}))\), where \(\theta_{\varepsilon,x_{0}}\) is the uniform probability measure on \(\partial B_{\varepsilon}(x_{0})\cap\mathbb{H}\)._
Proof.: For the first assertion, the function \(G(u,g(\cdot))\) is harmonic on \(\overline{B_{\varepsilon}(z_{0})}\) because \(G(u,\cdot)\) is harmonic and \(g\) is conformal. Thus, by the mean value property of harmonic functions, \(\int G(u,v)(g_{*}\theta_{\varepsilon,z_{0}})(dv)=\int G(u,g(v))\theta_{ \varepsilon,z_{0}}(dv)=G(u,g(z_{0}))\). The second assertion is proved similarly.
Now we add insertions to the Liouville field by weighting. We place constraints on the insertions so that they sum to \(Q\); this allows us to work with \(G(\cdot,\cdot)\) rather than the more complicated \(G_{\mathrm{H}}(\cdot,\cdot)\).
**Lemma 5.13**.: _In the setting of Lemma 5.12, let \((\alpha_{j},z_{j})\in\mathbb{R}\times\overline{\mathbb{H}}\) and \(\beta=-\sum_{j}\alpha_{j}\). Let \(I=\{i:z_{i}\in\mathbb{H}\}\) and \(B=\{b:z_{b}\in\mathbb{R}\}\). Suppose \(\varepsilon>0\) satisfies \(|z_{j}-z_{k}|>2\varepsilon\) for \(j\neq k\) and \(\operatorname{Im}z_{i}>\varepsilon\) for \(i\in I\), and \(g(-1/\varepsilon),g(1/\varepsilon),g((z_{b}-\varepsilon,z_{b}+\varepsilon)) \subset\mathbb{R}\) for \(b\in B\). Let \(\mathbb{H}_{\varepsilon}^{g}=\mathbb{H}\backslash(g(\mathbb{H}\backslash B_{1/ \varepsilon}(0))\cup\bigcup g(B_{\varepsilon}(z_{j})))\) and \(\theta_{\varepsilon}:=\sum_{j}\alpha_{j}\theta_{\varepsilon,z_{j}}+\beta \theta_{\varepsilon,\infty}\). For any \((\alpha,w)\in\mathbb{R}\times\mathbb{H}_{\varepsilon}^{g}\) and function \(F(\phi)\) depending only on \(\phi|_{\mathbb{H}_{\varepsilon}^{g}}\),_
\[\operatorname{LF}_{\mathrm{H}}^{(\alpha,w),(Q-\alpha,\infty)}[\varepsilon^{-\beta^{2}+2\alpha\beta}\prod_{i\in I}\varepsilon^{\alpha_{i}^{2}/2}\prod_{b\in B}\varepsilon^{\alpha_{b}^{2}}e^{(g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon})}F(\phi)]\] \[=\prod_{i\in I}|g^{\prime}(z_{i})|^{2\Delta_{\alpha_{i}}}\prod_{b\in B}|g^{\prime}(z_{b})|^{\Delta_{2\alpha_{b}}}\operatorname{LF}_{\mathrm{H}}^{(\alpha,w),(\alpha_{j},g(z_{j}))_{j},(\beta+Q-\alpha,\infty)}[F(\phi)].\]
Proof.: The proof of this is identical to that of Lemma 4.4; the only difference is in the computations of \(Z_{\varepsilon}:=\mathbb{E}[e^{(h,g_{*}\theta_{\varepsilon})}]\) and \(e^{(g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon})}\). To shorten notation write \(\theta_{\varepsilon,j}=\theta_{\varepsilon,z_{j}}\).
To compute \(Z_{\varepsilon}\) we first need \(\operatorname{Var}((h,g_{*}\theta_{\varepsilon}))\). Since \(\int 1\,g_{*}\theta_{\varepsilon}(dv)=0\), the law of \((h,g_{*}\theta_{\varepsilon})\) when \(h\sim P_{\mathrm{H}}\) agrees with that when \(h\) is instead considered as a distribution modulo additive constant. Thus, \(\operatorname{Var}((h,g_{*}\theta_{\varepsilon}))=\iint G(u,v)g_{*}\theta_{\varepsilon}(du)g_{*}\theta_{\varepsilon}(dv)\) where \(G(u,v)=-\log|u-v|-\log|u-\overline{v}|\).
For \(i\in I\) we have
\[\iint G(u,v)g_{*}\theta_{\varepsilon,i}(du)g_{*}\theta_{\varepsilon,i}(dv)= \int G(g(z_{i}),g(v))\theta_{\varepsilon,i}(dv)=-\log|g^{\prime}(z_{i})|+\log (2\operatorname{Im}(g(z_{i})))+\log\varepsilon,\]
where in the first equality we use Lemma 5.12 and in the second equality we use the fact that \(w\mapsto-\log|g(w)-g(z_{i})|+\log|w-z_{i}|\) (with the special value \(z_{i}\mapsto-\log|g^{\prime}(z_{i})|\)) is harmonic. Similarly, \(\iint G(u,v)g_{*}\theta_{\varepsilon,b}(du)g_{*}\theta_{\varepsilon,b}(dv)=-2\log|g^{\prime}(z_{b})|+2\log\varepsilon\), and proceeding similarly we can compute all cross-terms. Summing gives the value of \(\operatorname{Var}((h,g_{*}\theta_{\varepsilon}))\), and finally
\[Z_{\varepsilon}=e^{\frac{1}{2}\operatorname{Var}((h,g_{*}\theta_{\varepsilon}))} =\varepsilon^{-\beta^{2}+\sum_{i}\alpha_{i}^{2}/2+\sum_{b}\alpha_{b}^{2}} \prod_{i\in I}|g^{\prime}(z_{i})|^{-\alpha_{i}^{2}/2}\prod_{b\in B}|g^{\prime}(z _{b})|^{-\alpha_{b}^{2}}C_{\gamma}^{(\alpha_{j},g(z_{j}))_{j},(\beta,\infty)}.\]
Next, by Lemmas 4.3 and 5.12 we have \((G(\cdot,w),g_{*}\theta_{\varepsilon})=-2\beta\log\varepsilon+\sum_{j}\alpha_{j} G(z_{j},w)\), so with \(\phi=h+\alpha G(\cdot,w)+c\), we have
\[e^{(g^{-1}\bullet_{\gamma}\phi,\theta_{\varepsilon})}=\varepsilon^{2\alpha \beta}\frac{C_{\gamma}^{(\alpha,w),(\alpha_{j},g(z_{j}))_{j},(\delta,\infty)}}{C_ {\gamma}^{(\alpha_{j},g(z_{j}))_{j},(\beta,\infty)}}e^{(h,g_{*}\theta_{\varepsilon })}.\]
The rest of the argument is identical to that of Lemma 4.4.
Proof of Proposition 5.11.: We only consider \(\tau\) a stopping time that occurs before an insertion is zipped into the curve. By Proposition 5.6, Theorem 1.8 holds for \(n=0,\delta=Q+\frac{\gamma}{4}\). Applying the argument of Proposition 4.5 with Lemma 5.13 as input, we obtain the \(n\geq 0\) case when the insertions \((\alpha_{j},z_{j})_{j}\) and \((\delta,\infty)\) satisfy \(-\frac{1}{\sqrt{\kappa}}+\sum_{j}\alpha_{j}+\delta=Q\). A final application of Proposition 4.5 removes this constraint.
## 6 The \(\kappa\in(4,8)\) LCFT zipper
In this section we prove Theorem 1.6. Using either of Propositions 6.1 and 6.2, we obtain the \(n=0\) case, and then the argument of Section 5.4 lets us extend to the full Theorem 1.6. We present both Propositions 6.1 and 6.2 to emphasize firstly that an argument of [10] with a Lebesgue constant added already essentially contains the desired \(n=0\) result, and secondly that our arguments in Sections 5.2 and 5.3 can be adapted to this setting.
[10, Theorem 6.9] says that when a particular GFF variant is cut by a forward SLE\({}_{\kappa}\) with \(\kappa\in(4,8)\), the connected components in the complement of the curve are a pair of independent forested lines. In fact, their argument proves the following stronger result.
**Proposition 6.1**.: _Theorem 1.6 holds for \(n=0\) and \(\delta=Q+\frac{\gamma}{4}\)._
Proof.: The setup of [10, Theorem 6.9] starts with a field \(h_{0}=h+\frac{\gamma}{2}\log|\cdot|\) where \(h\sim P_{\mathrm{H}}\), and uses Proposition 4.1 to get a field and curve \(h_{t},\eta_{t}\) where the law of \(\eta_{t}\) is reverse SLE\({}_{\kappa}\) run for time \(t\) and the conditional law of \(h_{t}\) given \(\eta_{t}\) is \(\widetilde{h}+\frac{\gamma}{2}\log|\cdot-\eta_{t}(0)|\) where \(\widetilde{h}\) is a Neumann GFF with a normalization that depends on \(\eta_{t}\).
[10, Theorem 6.9] states that the bounded connected components in \(\mathbbm{H}\setminus\eta_{t}\) are a countable collection of quantum surfaces arising from a Poisson point process of LQG disks, and by the definition of forested line in [10] these quantum surfaces comprise a pair of independent forested lines. In fact, their argument not only proves independence of these quantum surfaces, but also implicitly proves their independence from \(h_{0}\); this gives the independence of \(h_{0}\) and the forested lines.
Finally, if we add a constant \(\mathbf{c}\) chosen from Lebesgue measure on \(\mathbbm{R}\), both \(h_{0}\) and \(h_{t}(\cdot+\eta_{t}(0))\) have the marginal law of the Liouville field \(\mathrm{LF}_{\mathrm{H}}^{(-\frac{\gamma}{4},0),(Q+\frac{\gamma}{4},\infty)}\). Since forested lines are scale-invariant in law, this completes the proof of Theorem 1.6 when \(n=0,\delta=Q+\frac{\gamma}{4}\) and \(\tau=t\) is a deterministic time. The result where \(\tau\in\{2^{-m}k:k,m\in\mathbbm{Z}\}\) a.s. is then immediate, and a limit gives the result for general \(\tau\) (this is the same argument proving the strong Markov property of Brownian motion from the Markov property).
In fact, the argument of [10, Theorem 6.9] can be easily generalized to prove the special case of Theorem 1.6 where there is no force point at \(0\), the neutrality condition is satisfied, and the stopping time occurs a.s. before any force points hit the origin.
Alternatively, the arguments of Sections 5.2 and 5.3 can be adapted to yield a similar result.
**Proposition 6.2**.: _Theorem 1.6 holds for \(n=0\) and \(\delta=Q-\frac{2}{\gamma}+\frac{\gamma}{4}\)._
Proof sketch.: Sample a weight \((\frac{3\gamma^{2}}{2}-2)\) quantum wedge with a forested boundary and decorated by an independent SLE\({}_{\kappa}\) curve. [10, Theorem 6.22] states that for any fixed \(\ell>0\), if we cut along \(\ell\) units of quantum length for the SLE\({}_{\kappa}\) curve, then the resulting curve-decorated forested quantum surface has the same law as the initial curve-decorated forested quantum surface. Using this input instead of Lemma 5.5, the arguments of Sections 5.2 and 5.3 can be applied: replacing "sampling a point from quantum area" with "sampling a point from quantum length measure on the SLE\({}_{\kappa}\) curve" and replacing "sampling a point from Euclidean area" with "sampling a point from Minkowski content measure on the SLE\({}_{\kappa}\) curve" in all arguments gives Proposition 6.2.
Given either of the above two propositions, we can establish Theorem 1.6 in full as in the previous sections.
Proof of Theorem 1.6.: Starting with either Proposition 6.1 or 6.2, Proposition 4.5 gives Theorem 1.6 for \(n=0\). Then the arguments of Section 5.4 yield the full Theorem 1.6.
## 7 The boundary BPZ equation for Liouville conformal field theory
In this section we use the LCFT quantum zipper to prove the boundary BPZ equations stated in Theorem 1.10. Let \(m,n\geq 0\). Let \((\alpha_{j},z_{j})\in\mathbb{R}\times\mathbb{H}\) for \(j\leq m\) and assume \(z_{1},\ldots,z_{m}\) are distinct. Let \(-\infty=x_{0}<x_{1}<\cdots<x_{n}<x_{n+1}=+\infty\), let \(\beta_{1},\ldots,\beta_{n}\in\mathbb{R}\) and let \(\delta\in\mathbb{R}\). Let \(\beta_{*}\in\{-\frac{\gamma}{2},-\frac{2}{\gamma}\}\) and recall the LCFT correlation function \(F_{\beta_{*}}(w,(z_{j})_{j},(x_{k})_{k})\) defined in (1.5).
In Section 7.1 we prove the BPZ equation holds for \(\beta_{*}=-\frac{2}{\gamma}\), in Section 7.2 we handle the case \(\beta_{*}=-\frac{\gamma}{2}\) and \(\gamma\in(0,\sqrt{2}]\), and in Section 7.3 we settle the case \(\beta_{*}=-\frac{\gamma}{2}\) and \(\gamma\in(\sqrt{2},2)\). These sections use the coupling of LCFT with \(\mathrm{SLE}_{\kappa}\) for \(\kappa\in(0,4],[8,\infty),(4,8)\) respectively.
Proof of Theorem 1.10.: For \(\gamma\in(0,2]\) the \(\beta_{*}=-\frac{2}{\gamma}\) BPZ equation is shown in Lemma 7.2. For \(\gamma\in(0,\sqrt{2}]\) and \(\beta_{*}=-\frac{\gamma}{2}\) it is shown in Lemma 7.5, and for \(\gamma\in(\sqrt{2},2)\) and \(\beta_{*}=-\frac{\gamma}{2}\) see Lemma 7.8. Finally, when \(\gamma=2\) we have \(-\frac{\gamma}{2}=-\frac{2}{\gamma}\) so the \(\beta_{*}=-\frac{\gamma}{2}\) BPZ equation has already been settled by the first case.
### Case: \(\beta_{*}=-\frac{2}{\gamma}\)
Let \(\kappa\leq 4\) and \(\gamma=\sqrt{\kappa}\). Consider reverse \(\mathrm{SLE}_{\kappa}\) where the driving function \(W_{t}\) is given by \(W_{0}=w\) and \(dW_{t}=\sqrt{\kappa}dB_{t}\). Let \((\eta_{t})_{t\geq 0}\) be the family of curves and \(g_{t}\) the corresponding Loewner maps, and let \(T\leq\infty\) be the first time \(t\) that \(g_{t}(x_{k})\in\eta_{t}\) for some \(k\). For \(t<T\) define
\[M_{t}=\prod_{j}|g_{t}^{\prime}(z_{j})|^{2\Delta_{\alpha_{j}}}\prod_{k}|g_{t}^{ \prime}(x_{k})|^{\Delta_{\beta_{k}}}F_{-\frac{2}{\gamma}}(W_{t},(g_{t}(z_{j}) )_{j},(g_{t}(x_{k}))_{k}). \tag{7.1}\]
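For intuition, the ingredients entering (7.1) can be simulated directly. The following sketch (an illustrative aside, not part of the argument) uses a plain Euler-Maruyama discretization of the reverse Loewner flow \(dg_{t}(z)=-\frac{2}{g_{t}(z)-W_{t}}dt\) with driving function \(W_{t}=w+\sqrt{\kappa}B_{t}\); the step size, time horizon, and evaluation points are arbitrary choices, and the scheme is only meaningful up to the first time a marked boundary point is absorbed by the curve.

```python
import numpy as np

def reverse_sle_flow(kappa, w, points, T=1.0, n_steps=20000, seed=0):
    """Euler-Maruyama scheme for the reverse Loewner flow
    dg_t(z) = -2/(g_t(z) - W_t) dt with W_t = w + sqrt(kappa) * B_t.
    Tracks g_t(z) and |g_t'(z)| for each starting point z (z != w)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    W = w
    g = np.array(points, dtype=complex)       # g_0(z) = z
    gp = np.ones_like(g)                      # g_0'(z) = 1
    for _ in range(n_steps):
        diff = g - W
        g = g + (-2.0 / diff) * dt            # reverse Loewner ODE
        gp = gp + (2.0 * gp / diff**2) * dt   # d g_t'(z) = 2 g_t'(z)/(g_t(z)-W_t)^2 dt
        W = W + np.sqrt(kappa * dt) * rng.standard_normal()
    return W, g, np.abs(gp)

# illustrative usage: one bulk point z_1 and one boundary point x_1
W_T, g_T, abs_gp = reverse_sle_flow(kappa=3.0, w=0.0, points=[1.0 + 1.0j, 2.0])
print(W_T, g_T, abs_gp)
```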
**Lemma 7.1**.: \(M_{t}\) _is a local martingale._
Proof.: Let \(\overline{F}_{-\frac{2}{\gamma}}(w,(z_{j})_{j},(x_{k})_{k})<\infty\) be defined via (1.5) except with the integrand replaced by its absolute value, and let \(\overline{M}_{t}=\prod_{j}|g_{t}^{\prime}(z_{j})|^{2\Delta_{\alpha_{j}}}\prod _{k}|g_{t}^{\prime}(x_{k})|^{\Delta_{\beta_{k}}}\overline{F}_{-\frac{2}{ \gamma}}(w,(z_{j})_{j},(x_{k})_{k})\), so \(\overline{M}_{t}\geq|M_{t}|\). For \(N>0\) let \(T_{N}=T\wedge\inf\{t:\overline{M}_{t}\geq N\}\). Since \(t\mapsto\overline{M}_{t}\) is continuous on \([0,T)\) we have \(\lim_{N\to\infty}T_{N}=T\) almost surely. Thus we are done once we show \(M_{t}\) stopped at \(T_{N}\) is a martingale.
It suffices to show that for stopping times \(\tau_{1}\leq\tau_{2}\leq T_{N}\) we have \(\mathbb{E}[M_{\tau_{2}}\mid\eta_{\tau_{1}}]=M_{\tau_{1}}\). Instead, to simplify notation, we show that if \(\tau\leq T_{N}\) is a stopping time then \(\mathbb{E}[M_{\tau}]=M_{0}\); the proof of the desired claim is identical.
Sample \(\phi_{0}\sim\mathrm{LF}_{\mathrm{H}}^{(\frac{\beta_{*}}{2},w),(\alpha_{j},z_ {j})_{j},(\frac{\beta_{k}}{2},x_{k})_{k},(\frac{\delta}{2},\infty)}\), and define \((\phi_{t},\eta_{t})\) by conformally welding the boundary arcs to the left and right of \(w\) as in Theorem 1.1. Let \(A_{t}=\mathcal{A}_{\phi_{t}}(\mathbb{H})\), let \(L_{k,t}=\mathcal{L}_{\phi_{t}}(g_{t}(I_{k}))\) for \(k\neq k_{*}\), and let \(L_{t}=\mathcal{L}_{\phi_{t}}((g_{t}(x_{k_{*}}),W_{t}))\) and \(R_{t}=\mathcal{L}_{\phi_{t}}((W_{t},g_{t}(x_{k_{*}+1}))\). Since the conformal welding does not affect the quantum area nor quantum lengths of boundary segments not adjacent to \(w\), the processes \(A_{t}\) and \(L_{k,t}\) are constant. Moreover, the conformal welding identifies segments of equal quantum length, so \(L_{t}-R_{t}=L_{0}-R_{0}\) for all \(t\). Thus, \(G_{t}=\exp(-A_{t}-\sum_{k\neq k_{*}}\mu_{k}L_{k,t}-\mu_{L}(L_{t}-R_{t}))\) is constant as \(t\) varies. Consequently, writing \(\mathbb{E}\) to denote expectation with respect to \(\mathrm{rSLE}_{\kappa}^{\tau}\) and using Theorem 1.1, we have
\[\begin{split} M_{0}&=\mathrm{LF}_{\mathrm{H}}^{(- \frac{1}{\sqrt{\kappa}},w),(\alpha_{j},z_{j})_{j},(\frac{\beta_{k}}{2},x_{k})_{ k},(\frac{\delta}{2},\infty)}[G_{0}]\\ &=\mathbb{E}[\prod_{j}|g_{\tau}^{\prime}(z_{j})|^{2\Delta_{ \alpha_{j}}}\prod_{k}|g_{\tau}^{\prime}(x_{k})|^{\Delta_{\beta_{k}}}\mathrm{LF}_ {\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},W_{\tau}),(\alpha_{j},g_{\tau}(z_{j}) )_{j},(\frac{\beta_{k}}{2},g_{\tau}(x_{k}))_{k},(\frac{\delta}{2},\infty)}[G_{ \tau}]]=\mathbb{E}[M_{\tau}].\end{split} \tag{7.2}\]
Here, the integrals converge absolutely by the definition of \(T_{N}\). This completes the proof.
If we assume that \(F_{-\frac{2}{\gamma}}\) is smooth, then setting the drift term of \(dM_{t}\) to zero immediately gives Theorem 1.10 for \(\beta_{*}=-\frac{2}{\gamma}\). Since we do not have smoothness a priori, we only know \(F_{-\frac{2}{\gamma}}\) is a weak solution to the BPZ equation. A hypoellipticity argument originally due to [14, Lemma 5] shows that weak solutions are also strong solutions; we instead follow [13, Lemma 4.4, Proposition 2.6] which is closer to our setting.
**Lemma 7.2**.: _Theorem 1.10 holds for \(\beta_{*}=-\frac{2}{\gamma}\)._
Proof.: Consider the diffusion \(X_{t}=(W_{t},(g_{t}(z_{j}))_{j},(g_{t}(x_{k}))_{k},(|g_{t}^{\prime}(z_{j})|)_{j},(|g_{t}^{\prime}(x_{k})|)_{k})\) where \(W_{t}\) and \(g_{t}\) are defined above (7.1). For each \(t\) we have \(X_{t}\in\mathds{R}\times\mathds{H}^{m}\times\mathds{R}^{n}\times\mathds{R}^{ m}\times\mathds{R}^{n}\). We denote the coordinates of an element of \(\mathds{R}\times\mathds{H}^{m}\times\mathds{R}^{n}\times\mathds{R}^{m}\times \mathds{R}^{n}\) by \((\mathbf{w},(\mathbf{z}_{j})_{j},(\mathbf{x}_{k})_{k},(\mathbf{a}_{j})_{j}, (\mathbf{b}_{k})_{k})\); with this notation, since \(dg_{t}(z)=-\frac{2}{g_{t}(z)-W_{t}}dt\) and \(d|g_{t}^{\prime}(z)|=\operatorname{Re}\frac{2|g_{t}^{\prime}(z)|}{(g_{t}(z)-W _{t})^{2}}dt\), the infinitesimal generator \(A\) of \(X_{t}\) is
\[A=\frac{\kappa}{2}\partial_{\mathbf{w}}^{2}-\sum_{j}(\frac{2}{\mathbf{z}_{j}-\mathbf{w}}\partial_{\mathbf{z}_{j}}+\frac{2}{\overline{\mathbf{z}}_{j}-\mathbf{w}}\partial_{\overline{\mathbf{z}}_{j}})-\sum_{k}\frac{2}{\mathbf{x}_{k}-\mathbf{w}}\partial_{\mathbf{x}_{k}}+\sum_{j}\operatorname{Re}\frac{2\mathbf{a}_{j}}{(\mathbf{z}_{j}-\mathbf{w})^{2}}\partial_{\mathbf{a}_{j}}+\sum_{k}\frac{2\mathbf{b}_{k}}{(\mathbf{x}_{k}-\mathbf{w})^{2}}\partial_{\mathbf{b}_{k}}.\]
Let \(F(\mathbf{w},(\mathbf{z}_{j})_{j},(\mathbf{x}_{k})_{k},(\mathbf{a}_{j})_{j},(\mathbf{b}_{k})_{k})=\prod_{j}\mathbf{a}_{j}^{2\Delta_{\alpha_{j}}}\prod_{k}\mathbf{b}_{k}^{\Delta_{\beta_{k}}}F_{-\frac{2}{\gamma}}(\mathbf{w},(\mathbf{z}_{j})_{j},(\mathbf{x}_{k})_{k})\). Since \(M_{t}\) is a local martingale (Lemma 7.1), \(F\) is a weak solution to \(AF=0\). The product rule then yields
\[0=A\Big{(}\prod_{j}\mathbf{a}_{j}^{2\Delta_{\alpha_{j}}}\prod_{k}\mathbf{b}_{ k}^{\Delta_{\beta_{k}}}F_{-\frac{2}{\gamma}}(\mathbf{w},(\mathbf{z}_{j})_{j},( \mathbf{x}_{k})_{k})\Big{)}=\prod_{j}\mathbf{a}_{j}^{2\Delta_{\alpha_{j}}} \prod_{k}\mathbf{b}_{k}^{\Delta_{\beta_{k}}}\mathcal{D}F_{-\frac{2}{\gamma}}( \mathbf{w},(\mathbf{z}_{j})_{j},(\mathbf{x}_{k})_{k})\]
where \(\mathcal{D}\) is the differential operator on \(\mathds{R}\times\mathds{H}^{m}\times\mathds{R}^{n}\) given by
\[\mathcal{D}:=\frac{\kappa}{2}\partial_{\mathbf{w}}^{2}-\sum_{j}(\frac{2}{\mathbf{z}_{j}-\mathbf{w}}\partial_{\mathbf{z}_{j}}+\frac{2}{\overline{\mathbf{z}}_{j}-\mathbf{w}}\partial_{\overline{\mathbf{z}}_{j}})-\sum_{k}\frac{2}{\mathbf{x}_{k}-\mathbf{w}}\partial_{\mathbf{x}_{k}}+\sum_{j}\operatorname{Re}\frac{4\Delta_{\alpha_{j}}}{(\mathbf{z}_{j}-\mathbf{w})^{2}}+\sum_{k}\frac{2\Delta_{\beta_{k}}}{(\mathbf{x}_{k}-\mathbf{w})^{2}}.\]
We conclude that \(\mathcal{D}F_{-\frac{2}{\gamma}}=0\) in the distributional sense.
The operator \(\mathcal{D}\) is called _hypoelliptic_ if the weak solutions of \(\mathcal{D}F=0\) are smooth. Hormander's condition gives a criterion for hypoellipticity which is applicable to \(\mathcal{D}\); this is explained in [13, Proposition 2.6] for a similar differential operator but the argument carries over directly. Since \(\mathcal{D}\) is hypoelliptic, \(F_{-\frac{2}{\gamma}}\) is smooth and \(\mathcal{D}F_{-\frac{2}{\gamma}}=0\) is the desired BPZ equation.
### Case: \(\gamma\in(0,\sqrt{2}]\) and \(\beta_{*}=-\frac{\gamma}{2}\)
Let \(\kappa>4\). Recall that \(\operatorname{CRT}_{\kappa}\) is the joint law of the correlated Brownian motion \((X_{s},Y_{s})_{s\geq 0}\) defined by \(\operatorname{Var}(X_{s})=\operatorname{Var}(Y_{s})=\operatorname{a}^{2}s\) and \(\operatorname{Cov}(X_{s},Y_{s})=-\operatorname{a}^{2}\cos(\frac{4\pi}{\kappa})s\), where \(\operatorname{a}^{2}=2/\sin(\frac{4\pi}{\kappa})\). The key to the BPZ equation is the following martingale, which explains the coupling between \(\mu_{L}\) and \(\mu_{R}\).
**Lemma 7.3**.: _For \(\kappa>4\), let \(\theta=\frac{4\pi}{\kappa}\) and \(x\in\mathbb{C}\). Set \(\mu_{L}=\sqrt{1/\sin\theta}\cos x\) and \(\mu_{R}=\sqrt{1/\sin\theta}\cos(x+\theta)\). Then the process \(e^{-s-\mu_{L}X_{s}-\mu_{R}Y_{s}}\) is a martingale._
Proof.: We need the trigonometric identity
\[\cos^{2}x+\cos^{2}(x+\theta)-2\cos x\cos(x+\theta)\cos\theta=\sin^{2}\theta. \tag{7.3}\]
To see that this holds, we compute
\[2\cos x\cos(x+\theta)\cos\theta=(\cos(2x+\theta)+\cos(\theta))\cos\theta\] \[=\frac{1}{2}\cos(2(x+\theta))+\frac{1}{2}\cos 2x+1-\sin^{2}\theta=\cos^{2}(x+ \theta)+\cos^{2}x-\sin^{2}\theta,\]
where the first equality uses the product-to-sum formula, the second equality uses the product-to-sum formula and \(\sin^{2}+\cos^{2}=1\), and the last equality uses the double-angle formula. Now, using (7.3) gives
\[\mathbb{E}[e^{-s-\mu_{L}X_{s}-\mu_{R}Y_{s}}]=\exp(-s+\frac{1}{2}(\mu_{L}^{2}+\mu_{R}^{2}-2\mu_{L}\mu_{R}\cos\theta)\mathrm{a}^{2}s)=\exp(-s+\frac{1}{2\sin\theta}\cdot\sin^{2}\theta\cdot\mathrm{a}^{2}s)=1.\]
This and the strong Markov property of Brownian motion imply that \(e^{-s-\mu_{L}X_{s}-\mu_{R}Y_{s}}\) is a martingale.
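The algebra above is easy to sanity-check numerically. The following snippet (an illustrative aside, not part of the proof) verifies the identity (7.3) for random complex \(x\) and checks by Monte Carlo that \(\mathbb{E}[e^{-s-\mu_{L}X_{s}-\mu_{R}Y_{s}}]=1\) for a real choice of \(x\); the sample size and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Check the trigonometric identity (7.3) for random complex x and theta in (0, pi).
for _ in range(5):
    theta = rng.uniform(0.1, np.pi - 0.1)
    x = rng.normal() + 1j * rng.normal()
    lhs = np.cos(x)**2 + np.cos(x + theta)**2 - 2*np.cos(x)*np.cos(x + theta)*np.cos(theta)
    assert np.allclose(lhs, np.sin(theta)**2)

# Monte Carlo check of E[exp(-s - mu_L*X_s - mu_R*Y_s)] = 1 for CRT_kappa increments.
kappa, s, x = 10.0, 0.3, 0.4
theta = 4*np.pi/kappa
a2 = 2.0/np.sin(theta)                                     # a^2 = 2/sin(4*pi/kappa)
mu_L = np.sqrt(1.0/np.sin(theta))*np.cos(x)
mu_R = np.sqrt(1.0/np.sin(theta))*np.cos(x + theta)
cov = a2*s*np.array([[1.0, -np.cos(theta)], [-np.cos(theta), 1.0]])
XY = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)
print(np.mean(np.exp(-s - mu_L*XY[:, 0] - mu_R*XY[:, 1])))  # should be close to 1
```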
Consider reverse SLE\({}_{\kappa}\) where the driving function \(W_{t}\) is given by \(W_{0}=w\) and \(dW_{t}=\sqrt{\kappa}dB_{t}\). Let \((\eta_{t})_{t\geq 0}\) be the family of curves, and let \(T\leq\infty\) be the first time \(t\) that \(g_{t}(x_{k})\in\eta_{t}\) for some \(k\). For \(t<T\) define
\[M_{t}=\prod_{j}|g_{t}^{\prime}(z_{j})|^{2\Delta_{\alpha_{j}}}\prod_{k}|g_{t}^{ \prime}(x_{k})|^{\Delta_{\beta_{k}}}F_{-\frac{\gamma}{2}}(W_{t},(g_{t}(z_{j}) )_{j},(g_{t}(x_{k}))_{k}).\]
**Lemma 7.4**.: _In the case \(\kappa\geq 8\), \(M_{t}\) is a local martingale._
Proof.: For \(N>0\) define the stopping time \(T_{N}\) as in Lemma 7.1, then it suffices to show that \(M_{t}\) stopped at \(T_{N}\) is a martingale. As before, we need to show that for stopping times \(\tau_{1}\leq\tau_{2}\leq T_{N}\) we have \(\mathbb{E}[M_{\tau_{2}}\mid\eta_{\tau_{1}}]=M_{\tau_{1}}\). We instead show that if \(\tau\leq T_{N}\) is a stopping time then \(\mathbb{E}[M_{\tau}]=M_{0}\); the former is proved identically. Similarly, to lighten notation we assume that \(\mu_{k}=0\) for all \(k\neq k_{*}\).
Sample \((\phi_{0},(X_{\cdot},Y_{\cdot})_{[0,\infty)})\sim\mathrm{LF}_{\mathrm{H}}^{(\frac{\beta_{*}}{2},w),(\alpha_{j},z_{j})_{j},(\frac{\beta_{k}}{2},x_{k})_{k},(\frac{\delta}{2},\infty)}\times\mathrm{CRT}_{\kappa}\), and define \((\phi_{t},\eta_{t})\) by conformally welding the boundary arcs to the left and right of \(w\) as in Theorem 1.8. For each \(t\leq\tau\) let \(s(t):=\mathcal{A}_{\phi_{t}}(\eta_{t}((0,t)))\), i.e. \((\phi_{t},\eta_{t})\) arises from mating the trees \((X_{\cdot},Y_{\cdot})_{[0,\infty)}\) for time \(s\). Let \(\sigma=s(\tau)\). The time \(\sigma\) is a stopping time for the filtration \(\widetilde{\mathcal{F}}_{s}=\sigma(\phi_{0},(X_{\cdot},Y_{\cdot})_{[0,s]})\), since for \(s=s(t)\) the pair \((\phi_{t},\eta_{t})\) is obtained from \((\phi_{0},(X_{\cdot},Y_{\cdot})_{[0,s]})\) by mating the trees for time \(s\).
Let \(A_{t}=\mathcal{A}_{\phi_{t}}(\mathbb{H})\), \(L_{t}=\mathcal{L}_{\phi_{t}}((g_{t}(x_{k_{*}}),W_{t}))\) and \(R_{t}=\mathcal{L}_{\phi_{t}}((W_{t},g_{t}(x_{k_{*}+1})))\). Let \(G_{t}=e^{-A_{t}-\mu_{L}L_{t}-\mu_{R}R_{t}}\). We claim that
\[\mathbb{E}[G_{\tau}\mid\phi_{0}]=G_{0}.\]
Indeed, the mating-of-trees procedure gives \(A_{\tau}-A_{0}=\sigma,L_{\tau}-L_{0}=X_{\sigma}\) and \(R_{\tau}-R_{0}=Y_{\sigma}\) (see Figure 10), and \(\sigma\) is a stopping time for \(\widetilde{\mathcal{F}}_{s}\) given \(\phi_{0}\). Then \(\mathbb{E}[G_{\tau}\mid\phi_{0}]=G_{0}\) follows from Lemma 7.3.
Consequently, by Theorem 1.8, Equation (7.2) holds in this setting, so \(\mathbb{E}[M_{\tau}]=M_{0}\) as desired.
**Lemma 7.5**.: _Theorem 1.10 holds for \(\beta_{*}=-\frac{\gamma}{2}\) and \(\gamma\in(0,\sqrt{2}]\)._
Proof.: Given Lemma 7.4 the argument of Lemma 7.2 implies the result.
Figure 10: Let \(\kappa\geq 8\). If \(s=s(t)\) is such that \((\phi_{t},\eta_{t})\) arises from mating the trees for time \(s\), then the changes in boundary arc lengths are \((L_{t}-L_{0},R_{t}-R_{0})=(X_{s},Y_{s})\).
### Case: \(\gamma\in(\sqrt{2},2)\) and \(\beta_{*}=-\frac{\gamma}{2}\)
In this section we prove Theorem 1.10 for \(\gamma\in(\sqrt{2},2)\) and \(\beta_{*}=-\frac{\gamma}{2}\).
We set \(\kappa=\frac{16}{\gamma^{2}}\in(4,8)\). In this regime, SLE\({}_{\kappa}\) is self-hitting but not space-filling. There is a variant called _space-filling_ SLE\({}_{\kappa}\)[13] with the property that if \(\eta^{\rm SF}\) is space-filling SLE\({}_{\kappa}\) in \((\mathbb{H},0,\infty)\), and \(T\) is the set of times \(t\) that \(\eta^{\rm SF}(t)\) is on the boundary of the unbounded connected component of \(\mathbb{H}\setminus\eta^{\rm SF}([0,t])\), then the ordered collection of points \(\{\eta^{\rm SF}(t):t\in T\}\) has the law of ordinary SLE\({}_{\kappa}\) in \((\mathbb{H},0,\infty)\). This gives a coupling of space-filling SLE\({}_{\kappa}\) and ordinary SLE\({}_{\kappa}\).
The weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge is thin since \(2-\frac{\gamma^{2}}{2}\in(0,\frac{\gamma^{2}}{2})\) for \(\gamma\in(\sqrt{2},2)\). We define space-filling SLE\({}_{\kappa}\) on the thin quantum wedge to be the concatenation of independent space-filling SLE\({}_{\kappa}\) curves between the marked points in each connected component, and similarly define ordinary SLE\({}_{\kappa}\) on the thin quantum wedge. We state the mating of trees theorem for this range of \(\gamma\), see Figure 11.
**Proposition 7.6** ([14, Theorem 1.9]).: _Let \(\kappa\in(4,8)\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample a weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge decorated by an independent space-filling SLE\({}_{\kappa}\)\(\eta^{\rm SF}\). Parametrize \(\eta^{\rm SF}\) by quantum area. On the counterclockwise (resp. clockwise) boundary arc of \(\eta^{\rm SF}([0,s])\) from \(0\) to \(\eta(s)\), let \(X_{s}^{-}\) and \(X_{s}^{+}\) (resp. \(Y_{s}^{-}\) and \(Y_{s}^{+}\)) be the quantum lengths of the boundary segments lying in the quantum wedge's boundary and bulk respectively. Then the law of \((X_{s},Y_{s}):=(X_{s}^{+}-X_{s}^{-},Y_{s}^{+}-Y_{s}^{-})\) is \({\rm CRT}_{\kappa}\). Moreover, the curve-decorated quantum wedge is measurable with respect to \((X_{s},Y_{s})_{s\geq 0}\)._
Figure 11: Let \(\kappa\in(4,8)\). **Left:** A pair of correlated continuum random trees described by \((X_{s},Y_{s})_{s\geq 0}\). **Right:** Mating the trees gives a weight \((2-\frac{\gamma^{2}}{2})\) quantum wedge decorated by an independent _space-filling_ SLE\({}_{\kappa}\) curve. **Middle:** The trees have been mated until the quantum area is \(s\). We write \(X_{s}^{-},X_{s}^{+},Y_{s}^{-},Y_{s}^{+}\) for the quantum lengths of the four labelled boundary arcs.
Figure 12: Let \(\kappa\in(4,8)\). **Left:** A pair of independent forested lines. **Middle:** The forested lines have been mated for some amount of time. **Right:** The pair of forested lines can be coupled with a sample from \({\rm CRT}_{\kappa}\) such that the middle picture is obtained by mating continuum random trees, then replacing the space-filling SLE\({}_{\kappa}\) with its coupled non-space-filling SLE\({}_{\kappa}\).
As before, consider reverse \(\mathrm{SLE}_{\kappa}\) where the driving function \(W_{t}\) is given by \(W_{0}=w\) and \(dW_{t}=\sqrt{\kappa}dB_{t}\). Let \((\eta_{t})_{t\geq 0}\) be the family of curves, and let \(T\leq\infty\) be the first time \(t\) that \(g_{t}(x_{k})\in\eta_{t}\) for some \(k\). For \(t<T\) define
\[M_{t}=\prod_{j}|g_{t}^{\prime}(z_{j})|^{2\Delta_{\alpha_{j}}}\prod_{k}|g_{t}^{\prime}(x_{k})|^{\Delta_{\beta_{k}}}F_{-\frac{\gamma}{2}}(W_{t},(g_{t}(z_{j}))_{j},(g_{t}(x_{k}))_{k}).\]
**Lemma 7.7**.: _In the case \(\kappa\in(4,8)\), \(M_{t}\) is a local martingale._
Proof.: For \(N>0\) we define the stopping time \(T_{N}\) as in Lemma 7.1. As in Lemma 7.4 we assume that \(\mu_{k}=0\) for all \(k\neq k_{*}\), and we prove that for a stopping time \(\tau\leq T_{N}\) we have \(\mathbb{E}[M_{\tau}]=M_{0}\); the full result follows from the same argument.
Sample \((\phi_{0},(X_{\cdot},Y_{\cdot})_{[0,\infty)})\sim\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},w),(\alpha_{j},z_{j})_{j},(\frac{\beta_{k}}{2},x_{k})_{k},(\frac{\delta}{2},\infty)}\times\mathrm{CRT}_{\kappa}\), consider the space-filling-\(\mathrm{SLE}_{\kappa}\)-decorated thin quantum wedge obtained by mating \((X_{\cdot},Y_{\cdot})_{[0,\infty)}\), and let \((F_{L},F_{R})\) be the pair of forested lines obtained by cutting this quantum wedge by the coupled ordinary \(\mathrm{SLE}_{\kappa}\) curve. By definition the law of \((\phi_{0},(F_{L},F_{R}))\) is \(\mathrm{LF}_{\mathrm{H}}^{(-\frac{1}{\sqrt{\kappa}},w),(\alpha_{j},z_{j})_{j},(\frac{\beta_{k}}{2},x_{k})_{k},(\frac{\delta}{2},\infty)}\times\mathrm{FL}_{\kappa}\), see Figure 12.
Let \((\phi_{t},\eta_{t})\) be the field and curve obtained from \((\phi_{0},(F_{L},F_{R}))\) by mating the forested lines as in Theorem 1.6. By our construction, \((\phi_{t},\eta_{t})\) arises from mating the trees \((X_{\cdot},Y_{\cdot})_{[0,\infty)}\) for some time \(s=s(t)\) to get \((\phi_{t},\eta_{t}^{\mathrm{SF}})\) where \(\eta_{t}^{\mathrm{SF}}\) is a space-filling curve, then replacing \(\eta_{t}^{\mathrm{SF}}\) with its coupled non-space-filling curve \(\eta_{t}\). Consequently, \(\sigma=s(\tau)\) is a stopping time for \(\widetilde{\mathcal{F}}_{s}=\sigma(\phi_{0},(X_{\cdot},Y_{\cdot})_{[0,s]})\).
Let \(A_{t}=\mathcal{A}_{\phi_{t}}(\mathbb{H})\), \(L_{t}=\mathcal{L}_{\phi_{t}}((g_{t}(x_{k_{*}}),W_{t}))\) and \(R_{t}=\mathcal{L}_{\phi_{t}}((W_{t},g_{t}(x_{k_{*}+1})))\). Let \(G_{t}=e^{-A_{t}-\mu_{L}L_{t}-\mu_{R}R_{t}}\). As in the proof of Lemma 7.4, the martingale of Lemma 7.3 gives \(\mathbb{E}[G_{\tau}\mid\phi_{0}]=G_{0}\). Consequently, by Theorem 1.8, Equation (7.2) holds in this setting, so \(\mathbb{E}[M_{\tau}]=M_{0}\). We are done.
**Lemma 7.8**.: _Theorem 1.10 holds for \(\beta_{*}=-\frac{\gamma}{2}\) and \(\gamma\in(\sqrt{2},2)\)._
Proof.: Given Lemma 7.7 the argument of Lemma 7.2 implies the result.
|
2308.00011 | Satellite-based Quantum Network: Security and Challenges over
Atmospheric Channel | The ultra-secure quantum network leverages quantum cryptography to deliver
unsurpassed data transfer security. In principle, the well-known quantum key
distribution (QKD) achieves unconditional security, which raises concerns about
the trustworthiness of 6G wireless systems in order to mitigate the gap between
practice and theory. The long-distance satellite-to-ground evolving quantum
network distributes keys that are ubiquitous to the node on the ground through
low-orbit satellites. As the secret key sequence is encoded into quantum
states, it is sent through the atmosphere via a quantum channel. It still
requires more effort in the physical layer design of deployment ranges,
transmission, and security to achieve high-quality quantum communication. In
this paper, we first review the quantum states and channel properties for
satellite-based quantum networks and long-range quantum state transfer (QST).
Moreover, we highlight some challenges, such as transmissivity statistics,
estimation of channel parameters and attack resilience, quantum state transfer
for satellite-based quantum networks, and wavepacket shaping techniques over
atmospheric channels. We underline two research directions that consider the
QST and wavepacket shaping techniques for atmospheric transmission in order to
encourage further research toward the next generation of satellite-based
quantum networks. | Hong-fu Chou, Vu Nguyen Ha, Hayder Al-Hraishawi, Luis Manuel Garces-Socarras, Jorge Luis Gonzalez-Rios, Juan Carlos Merlano-Duncan, Symeon Chatzinotas | 2023-07-29T17:54:15Z | http://arxiv.org/abs/2308.00011v2 | # Satellite-based Quantum Network: Security and Challenges over Atmospheric Channel
###### Abstract
The ultra-secure quantum network leverages quantum cryptography to deliver unsurpassed data transfer security. In principle, the well-known quantum key distribution (QKD) achieves unconditional security, which raises concerns about the trustworthiness of 6G wireless systems in order to mitigate the gap between practice and theory. The long-distance satellite-to-ground evolving quantum network distributes keys that are ubiquitous to the node on the ground through low-orbit satellites. As the secret key sequence is encoded into quantum states, it is sent through the atmosphere via a quantum channel. It still requires more effort in the physical layer design of deployment ranges, transmission, and security to achieve high-quality quantum communication. In this paper, we first review the quantum states and channel properties for satellite-based quantum networks and long-range quantum state transfer (QST). Moreover, we highlight some challenges, such as transmissivity statistics, estimation of channel parameters and attack resilience, quantum state transfer for satellite-based quantum networks, and wavepacket shaping techniques over atmospheric channels. We underline two research directions that consider the QST and wavepacket shaping techniques for atmospheric transmission in order to encourage further research toward the next generation of satellite-based quantum networks.
Quantum key distribution, satellite communication, quantum state transfer, wavepacket shaping modulation
## I Introduction
To meet the growing global demand for broadband services, satellites offer the unique ability to cover enormous geographic areas while requiring minimum base infrastructure, making them an appealing alternative [1]. The ability to connect quantum devices across long distances, considerably boosting their inherent communication, network efficiency, and security, is made possible by an examination of the state-of-the-art of the key components of quantum networks [2]. By delivering quantum states containing critical information through free space optical (FSO) channels, the authors in [3] reveal the possibility of establishing global quantum networks. To achieve unconditional security, quantum key distribution (QKD) [4, 5] secures a protocol that can guarantee information security in theory between two remote nodes while exchanging keys. QKD is the most thoroughly explored quantum communication technology [6], and it has been used in both fiber and FSO channels. Long-distance FSO quantum communication [7, 8, 9] has been deployed effectively across extraordinarily long distances, and several experiments have also been realized in [10, 11, 12]. However, when channel attenuation and noise levels increase in the communication via fiber or FSO, the communication range for successful key distribution is restricted to a few hundred kilometers. That raises a significant challenge in satellite-based QKD systems. Recently, quantum teleportation [13, 14, 15, 16] has been considered as a possible approach for increasing the deployment range of QKD using satellites. By moving to free space and connecting two ground stations, the effective communication range can be increased to a hundred km; while connecting a ground station with a satellite, quantum teleportation was conducted across a thousand km. Although channel noise has no direct effect on the transported qubit, it does result in an entanglement network with low fidelity, diminishing the success chance of secret key transmission by quantum teleportation [17]. Furthermore, while QKD theoretically provides ultra-security to satellite-based QKD networks, satellite-to-ground, and inter-satellite quantum communication systems, the exploitation of defects in quantum cryptography protocols to undermine information security or obtain unauthorized access to sensitive data is referred to as quantum hacking [18, 19, 20]. Physical-layer security proofs typically assume that the detection efficiencies for the two binary bit values are identical, and departures from this assumption raise doubts about the validity of the security proof. The time-shift attack makes use of a QKD system's time-domain detection efficiency discrepancy between its two detectors. The attack can only be ruled out securely by eliminating the time-domain inconsistency of detection efficiency. Therefore, the characteristics of the atmospheric channel raise similar security concerns and are a critical aspect that must be considered and investigated.
Starting with a background study of atmospheric channel characteristics carrying quantum states and progressing to research challenging issues, the goal of this paper is to bring up some more in-depth investigation by reviewing and proposing open challenges of physical layer transmission and security via satellite-based QKD networks, which will necessitate developing more secure and efficient scenarios to achieve high-quality quantum communications. We hope to explore the following significant implementation issues as a result of this:
* It is critical to investigate system design for the satellite-based quantum networks over atmospheric channels in order to assure optimal system performance, as well as approaches for diversified scheduling for space missions or satellite-to-ground communication.
* Because interference issues caused by coexistence with other satellite systems, as well as atmospheric turbulence and diffraction, affect the fidelity of satellite-based quantum networks, developing novel strategic deployment and interference mitigation techniques is critical.
* By using quantum mechanical ideas in QKD networks, the quantum states based on a variety of QKD protocols, as well as the heterogeneity of transmissivity from satellite-to-ground/air/space-destination scenarios to the aforementioned deployments, can be realized. As a result, channel parameter estimation is indispensable in such dynamic satellite communication.
* Experiments show that the assaults against a commercial QKD system are technically feasible. Because of these minor flaws, satellite-based QKD networks may be susceptible to Eve with current technology. The effectiveness of the attacks emphasizes the need to develop innovative techniques for security proofs with verifiable premises and battle-test actual resilience to any attacks.
Additional prospects for satellite-based quantum networks and potential future research topics are also included in this paper.
## II Quantum networks via Atmospheric Channel
We start from a basic knowledge of quantum states in QKD networks and atmospheric channels to the potential future research avenues.
### _Entangled Gaussian and Non-Gaussian States_
Continuous-variable Gaussian states, such as thermal, coherent, and squeezed states of light, are one example of states that cannot be purified by ordinary processes [21]. When ordinary processes are referred to, the actions that keep the state's Gaussian characteristics intact are implicit in matching mirrors, beam splitters, and squeezers. Therefore, any entangled Gaussian state has bound entanglement under the Gaussian assumption. The symplectic matrix allows for a compact expression of the canonical commutation relations (CCR) [21] for a quantum system with n modes. Gaussian two-mode squeezed vacuum (TMSV) states are characterized by a Gaussian \(\chi_{\rho}\) function.
\[\chi_{\rho}(\zeta)=e^{i\zeta^{T}J_{n}d-\frac{1}{4}\zeta^{T}J_{n}^{T}\gamma J_{n}\zeta} \tag{1}\]
where d is a \(2n\) real vector, called displacement vector (DV), and \(\gamma\) is a \(2n\times 2n\) symmetric real matrix, denoted as the covariance matrix (CM). Researchers [22] apply the scenario of prepare and measure with compressed or coherent states to experimentally build quantum cryptography with gaussian states and gaussian operations to discuss the efficiency of security protocol.
Implementing non-Gaussian entangled states effectively across atmospheric fading channels is still difficult while maintaining a reliable secret key rate. Recent research has looked at the rate of entanglement formation caused by non-Gaussian entangled states moving through atmospheric channels [23, 24]. A non-Gaussian entangled state [24] can be addressed by giving a Gaussian state and zero first moment CM by
\[M_{AB}^{in}=\begin{bmatrix}vI&\sqrt{v^{2}-1}Z\\ \sqrt{v^{2}-1}Z&vI\end{bmatrix} \tag{2}\]
where \(I\) is a \(2\times 2\) identity matrix, Z is a \((1,-1)\) diagonal matrix, and \(v=\cosh(2r)\) is the quadrature variance of each mode. The pure photon-subtracted squeezed (PSS) state is introduced to the unmeasured output of the beam splitter in order to elicit this operation of non-Gaussian states by detecting heralded modes. This transfer is then analyzed by using the Kraus representation to chart the PSS state's progress through the fading channel. The authors of [24] find that, contrary to intuition, non-Gaussian states can occasionally yield greater quantum key rates via fixed-attenuation channels, particularly for very high-loss channels. Furthermore, the measurement result [25] is determined using a coherent detector at the receiver using an upgraded non-Gaussian state discrimination detector. The received nonorthogonal coherent states are measured with the state-discrimination detector, which is regarded as the optimal quantum measurement [25]. High uplink losses are frequently an issue for satellite-based communication systems. Without helpful intervention, satellite-based entanglement distribution and quantum key distribution would remain an unsuccessful undertaking. Therefore, we discuss the atmospheric channel loss in the next section.
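As a concrete illustration of the covariance matrix in (2), the following sketch (our own illustrative example, assuming the common convention in which the vacuum covariance matrix equals the identity) builds the TMSV covariance matrix for a given squeezing parameter r and evaluates the standard partial-transposition criterion for two-mode Gaussian states; a smallest symplectic eigenvalue below one certifies entanglement, which connects to the negativity-of-partial-transposition analyses discussed later in this paper.

```python
import numpy as np

def tmsv_cm(r):
    """Covariance matrix of a two-mode squeezed vacuum state (Eq. (2)),
    with v = cosh(2r) and the vacuum normalized to the identity matrix."""
    v = np.cosh(2*r)
    c = np.sqrt(v**2 - 1)
    I2, Z = np.eye(2), np.diag([1.0, -1.0])
    return np.block([[v*I2, c*Z], [c*Z, v*I2]])

def min_symplectic_eig_pt(cm):
    """Smallest symplectic eigenvalue of the partial transpose of a
    two-mode Gaussian covariance matrix; a value below 1 certifies entanglement."""
    A, B, C = cm[:2, :2], cm[2:, 2:], cm[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2*np.linalg.det(C)
    return np.sqrt((delta - np.sqrt(delta**2 - 4*np.linalg.det(cm))) / 2)

r = 0.8
nu = min_symplectic_eig_pt(tmsv_cm(r))
print(nu, np.exp(-2*r))                        # the two agree for a pure TMSV
print("log-negativity:", max(0.0, -np.log(nu)))
```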
### _Channel Loss of Satellite-based quantum networks_
For the satellite-based QKD networks, the uplink and downlink channels are quite different. On an uplink channel, the atmospheric turbulence layer only occurs close to the transmitter, while on a downlink, it only exists close to the terrestrial receiver. The uplink optical beam initially travels through the turbulent environment for typical ground station aperture sizes, and its beam-width is substantially smaller than the large-scale turbulent vortex. The downlink optical beam only passes through the turbulent environment during the last portion of its route, in contrast to the uplink channel. The satellite's beam-width upon entrance into the atmosphere is most likely to be greater than the size of the turbulent vortex given the usual aperture size of the optical equipment incorporated in the satellite. Therefore, turbulence-induced effects [6] cause the transmissivity, \(\eta_{t}\), of atmospheric channels to change. The probability distribution of the transmission coefficients, indicated by \(p(\eta)\), may be used to describe these fading channels where \(\eta=\sqrt{\eta_{t}}\). The mean fading loss in dB for a channel that is fading and associated with the probability distribution \(p(\eta)\) is given by \(-10log_{10}\int_{0}^{\eta_{0}}\eta^{2}p(\eta)d\eta\), where \(\eta_{0}\) is the greatest value of \(\eta\). Another dominant of the loss is beam-wandering which causes the beam-center to wander erratically from the receiver's aperture plane, regardless of atmospheric turbulence-related changes in beam-width [6].
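As a numerical illustration of the mean fading loss expression above, the following sketch assumes a simple, hypothetical log-normal model for the transmission coefficient, clipped at a maximum value \(\eta_{0}\); the distribution and its parameters are illustrative stand-ins for a measured \(p(\eta)\) and are not taken from the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed fading model: log-normally distributed transmission coefficient eta,
# clipped to a maximum value eta_0 set by the receiver aperture.
eta_0 = 0.9
eta = np.minimum(eta_0, rng.lognormal(mean=np.log(0.5), sigma=0.35, size=1_000_000))

mean_fading_loss_db = -10*np.log10(np.mean(eta**2))    # -10 log10 E[eta^2]
fixed_channel_loss_db = -10*np.log10(np.mean(eta)**2)  # fixed channel with the same mean, for comparison
print(mean_fading_loss_db, fixed_channel_loss_db)
```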
When beam-width variations are taken into account, the mean fading loss in dB of a fading channel is now given by \(-10log_{10}\int\eta^{2}(l,\theta)p(l,\theta)dld\theta\) with the knowledge in [6]. It should be noted that, with the addition of beam-width
variations, the channel's maximum transmission coefficient \(\eta_{0}\) is no longer constant but instead varies at random. Therefore, the above introduction can lead us to the research in [26] to investigate Gaussian entangled states in fading channels and uncorrelated fading channels. The transmitted continuous-variable entanglement has a non-trivial effect on coherent displacements of the input field's quantum state in such turbulent channels. Surprisingly, this enables one to maximize the entanglement certification by altering local coherent amplitudes with a limited but optimal amount of squeezing. These variables can be changed to enhance the Gaussian entanglement transfer via a turbulent environment by adaptive method attaining the correlated form in the entanglement preserving link. Following this thread, the long-range quantum state transfer is presented in the following section.
### _Long-distance Quantum State Transfer_
Many quantum information processing (QIP) [27] activities rely on the transport of quantum information between various sites. The quantum state transfer (QST) [28] of interacting qubits might be significantly more advantageous, both for the QIP protocol and for transmitting the entire physical configuration of quantum communication. Long-distance QST is an important component of quantum protocols that can be realized by quantum teleportation. The challenges of long-distance QST [29, 30, 31] are to maintain the fidelity and to mitigate the fidelity loss caused by inconsistency in the channel's mirror symmetry. Moreover, in prior long-distance investigations, Alice and Charlie have always performed local Bell-state measurements before the entanglement distribution procedure. It is difficult to conduct the Bell-state measurement after photons pass via air channels due to atmospheric turbulence. The authors in [32] demonstrate QST over a distance of more than 1000 km with the satellite Micius, aided by prior quantum entanglement shared between two distant ground stations. A highly stable interferometer projects the photon into a composite path-polarization dimension and makes use of a satellite-borne entangled photon source.
## III Open Challenges
Based on the extensive approaches for making quantum states adapt to atmospheric channel variations, we identify the following challenges. In order to increase fidelity and security resilience, more research effort should be devoted to achieving greater robustness against actual attacks and to developing a better understanding of quantum state correlation in the atmosphere. We depict the quantum networks in Fig. 1 to illustrate the following challenges.
### _Transmissivity Statistics on Atmospheric Channels_
Extensive theoretical examinations [33, 34, 35] of these effects are presented, and these models are shown to be broadly coherent with terrestrial testing done under a broad range of turbulence conditions. The quantum states of entanglement conditions were explored in [33, 36, 37] based on the negativity of the partial transposition. In order to implement and optimize quantum communication in atmospheric connections, the partial transposition can be viewed as the output state of turbulent atmospheric fluctuating-loss channels. Despite the dominating impact of the variable nature of transmissivity in these channels, which introduces extra noise and limits the possible secret key rate, other more subtle atmospheric effects such as weather conditions [34, 38] and day-and-night times [15], which involve beam attenuation, absorption, and scattering, can play a significant role [23, 39, 40, 24]. We illustrate in Fig. 1 the case of satellite-based QKD node 1 communicating with the ground user terminal and long-range communication between satellite-based QKD nodes 1, 2, and 3. The collaboration of satellite-based QKD nodes 2 and 3 communicates to the ground user terminal in a distance QLAN. Due to the above discussion, QKD networks are very sensitive to the channel's transmissivity, even across day-and-night times. A thorough understanding of atmospheric channels [15, 41, 42] has broad implications for research, and utilizing satellite-based nodes and quantum ground transceivers with the bulk of propagation occurring outside the Earth's atmosphere can enable global-scale quantum communication. Furthermore, transmissivity and the secret key rate can be aided by self-compensating techniques [43, 9, 44], such as passive polarization analysis units, adaptive optics pre-compensation, GPS timing receivers for synchronization, and custom-written software. This research trend is still at an early stage, and thus, more sophisticated approaches are necessary to optimize satellite-based QKD networks. Additionally, it is noteworthy that the authors in [45] present an interesting framework for monitoring the continually changing satellite-to-ground atmospheric channel. This framework aspires to enhance scheduling techniques by optimizing the available time for SatQKD and reducing the average link loss during a single orbit. The suggested scheduling technique serves the dual objective of reducing geometric loss in future satellite orbit selection and enhancing the effective use of designated keys.
### _Estimation of Channel Parameters and Attack Resilience_
The general idea of the design requirement for satellite-based QKD networks is that, because of the changing nature of transmissivity in these channels, more noise is generated, limiting the feasible secret key rate. A thorough framework for analyzing the needs for spatial, spectral [46], and temporal filtering is provided by the studies [47, 48] for wavelength-dependent free space QKD networks. However, an adaptable paradigm for optimal wavelength selection requires more effort to provide high performance. Beyond the practical attacks on QKD networks [18], channel parameter estimation is not only beneficial to secret key rate performance [49, 50, 51, 52] but also to security [53, 54], for instance in demonstrating resilience against the entanglement-distillation attack. In the event of this assault, Eve may remain undetected by Alice and Bob even with the basic light monitoring present in quantum repeaters, satellite-to-ground channels, and ground-to-ground networks, potentially generating a loophole based on this assumption [55]. The Monte Carlo approach [53] was used to estimate the channel transmittance, which changes randomly over time. Parameter estimation based on insufficient statistical features is inappropriate for channels with such parametric variation, especially when there is little prior knowledge of the channel, as is the case for the atmosphere and the ocean. Authorized communication [25] is required to continuously exchange partial input and output signals in order to update the channel estimate when the channel changes, even if it changes slowly. As illustrated in Fig. 1, both Eve 1 and Eve 2 present the threat of exploiting the potential loophole between Alice and Bob. Moreover, malicious attackers at the ground user terminal act as Eve 3, performing unauthorized access to QKD node 3. Above all, further research efforts must be made in order to investigate both the secret key rate performance and the resilience against attack.
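To make the channel-parameter-estimation step concrete, the following sketch assumes a toy Gaussian probe model \(x_{B}=\sqrt{T}\,x_{A}+z\) (our own simplification, not the estimator of [53] or [25]) and recovers the transmittance and the residual noise variance from simulated input-output samples; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed toy model: Bob's quadrature x_B = sqrt(T) * x_A + z, with Gaussian
# noise z of variance sigma2 (shot noise plus excess noise), over n probe pulses.
n, T_true, sigma2_true = 100_000, 0.25, 1.1
x_A = rng.normal(0.0, 2.0, n)                   # Alice's modulated quadratures
x_B = np.sqrt(T_true)*x_A + rng.normal(0.0, np.sqrt(sigma2_true), n)

t_hat = np.sum(x_A*x_B)/np.sum(x_A**2)          # least-squares estimate of sqrt(T)
T_hat = t_hat**2
sigma2_hat = np.mean((x_B - t_hat*x_A)**2)      # residual noise variance
print(T_hat, sigma2_hat)                        # approximately 0.25 and 1.1
```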
### _The Quantum State Transfer for Satellite-based Quantum Networks_
Following the discussion in Section II-C, QST allows the transfer of arbitrary quantum states from one node to another by SWAP [56] or quantum teleportation. As illustrated in Fig. 1, quantum state transfer takes place over the atmosphere for satellite-based QKD networks. Moreover, it is well known from spatial steering that quantum states may be remotely created via an entangled pair and benchmarked by the average fidelity [28]. It is worth noting that the study of a quantum teleportation protocol in [16] presents a comprehensive investigation of uplink and downlink channel loss mechanisms. This thread leads us to the challenge of investigating the effectiveness of QST on atmospheric channels. The authors in [57] first touched on this challenge. Regarding remote state determination or remote state tomography, quantum information transfer considers providing a quantum teleportation protocol and an associated security discussion with the QKD protocol by means of high-dimension quantum correlations. Furthermore, the experimental results of quantum teleportation [16] show improved fidelity by generating states in an intermediate station that produces nearly symmetric states, in line with the well-known Braunstein-Kimble quantum teleportation process, to achieve a better teleportation fidelity. Furthermore, the authors in [58] provide an experiment demonstrating that the concurrence of entanglement can be conserved by a turbulent atmospheric channel. By demonstrating how to improve the fidelity of QST or entanglement transfer over the atmosphere, measured prior to the privacy amplification of satellite-based QKD networks, we highlight this potential research avenue that can be taken into consideration in the physical layer of quantum networks. In order to improve the secret key rate, transmissivity, and fidelity for sophisticated satellite-based quantum networks, it is beneficial to investigate the quantum state correlation and decoherence on atmospheric channels, which depend on the implementation of entanglement transfer or QST.
### _Wavepacket Shaping Techniques over Atmospheric Channels_
Following the quantum state introduction of Section II-A, an innovative technique in [59, 60] observes the purification and entanglement after applying wavepacket shaping modulation to single photons and entangled photons, which is applicable to both single-photon and entangled-biphoton QKD devices. The experiments in [59] demonstrate how wavepacket modulation controls photons emitted from colloidal quantum dots at room temperature. It has been shown that biexciton emission is eliminated by modulating the wavepacket, and the emitted single photons preserve high purity even at high excitation power. Furthermore, the results on concurrence and purity show that wavepacket modulation preserves and restores the entanglement at high frequency difference, as revealed by tomographically reconstructing the density matrix. However, the application of this shaping modulation to satellite-based quantum networks remains to be further explored and studied over atmospheric channels. Based on the purification and revival of entanglement of the quantum state, the implicit correlation with long-range QST in the physical layer of the quantum network is still not clear, in particular how the photon absorption efficiency and the revival of entanglement can enhance the transmissivity and the entangled quantum network, respectively. This could be quite an interesting research direction to investigate for the advanced design of physical layers in satellite-based quantum networks.
## IV Conclusion and Future Directions
In order to enhance the quality of long-range quantum communication, this paper reviews several significant technical evaluations of satellite-based quantum networks and suggests future research options. First, the knowledge of quantum states provokes the understanding of a fundamental transmission unit for quantum communication. The atmospheric channel loss is explored to depict the effect on coherent displacements and entanglement certification of the quantum state. A more sophisticated self-compensating approach to receiver design for satellite-based quantum networks can help to improve transmissivity and secret key rate. The channel estimate is therefore seen as a challenge in terms of both ensuring higher key transmission quality and thwarting possible attacks. Based on the above reviews, this paper highlights two open challenges: the implementation of QST and of wavepacket shaping techniques needs to be investigated to improve the security, fidelity, and transmissivity of satellite-based quantum networks, and both are potential candidates for improving the existing approaches to the challenges of estimating channel parameters and attack resilience.
Fig. 1: A Vision of Satellite-based Quantum Networks.
## Acknowledgment
This work is under the project LUQCIA funded by the European Union Next Generation EU, with the collaboration of the Department of Media, Connectivity, and Digital Policy (SMC), Luxembourg.
|
2304.09486 | Security and Privacy Problems in Voice Assistant Applications: A Survey | Voice assistant applications have become omniscient nowadays. Two models that
provide the two most important functions for real-life applications (i.e.,
Google Home, Amazon Alexa, Siri, etc.) are Automatic Speech Recognition (ASR)
models and Speaker Identification (SI) models. According to recent studies,
security and privacy threats have also emerged with the rapid development of
the Internet of Things (IoT). The security issues researched include attack
techniques toward machine learning models and other hardware components widely
used in voice assistant applications. The privacy issues include technical-wise
information stealing and policy-wise privacy breaches. The voice assistant
application takes a steadily growing market share every year, but their privacy
and security issues never stopped causing huge economic losses and endangering
users' personal sensitive information. Thus, it is important to have a
comprehensive survey to outline the categorization of the current research
regarding the security and privacy problems of voice assistant applications.
This paper concludes and assesses five kinds of security attacks and three
types of privacy threats in the papers published in the top-tier conferences of
cyber security and voice domain. | Jingjin Li, Chao chen, Lei Pan, Mostafa Rahimi Azghadi, Hossein Ghodosi, Jun Zhang | 2023-04-19T08:17:01Z | http://arxiv.org/abs/2304.09486v1 | # Security and Privacy Problems in Voice Assistant Applications: A Survey
###### Abstract.
Voice assistant applications have become omniscient nowadays. Two models that provide the two most important functions for real-life applications (i.e., Google Home, Amazon Alexa, Siri, etc) are Automatic Speech Recognition (ASR) models and Speaker Identification (SI) models. According to recent studies, security and privacy threats have also emerged with the rapid development of the Internet of Things (IoT). The security issues researched include attack techniques toward machine learning models and other hardware components widely used in voice assistant applications. The privacy issues include technical-wise information stealing and policy-wise privacy breaches. The voice assistant application takes a steadily growing market share every year, but their privacy and security issues never stopped causing huge economic losses and endangering users' personal sensitive information. Thus, it is important to have a comprehensive survey to outline the categorization of the current research regarding the security and privacy problems of voice assistant applications. This paper concludes and assesses five kinds of security attacks and three types of privacy threats in the papers published in the top-tier conferences of cyber security and voice domain.
Key words and phrases: ASR, SI, voice assistant, attack, defense, security, privacy +
Footnote †: journal: Accepted in 2023
Fig. 1: The timeline that outlines the development and progress of voice assistant applications.
2022. In this survey, during 2020-2022, quite a few papers about new side-channel attacks were researched and had impressive results. Another difference is that this survey divides the attacks into those targeting ASR models and those targeting SI models. Because ASR and SI are the two major functions
that voice assistant applications have, some applications have both, and some have only one. It is useful for users to know which function their applications have and what type of threats correspond to each application. The other survey by Yan et al. [14] covers almost the same security and privacy problems mentioned in Cheng et al. [3]. However, it also does not cover any research after 2020. Also, Yan et al. [14] categorize defensive methods from a system designer's perspective, which differs from this survey and Cheng et al. [3]. In this survey, defensive methods are introduced regarding specific attacks. Through this survey, some defensive methods are effective in multiple types of attacks, and some mitigate one type of attack but are prone to another type of attack, which helps the users or producers have more comprehensive information when choosing the defensive methods.
This survey aims to make a comprehensive and clear outline of the security and privacy issues of voice assistant applications. The papers included in this survey are from the top four cyber security conferences and Interspeech, a conference focusing on the speech domain. The aspects that are included are as follows:
1. Technical attacks that were targeted towards ASR models and SI models. Including machine learning attacks that targeted the software, frequency modulation that exploits the hardware, malicious skills hidden in the third-party market and policy loopholes that were not refined quickly enough to catch up on the development of voice assistants.
2. Defensive methods have been researched and proven effective in ASR and SI models. Usually, the defensive means can be divided into detection and prevention. Some methods may provide both means.
3. We have security and privacy issues when using voice assistant applications beyond technical threats. With more and more younger users, third-party regulations and policies should be refined.
**Contributions.** This paper provides a comprehensive summary of technical attacks with impressive experiment results and feasible defensive methods corresponding to each attack. Nontechnical threats in the voice assistant application market are included to safeguard the user. The contributions are concluded as follows:
* From a user's standpoint, this survey is, as far as we are aware, the most thorough investigation of voice assistant application security. Our study includes both market policy issues and technology risks. We provide a comprehensive overview of the state of the art, development, major difficulties, and future prospects for voice assistant application security research based on a thorough literature review of pertinent attacks and countermeasures.
* We classify pertinent assaults by attack techniques and structure the attack literature according to the voice assistant's systems. In order to properly identify, comprehend, and analyze the security risks against voice assistants, the organization assists in bridging the gap between a large category of seemingly unrelated attacks and vulnerabilities.
* To systematize the countermeasures against various attacks, we base them on defensive tactics. We present a qualitative evaluation of existing solutions by the installation cost if the defense requires additional devices, usability, and security and make useful recommendations in order to help users select protection based on the type of danger they may encounter.
The remaining portions of this essay are structured as follows: The introduction to voice assistant applications in Section 2 is brief. The taxonomy of assaults on ASR and SI models as well as the taxonomy of countermeasures that may be applied to ASR and SI models are also introduced in Section 2. The attacks that take advantage of the voice assistant's ASR function's weaknesses are described in detail in Section 3 along with the corresponding defenses. The attacks against SI models are described in detail in Section 4, along with systematized defense tactics that can stop or at least slow them down. The security and privacy issues outside of technological assaults are summarised in Section 5. In Section 6, we go through issues with the current research and potential future approaches for voice assistant applications. The survey is concluded in Section 7.
## 2. Preliminaries of Voice Assistant Applications
This section provides background information on voice assistant apps, including a definition of key terms, a list of categories, and a description of the process for each type of voice assistant application.
### Voice Assistant Components and Speech Recognition Workflow
There are two kinds of voice assistant models -- automatic speech recognition (ASR) and speaker identification (SI). As shown in Figure 2, the first step in creating a voice recognition model is translating the spoken language into text. Speech recognition is much more challenging to solve than machine translation. A machine translation system's input is usually printed text that differentiates between individual words and word strings. The voice input used by a speech recognition system is far more complicated than written text and spoken language, especially with ambiguity. When two people communicate, they frequently infer the term in the conversation in the context and often read a lot of latent information from the tone, facial expressions, and gestures the other party uses. The speaker regularly rectifies what has been said and repeats important material by rephrasing. It is challenging to train an automated system to detect and comprehend speech. To provide a compact digital representation of the sound wave, each sampled value is quantized throughout the speech recognition process. A feature vector characterizing the spectral content is retrieved for each frame from which the sampled values are situated in overlapping frames. The words that the speech represents are identified based on the features of the voice signal. The five steps that make up the voice recognition process are described as follows:
**Step 1. Voice Signal Acquisition**
Voice signal acquisition is the foundation of voice signal processing. A voice signal acquisition system typically receives inputs through a microphone. Subsequently, the sound wave is transformed from a voltage signal by the microphone to a digital signal handled by an A/D device like a sound card. Voice signal acquisition and processing systems based on single-chip microcomputers and DSP chips
are utilized extensively for unfavorable on-site conditions, limited space, and numerous specific equipment. The essential hardware for voice assistant apps includes sound cards, speakers, microphones, and many alike. Sound cards are crucial to process voice signals through signal filtering, amplification, A/D conversion, and D/A conversion. Modern recording software tools activate the sound card to harvest voice signals as voice recordings.
**Step 2. Speech Signal Pre-processing**
After collecting the speech signal, pre-processing operations must be completed, including filtering, A/D conversion, pre-emphasis, and endpoint detection. Filtering primarily serves the two goals of preventing aliasing interference and suppressing the 50 Hz power frequency interference. The voice analog signal is converted into a digital signal via A/D conversion. The signal is quantized during A/D conversion, and the quantization error, also known as quantization noise, is the difference between the quantized signal value and the original signal value. Pre-emphasis processing aims to improve the signal's high-frequency content, flatten its spectrum, and maintain its full frequency range from low to high frequency. End-point detection involves extracting the beginning and conclusion of speech from a speech-containing signal. Effective endpoint identification removes background noise in silent periods. Two popular approaches work on different features -- time-domain features and frequency-domain features. The time domain feature approach uses the voice volume and zero-crossing rate to identify endpoints with the advantage of a minimal amount of calculation. However, the time domain feature approach often leads to incorrect evaluation of air sounds, and differing volume calculations will also result in varied detection outcomes. On the other hand, the frequency domain feature technique uses variations in the sound spectrum and entropy detection to identify speech at a high computation cost.
**Step 3. Feature Parameter Extraction of the Speech Signal**
The frequency of human speech is below 10 kHz. Shannon's sampling theorem requires the sampling frequency to be at least twice the maximum speech frequency present in the speech signal. The signal is often broken into blocks (also known as frames). Frames should overlap each other to prevent losing crucial information. Microphones collect waveforms of sound. It is important to extract distinctive information to separate the words from the collected data. Techniques for linear predictive coding are frequently employed to extract voice components. The fundamental tenet of linear predictive coding is that speech signal sampling points are correlated so that a linear combination of numerous previous sampling points helps predict the values of the present and subsequent sampling points. The linear prediction coefficient is calculated to reduce the mean square error between the anticipated and actual values.
**Step 4. Vectorization**
Vector quantization (VQ) is a data compression and coding method. In scalar quantization, a dynamic range is split into several sub-intervals, each with a representative value; any input scalar falling inside a sub-interval is assigned that representative value during quantization, so the quantized signal remains a one-dimensional scalar. Viewed from the perspective of linear spaces, VQ generalizes this idea from scalars to vectors: the vector space is divided into many small regions, and during quantization every vector falling inside a region is replaced by that region's representative vector. By grouping several scalar values (or the feature vector generated from a frame of speech data) into a single vector, VQ performs quantization in a multi-dimensional space and enables data compression with minimal information loss. In a hidden Markov model, the vector-quantized feature vector can also serve as the input observation symbol.
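A minimal sketch of codebook training and quantization using Lloyd's k-means, assuming the feature vectors have already been extracted; the codebook size and feature dimensionality are illustrative. The resulting symbol indices can then be used as HMM observation symbols, as mentioned above.

```python
import numpy as np

def train_codebook(features, codebook_size=64, iters=20, seed=0):
    """Build a VQ codebook with plain k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), codebook_size, replace=False)]
    for _ in range(iters):
        # assign each feature vector to the nearest code vector
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(codebook_size):
            members = features[labels == c]
            if len(members) > 0:
                codebook[c] = members.mean(axis=0)   # move code vector to centroid
    return codebook

def quantize(features, codebook):
    """Replace each feature vector by the index of its nearest code vector."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

# e.g. 1000 twelve-dimensional feature vectors -> sequence of symbol indices
feats = np.random.randn(1000, 12)
cb = train_codebook(feats, codebook_size=16)
symbols = quantize(feats, cb)     # usable as HMM observation symbols
print(symbols[:10])
```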
**Step 5. Speech Recognition**
A typical speech recognition task is the recognition of words and phrases, since words are sequences of letters. The recognition system receives feature parameters extracted from the speech signal, such as the LPC coefficients, as its input. Based on Bayesian decision-making with maximum likelihood, three typical approaches are used in speech recognition: template matching, stochastic models, and probabilistic parsing.
* In template matching, a template is generated and stored while the user pronounces each phrase during the training stage. Each template in the template library is a sequence of feature vectors. During the recognition stage, the feature vector sequence of the input speech is compared iteratively against every template in the library to find the best match.
* The hidden Markov model (HMM) is the most popular method among stochastic models. HMM is a time-varying process
Fig. 2: Workflow of voice assistant application service.
that transits from one relatively stable characteristic to another. Given enough time, the properties of the speech signal gradually stabilize.
* Probabilistic parsing is used for continuous speech recognition over a broad range of utterance lengths. Even when individuals utter the same phonetic content, the corresponding spectrograms and their variations differ significantly from speaker to speaker.
Last but not least, several other voice recognition techniques exist, especially artificial neural network-based approaches, including the back-propagation (BP) neural network, the Kohonen feature-mapping neural network, and other networks based on deep learning.
### Speaker Identification Workflow
The Speaker Identification (SI) system is often referred to as speaker recognition. SI covers two tasks: speaker identification and speaker confirmation (verification). Speaker confirmation is a one-to-one decision, checking whether an utterance matches a single claimed identity, whereas speaker identification is a one-to-many decision, determining which of the enrolled speakers produced the utterance. SI determines whether multiple speakers are present in a recording and validates a speaker's identity by analyzing and processing the speaker's speech signal. SI creates a reference template or model by extracting unique characteristics from the original voice signal before recognizing a speaker according to predetermined criteria. In an SI task, the system averages out the semantic information in the speech signal and emphasizes the individual's distinctive characteristics; in a speech recognition task, the system instead normalizes the differences between different people's speech as much as possible. The waveform of the speaker's speech reflects differences in pronunciation organs and habits, making each person's speech a distinct personal trait that serves as an objective assurance of the speaker's identity.
Depending on the speaker set, speaker identification has two categories: closed set and open set. Closed-set SI assumes that the test speaker belongs to the enrolled set of speakers, whereas open-set SI must also handle speakers outside that set. For speaker confirmation, only a comparison and a decision between the claimed speaker's reference model and the test speech are required.
Speaker identification may be broken down into three groups: text-dependent (text-related), text-independent, and text-prompted. Text-dependent SI uses the speaker's pronunciation of key words and phrases as the training text, and the same content is uttered during recognition. Text-independent SI places no constraint on the speech content during either training or recognition, so the recognition object is a free speech signal. Text-prompted SI asks the speaker to utter text selected by the system at recognition time.
The training stage and the recognition stage are the two key phases. During the training phase, a template or model of each speaker in the speaker set is created using feature extraction on that speaker's training corpus. At the recognition stage, the speech to be recognized is broken down into its component features and compared to the templates or models created during training. In speaker identification, the recognition outcome is the speaker whose model has the highest similarity to the test speech. Speaker confirmation is decided by determining whether the similarity between the test utterance and the claimed speaker's model exceeds a predetermined threshold. The following fundamental issues affect the realization of an SI system:
1. Pre-processing the speech signal and extracting features or parameters that can describe speaker characteristics.
2. Establishing the speaker model and training the model's parameters.
3. Calculating the similarity between the test speech and the speaker model.
4. Choosing the decision criterion and technique for confirming or identifying the speaker.
Three categories can be used to implement SI:
1. Template matching -- A reference template is a sequence of feature vectors characterizing a speaker's utterance. During the training process, feature vector sequences are extracted from the training sentences of each speaker. During identification, a subject's feature vector sequence is compared with each reference template, and the matching score is typically the accumulated distance between the feature vectors. Vector quantization (VQ) and dynamic time warping (DTW) are the template-matching techniques most often utilized; a minimal DTW sketch is given after this list.
2. Probabilistic model -- Effective feature vectors extracted from a speaker's pronunciations characterize the distribution of that speaker's features in the feature space, and a mathematical model is constructed from these statistical characteristics. A few model parameters are sufficient to represent and store the model. The feature vectors of the test speech are compared to the mathematical model describing the speaker; the similarity between the test speech and the model is computed and used to make the recognition decision. The most widely used model is the HMM, because it captures the properties of human vocal tract changes and provides a reasonable description of both stationarity and variability.
Fig. 3: Microphone components and voice signal capturing and pre-processing workflow
3. Artificial neural networks (ANN) -- ANNs are self-organizing and self-learning, so their performance may improve over time. These properties can be exploited to extract speakers' individual traits from audio samples effectively and thus implement SI systems.
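Below is the minimal DTW sketch referenced in the template-matching item above: the accumulated distance between a test feature-vector sequence and each reference template decides the recognized label. The toy templates and feature dimensionality are purely illustrative.

```python
import numpy as np

def dtw_distance(template, test):
    """Accumulated distance between two feature-vector sequences
    (rows = frames) under dynamic time warping."""
    n, m = len(template), len(test)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - test[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def recognize(test_feats, reference_templates):
    """Return the label of the best-matching reference template."""
    return min(reference_templates,
               key=lambda label: dtw_distance(reference_templates[label], test_feats))

# Toy example with random 12-dimensional feature sequences of unequal length.
rng = np.random.default_rng(0)
refs = {"yes": rng.normal(size=(40, 12)), "no": rng.normal(size=(55, 12))}
probe = refs["yes"] + 0.05 * rng.normal(size=(40, 12))
print(recognize(probe, refs))    # -> "yes"
```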
Several performance metrics are used to evaluate an SI system, including the recognition rate, training duration, number of training corpora, response time, speaker set size, speaking mode, and cost. Different applications emphasize different indicators, but the recognition rate is the most crucial factor and must be assured first, serving as the baseline for all other performance measures. Correct and false recognition rates are frequently employed for speaker identification, while speaker confirmation is characterized by the false rejection rate and the false acceptance rate. These two rates are at odds with one another: lowering one raises the other, and different applications require different operating points. The equal error rate, obtained at the decision threshold where the false rejection and false acceptance rates are equal, is therefore crucial for assessing speaker confirmation.
### Metrics
Word error rate (WER) and sentence error rate (SER) are the most often utilized metrics because the voice assistants' primary responsibility is to convert spoken words into text. WER is the number of inserted, substituted, or deleted words divided by the number of words in the reference transcription. SER is the number of sentences containing at least one recognition error divided by the total number of sentences. However, SER is typically 2 to 3 times higher than WER, so it is frequently omitted in favor of WER.
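A minimal sketch of the WER computation defined above, using the standard word-level edit-distance dynamic program; the example transcriptions are hypothetical.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / words in the reference,
    computed with the standard edit-distance dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("turn on the kitchen light", "turn of the light"))  # 0.4
```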
### Taxonomy
We develop a taxonomy of security and privacy problems in the voice assistant domain to organize the attacks against voice assistants. This taxonomy classifies recently published security and privacy risks of voice assistants. Our taxonomy investigates various target models, levels of adversarial knowledge, and attack strategies. At the target-model level, we further classify publications based on probabilistic models and on the type of machine learning model targeted, such as DNNs, RNNs, CNNs, and the like. Popular applications, including Amazon Alexa, Google Assistant, and Microsoft Cortana, are also used in various empirical studies. We categorize articles according to the adversarial knowledge level as black-box, grey-box, and white-box attacks on the target model. The taxonomy of attacks that threaten voice assistant models is shown in Figure 4. The current attacks on SI models include spoofing, backdoor, adversarial, and hidden command attacks. Attacks on ASR models include dolphin attacks, adversarial attacks, and hidden command attacks.
We also developed a taxonomy of defensive methods in the voice assistant domain, shown in Figure 5. The defensive methods are organized by the attacks they mitigate. Each method is further categorized by its mitigation type, detection or prevention, and by whether extra devices are required when it is deployed to protect voice assistant applications.
We categorize all surveyed publications that discuss attacks on ASR and SI models and then choose a few exemplary studies to include in Tables 1 and 3. Each chosen paper either revealed new ways to exploit privacy issues on a particular target model or offered new attack strategies. Table 1 and Table 3 provide additional details about each study that appears in the taxonomy of Figure 4, which can aid readers in understanding and comparing each work. Specifically, for each paper listed in Tables 1 and 3 we provide the publication year, publication venue, learning task of the target model, attack knowledge available to the attacker, specific attack approach, baseline for the proposed attack, metrics for evaluating attack performance, and datasets used in the experiments.
## 3. Attacks and Defences in ASR Assistant Applications
### Backdoor Attacks and Defences in ASR
Backdoor attacks are common attacks against ASR models. A backdoor attack embeds inaudible signals into data (music clips or voicemail messages) that disrupt how that data is processed by a machine-learning model: the attacker inserts a particular secret input, often referred to as a trigger, so that the model makes the attacker's desired decision whenever the trigger is present. Adversarial audio uses the voice assistant's neural processing network to translate concealed orders into the attacker's intended command while keeping the hidden commands imperceptible to human ears. Kasher et al. (Kasher et al., 2021) used a backdoor system to control several smart devices that accept voice commands, introducing potent, noise-resistant adversarial audio perturbations into ordinary voice or music-only audio to convey target commands. To maximize the impact of the backdoor, their study evaluates various base vectors, target words, and perturbation strengths. Their backdoor method is as effective when the perturbations are applied to musical samples as when they are applied to speech-based samples, enabling the perturbation of a wide variety of samples. Although the target phrase is important and the commands to be conveyed must be carefully chosen, a transcription accuracy rate of more than 50% is attainable.
Many businesses frequently outsource the training process to third parties or purchase pre-trained models to save costs. Unfortunately, outsourcing gives adversaries various access points, for example for backdoor attacks. Koffas et al. (Koffas et al., 2021) investigate backdoor attacks against ASR systems, where injecting inaudible triggers makes the backdoor harder to identify. They employed datasets with 10 and 30 classes along with three neural networks (two CNNs and one LSTM), and also examined the impact of the trigger type, duration, and position. The findings demonstrate that launching an inaudible backdoor attack against ASR is simple, requiring the attacker to poison only around 0.5 percent of the training samples. Because the trigger cannot be heard, it may be as long as the signal itself, which increases the attack's potency, and discontinuous triggers can dramatically enhance attack performance even when they are brief. Short non-consecutive triggers inserted into less than 0.5 percent of the training dataset can yield attack success rates higher than 99 percent.
## References |
2310.15130 | Novel-View Acoustic Synthesis from 3D Reconstructed Rooms | We investigate the benefit of combining blind audio recordings with 3D scene
information for novel-view acoustic synthesis. Given audio recordings from 2-4
microphones and the 3D geometry and material of a scene containing multiple
unknown sound sources, we estimate the sound anywhere in the scene. We identify
the main challenges of novel-view acoustic synthesis as sound source
localization, separation, and dereverberation. While naively training an
end-to-end network fails to produce high-quality results, we show that
incorporating room impulse responses (RIRs) derived from 3D reconstructed rooms
enables the same network to jointly tackle these tasks. Our method outperforms
existing methods designed for the individual tasks, demonstrating its
effectiveness at utilizing 3D visual information. In a simulated study on the
Matterport3D-NVAS dataset, our model achieves near-perfect accuracy on source
localization, a PSNR of 26.44dB and a SDR of 14.23dB for source separation and
dereverberation, resulting in a PSNR of 25.55 dB and a SDR of 14.20 dB on
novel-view acoustic synthesis. We release our code and model on our project
website at https://github.com/apple/ml-nvas3d. Please wear headphones when
listening to the results. | Byeongjoo Ahn, Karren Yang, Brian Hamilton, Jonathan Sheaffer, Anurag Ranjan, Miguel Sarabia, Oncel Tuzel, Jen-Hao Rick Chang | 2023-10-23T17:34:31Z | http://arxiv.org/abs/2310.15130v2 | # Novel-View Acoustic Synthesis from 3D Reconstructed Rooms
###### Abstract
We investigate the benefit of combining blind audio recordings with 3D scene information for novel-view acoustic synthesis. Given audio recordings from 2-4 microphones and the 3D geometry and material of a scene containing multiple unknown sound sources, we estimate the sound anywhere in the scene. We identify the main challenges of novel-view acoustic synthesis as sound source localization, separation, and dereverberation. While naively training an end-to-end network fails to produce high-quality results, we show that incorporating room impulse responses (RIRs) derived from 3D reconstructed rooms enables the same network to jointly tackle these tasks. Our method outperforms existing methods designed for the individual tasks, demonstrating its effectiveness at utilizing 3D visual information. In a simulated study on the Matterport3D-NVAS dataset, our model achieves near-perfect accuracy on source localization, a PSNR of \(26.44\,\mathrm{dB}\) and a SDR of \(14.23\,\mathrm{dB}\) for source separation and dereverberation, resulting in a PSNR of \(25.55\,\mathrm{dB}\) and a SDR of \(14.20\,\mathrm{dB}\) on novel-view acoustic synthesis. Code, pretrained models, and video results are available on the project webpage [1].
Byeongjoo Ahn\({}^{\dagger,*}\), Karren Yang\({}^{\dagger}\), Brian Hamilton\({}^{\dagger}\), Jonathan Sheaffer\({}^{\dagger}\), Anurag Ranjan\({}^{\dagger}\),
Miguel Sarabia\({}^{\dagger}\), Oncel Tuzel\({}^{\dagger}\), Jen-Hao Rick Chang\({}^{\dagger}\)\({}^{\dagger}\)Apple, \({}^{*}\)Carnegie Mellon University
novel-view acoustic synthesis, source localization, source separation, dereverberation
## 1 Introduction
Recent advancements in novel-view synthesis and 3D reconstruction [2, 3, 4] have enabled users to explore scenes freely, viewing them from positions not captured during recordings. However, a significant limitation of these approaches is the absence of sound, restricting the immersive experience. In contrast to novel-view image synthesis, the non-stationary nature of sound and the low resolution of microphones make novel-view acoustic synthesis a challenging problem [5].
In this work, we investigate novel-view acoustic synthesis in 3D reconstructed and calibrated rooms. We build upon recent developments in 3D reconstruction [2] and acoustic calibration techniques [6, 7, 8] and assume the availability of high-quality room geometry and acoustic material information. We also assume to have audio recordings from a limited microphone array (2-4 receivers) at known locations in the scene. However, we have no knowledge about the sound sources, including their number, locations, and content. Under this setting, our goal is to enable users moving freely in the scene to hear realistic spatial audio renderings of the unknown sound sources recorded by the microphones.
Novel-view acoustic synthesis remains challenging despite having 3D reconstructed rooms. The main problem is the lack of knowledge of the sound sources. With known sound sources, the audio at any new location can be produced using standard acoustic renderers [9, 10, 11, 12]. In other words, the key to novel-view acoustic synthesis is estimating the positions (_i.e._, sound localization) and the content (_i.e._, sound separation and dereverberation) of the sources from blind audio recordings. However, the limited number and resolution of microphones and the mixture of different reverberant sounds in the same recording make these problems difficult, particularly if the semantics are similar (_e.g._, two people speaking) or the sounds arrive from close directions. Existing methods relying on time-delay cues cannot pinpoint the locations of multiple sound sources in a complex 3D scene [13]. Simply training a neural network also fails to generate high-quality results, as will be shown in Section 4.
Our key observation is that the echoes caused by the multipath reflection of sound contain valuable information for acoustic scene reconstruction. Specifically, as shown in Fig. 1, we propose to assist the network by deconvolving audio recordings from individual microphones (c) with RIRs from a specific location. This operation aligns the sound emitted from that location across microphone channels while keeping sound from other locations uncorrelated (d), enabling the network to predict whether an audio source exists at that location and the corresponding dry sound. Iterating this approach over all candidate source locations enables us to reconstruct the acoustic scene with high spatial resolution (g) and render it from novel viewpoints. Additionally, if we incorporate semantic visual cues (_i.e._, RGB images), we can further enhance source localization. We thoroughly study the benefits of deconvolution, as well as the use of visual cues, in experiments on the Matterport3D-NVAS dataset.
**Contributions.** Our method advances novel-view acoustic synthesis by leveraging RIRs for source localization, separation, and dereverberation. Our technique can reconstruct acoustic scenes with semantically indistinguishable sources,
such as two guitars in the same room. To our knowledge, ours is the first method for novel-view acoustic synthesis to work for multiple sound sources. Additionally, our model can generalize to new scenes without per-scene optimization, as scene-specific information is encoded in the deconvolved audios, making our model scene-agnostic.
## 2 Related Work
Our approach relies on RIR estimation techniques that operate on 3D reconstructed rooms [6, 7, 8, 14, 15, 16]. These techniques estimate or render RIRs by using both room geometry and estimated acoustic material properties.
There are extensive studies separately addressing the tasks of sound localization [17], separation [18, 19, 20, 21, 22, 23], and dereverberation [24]. These generally do not take 3D scene information into account, which is useful for novel-view acoustic synthesis. Although the active audio-visual separation method [25] provides separation with 3D localization, it requires 3D embodied agents moving within the space.
Conceptually, our method is related to beamforming techniques [26, 27, 13, 28, 29, 30, 31], where the objective is to isolate the source at a query location, typically in non-reverberant scenarios (_e.g_., open spaces), relying on the directionality of sound. In contrast, our method utilizes 3D scene geometry and performs dereverberation simultaneously.
Recently, Chen _et al_. [5] introduced ViGAS, a pioneering end-to-end approach for novel-view acoustic synthesis, using images to synthesize binaural audio. While their method offers valuable insights, it does not address sound separation and is demonstrated only for a single source. It also does not utilize 3D scene geometry. In contrast, our method leverages 3D scene geometry for sound localization and separation, enabling it to handle multiple sources.
## 3 Method
Our method decomposes the novel-view acoustic synthesis into two subproblems: (i) acoustic scene reconstruction, which involves 3D source localization, separation, and dereverberation, and (ii) novel-view acoustic rendering. We begin by introducing each problem and then present our approach.
### Problem formulation
**Acoustic scene reconstruction.** Given recorded audio \(y_{m}(t)\) from \(M\) microphones and a 3D reconstructed room containing \(S\) sound sources, we estimate the dry sound emitted by each source \(\{x_{s}\}_{s=1}^{S}\), and their locations \(\mathcal{P}=\{\mathbf{p}_{s}\}_{s=1}^{S}\). The recorded audio from microphone located at \(\{\mathbf{r}_{m}\}_{m=1}^{M}\) is
\[y_{m}(t)=\sum_{s=1}^{S}h_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}(t)*x_{s}(t )+\psi_{m}(t), \tag{1}\]
where \(h_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}\) denotes the RIR from \(\mathbf{p}_{s}\) to \(\mathbf{r}_{m}\), and \(\psi_{m}(t)\) is the noise. Here we assume the RIRs are given from a 3D room reconstruction [6, 14, 7, 8, 15, 16]. We discuss robustness to the RIR estimation in the supplementary material [1].
**Novel-view acoustic rendering.** Once the acoustic scene is known, novel-view acoustic rendering is straightforward as it can be achieved by simply convolving the dry sound results with corresponding RIRs for novel viewpoints. The audio \(y(t)\) from novel microphone located at \(\mathbf{r}\) is given by:
\[y(t)=\sum_{s=1}^{S}h_{\mathbf{p}_{s}\rightarrow\mathbf{r}}(t)*x_{s}(t). \tag{2}\]
### Our approach
**Overview.** We reconstruct the acoustic scene by querying potential 3D source locations within a room. Our goal is to determine: (i) the existence of a source at a query location and
Figure 1: **Model overview and motivation. Given a 3D reconstructed room (a) and audio recordings from microphones (c), we estimate the locations and dry sound of individual sound sources. (d,f) Our key observation is that deconvolving audio recordings with the impulse response from a specific source location aligns sound emitted at that location across input recordings while keeping sound from other locations uncorrelated. (e) We use a network to isolate target audio from the mixture of sounds and mitigate deconvolution artifacts. (g) Our source detection result on an example scene. Our network accurately identifies where the sound sources are.**
(ii) if present, its associated dry sound. By iterating this process across potential source locations, we effectively localize, separate, and dereverberate all sources in the room.
Specifically, for a set of candidate source locations, denoted as \(\mathcal{Q}\!=\!\{\mathbf{q}_{n}\}_{n=1}^{N}\), which includes the actual source locations \(\mathcal{P}\) (_i.e._, \(\mathcal{P}\subset\mathcal{Q}\)), the network provides two outputs: (i) a detection estimate \(\hat{d}\), indicating the presence of a source, \(\mathbf{1}_{\mathcal{P}}(\mathbf{q}_{n})\), and (ii) an estimation \(\hat{x}_{n}(t)\) of the isolated dry sound \(x_{s}(t)\) at the query point \(\mathbf{q}_{n}\) when a positive detection is made. Then, novel-view acoustic synthesis is achieved by
\[y(t)=\sum_{n=1}^{N}h_{\mathbf{q}_{n}\rightarrow\mathbf{r}}(t)*\left(\hat{x}_ {n}(t)\;\mathbf{1}(\hat{d}>0.5)\right). \tag{3}\]
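A minimal sketch of the rendering step in Eq. (3), assuming the per-query dry-sound estimates, detection probabilities, and RIRs to the novel listener position are already available as arrays; the variable names are illustrative and this is not the authors' released code.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_novel_view(dry_estimates, detections, rirs_to_listener, thresh=0.5):
    """Sum over query points of (RIR to listener) * (estimated dry sound),
    keeping only query points whose detection probability exceeds thresh."""
    n_out = max(len(x) + len(h) - 1
                for x, h in zip(dry_estimates, rirs_to_listener))
    out = np.zeros(n_out)
    for x, d, h in zip(dry_estimates, detections, rirs_to_listener):
        if d > thresh:                       # detection gate of Eq. (3)
            y = fftconvolve(x, h)
            out[:len(y)] += y
    return out
```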
**Deconvolution and cleaning.** One challenge in processing multichannel audio from microphones at different locations is that the audio does not align across the channels. The delay and echo received by each channel depend on the source and microphone locations (_e.g._, see Fig. 1(c)) and can change dramatically across scenes. Thus, a U-Net [32], which performs well on single-channel source separation, fails with multiple channels when applied directly (see Section 4).
Our key observation is that after deconvolving individual recorded audios with the RIR from a specific 3D location to a microphone, the sound emitted from the location (if any) would align across microphones. Specifically, given a query point \(\mathbf{q}_{n}\) and microphone \(m\), we deconvolve the recorded audio \(y_{m}(t)\) with the RIR \(h_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}(t)\). In the frequency domain, the deconvolved audio can be represented as
\[Z_{nm}(w)=\sum_{s=1}^{S}\frac{H_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}(w)} {H_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}(w)}\;X_{s}(w)+\frac{\Psi_{m}(w)} {H_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}(w)}, \tag{4}\]
where \(X_{s}\), \(H_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}\), \(H_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}\), and \(\Psi_{m}\) represent the Fourier transform of \(x_{s}\), \(h_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}\), \(h_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}\), and \(\psi_{m}\), respectively.
When \(\mathbf{q}_{n}\) corresponds to source \(i\) (_i.e._, \(\mathbf{q}_{n}=\mathbf{p}_{i}\)), we have \(Z_{nm}(w)=X_{i}(w)+\sum_{s\neq i}\frac{H_{\mathbf{p}_{s}\rightarrow\mathbf{r}_{m}}(w)}{H_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}(w)}X_{s}(w)+\frac{\Psi_{m}(w)}{H_{\mathbf{q}_{n}\rightarrow\mathbf{r}_{m}}(w)}\), or \(Z_{nm}(w)=X_{i}(w)\) + noise. Notice that \(X_{i}\) is independent of \(m\), which means that the deconvolved audios for the individual microphones, \(\{Z_{nm}\}_{m=1}^{M}\), consistently contain \(X_{i}(w)\), the dry sound emitted by source \(i\). Sounds from other sources are unaligned and become noise-like. When \(\mathbf{q}_{n}\) contains no sound source, no such alignment exists. Fig. 1(f) shows the average cosine similarity between two microphone audios across \(1000\) scenes with sources composed of speech and music. Deconvolution with RIRs significantly increases the similarity between the two microphone channels when \(\mathbf{q}_{n}\) is a sound source and maintains low similarity otherwise.
Taking deconvolved audios as inputs, the neural network's job becomes separating the audios that are aligned across channels from the noise--a much easier job for a U-net than separating and dereverberating audios with arbitrary delay and echo. In practice, we use Wiener deconvolution [33] in our implementation to mitigate artifacts.
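A minimal frequency-domain sketch of the deconvolution step, assuming the RIR for the query point is known; the regularization derived from an assumed SNR is an illustrative choice standing in for the Wiener filter of [33].

```python
import numpy as np

def wiener_deconvolve(recording, rir, snr_db=30.0):
    """Deconvolve a recording with an RIR: Z = Y conj(H) / (|H|^2 + reg)."""
    n = len(recording) + len(rir) - 1
    Y = np.fft.rfft(recording, n)
    H = np.fft.rfft(rir, n)
    reg = 10.0 ** (-snr_db / 10.0) * np.mean(np.abs(H) ** 2)
    return np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + reg), n)

# Toy check of the alignment argument: the same dry signal convolved with two
# different RIRs becomes strongly correlated again after deconvolution.
rng = np.random.default_rng(0)
dry = rng.normal(size=4800)
decay = np.exp(-np.arange(2400) / 300.0)
h1, h2 = rng.normal(size=2400) * decay, rng.normal(size=2400) * decay
z1 = wiener_deconvolve(np.convolve(dry, h1), h1)
z2 = wiener_deconvolve(np.convolve(dry, h2), h2)
print(np.corrcoef(z1, z2)[0, 1])   # close to 1 when the query matches the source
```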
Our approach, which integrates deconvolution and cleaning, shares insights with image deblurring techniques that combine deconvolution with neural networks to reduce artifacts [34]. Additionally, our method can be related to the basic delay-and-sum strategy in beamforming [28, 29], but with key modifications: the traditional 'delay' is substituted with RIR-based deconvolution, and the'sum' is replaced by a neural network to enable a more refined separation.
**Utilizing visual information.** While our framework primarily relies on auditory cues across microphones, we can additionally utilize visual information. Specifically, we input the RGB environment map at the query location rendered from the 3D reconstructed room to the neural network. When combined with deconvolved audio, it enhances the source detection and the final synthesis result, as shown in Section 4.
**Training.** For a query point \(\mathbf{q}\), our loss is composed of the Binary Cross-Entropy (BCE) for sound source detection and the Mean Squared Error (MSE) between the Short-Time Fourier Transforms (STFT) of the estimated dry sound \(\hat{x}\) and the ground truth dry sound \(x_{s}\) when \(\mathbf{q}\) coincides a sound source:
\[\mathcal{L}(\mathbf{q}_{n})=\lambda\,\text{BCE}(\hat{d},d)+d\,\|\text{STFT}( \hat{x}_{n})-\text{STFT}(x_{s})\|_{2}^{2}, \tag{5}\]
where \(\lambda\) is the detection weight, \(d\) is the ground truth for source presence (1 if present), and \(\hat{d}\) is the estimated probability of source existence. The MSE term is active solely for queries with a source. We use a U-Net architecture from VisualVoice [32] for source separation. For source detection, we add a 3-layer CNN decoder, which takes the latent vector of the U-Net as input. For audio-visual experiments, we use a pretrained ResNet-18 to encode image features, which are concatenated with audio features at the latent space.
## 4 Results
**Dataset.** Following Chen _et al_. [5], we create a simulated Matterport3D-NVAS dataset by using SoundSpaces [9, 10] to render RIRs from Matterport3D scenes [35], which are split into 51/11/11 rooms for train/validation/test sets. For sound sources, we incorporate speech recordings from the LibriSpeech dataset [36] and audio from 12 MIDI instrument classes (bass, brass, chromatic percussion, drums, guitar, organ, piano, pipe, reed, strings, synth lead, synth pad) from the Slakh dataset [37], all sampled at 48kHz. We use grid query points of \(1\,\mathrm{m}\) resolution. For audio-visual experiments, we include a male or female mesh for LibriSpeech, and a guitar mesh for all instruments in Slakh. For each scene, we randomly sample two source locations, paired with two random audios from LibriSpeech or Slakh.
**Tasks and metrics.** We evaluate our method on novel-view acoustic synthesis, as well as on the intermediate tasks of source detection, separation and dereverberation (for acoustic scene reconstruction). For detection, we compute the area under the ROC curve (AUROC) based on source detection accuracy on the grid query points. For the other tasks, we use Peak Signal-to-Noise Ratio (PSNR) and Source-to-Distortion Ratio (SDR) [38] for evaluation.
### Baselines and ablations. _DSP baselines:_ |
2309.00338 | An Edge-based Interface Tracking (EBIT) Method for Multiphase-flows
Simulation with Surface Tension | We present a novel Front-Tracking method, the Edge-Based Interface Tracking
(EBIT) method for multiphase flow simulations. In the EBIT method, the markers
are located on the grid edges and the interface can be reconstructed without
storing the connectivity of the markers. This feature makes the process of
marker addition or removal easier than in the traditional Front-Tracking
method. The EBIT method also allows almost automatic parallelization due to the
lack of explicit connectivity.
In a previous journal article we have presented the kinematic part of the
EBIT method, that includes the algorithms for piecewise linear reconstruction
and advection of the interface. Here, we complete the presentation of the EBIT
method and combine the kinematic algorithm with a Navier--Stokes solver. A
circle fit is now implemented to improve the accuracy of mass conservation in
the reconstruction phase. Furthermore, to identify the reference phase and to
distinguish ambiguous topological configurations, we introduce a new feature:
the Color Vertex. For the coupling with the Navier--Stokes equations, we first
calculate volume fractions from the position of the markers and the Color
Vertex, then viscosity and density fields from the computed volume fractions
and finally surface tension stresses with the Height-Function method. In
addition, an automatic topology change algorithm is implemented into the EBIT
method, making it possible the simulation of more complex flows. The
two-dimensional version of the EBIT method has been implemented in the free
Basilisk platform, and validated with seven standard test cases: stagnation
flow, translation with uniform velocity, single vortex, Zalesak's disk,
capillary wave, Rayleigh-Taylor instability and rising bubble. The results are
compared with those obtained with the Volume-of-Fluid (VOF) method already
implemented in Basilisk. | Jieyun Pan, Tian Long, Leonardo Chirco, Ruben Scardovelli, Stéphane Popinet, Stéphane Zaleski | 2023-09-01T08:47:18Z | http://arxiv.org/abs/2309.00338v2 | # An Edge-based Interface Tracking (EBIT) Method for Multiphase-flows Simulation with Surface Tension
###### Abstract
We present a novel Front-Tracking method, the Edge-Based Interface Tracking (EBIT) method for multiphase flow simulations. In the EBIT method, the markers are located on the grid edges and the interface can be reconstructed without storing the connectivity of the markers. This feature makes the process of marker addition or removal easier than in the traditional Front-Tracking method. The EBIT method also allows almost automatic parallelization due to the lack of explicit connectivity.
In a previous journal article we presented the kinematic part of the EBIT method, which includes the algorithms for interface linear reconstruction and advection. Here, we complete the presentation of the EBIT method and
combine the kinematic algorithm with a Navier-Stokes solver. To identify the reference phase and to distinguish ambiguous topological configurations, we introduce a new feature: the Color Vertex. For the coupling with the Navier-Stokes equations, we first calculate volume fractions from the position of the markers and the Color Vertex, then viscosity and density fields from the computed volume fractions and finally surface tension stresses with the Height-Function method. In addition, an automatic topology change algorithm is implemented in the EBIT method, making the simulation of more complex flows possible. A two-dimensional version of the EBIT method has been implemented in the open-source Basilisk platform, and validated with five standard test cases: (1) translation with uniform velocity, (2) single vortex, (3) capillary wave, (4) Rayleigh-Taylor instability and (5) rising bubble. The results are compared with those obtained with the Volume-of-Fluid (VOF) method already implemented in Basilisk.
keywords: Two-phase flows, Front-Tracking, Volume-of-Fluid +
Footnote †: journal: Journal of Computational Physics
## 1 Introduction
Multiphase flows are ubiquitous in nature and engineering, and their numerical simulation still represents a formidable challenge, especially when a wide range of scales is involved, as in breaking waves on the sea surface, in some industrial processes or in atomizing liquid jets. Scales from meters to microns are typically seen. Large Reynolds number turbulence, as well as mass and heat transfers are the main causes for the introduction of such a wide range of scales. The CO\({}_{2}\) transfer and heat exchange between the oceans and the atmosphere is tightly linked to multiphase flows, as it takes
place through the production and dispersion of small bubbles and droplets. Similar small structures are observed in technology, for example in the heat exchange and transport in nuclear reactors, the atomization of liquids in combustion and other settings, and in most of the synthesis processes in chemical engineering. For many of these natural and engineering problems, numerical modelling is extremely desirable, albeit a monumental challenge.
However, in the general area of multiphase flow simulation, much progress has recently been achieved in the interrelated issues of multiple scales, discretization, and topology change. These issues are: i) the long-standing one of how to represent numerically (i.e., simulate) a dynamic or moving curve or surface, ii) the more difficult or challenging problem of how to efficiently model and discretize problems at multiple scales, and finally iii) the still open issue of how to take advantage of hierarchical data structures and grids, such as quadtrees and octrees, to address the "tyranny of small scales" [1].
The first, moving curve issue is divided into two problems: the kinematic problem, where the motion of the curve or surface separating the phases must be described by knowing the fluid velocity field and the rate of phase change, and the dynamic problem, where the momentum and energy conservation equations must be solved for given fluid properties. Methods available for the kinematic problem are sometimes separated into Front-Capturing and Front-Tracking methods. In the first kind, Front-Capturing methods, a tracer or marker function \(f(\mathbf{x},t)\) is integrated in time with the knowledge of an adequate velocity field \(\mathbf{u}(\mathbf{x},t)\)
\[\partial_{t}f+\mathbf{u}\cdot\nabla f=0 \tag{1}\]
The tracer function can be a Heaviside function, leading to Volume-of-Fluid
(VOF) methods [2; 3] or a smooth function, leading to Level-Set (LS) methods [4; 5].
In Front-Tracking methods, the interface or "front" is represented by a curve discretization, for example a spline, which evolves with a prescribed normal velocity \(V_{S}\), see [6]. Compatibility between the two formulations is achieved if \(\mathbf{u}\cdot\mathbf{n}=V_{S}\). An introduction to the most popular methods may be found in [7]. Another point of view on the methods, that is fruitful in connection with issues ii) and iii) above, can be obtained by investigating how the data structures are tied to the underlying Eulerian grid. The data structures are _local_ when their components have little or no "knowledge" of the overall connections among pieces of interface or connected regions of a given phase (Fig. 1b). Thus a discretization of the tracer function \(f\) is a local data structure, tied to the Eulerian grid. On the other hand a _global data structure_ contains information not only about individual points on the interface, but also about their connections with the entire interface of a given object. For Front-Tracking methods this is achieved by linked lists, or pointers, that allow to navigate the data structure along the object (Fig. 1a). It is then obvious and efficient, using the data structure, to find all the connected pieces of the interface. The local data structures are not limited to VOF or LS methods. For example, an unconnected marker or particle method, such as Smoothed Particle Hydrodynamics (SPH), or other methods, simply tracks particles of position \(\mathbf{x}_{i}\) by integrating
\[\frac{\mathrm{d}\mathbf{x}_{i}}{\mathrm{d}t}=\mathbf{u}(\mathbf{x}_{i},t) \tag{2}\]
without any concept of connection between the particles [8]. Such an unconnected marker method, despite its similarity with Front-Tracking, is in fact
a _local_ method. Both the local and the global approaches have advantages. The local methods are easy to parallelize on a grid, are free of constraints that require the solutions of large linear or nonlinear systems (as when constructing interfaces by high-order splines) and are generally computationally efficient. The global methods allow to track and control the topology, so they are also useful for the second issue, the handling of multiscale problems. Moreover, when dealing with complex multiscale problems, it is often useful to investigate geometrical properties such as skeletons [9] which are branched manifolds most naturally represented by Front-Tracking. Global methods also allow to naturally distinguish between continuous slender objects such as unbroken thin ligaments or threads and strings of small particles or broken ligaments. This latter distinction is of great importance when analyzing statistically highly-fragmented flows [10].
Figure 1: (a) A global data structure with a linked list: the example of the front. Reproduced from [7]. (b) A local data structure with unlinked VOF linear reconstructions.
It is however possible to represent slender objects with local methods in some cases: Chiodi [11] proposed an enhancement of the VOF interface reconstruction method, involving two nearly parallel, closely located planes in three dimensions, in order to capture slender sheets thinner than the mesh size.
A method that combines some of the properties of global methods, such as Front-Tracking, and local methods, such as VOF or LS, seems desirable. There have indeed been some prior attempts at such a combination.
Aulisa et al.[12; 13] combined the VOF method with marker points to obtain smooth interfaces without discontinuity and to improve mass conservation of traditional Front-Tracking methods. Lopez et al. [14] introduced the marker points into the VOF method to allow tracking fluid structures thinner than the cell size.
The Level Contour Reconstruction Method (LCRM) developed by Shin, Juric and collaborators [15; 16; 17] combined Front-Tracking and LS methods. It alleviated the mass conservation problem of traditional LS methods by tracking the interface with Lagrangian elements (instead of advecting the LS function field). A LS function can then be regenerated from the Lagrangian elements. The smoothing of interface elements, as well as topology changes, take place automatically during the reconstruction procedure, thus explicit connectivity information is not needed in their method. Singh and Shyy [18] used the LCRM to perform topology changes in their traditional Front-Tracking method where connectivity of the Lagrangian elements has to be maintained explicitly. Shin, Yoon and Juric [19] later extended the LCRM to obtain a new type of Front-Tracking method, the Local Front Reconstruction Method (LFRM), for both two-dimensional and three-dimensional multiphase flow simulations.
The LFRM reconstructs interface elements using the geometrical information directly from the Lagrangian interface elements instead of constructing another LS field.
We suggest a similar method in the present work, based on a purely kinematic approach developed by two of us [20], called the Edge-Based Interface-Tracking (EBIT) method. In that method the position of the interface is tracked by marker points located on the edges of an Eulerian grid, and the connectivity information is implicit. The basic idea and the split interface advection were discussed in [20]; here we improve the mass conservation of the original EBIT method by using a circle fit to reconstruct the interface during advection. The topology change mechanism is also introduced. Compared to the LFRM of Shin, Yoon and Juric [19], markers in the EBIT method are bound to the Eulerian grid. These markers are obtained by a reconstruction of the interface at every time step, thus the Eulerian grid and Lagrangian markers can be distributed to different processors by the same routine as the one used in the parallelization of the Navier-Stokes solver. Second, a new feature called Color Vertex, which amounts to describing the topology of the interface by the color of markers at the vertices of the grid, is discussed. Such a scheme is trivial on a simplex and just slightly more complicated on a square grid. It was proposed in a similar context by Singh and Shyy [18]. Third, we combine the EBIT method with a Navier-Stokes solver for multiphase flow simulations. The coupling is simply realized by computing volume fractions from the position of the markers and the Color Vertex, then we use the volume fractions as in typical VOF solvers [7].
The paper is organized as follows: the kinematics of the EBIT method is described in Section 2. This includes the interface advection algorithm, the updating rules for the color vertices and the automatic topology change algorithm. Then the coupling algorithm between the EBIT interface description and the multiphase fluid dynamics is presented. A two-dimensional flow solver based on a Cartesian or quadtree grid is implemented inside the open-source platform Basilisk [21; 22]. In Section 3, the EBIT method is validated by the computation of typical examples of multiphase flow simulations. The results obtained by the combined EBIT and VOF methods are presented and compared with those calculated by the pure VOF method.
## 2 Numerical method
### The EBIT method
In the EBIT method, the interface is represented by a set of marker points placed on the grid lines. The advection of the interface is done by moving these points along the grid lines. The equation of motion for a marker point at position \(\mathbf{x}_{i}\) is
\[\frac{d\mathbf{x}_{i}}{dt}=\mathbf{u}_{i} \tag{3}\]
which is discretized by a first-order explicit Euler method
\[\mathbf{x}_{i}^{n+1}=\mathbf{x}_{i}^{n}+\mathbf{u}_{i}^{n}\Delta t \tag{4}\]
where the velocity \(\mathbf{u}_{i}^{n}\) at the marker position \(\mathbf{x}_{i}^{n}\) is calculated by a bilinear interpolation.
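A minimal sketch of the explicit Euler marker update of Eq. (4) with bilinear interpolation of a vertex-stored velocity field; the array layout and function names are illustrative assumptions and do not reproduce the Basilisk implementation.

```python
import numpy as np

def bilinear_velocity(u, v, x, y, h):
    """Bilinearly interpolate vertex velocities (u, v), stored on an
    (N+1) x (N+1) grid of spacing h, at the physical point (x, y)."""
    i = min(max(int(x / h), 0), u.shape[0] - 2)
    j = min(max(int(y / h), 0), u.shape[1] - 2)
    s, t = x / h - i, y / h - j
    w = [(1 - s) * (1 - t), s * (1 - t), (1 - s) * t, s * t]
    pts = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
    ux = sum(wk * u[p] for wk, p in zip(w, pts))
    uy = sum(wk * v[p] for wk, p in zip(w, pts))
    return ux, uy

def advect_markers(markers, u, v, h, dt):
    """First-order explicit Euler step, Eq. (4): x^{n+1} = x^n + u(x^n) dt."""
    out = []
    for (x, y) in markers:
        ux, uy = bilinear_velocity(u, v, x, y, h)
        out.append((x + ux * dt, y + uy * dt))
    return out

# Uniform diagonal velocity (u, v) = (1, -1) on a 32 x 32 grid of the unit square.
N, h = 32, 1.0 / 32
u, v = np.ones((N + 1, N + 1)), -np.ones((N + 1, N + 1))
print(advect_markers([(0.25, 0.75)], u, v, h, dt=0.01))   # -> [(0.26, 0.74)]
```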
For a multi-dimensional problem, a split method is used to advect the interface (see Fig. 2), which is similar to that described in [20], but with some
Figure 2: One-dimensional advection of the EBIT method along the \(x\)-axis. (a) Initial interface line. (b) Advection of the markers on the grid lines aligned with the velocity component (blue points). (c) Advection of the unaligned markers (gray points) and computation of the intersections with the grid lines (red points). (d) Interface line after the 1D advection.
improvements. The marker points placed on the grid lines that are aligned with the velocity component of the 1D advection are called _aligned markers_, while the remaining ones are called _unaligned markers_. Starting from the initial position at time step \(n\), the new position of the aligned markers is obtained by Eq. (4) (blue points of Fig. 3). To compute the new unaligned markers, we first advect them using again Eq. (4), obtaining in this way the gray point of Fig. 3c. Finally, the new position of the unaligned marker (red point of Fig. 3d) is obtained by fitting a circle through the surrounding marker points and by computing the intersection with the corresponding grid line. The gray point is then discarded. Whenever it is possible, we consider two circles for the fitting, through points 2,3,4 and 1,2,3 of Fig. 3, respectively. In that case, the final position of the unaligned marker will be the average of the two fits.
Figure 3: Circle fit to compute the position of the unaligned marker (red point)
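A minimal sketch of a circle fit through three markers and of its intersection with a vertical grid line, in the spirit of the procedure described above; the choice of the root closest to the advected marker and the example coordinates are illustrative assumptions.

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Center and radius of the circle through three non-collinear points."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), np.hypot(ax - ux, ay - uy)

def intersect_vertical_line(center, radius, x_line, y_guess):
    """Intersection of the fitted circle with the grid line x = x_line,
    keeping the root closest to the (discarded) advected marker position."""
    cx, cy = center
    dx = x_line - cx
    if abs(dx) > radius:
        return None                 # the circle does not reach this grid line
    dy = np.sqrt(radius**2 - dx**2)
    return min(cy + dy, cy - dy, key=lambda yy: abs(yy - y_guess))

# Markers lying on a circle of radius 0.15 centred at (0.5, 0.5).
pts = [(0.5 + 0.15 * np.cos(a), 0.5 + 0.15 * np.sin(a)) for a in (0.3, 0.6, 0.9)]
centre, r = circle_through(*pts)
print(centre, r)                                       # ~ (0.5, 0.5), 0.15
print(intersect_vertical_line(centre, r, 0.60, 0.62))  # ~ 0.612
```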
### Color Vertex
In the EBIT method, the connectivity of the markers is implicit. When there are only two markers on the cell boundary, the interface portion is given by the segment connecting these two points. However, when there are four markers on the boundary, there are two possible alternative configurations, as shown in Fig. 4. In order to select one of the two configurations without any ambiguity, we consider a technique called Color Vertex, which was first proposed by Singh and Shyy [18] to implement an automatic topology change. The value of the Color Vertex indicates the fluid phase in the corresponding region within the cell, and five color vertices (four in the corners and one in the center of the cell) are enough to select one of the two configurations of Fig. 4. In other words, we can establish a one-to-one correspondence between the topological configuration and the value of the color vertices within each cell, and then reconstruct the interface segments with no ambiguity.
Furthermore, the direction of the unit normal to the interface can also be
Figure 4: Two color vertex configurations (brown and green squares) to select a different connectivity in the same set of markers
determined based on the Color Vertex distribution. The local feature of the Color Vertex makes the EBIT method more suitable for parallelization, when it is compared to the data structure that is used for storing the connectivity in traditional Front-Tracking methods.
As the interface is advected, the value of a Color Vertex should also be updated accordingly to ensure that the implicit connectivity information is retained. For a Color Vertex located on a cell corner, we have to change its value if a marker moves across the intersection of the corresponding grid lines, as shown in Fig. 5.
In the present implementation of the EBIT algorithm the Color Vertex in the cell center is only used to select one of the two configurations shown in Fig. 4. It is important to realize that in such a configuration, regardless of the direction of the advection, there are both aligned markers and unaligned ones. Therefore, the algorithm for updating the Color Vertex in the cell center proceeds as follows (see Fig. 6):
(1) If in the cell under investigation, after the interface advection there is an unaligned marker (red point 0 of Fig. 6), we identify the interface segment through points 1 and 2 that brackets the unaligned marker and the
Figure 5: Update of the Color Vertex value on a cell corner: its value changes, from green to brown, as the marker is advected across the grid lines intersection
Figure 6: Update of the Color Vertex value in the cell center with an unaligned marker (red point)
corresponding segment through points 1' and 2', before the advection step. These two points are on opposite sides in Fig. 6a and on consecutive sides in Fig. 6b.
(2) From the connectivity information before the advection step, we identify two more markers, points 3' and 4', that are connected to the segment 1'-2' on opposite sides, and compute their new positions 3 and 4 after the advection step.
(3) If the four points 1, 2, 3 and 4 are unaligned markers, we do not need to update the value of the Color Vertex in the center, because in this case it is impossible to have an ambiguous configuration.
(4) We check if in the cell under investigation there is an aligned marker (point 2 in Fig. 6a and point 3 in Fig. 6b).
(5) If an aligned marker has been found, we identify the cell corner (bottom left corner of Fig. 6) isolated by the segments connecting this marker and the new unaligned marker (point 0 of Fig. 6). The value of the Color Vertex in the cell center is set to the opposite value of that of the cell corner.
With these simple rules, the value of the Color Vertex in the cell center is set to the correct value to select one of the two configurations of Fig. 4.
### Topology change
The topology change is controlled by the Color Vertex value distribution in an automatic manner. In the present implementation of the EBIT method, a marker point is present on a cell edge only if the value of the Color Vertex at the two edge endpoints is different.
Furthermore, only one marker is allowed on a cell edge. When two markers move into the same edge, as shown in Fig. 7, they both will be eliminated
automatically, because the value of the Color Vertex at the corresponding endpoints is the same, hence there cannot be any marker on that edge. Moreover, the "surviving" markers within the cell will be reconnected automatically, see again Fig. 7. This reconnection procedure enables ligament breakup or droplet merging in an automatic way during the interface advection. As a direct consequence of this procedure, the volume occupied by the reference phase will decrease or increase. In particular, it tends to remove droplets or bubbles which are smaller than the grid size.
This topology change mechanism only affects interfaces that are approaching each other along a direction parallel to the grid lines. Because of the presence of a Color Vertex in the cell center, interfaces approaching along a diagonal direction do not induce any topology change, as long as the four markers remain on a different edge, as shown in Fig. 4. Thus, this mechanism does not suffer the problem of orientation-dependent topology change, which takes place during the tetra-marching procedure in Yoon's [23] LCRM
Figure 7: Topology change mechanism
and Shin's [19] LFRM, along the two diagonal directions.
However, in our method interfaces approaching along diagonal directions behave differently from those approaching along directions parallel to the grid lines. A possible solution to this problem would be removing the restriction on the number of markers per edge, which would also allow us to capture the subscale interfacial structure and to control the topology change based on a physical mechanism.
### Governing equations
The Navier-Stokes equations for incompressible two-phase flow with immiscible fluids written in the one-fluid formulation are
\[\frac{\partial\rho}{\partial t}+\mathbf{u}\cdot\nabla\rho=0 \tag{5}\] \[\frac{\partial\rho\mathbf{u}}{\partial t}+\nabla\cdot(\rho\mathbf{u}\mathbf{ u})=-\nabla p+\nabla\cdot\left[\mu\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\right) \right]+\rho\mathbf{g}+\mathbf{f}\] (6) \[\nabla\cdot\mathbf{u}=0 \tag{7}\]
where \(\rho\) and \(\mu\) are density and viscosity, respectively. The gravitational force is taken into account with the \(\rho\mathbf{g}\) term. Surface tension is modeled by the term \(\mathbf{f}=\sigma\kappa\mathbf{n}\delta_{S}\), where \(\sigma\) is the surface tension coefficient, \(\kappa\) the interface curvature, \(\mathbf{n}\) the unit normal and \(\delta_{S}\) the surface Dirac delta function.
The physical properties are calculated as
\[\rho=H\rho_{1}+(1-H)\rho_{2},\qquad\mu=H\mu_{1}+(1-H)\mu_{2} \tag{8}\]
where \(H\) is the Heaviside function, which is equal to 1 inside the reference phase and 0 elsewhere.
Since the markers are located on the grid edges, we consider a simple strategy to couple the EBIT method with the Navier-Stokes equations. From
the position of the marker points and the values of Color Vertex in the cell under investigation, we can easily compute the equation of the straight line connecting the markers and the volume fraction \(C\)[24]. The volume fraction field is then used to approximate the Heaviside function in Eq. (8) and to calculate the curvature by the generalized height function method [22] and the Dirac delta function in Eq. (6).
The numerical implementation of the EBIT method has been written in the Basilisk C language [21; 22], which adopts a time-staggered approximate projection method to solve the incompressible Navier-Stokes equations on a Cartesian mesh or a quad/octree mesh. The Bell-Colella-Glaz (BCG) [25] second-order scheme is used to discretize the advection term, and a fully implicit scheme for the diffusion term. A well-balanced Continuous Surface Force (CSF) method is used to calculate the surface tension term [22; 26].
Figure 8: The EBIT method with AMR
### Adaptive mesh refinement (AMR)
An efficient adaptive mesh refinement (AMR) technique, which is based on a wavelet decomposition [27] of the specified field variables, is implemented in Basilisk, which allows the solution of the flow field at high resolution only in the relevant parts of the domain, reducing in this way the computational cost of the simulation.
Due to the restriction on the number of markers on each edge, the mesh refinement near the interface should be handled carefully. For the particular situation where the cells on the two sides of the interface are on different levels, as shown in Fig. 8, there are two markers on the same edge of the grid cell on the right. This instance violates the basic restriction of the current implementation of the EBIT method, and it is not allowed.
To avoid this inconsistency, we consider a simple strategy, that is, to refine the cells within the \(3\times 3\) stencil of each interfacial cell to the maximum allowable level. With this assumption and the timestep limitation due to the \(CFL\) number, the interface will not be advected between two grid cells at different resolution levels. In principle, this refinement strategy should be less efficient than that based on curvature; however, the numerical results of the next section show that the efficiency is still comparable to that based on curvature when other criteria of refinement, such as velocity gradients, are introduced into the AMR strategy.
## 3 Numerical results and discussion
### Translation with uniform velocity
In this first test a circular interface of radius \(R=0.15\) and center at \((0.25,0.75)\) is placed inside the unit square domain. The domain is meshed with \(N_{x}\times N_{x}\) square cells of size \(h=1/N_{x}\), where \(N_{x}=32,64,128,256,512\). A uniform and constant velocity field \((u,v)\) with \(u=-v\) is imposed in the box, so that the interface is advected along the diagonal direction. At halftime \(t=0.5\,T\) the center reaches the position \((0.75,0.25)\), the velocity field is then reversed and the circular interface should return to its initial position at \(t=T\) with no distortion.
We use this case to test the new EBIT method, where the unaligned markers are computed by a circle fit. The accuracy and mass conservation of the method are measured by the area and shape errors. The area error \(E_{area}\) is defined as the absolute value of the relative difference between the area occupied by the reference phase at the initial time \(t=0\) and that at \(t=T\)
\[E_{area}=\frac{|A(T)-A(0)|}{A(0)} \tag{9}\]
The shape error, in a \(L^{\infty}\) norm, is defined as the maximum distance between any marker \(\mathbf{x}_{i}\) on the interface and the corresponding closest point on the analytical solution
\[E_{shape}=\max_{i}|dist(\mathbf{x}_{i})|,\quad dist(\mathbf{x}_{i})=\sqrt{(x_{i}-x_{c })^{2}+(y_{i}-y_{c})^{2}}-R \tag{10}\]
where \((x_{c},y_{c})\) are the coordinates of the circle center and \(R\) its radius.
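As a hedged illustration of Eqs. (9)-(10), the short sketch below evaluates the two errors for a set of marker points against the analytical circle; the marker coordinates and the areas are placeholders, since in the actual method the area is obtained from the reconstructed volume-fraction field.

```python
import numpy as np

def area_error(A0, AT):
    """Relative area (mass) error of Eq. (9)."""
    return abs(AT - A0) / A0

def shape_error(markers, xc, yc, R):
    """L-infinity shape error of Eq. (10): maximum distance of the markers
    from the analytical circle of center (xc, yc) and radius R."""
    x, y = markers[:, 0], markers[:, 1]
    dist = np.sqrt((x - xc)**2 + (y - yc)**2) - R
    return np.max(np.abs(dist))

# Example with markers sampled on a slightly perturbed circle (assumed data).
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
R, xc, yc = 0.15, 0.25, 0.75
r = R + 1e-4 * np.cos(5 * theta)
markers = np.column_stack((xc + r * np.cos(theta), yc + r * np.sin(theta)))
print(area_error(np.pi * R**2, 1.001 * np.pi * R**2), shape_error(markers, xc, yc, R))
```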
To initialize the markers on the grid lines we first compute the signed distance (10) on the cell vertices, then we use a root-finding routine, when
the sign of the distance is opposite on the two endpoints of a cell side, to calculate the position of a marker. Hence, there is a small numerical error in the initial data, that accumulates as the interface is translated. However, because of the circle fit in the EBIT method this error remains rather limited during the translation. We employ a relatively small \(CFL\) number \(CFL=(u\,\Delta t)/h=0.125\). The interface lines at halftime and at the end of the simulation are shown in Fig. 9 for different mesh resolutions.
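The initialization just described can be sketched as follows: the signed distance of Eq. (10) is sampled at the two endpoints of a cell edge and, when the signs differ, a root-finding step (a plain bisection here, used as a stand-in for the actual routine) places the marker on that edge; the edge coordinates in the example are assumed.

```python
import numpy as np

def signed_distance(x, y, xc=0.25, yc=0.75, R=0.15):
    """Signed distance from the initial circle, as in Eq. (10)."""
    return np.sqrt((x - xc)**2 + (y - yc)**2) - R

def marker_on_edge(p0, p1, tol=1e-12, itmax=100):
    """Bisection along the edge p0-p1; returns the marker position, or
    None when the signed distance does not change sign on the edge."""
    f0, f1 = signed_distance(*p0), signed_distance(*p1)
    if f0 * f1 > 0.0:
        return None
    a, b = np.asarray(p0, float), np.asarray(p1, float)
    m = 0.5 * (a + b)
    for _ in range(itmax):
        m = 0.5 * (a + b)
        fm = signed_distance(*m)
        if abs(fm) < tol:
            break
        if f0 * fm <= 0.0:
            b = m
        else:
            a, f0 = m, fm
    return m

# Example: a horizontal cell edge of size h = 1/32 cut by the circle.
h = 1.0 / 32.0
print(marker_on_edge((0.375, 0.75), (0.375 + h, 0.75)))  # close to (0.4, 0.75)
```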
The interface lines are also calculated with different reconstruction and advection methods for a medium mesh resolution, \(N_{x}=128\), and are shown in Fig. 10. Both the new EBIT method and the PLIC-VOF method maintain rather well the circular shape of the interface during the translation.
Figure 9: Translation test with the new EBIT method at different resolutions: (a) interface lines at halftime; (b) interface lines at the end of the simulation
Figure 11: Errors in the translation test for different methods as a function of grid resolution: (a) area error \(E_{area}\); (b) shape error \(E_{shape}\)
Figure 10: Translation test with different methods at resolution \(N_{x}=128\): (a) interface lines at halftime; (b) interface lines at the end of the simulation
On the other hand, the first version of the EBIT method, which considers a straight line approximation (SL) for the calculation of the position of unaligned markers, loses mass continuously during the simulation.
The area errors \(E_{area}\) and the shape errors \(E_{shape}\) are listed in Table 1 and are shown in Fig. 11 for the different methods. For the new EBIT method we observe a second-order convergence rate for the area error and approximately a first-order convergence rate for the shape error, as shown in Fig. 11. For the first version of the EBIT method (SL), both the area and shape errors are much larger. Furthermore, this method loses all the reference phase at the lowest grid resolution, \(N_{x}=32\) (this fact is denoted by "NA" in Table 1).
For the PLIC-VOF method, the mass conservation is accurate to machine error and it is not shown in the figure, while the shape error is evaluated with the two endpoints of the PLIC-VOF reconstruction in each cut cell. For this error we observe in Fig. 11 a first-order convergence rate. Since in the new EBIT method we are effectively fitting a circle with a circle, the shape error obtained with the PLIC-VOF method is larger than that of the new EBIT method.
\begin{table}
\begin{tabular}{c c|c c c c c} \hline & \(N_{x}\) & 32 & 64 & 128 & 256 & 512 \\ \hline EBIT & \(E_{area}\) & \(9.32\times 10^{-9}\) & \(2.30\times 10^{-9}\) & \(3.13\times 10^{-10}\) & \(1.48\times 10^{-10}\) & \(3.82\times 10^{-11}\) \\ & \(E_{shape}\) & \(6.13\times 10^{-9}\) & \(3.21\times 10^{-9}\) & \(3.77\times 10^{-9}\) & \(9.08\times 10^{-10}\) & \(3.76\times 10^{-10}\) \\ \hline VOF & \(E_{shape}\) & \(3.88\times 10^{-3}\) & \(1.06\times 10^{-3}\) & \(4.72\times 10^{-4}\) & \(2.59\times 10^{-4}\) & \(2.68\times 10^{-4}\) \\ \hline EBIT-SL & \(E_{area}\) & NA & \(6.83\times 10^{-1}\) & \(3.41\times 10^{-1}\) & \(1.70\times 10^{-1}\) & \(8.52\times 10^{-2}\) \\ & \(E_{shape}\) & NA & \(6.86\times 10^{-2}\) & \(3.10\times 10^{-2}\) & \(1.54\times 10^{-2}\) & \(7.78\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 1: Mesh convergence study for the translation test
### Single vortex
The single vortex test was designed to test the ability of an interface tracking method to follow the evolution in time of an interface that is highly stretched and deformed [28]. A circular interface of radius \(R=0.15\) and center at \((0.5,0.75)\) is placed inside the unit square domain. A divergence-free velocity field \((u,v)=(\partial\phi\big{/}\partial y,-\partial\phi\big{/}\partial x)\) described by the stream function \(\phi=\pi^{-1}\sin^{2}(\pi x)\sin^{2}(\pi y)\cos(\pi t/T)\) is imposed in the domain. The cosinusoidal time-dependence slows down and reverses the flow, so that the maximum deformation occurs at \(t=0.5\,T\), where \(T\) is the period, then the interface returns to its initial position without distortion at \(t=T\). Furthermore, as the value of the period \(T\) is increased, a thinner and thinner revolving ligament develops.
In this test we use a constant \(CFL\) value, \(CFL=0.125\), based on the maximum value of the velocity at time \(t=0\). The error is again measured by the area and shape errors. The reference solution is obtained by solving the ordinary differential equations \(d\mathbf{x}\big{/}dt=\mathbf{u}(x(t),y(t),t)\) with a fourth-order Runge-Kutta method.
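A compact sketch of this reference solution, for the \(T=2\) case, is given below: the velocity is obtained by differentiating the stream function and a classical fourth-order Runge-Kutta step advances a marker point; the time step and the initial marker are assumed for the example.

```python
import numpy as np

T = 2.0  # period of the reversing single-vortex flow

def velocity(x, y, t):
    """(u, v) = (d(phi)/dy, -d(phi)/dx) for phi = sin^2(pi x) sin^2(pi y) cos(pi t/T) / pi."""
    c = np.cos(np.pi * t / T)
    u = 2.0 * np.sin(np.pi * x)**2 * np.sin(np.pi * y) * np.cos(np.pi * y) * c
    v = -2.0 * np.sin(np.pi * x) * np.cos(np.pi * x) * np.sin(np.pi * y)**2 * c
    return np.array([u, v])

def rk4_step(p, t, dt):
    """One fourth-order Runge-Kutta step for dx/dt = u(x, t)."""
    k1 = velocity(*p, t)
    k2 = velocity(*(p + 0.5 * dt * k1), t + 0.5 * dt)
    k3 = velocity(*(p + 0.5 * dt * k2), t + 0.5 * dt)
    k4 = velocity(*(p + dt * k3), t + dt)
    return p + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Advect one marker of the initial circle over a full period (assumed dt).
p, t, dt = np.array([0.5, 0.9]), 0.0, 1e-3
while t < T - 1e-12:
    p = rk4_step(p, t, dt)
    t += dt
print(p)  # should be close to the initial position (0.5, 0.9)
```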
\begin{table}
\begin{tabular}{c c|c c c c c} \hline & \(N_{x}\) & 32 & 64 & 128 & 256 & 512 \\ \hline EBIT & \(E_{area}\) & \(1.69\times 10^{-2}\) & \(7.45\times 10^{-3}\) & \(2.62\times 10^{-3}\) & \(1.25\times 10^{-3}\) & \(5.99\times 10^{-4}\) \\ & \(E_{shape}\) & \(1.45\times 10^{-2}\) & \(6.74\times 10^{-3}\) & \(3.07\times 10^{-3}\) & \(1.54\times 10^{-3}\) & \(7.75\times 10^{-4}\) \\ \hline VOF & \(E_{shape}\) & \(8.79\times 10^{-3}\) & \(3.00\times 10^{-3}\) & \(1.17\times 10^{-3}\) & \(4.11\times 10^{-4}\) & \(1.21\times 10^{-4}\) \\ \hline EBIT-SL & \(E_{area}\) & \(8.26\times 10^{-1}\) & \(3.66\times 10^{-1}\) & \(1.71\times 10^{-1}\) & \(8.19\times 10^{-2}\) & \(4.02\times 10^{-2}\) \\ & \(E_{shape}\) & \(1.20\times 10^{-1}\) & \(5.77\times 10^{-2}\) & \(2.68\times 10^{-2}\) & \(1.33\times 10^{-2}\) & \(6.99\times 10^{-3}\) \\ \hline \end{tabular}
\end{table}
Table 2: Mesh convergence study for the single vortex test with period \(T=2\)
Figure 12: Single vortex test with period \(T=2\) at different resolutions: (a) interface lines at halftime; (b) interface lines at the end of the simulation
Figure 13: Single vortex test with period \(T=2\) at resolution \(N_{x}=128\) with different methods: (a) interface lines at halftime; (b) interface lines at the end of the simulation
The interface line at maximum deformation and back to its initial position, for the test with period \(T=2\), is shown in Fig. 12 for different mesh resolutions. Even at the lowest resolution \(N_{x}=32\), with the new EBIT method we still recover the initial shape and lose little mass. The interface line obtained with different methods is shown in Fig. 13 for the resolution \(N_{x}=128\). The results obtained with the new EBIT method and the PLIC-VOF method agree rather well with each other. For the EBIT method with a straight line fit, we observe a considerable amount of area loss.
The area error \(E_{area}\) and the shape error \(E_{shape}\) are listed in Table 2 and are shown in Fig. 14 for the different methods here considered. For the new EBIT method that implements a circle fit, a convergence rate between first-order and second-order is observed for both errors.
Figure 14: Errors in the single vortex test with period \(T=2\) for different methods as a function of grid resolution: (a) area error \(E_{area}\); (b) shape error \(E_{shape}\)
The shape errors calculated with the new EBIT method and with the PLIC-VOF method are of about the same order of magnitude.
The interface line at maximum deformation and back to its initial position, for the test with period \(T=8\), is shown in Fig. 15 for different mesh resolutions.
\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline & \(N_{x}\) & 32 & 64 & 128 & 256 & 512 \\ \hline EBIT & \(E_{area}\) & \(3.80\times 10^{-1}\) & \(7.71\times 10^{-2}\) & \(2.43\times 10^{-2}\) & \(2.32\times 10^{-3}\) & \(8.23\times 10^{-5}\) \\ & \(E_{shape}\) & \(1.47\times 10^{-1}\) & \(4.99\times 10^{-2}\) & \(2.09\times 10^{-2}\) & \(7.07\times 10^{-3}\) & \(3.54\times 10^{-3}\) \\ \hline VOF & \(E_{shape}\) & \(2.57\times 10^{-1}\) & \(6.58\times 10^{-2}\) & \(1.35\times 10^{-2}\) & \(7.67\times 10^{-3}\) & \(1.34\times 10^{-3}\) \\ \hline EBIT-SL & \(E_{area}\) & NA & NA & \(7.98\times 10^{-1}\) & \(3.55\times 10^{-1}\) & \(1.61\times 10^{-1}\) \\ & \(E_{shape}\) & NA & NA & \(2.02\times 10^{-1}\) & \(1.29\times 10^{-1}\) & \(8.53\times 10^{-2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mesh convergence study for the single vortex test with period \(T=8\)
Figure 15: Single vortex test with period \(T=8\) at different resolutions: (a) interface lines at halftime; (b) interface lines at the end of the simulation
In this test, the interface line at maximum deformation at halftime is stretched into a long thin ligament. When the mesh resolution is too coarse, i.e. \(N_{x}=32\), the new EBIT method loses some mass due to the artificial topological change mechanism. As the mesh resolution is increased, the mass loss progressively decreases and the method recovers better and better the initial circular shape.
The interface line obtained with different methods is shown in Fig. 16 for the resolution \(N_{x}=128\). At this intermediate mesh resolution, there is some discrepancy between the final shape obtained with the new EBIT method and that with the PLIC-VOF method. However, they show the same level of deviation from the reference solution.
For the EBIT method with a straight line fit, there is an even more pronounced mass loss.
Figure 16: Single vortex test with period \(T=8\) at resolution \(N_{x}=128\) with different methods: (a) interface lines at halftime; (b) interface lines at the end of the simulation
Furthermore, the interface does not return to its initial position due to the lateral shift of the interface with respect to the reference solution, as shown in Fig. 16a at maximum deformation.
The area error \(E_{area}\) and the shape error \(E_{shape}\) are listed in Table 3 and are shown in Fig. 17. For the new EBIT method, a convergence rate between first-order and second-order is observed for both errors. This behavior is similar to that obtained in the previous test with period \(T=2\). The shape errors obtained with the new EBIT method and the PLIC-VOF method become closer to each other as the mesh is refined. The EBIT method with a straight line fit loses all the reference phase even when an intermediate mesh resolution, \(N_{x}=64\), is used. All the kinematic tests show that the new EBIT method with a circle fit does decrease the mass loss as the interface is reconstructed, thus increasing the accuracy of mass conservation, and does improve the performance of the EBIT method.
Figure 17: Errors in the single vortex test with period \(T=8\) for different methods as a function of grid resolution: (a) area error \(E_{area}\); (b) shape error \(E_{shape}\)
In order to demonstrate the feasibility of the integration of the new EBIT method with AMR, we run again the single-vortex test case on a quadtree grid in Basilisk. The interface lines at halftime and at the end of the simulation and the corresponding meshes are shown in Fig. 18. The maximum level of refinement is \(N_{l,max}=7\), which corresponds to the mesh resolution \(N_{x}=128\), while the minimum level is \(N_{l,min}=4\). All the cells near the interface are refined to the maximum level, to avoid an inconsistency like that shown in Fig. 8. Since the imposed velocity field is not affected by the mesh resolution and the same stencil is used for the velocity interpolation, the interface line obtained on the quadtree grid coincides with that on the fixed Cartesian mesh.
### Capillary wave
Capillary waves are a basic phenomenon of surface-tension-driven flows and their adequate numerical resolution is a prerequisite to more complex applications. The small-amplitude damped oscillations of a capillary wave are now a classical test case to check the accuracy of new numerical schemes that are developed to investigate the evolution in time of viscous, surface-tension-driven two-phase flows.
A sinusoidal perturbation is applied to a plane interface between two fluids initially at rest. Under the influence of surface tension, the interface begins to oscillate around its equilibrium position, while the amplitude of the oscillations decay in time due to viscous dissipation. The exact analytical solution was found by Prosperetti [29] in the limit of very small amplitudes, and it is usually used as a reference.
Figure 18: Single vortex test with period \(T=8\) at fixed resolution \(N_{x}=128\) and AMR with \(N_{l,max}=7\) and \(N_{l,min}=4\): (a) interface lines at halftime; (b) interface lines at the end of the simulation
In order to get a good agreement with the theory [22], it is necessary to move the top and bottom boundaries far away from the interface; in particular, here we consider the rectangular computational domain \([0,\lambda]\times[0,4\lambda]\), where \(\lambda\) is the wavelength of the perturbation. A symmetric boundary condition is applied on the four sides of the domain. The initial amplitude of the perturbation is \(\lambda/100\), as in Popinet and Zaleski [30], Denner et al. [31] and Gerlach et al. [32]. The values of the physical properties used in the simulation and of the Laplace number, \(La=\left(\rho_{1}\sigma\lambda\right)\bigl{/}\mu_{1}^{2}\), are listed in Table 4.
The time evolution of the maximum amplitude of the interface is shown in Fig. 19, together with those of the analytical solution and of the PLIC-VOF method. Time is made dimensionless by using the normal-mode oscillation frequency \(\omega_{0}\), which is defined by the dispersion relation
\[\omega_{0}^{2}=\frac{\sigma k^{3}}{2\rho_{1}} \tag{11}\]
where \(k=2\pi/\lambda\) is the wavenumber, with \(\lambda=1\) in our simulation. The numerical results obtained with the new EBIT and PLIC-VOF methods agree rather well with the analytical solution.
The error between the theoretical solution [29] and the two numerical solutions can be further analyzed with the \(L_{2}\) norm
\[E_{2}=\frac{1}{\lambda}\sqrt{\frac{\omega_{0}}{25}\int_{t=0}^{25/\omega_{0}}(h-h_{exact})^{2}\,dt} \tag{12}\]
where \(h\) is the maximum interface amplitude obtained with a numerical method and \(h_{exact}\) the reference value. The results are shown in Fig. 20 and a convergence rate close to second-order is observed for both methods,
the error of the PLIC-VOF method being always somewhat smaller. In both simulations, the height function method has been used to calculate the curvature [22]. More particularly, since we are considering a very small amplitude of the oscillations, and thus of the interface curvature as well, the straight line approximation of the new EBIT method in each cut cell provides a fairly good approximation of the volume fraction and hence of the local height function.
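For concreteness, the sketch below evaluates the normal-mode frequency of Eq. (11) for the parameters of Table 4 and approximates the error norm of Eq. (12) from a sampled amplitude signal by a trapezoidal sum; the damped signal used here is only an assumed placeholder for the simulated amplitudes.

```python
import numpy as np

rho1, sigma, lam = 1.0, 1.0, 1.0
k = 2.0 * np.pi / lam
omega0 = np.sqrt(sigma * k**3 / (2.0 * rho1))   # Eq. (11)

def l2_error(t, h, h_exact, omega0, lam=1.0):
    """Discrete (trapezoidal) approximation of the L2 norm of Eq. (12)."""
    f = (h - h_exact)**2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return np.sqrt(omega0 / 25.0 * integral) / lam

# Assumed damped oscillation standing in for the measured amplitude.
t = np.linspace(0.0, 25.0 / omega0, 2001)
h_exact = 0.01 * np.cos(omega0 * t) * np.exp(-0.1 * omega0 * t)
h_num = h_exact + 1e-5 * np.sin(3.0 * omega0 * t)
print(omega0, l2_error(t, h_num, h_exact, omega0))
```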
### Rayleigh-Taylor instability
In order to demonstrate the capability of the new EBIT method to deal with more complex flows, we investigate another classical test: the Rayleigh-Taylor instability at high Reynolds number, that involves a large deformation of the interface.
The Rayleigh-Taylor instability occurs when a heavy fluid is on top of a lighter one, with the direction of gravity from top to bottom. The density difference between the two fluids plays an important role in the instability and is present in the definition of the dimensionless Atwood number \(At\)
\[At=\frac{\rho_{1}-\rho_{2}}{\rho_{1}+\rho_{2}} \tag{13}\]
where \(\rho_{1}\) and \(\rho_{2}\) are the densities of the heavy and light fluids, respectively.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\rho_{1}\) & \(\rho_{2}\) & \(\mu_{1}\) & \(\mu_{2}\) & \(\sigma\) & \(La\) \\ \hline
1 & 1 & 0.01826 & 0.01826 & 1 & 3000 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Physical properties for the capillary wave test
Figure 19: Capillary wave test with different methods: time evolution of the maximum amplitude of the interface oscillation. The time \(\tau=\omega_{0}\,t\) is non-dimensional and the grid resolution is \(N_{x}\times N_{y}=64\times 256\)
Figure 20: Error \(E_{2}\) in the capillary wave test for different methods as a function of grid resolution
This instability has been investigated in several studies [33; 34; 35], which consider an incompressible flow without surface tension effects, with \(At=0.5\) and different Reynolds numbers, \(Re=\big{(}\rho_{1}g^{1/2}d^{3/2}\big{)}\big{/}\mu_{1}\).
In this study, we consider the rectangular computational domain \([0,d]\times[0,4d]\), partitioned with \(N_{x}\times N_{y}=128\times 512\) grid cells. The plane interface \(y_{0}(x)=2\,d\) between the two fluids is perturbed by a sinusoidal wave \(y_{1}(x)=0.1\,d\cos(kx)\), so that the interface line at the beginning of the simulation is
\[y(x)=y_{0}(x)+y_{1}(x)=2\,d+0.1\,d\cos(kx),\quad k=\frac{2\pi}{\lambda} \tag{14}\]
with \(\lambda=d=1\). A no-slip boundary condition is enforced at the bottom and at the top of the computational domain, and a symmetric boundary condition on the two vertical sides. The value of the other physical properties that are used in the simulation and of the Reynolds number \(Re\) is listed in Table 5.
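The dimensionless numbers and the initial interface of Eq. (14) can be checked with the short sketch below, which only reproduces the setup from the values of Table 5 and is not part of the solver.

```python
import numpy as np

rho1, rho2, mu1, g, d = 3.0, 1.0, 0.00313, 9.81, 1.0

At = (rho1 - rho2) / (rho1 + rho2)      # Eq. (13), expected 0.5
Re = rho1 * g**0.5 * d**1.5 / mu1       # expected to be about 3000

lam = d
k = 2.0 * np.pi / lam
x = np.linspace(0.0, d, 129)
y_interface = 2.0 * d + 0.1 * d * np.cos(k * x)   # initial interface, Eq. (14)

print(f"At = {At:.2f}, Re = {Re:.0f}")
```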
The results of the simulation are first presented in Fig. 21 with the representation of the interface line at several dimensionless times \(\tau=t\sqrt{g\,At/d}\). Overall, these results compare rather well with those presented in [35].
More specifically, in the early stages of the simulation, \(\tau\leq 1.75\), the shape of the interface calculated with the new EBIT method and that with the PLIC-VOF method are in very good agreement; small discrepancies are observed only in the roll-up region where complex structures with thin ligaments start to develop.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\rho_{1}\) & \(\rho_{2}\) & \(\mu_{1}\) & \(\mu_{2}\) & \(g\) & \(Re\) \\ \hline
3 & 1 & 0.00313 & 0.00313 & 9.81 & 3000 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Physical properties for the Rayleigh-Taylor instability test
Figure 21: Rayleigh-Taylor instability test with different methods: interface lines at dimensionless times \(\tau=1.,1.5,1.75,2.,2.25,2.5\)
Figure 22: Rayleigh-Taylor instability test with different methods: highest and lowest positions of the interface, with respect to the mean position at \(y=2\), as a function of dimensionless time \(\tau\)
Some remarkable differences occur at later times (\(\tau\geq 2.00\)) when the ligaments start to break up. Due to the mass conservation property of the PLIC-VOF method, many small droplets are formed when the ligaments tear apart. However, in the new EBIT method, these small droplets will soon disappear due to the topology change mechanism. Thus, in the roll-up region, the interface structure obtained with the new EBIT method agrees only qualitatively with that of the PLIC-VOF method.
In spite of this local but consistent difference, very good agreement is observed for the highest and lowest positions of the interface during the whole simulation. The highest position of the rising fluid, near the two vertical boundaries at \(x=0,d\), and the lowest position of the falling fluid, near the centerline at \(x=d/2\), both computed with respect to the mean position at \(y=2\,d\), are shown in Fig. 22. They are in very good agreement with the results obtained with the PLIC-VOF method and those by Tryggvason [33], Guermond [34] and Ding [35].
The interface lines obtained with the new EBIT method combined with AMR are finally shown in Fig. 23.
\begin{table}
\begin{tabular}{c c c c} \hline Method & Number of (leaf) cells & Time steps & Wall time (s) \\ \hline EBIT & 65536 & 5656 & 1741 \\ EBIT-AMR & 27355 & 5656 & 207 \\ \hline VOF & 65536 & 5656 & 914 \\ VOF-AMR & 27223 & 5656 & 151 \\ \hline \end{tabular}
\end{table}
Table 6: Computational efficiency of the EBIT method with AMR, Rayleigh-Taylor instability test, \(N_{l,max}=9\) and \(N_{l,min}=6\)
Figure 23: Rayleigh-Taylor instability test at dimensionless times \(\tau=1.,1.75,2.25\): interface lines with AMR, \(N_{l,max}=9\) and \(N_{l,min}=6\)
The maximum level of refinement is now \(N_{l,max}=9\), while the minimum level is \(N_{l,min}=6\). The refinement criteria are based not only on the position of the interface, but also on the velocity gradient; thus the cells within the roll-up region, characterized by a strong vorticity, are also refined to the maximum level, even if they are not very close to the interface (see Fig. 23b). The interface lines calculated with the new EBIT method on the quadtree grid agree rather well with those on the fixed Cartesian grid (see Fig. 23a); only minor differences are observed in the roll-up region.
The computational efficiency for this test of the new EBIT method with or without AMR is summarized in Table 6. For the PLIC-VOF method with AMR (VOF-AMR), the mesh refinement criteria are based on both curvature and velocity gradient. Without AMR, the simulation with the new EBIT method is about two times slower than with the PLIC-VOF method. When AMR is used, however, the wall times for the two methods are comparable.
### Rising bubble
In this test we examine a single bubble rising under buoyancy inside a heavier fluid. This test case was first proposed by Hysing [36] and it provides a standard benchmark for multiphase flow simulations, since this configuration is simple enough to be simulated accurately. Nevertheless, the bubble shows a strong deformation and even complex topology changes in some flow regimes [37; 38], thus giving an adequate challenge to interface tracking techniques.
By taking into account the symmetry with respect to the vertical axis, we consider the rectangular computational domain \([0,D]\times[0,4\,D]\), with \(D=0.5\), partitioned with \(N_{x}\times N_{y}=128\times 512\) grid cells. At the beginning of the
simulation, a circular bubble of radius \(R=D/2\) is positioned in the bottom part of the domain with center at \((0,D)\). A no-slip boundary condition is enforced at the bottom and at the top of the computational domain, a free-slip boundary condition on the right vertical wall and a symmetric boundary condition on the left vertical boundary. The values of the relevant physical properties are those provided by Hysing [36] and are listed in Table 7, where \(Re=\big{(}\rho_{1}g^{1/2}D^{3/2}\big{)}\big{/}\mu_{1}\) is the Reynolds number and \(Bo=\big{(}\rho_{1}gD^{2}\big{)}\big{/}\sigma\) the Bond number, with the bubble diameter \(D\) as the length scale.
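As a quick consistency check of the two dimensionless groups, the sketch below recomputes Re and Bo from the values of Table 7, taking the bubble diameter \(D=0.5\) as the length scale (an assumption consistent with the benchmark values of Hysing [36]).

```python
def bubble_numbers(rho1, mu1, g, sigma, D=0.5):
    """Reynolds and Bond numbers of the rising bubble benchmark,
    with the bubble diameter D used as the length scale."""
    Re = rho1 * g**0.5 * D**1.5 / mu1
    Bo = rho1 * g * D**2 / sigma
    return Re, Bo

# Test case 1 and test case 2 of Table 7.
print(bubble_numbers(1000.0, 10.0, 0.98, 24.5))   # expected (35, 10)
print(bubble_numbers(1000.0, 10.0, 0.98, 1.96))   # expected (35, 125)
```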
We consider the two different test cases of Table 7. In the first test, the bubble should end up in the ellipsoidal regime [36], since the surface tension forces are strong enough to hold the bubble together, hence no breakup is present in the simulation. For this first case, we solve both the axisymmetric problem and the two-dimensional Cartesian one.
The interface lines at the end of the simulation at time \(t=3\) are shown in Fig. 24a, for both the axisymmetric and Cartesian problems and with the new EBIT and PLIC-VOF methods. In general, good agreement between the two methods is found in both problems. More particularly, the bubble computed with the PLIC-VOF method is always slightly ahead of the bubble with the new EBIT method.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Test case & \(\rho_{1}\) & \(\rho_{2}\) & \(\mu_{1}\) & \(\mu_{2}\) & \(g\) & \(\sigma\) & \(Re\) & \(Bo\) \\ \hline
1 & 1000 & 100 & 10 & 1 & 0.98 & 24.5 & 35 & 10 \\
2 & 1000 & 1 & 10 & 0.1 & 0.98 & 1.96 & 35 & 125 \\ \hline \end{tabular}
\end{table}
Table 7: Physical properties for the rising bubble test
The rising velocities for the two problems and the two methods as a function of time are shown in Fig. 25a. The figure also includes the results obtained with the new EBIT method in conjunction with AMR. The profiles of the rising velocities are very close to each other, and this justifies the fact that at the end of the simulation there is very little difference between the interface lines. For the Cartesian problem, the value of the maximum rising velocity is \(0.2419\) for the new EBIT method and \(0.2418\) for the PLIC-VOF method, which are basically the same value, \(0.2419\pm 0.0002\), reported by Hysing [36].
In the second test, the bubble lies somewhere between the skirted and the dimpled ellipsoidal-cap regimes, indicating that breakup can eventually take place. The simulation is carried out with both the new EBIT and PLIC-VOF methods, and the interface lines at the end of the simulation at time \(t=3\) are shown in Fig. 24b. At the given mesh resolution, the bubble skirt is observed with both methods, with no interface breakup. Good agreement is observed between the two interface lines. In more detail, the new EBIT method predicts a slightly larger central part of the bubble and a smoother and shorter tail in the skirt region.
The rising velocities for the two methods as a function of time are shown in Fig. 25b. In this case as well, the figure includes the results obtained with the new EBIT method in conjunction with AMR. The presence of two peaks is well predicted in our simulation. The value of the first one is \(0.2507\) for the new EBIT method and \(0.2512\) for the PLIC-VOF method, in good agreement with the value \(0.25\pm 0.01\) indicated by Hysing [36].
When AMR is considered, the refinement criteria are those that have been used in the Rayleigh-Taylor instability test, i.e. proximity to the interface and velocity gradient.
Figure 24: Rising bubble test. Interface lines at the end of the simulation \(t=3\) with different methods: (a) axisymmetric and Cartesian solutions for test case 1; (b) Cartesian solution for test case 2
Figure 25: Rising bubble test. Bubble velocities as a function of time with different methods: (a) profiles for test case 1, including EBIT with AMR; (b) profiles for test case 2, including EBIT with AMR
Figure 27: Rising bubble test case 1-Axisymmetric: interface lines at the end of the simulation \(t=3\) with AMR, \(N_{l,max}=9\) and \(N_{l,min}=6\)
Figure 26: Rising bubble test case 1-2D: interface lines at the end of the simulation \(t=3\) with AMR, \(N_{l,max}=9\) and \(N_{l,min}=6\)
The maximum level of refinement is \(N_{l,max}=9\), while the minimum level is \(N_{l,min}=6\), for both test cases. The interface lines at the end of the simulation at time \(t=3\) are shown in Figs. 26 and 27 for test case 1, and in Fig. 28 for test case 2.
For the axisymmetric problem, the interface line calculated by the new EBIT method on the quadtree grid is on top of the line on the fixed Cartesian grid. For the two Cartesian problems, the interface line on the quadtree grid is just a little bit behind that on the fixed Cartesian grid.
The computational efficiency for these two cases of the new EBIT method with or without AMR is summarized in Table 8. Without AMR, the simulation with the new EBIT method is about two times slower than with the PLIC-VOF method, for test case 1 and the Cartesian problem.
Figure 28: Rising bubble test case 2-2D: interface lines at the end of the simulation \(t=3\) with AMR, \(N_{l,max}=9\) and \(N_{l,min}=6\)
For test case 2, with a much smaller surface tension coefficient, the two wall times are closer to each other. The axisymmetric problem is somewhat intermediate. When AMR is used, the wall times of the two methods are comparable, in agreement with the results obtained in the Rayleigh-Taylor instability test.
## 4 Conclusions
We present a novel Front-Tracking method, the Edge-Based Interface Tracking (EBIT) method, which is more suitable for parallelization due to the lack of explicit connectivity. Several new features have been introduced
\begin{table}
\begin{tabular}{c c c c c} \hline & Method & Number of (leaf) cells & Time steps & Wall time (s) \\ \hline Case1-2D & EBIT & 65536 & 4606 & 1726 \\ & EBIT-AMR & 11077 & 4606 & 172 \\ & VOF & 65536 & 4607 & 935 \\ & VOF-AMR & 10852 & 4607 & 147 \\ \hline Case1-Axi & EBIT & 65536 & 4606 & 1824 \\ & EBIT-AMR & 10141 & 4606 & 166 \\ & VOF & 65536 & 4607 & 1136 \\ & VOF-AMR & 9937 & 4607 & 145 \\ \hline Case2-2D & EBIT & 65536 & 1599 & 4653 \\ & EBIT-AMR & 14392 & 1595 & 729 \\ & VOF & 65536 & 1536 & 3577 \\ & VOF-AMR & 14053 & 1535 & 751 \\ \hline \end{tabular}
\end{table}
Table 8: Computational efficiency of the EBIT method with AMR, rising bubble test, \(N_{l,max}=9\) and \(N_{l,min}=6\)
to improve the very first version of the EBIT method [20]. First, a circle fit has been implemented to improve the accuracy of mass conservation after the interface advection in the reconstruction phase. Second, a Color Vertex feature has been introduced to distinguish between ambiguous topological configurations and to represent the connectivity implicitly. Third, an automatic topological change mechanism has been discussed.
The new EBIT method has been implemented inside the open-source Basilisk platform in order to solve the Navier-Stokes equations for multi-phase flow simulations with surface tension. Volume fractions are calculated based on the position of the markers and the Color Vertex, and are used to calculate the physical properties and the surface tension force. To improve the computational efficiency and to avoid inconsistencies, AMR can be used with the EBIT method by considering a careful refinement strategy.
Numerical results for various cases, including both kinematic and dynamical tests, have been considered and compared with those obtained with the VOF method. Good agreement is observed for all test cases.
In future work, we aim to remove the restriction on the number of markers on a cell side, to improve our control on topological changes, and to extend the EBIT method to three dimensions.
## 5 CRediT authorship contribution statement
**J. Pan**: Conceptualization, Formal analysis, Code development, Simulations, Writing
**T. Long**: Formal analysis, Code development
**L. Chirco**: Conceptualization, Code development
**R. Scardovelli**: Formal analysis, Writing
**S. Popinet**: Basilisk code development
**S. Zaleski**: Conceptualization, Formal analysis, Supervision, Writing, Funding acquisition
## 6 Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## 7 Acknowledgements
Stephane Zaleski and Stephane Popinet recall meeting Sergei Semushin in March 1995 and learning about his method. They thank him for the explanation of the method. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement number 883849).
|
2301.03013 | Semantic rule Web-based Diagnosis and Treatment of Vector-Borne Diseases
using SWRL rules | Vector-borne diseases (VBDs) are a kind of infection caused through the
transmission of vectors generated by the bites of infected parasites, bacteria,
and viruses, such as ticks, mosquitoes, triatomine bugs, blackflies, and
sandflies. If these diseases are not properly treated within a reasonable time
frame, the mortality rate may rise. In this work, we propose a set of
ontologies that will help in the diagnosis and treatment of vector-borne
diseases. For developing VBD's ontology, electronic health records taken from
the Indian Health Records website, text data generated from Indian government
medical mobile applications, and doctors' prescribed handwritten notes of
patients are used as input. This data is then converted into correct text using
Optical Character Recognition (OCR) and a spelling checker after
pre-processing. Natural Language Processing (NLP) is applied for entity
extraction from text data for making Resource Description Framework (RDF)
medical data with the help of the Patient Clinical Data (PCD) ontology.
Afterwards, Basic Formal Ontology (BFO), National Vector Borne Disease Control
Program (NVBDCP) guidelines, and RDF medical data are used to develop
ontologies for VBDs, and Semantic Web Rule Language (SWRL) rules are applied
for diagnosis and treatment. The developed ontology helps in the construction
of decision support systems (DSS) for the NVBDCP to control these diseases. | Ritesh Chandra, Sadhana Tiwari, Sonali Agarwal, Navjot Singh | 2023-01-08T10:32:38Z | http://arxiv.org/abs/2301.03013v2 | # Semantic rule Web-based Diagnosis and Treatment of Vector-Borne Diseases using SWRL rules
###### Abstract
Vector-borne diseases (VBDs) are a kind of infection caused through the transmission of vectors generated by the bites of infected parasites, bacteria, and viruses, such as ticks, mosquitoes, triatomine bugs, blackflies, and sandflies. If these diseases are not properly treated within a reasonable time frame, the mortality rate may rise. In this work, we propose a set of ontologies that will help in the diagnosis and treatment of vector-borne diseases. For developing VBD's ontology, electronic health records taken from the Indian Health Records website, text data generated from Indian government medical mobile applications, and doctors' prescribed handwritten notes of patients are used as input. This data is then converted into correct text using Optical Character Recognition (OCR) and a spelling checker after pre-processing. Natural Language Processing (NLP) is applied for entity extraction from text data for making Resource Description Framework (RDF) medical data with the help of the Patient Clinical Data (PCD) ontology. Afterwards, Basic Formal Ontology (BFO), National Vector Borne Disease Control Program (NVBDCP) guidelines, and RDF medical data are used to develop ontologies for VBDs, and Semantic Web Rule Language (SWRL) rules are applied for diagnosis and treatment. The developed ontology helps in the construction of decision support systems (DSS) for the NVBDCP to control these diseases.
_Keywords_: Semantic Web; Decision Support System; Basic Formal Ontology; NVBDCP; Vector Borne Diseases.
## 1 Introduction
VBDs are a kind of health issue caused by pathogens spread by arthropods such as triatomine bugs, mosquitoes, blackflies, sand flies, tsetse flies, ticks, and lice [1]. Vectors are biological organisms that can spread infectious disease from one person to another or from one animal to another. VBDs account for around 17% of all infectious diseases. As per one of the reports of the World Health Organization (WHO), more than 1 billion illnesses and over 1 million fatalities happen per year due to VBDs [2]. In India, VBDs are being controlled and prevented through the NVBDCP, which was introduced in 2003-04 by the Government of India. The NVBDCP [3] was formed by combining the National Filaria Control Programme, the Kala-azar Control Programme, and the National Anti-Malaria Control Programme. It also includes dengue and Japanese B encephalitis. The NVBDCP receives funding from the World Bank and the Global Fund ATM (GFATM) to focus its efforts primarily on the most endemic areas to control malaria and eliminate kala-azar, which has a negative impact on poor people who live in dense, untidy, and unsanitary housing [4]. WHO is also a significant partner that provides necessary support, rules, guidelines, and technical advice to the programme. In the present scenario, countries all over the world, especially India, are facing a shortage of doctors. The majority of people are suffering greatly as a result of their lack of knowledge about proper medical treatment and checkups. The proposed model is useful to cope with this problem. We deployed this model in the form of an app and website in remote areas, which reduces the patients' dependence upon doctors and helps people avoid paying unnecessarily large amounts to doctors.
A Decision Support System (DSS) [5] is a computer-based information system that integrates models and data to handle unstructured or semi-structured problems with multiple user engagements via a friendly user interface. A DSS plays an important role for both the doctor and the patient. It not only assists physicians in diagnosis and treatment but also improves healthcare remotely, affecting the quality of life of patients. The semantic web is an effective option
for knowledge sharing and representation in order to improve one's expertise. One of the pillars of the semantic web is ontology. It is defined as "a technology for knowledge representation that has been adopted". It functions as a domain-specific dictionary, defining objects, properties, and the relationships between them.
Ontology is basically a data model where knowledge can be represented by using concepts related to any domain and defining relationships among the concepts. Nowadays, ontologies are utilized in the field of information science to accomplish a variety of activities, such as improving user-machine communication. It also makes use of any pre-existing data model or knowledge schema. As a result, the fundamental concept of the semantic web has grown to a higher level, and several types of ontologies have been designed. Among the several categories and classifications, Basic Formal Ontology (BFO) [6] is the only one that supports reasoning in addition to the general Open World Semantics, which is followed by all ontologies.
This work deals with textual medical data collected from doctors' handwritten notes, mobile medical applications, and websites. All this text data is combined, and meaningful text is extracted from it using text-based algorithms like OCR, a spell checker, and NLP [7]. This text data is then converted into RDF medical data [8] with the help of NLP [9] and the PCD ontology [10]. A formal ontology is then developed using this RDF medical data, the NVBDCP guidelines, and SWRL rules [11] for the diagnosis and treatment of VBDs. This work's ultimate purpose is to digitize the NVBDCP by combining the ideas of the DSS [5] and the Web Ontology Language (OWL) [12]. The following points highlight the major contributions of this work:
* The text-based medical data is transformed into Resource Description Framework triples to increase its quality and reuse in the future.
* To develop a rule-based diagnosis and treatment system for VBD's patients, a set of rules can be defined with the help of classes, properties, and persons using Semantic Web Rule Language.
* The developed Knowledge Driven DSS aids new meaning in actual decision making with the support of facts, rules, and procedures.
* Design a BFO-based ontology for a better understanding of VBD guidelines, precautionary measures, and the working process of the NVBDCP.
* Form a new text extraction model based on natural language processing for retrieving meaningful medical text data.
The remaining part of this work is arranged as follows: Section 2 provides a glimpse of existing related work about knowledge representation through ontology. Section 3 explains the proposed methodology of construction of VBD's ontology. Section 4 discusses ontology development based on BFO using NVBDCP guidelines, semantic web rule language (SWRL), usage of SWRL rules in diagnostic classification ontologies, and a practical framework view of diagnosis and treatment of VBDs. Section 5 presents the results of the metric-based evaluation of ontology, and Section 6 reports the conclusion and future scope.
## 2 Related Work
Many studies have been previously done in the case of vector borne disease identification, precaution, and treatment. These studies highlighted the potential research gaps and interest in the diagnosis of diseases caused through the transmission of vectors. Semantic web-based disease modeling is a relatively new term that has piqued the interest of researchers and medical practitioners dealing with diseases spread by mosquitoes, fleas, and other bacteria.
Topalis, P. et al. [13] developed a tool that is extremely beneficial for the malaria community due to its efforts to reduce the worldwide malaria burden effectively. They propose Infectious Disease Ontology--Malaria (IDOMAL), the first operational malaria ontology whose objective is to design a standard language for the community that
computers and dedicated software can both understand. The IDOMAL [14][15] contains almost two thousand terms. This ontology captures multiple aspects of disease, such as clinical, epidemiological, and vector biology. Some other works provide the experimental evaluation of the disease control system, in which eight different diseases are considered, including dengue, malaria, cholera, diarrhea, influenza, meningitis, leishmaniasis, and kala-azar. The disease may be detected on the basis of primary symptoms, and their relationships will be valuable for developing a biomedical knowledge base (e.g., a disease ontology) for e-health and disease surveillance systems [15][16][17].
The most common diseases afflicting Indonesian society are classified as tropical diseases, such as malaria, leprosy, and lymphatic filariasis. Using the Resource Description Framework (RDF) serialization format, the Semantic Web can represent data relationships in Bahasa Indonesia [18]. Table 1 provides a summarized list of existing research performed on various infectious diseases using ontology.
After reviewing numerous papers in the literature, it is evident that no one has discussed combining complete VBDs into a single platform. Only a few works show knowledge representation through ontologies such as the IDODEN and IDOMAL ontologies. In the proposed work, a new ontology is constructed for VBDs that encompasses all vector
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Targeted disease & Objective & Technology used & Result \\ \hline Based on infectious diseases like bacteria, fungus etc. [19] & To construct a disease intelligence system, acquire and synthesize fragmented illness knowledge obtained from diverse sources. & Ontology creation and analysis. & Class hierarchy with illness ontology features. \\ \hline Based on dengue[20] & To integrate and navigate many types of information in the context of dengue sickness. & Ontology creation and text mining. & Dengue data integration and sharing generic infrastructure. Explanation of dengue serotype trends. \\ \hline Based on dengue[21] & To provide a system for detecting similarities among dengue-infected people in order to effectively manage the outbreak. & Creation of an ontology-dependent domain thesaurus and case-based reasoning for similarity identification. & In terms of accuracy and error rate, the framework surpasses the User-based Pearson Correlation Coefficient (UPCC) and Item-based Pearson Correlation Coefficient (IPCC) methods. \\ \hline Based on dengue[22] & Imputing missing data to improve the predictive ability of data mining techniques. & Semantic data imputation based on ontology inference for DF epidemic data. & Experiment results show that semantic data imputation outperforms statistical techniques. \\ \hline Based on Break bone fever[23] & To broaden the knowledge base for preventing and controlling vector-borne disease. & IDODEN ontology creation. & DF taxonomy including biological, epidemiological, and clinical characteristics. \\ \hline \end{tabular}
\end{table}
Table 1: Existing research performed on various infectious diseases using ontology
borne illnesses under one roof to make it more efficient. This ontology is operational to support the diagnosis and therapy of VBDs with the help of SWRL rules. The text data collected from patients' databases, doctor's handwritten notes, and test reports of patients will be extracted using OCR and corrected with the help of a spell checker. This work also helps in making an RDF medical data using the PCD ontology, which is based on text entity extraction through NLP.
## 3 Proposed Methodology
### Dataset Description
The electronic health records taken from the Indian Health Records website are used for making ontology [24], which includes details of patients' age, smoking habits, drug use, previous diseases, and so on. This data is maintained by CDAC Mohali. The text data used in this work is taken from the Vector Borne Disease Control and Surveillance mobile application [25] and doctors' handwritten notes, in the form of images and documents (i.e., pdf, doc, etc.). Many other health guidelines have been formed with the help of the NVBDCP guidelines [3].
### Structure of the proposed model
#### 3.2.1 Module for data pre-processing
The doctor's written clinical notes and medical mobile application data [25] are fed into the data pre-processing module so that they can be processed in the correct text data format. Two processes are used to convert this data into correct text data, which is then given to NLP for knowledge extraction from the text data: optical character recognition [26] and spell checking [27]. Electronic health records are given to the PCD ontology after preprocessing.
#### a) Optical Character Recognition
The system through which scanned images of any document, such as handwritten or typed text, can be converted into machine-readable and editable text documents is defined as OCR. Handwriting recognition refers to a computer's capacity to recognise and comprehend handwritten input from a variety of sources, including paper files, touch screens [28], and other kinds of devices.
Figure 1: Complete architecture of the proposed model
Typed or handwritten documents are available in a wide range of typefaces and styles. The three sorts of writing styles are continuous (cursive) text, distinct (handprint or boxed) text, and mixed text. In the ontology-based clinical information extraction (OB-CIE) process [8], a pen is used by the physician to record the details about a patient's visit on a paper note; after this, the paper is scanned and stored on the computer in image format. Handwritten text is recognized by the OCR component, which converts it into an editable text file. The OCR component is built using the Tesseract OCR engine [29], an open-source OCR engine originally developed by HP, which is regarded as one of the most powerful and accurate OCR engines among the existing ones. The Tesseract OCR engine is integrated with OB-CIE using Tess4J [8]. Tess4J works as a Java JNA wrapper for the Tesseract OCR API, which is licensed under the Apache License, version 2.0.
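A minimal sketch of this OCR step, using the Tesseract engine through its Python wrapper rather than the Tess4J/Java path of OB-CIE, might look as follows; it assumes pytesseract and Pillow are installed, the Tesseract engine is on the system path, and the file name is a placeholder.

```python
from PIL import Image
import pytesseract

def recognize_note(image_path):
    """Return the machine-readable text extracted from a scanned
    handwritten or typed clinical note."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

if __name__ == "__main__":
    text = recognize_note("scanned_note.png")  # placeholder file name
    print(text)
```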
#### b) Spelling Checker
The output text generated by the OCR system can contain typographical errors in the recognized text if different writing styles are present in the scanned copy of a handwritten file. Before diving deeper into the next step, it must be ensured through spell checking that each concept is correct. The OB-CIE spell checker follows two steps: firstly, it identifies the misspelled concepts and translates these concepts into their correct form. This spell checker unit is built using JOrtho (Java Orthography) [30]. The dictionary of JOrtho (a Java-based open-source library) is established with the help of the Wiktionary project, which has 5,836,006 entries over 3800 languages, including English definitions [8]. The spell-checker unit of the system was developed using the JOrtho dictionary. The spell checker verifies from the dictionary whether each term is correct or not. If any particular term is not found, it is considered a misspelled term by the spell checker and is highlighted. Secondly, the checker consults the dictionary to provide a list of relevant term options, which are then ranked to allow the user to select the most appropriate one. If the term is correct but not in the dictionary, it can be added by the user.
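The dictionary look-up and candidate ranking can be illustrated with the standard-library sketch below; it uses a small assumed word list in place of the JOrtho/Wiktionary dictionary and is not the actual OB-CIE component.

```python
import difflib

# Tiny stand-in dictionary; the real system consults the JOrtho/Wiktionary word list.
DICTIONARY = {"fever", "cough", "malaria", "dengue", "appetite", "loss",
              "acute", "sinusitis", "patient", "headache", "chills"}

def check_term(term, n=3):
    """Return (is_correct, ranked_suggestions) for a recognized term."""
    word = term.lower()
    if word in DICTIONARY:
        return True, []
    return False, difflib.get_close_matches(word, DICTIONARY, n=n)

for term in ["fevar", "malarai", "cough"]:
    print(term, check_term(term))
```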
#### 3.2.2 Knowledge extraction based on Natural language processing
The ability of a machine to extract information from text can be improved by understanding based on natural language comprehension. In particular, the semantic meaning of phrases is taken into account as a factor to aid the machine in intent identification. Because of its scalability, a comparison of semantics between input sentences and prepared sentences is seen as a good option for detecting human intentions. The cosine distance between sentences can be used to calculate a value of semantic similarity based on their distributed representations. A probabilistic neural-net language model, a well-known idea in natural language processing, helps machines learn such distributed representations of words.
#### a) Sentence boundary detection
To determine the ending of a sentence, the OpenNLP Sentence Detector examines the punctuation character. The beginning of a sentence is presumed by the first non-whitespace character, and the end of the sentence is recognized using the last non-whitespace character. When the boundaries of a sentence have been detected, each subsequent sentence will be written on a single line.
#### b) Tokenization
This is the process of applying various steps to the tokens, like lemmatization, stemming, conversion of uppercase text to lowercase, stop word removal, and efficiently finding the relevant portion of text [31][32]. Each sentence is segmented into tokens using the OpenNLP tokenizers, which split the text on strings of whitespace characters [33].
#### c) Stop words removal
To improve the identification of concepts and their relationships during the information extraction phase, stop words must be removed from the text created by the tokenization process. A collection of common stop words that appear frequently in physicians' notes was painstakingly compiled.
#### d) Part of speech tagging (POS)
In the NLP module, POS tagging is used to detect the tag of every word, and some sets of rules are used to eliminate garbage verbs from phrases like "started" and "sought." In the information extraction module, removing these verbs will allow you to focus on detecting noun terms. The tokens and their correlative word types can be described using OpenNLP POS Tagger, and the right POS tag can be predicted using a probability model [33].
#### e) Sentence parsing
Noun phrasing is a well-known NLP technique that is used to determine whether the standard keywords can be combined to enhance the targeted information's quality [34]. A parse tree is created across each textual input during the parsing step, which represents a hierarchical structure that describes the grammatical structure of a sentence. The OpenNLP parser is used to find noun phrases in any text by extracting combinations of words that are tagged with NP. Fever, cough, toxemia, BMI, and other medical terms were generally expressed in the form of phrases or words like acute sinusitis, nocturnal enuresis, loss of appetite, etc. [35]. Figure 2 depicts a complete illustration of the working of NLP module.
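A hedged, Python-side stand-in for the OpenNLP components described above (sentence detection, tokenization, stop-word removal, POS tagging, and noun-phrase chunking) could look as follows with NLTK; it assumes the required NLTK models (punkt tokenizer and POS tagger) have been downloaded, and the stop-word list and chunk grammar are illustrative.

```python
import nltk

STOP_WORDS = {"the", "a", "an", "and", "was", "with", "has", "started", "sought"}

def extract_noun_phrases(note):
    """Sentence-split, tokenize, tag and chunk a clinical note,
    returning candidate noun phrases (medical terms)."""
    chunker = nltk.RegexpParser("NP: {<JJ>*<NN.*>+}")   # simple noun-phrase pattern
    phrases = []
    for sentence in nltk.sent_tokenize(note):
        tokens = [t for t in nltk.word_tokenize(sentence) if t.lower() not in STOP_WORDS]
        tree = chunker.parse(nltk.pos_tag(tokens))
        for subtree in tree.subtrees(filter=lambda t: t.label() == "NP"):
            phrases.append(" ".join(word for word, tag in subtree.leaves()))
    return phrases

print(extract_noun_phrases("Patient reported acute sinusitis and loss of appetite. Fever started yesterday."))
```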
Figure 2: The NLP working process and its stages
#### 3.2.3 PCD Ontology
Patient Clinical Data (PCD) is an ontology that describes the components of EHR clinical data. It represents clinical principles relating to healthcare activities that occur during a patient's visit. The correct text produced by the NLP module is processed further by extracting the appropriate information based on the chosen domain. For extracting meaningful information, many approaches are available, like Word2Vec, FastText [36], etc. The extracted terms are then mapped through the PCD ontology [37] and converted into a Resource Description Framework (RDF) database [38] view, as shown in Figure 3. Further processing is handled by the VBD ontology, which is shown in Figure 1.
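To give an idea of this RDF conversion step, the sketch below builds a few patient triples with rdflib; the namespace, class names, and property names are illustrative placeholders and not the actual PCD ontology terms.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Illustrative namespace; the real system maps extracted entities onto the PCD ontology vocabulary.
PCD = Namespace("http://example.org/pcd#")

g = Graph()
g.bind("pcd", PCD)

patient = PCD["patient_001"]
g.add((patient, RDF.type, PCD.Patient))
g.add((patient, PCD.hasSymptom, PCD.Fever))
g.add((patient, PCD.hasSymptom, PCD.Chills))
g.add((patient, PCD.hasAge, Literal(34)))

print(g.serialize(format="turtle"))
```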
## 4 Ontology Development
To create a basic formal ontology for VBDs, it must be captured how doctors ask patients about additional symptoms during medical check-ups in order to obtain accurate information for the diagnosing process. This technique necessitates a doctor's medical knowledge in order to recognize the signs that must be discovered. Recognizing a disease from its symptoms using rules in software programmes is very tricky, because every disease has many symptoms and some of them may overlap, which imposes a challenge in system development. This barrier could be overcome by using an ontology, a semantic database that can represent the relationship between symptoms and diseases. An ontology-based strategy is proposed for automatically detecting the necessary symptoms during the conversation with patients by following the procedure of a medical checkup. The system can detect symptoms in conversations and forecast the required subsequent symptoms by incorporating a neural network into the ontology database.
Figure 3: The PCD ontology’s top-level class hierarchy contains patient clinical information. [39]
### Basic Formal Ontology Based Representation of Vector Borne Diseases
This section focuses on representing NVBDCP using an upper-level ontology such as Basic Formal Ontology (BFO) [40]. BFO is designed to be domain-neutral in order to facilitate the interoperation of what are known as "domain ontologies" developed on its foundation and therefore to support uniform data annotation across multiple domains. BFO is a method of describing a basic entity that does not particularly focus on a problem area; it is commonly used in the representation of biomedical data. In this work, the domain of interest is vector-borne disease, and we have used BFO to represent NVBDCP (as shown in Figure 4).
BFO divides any entity into two types: continuant entities and occurrent entities. A continuant is an entity that persists and keeps its identity through time. An occurrent entity, in contrast, is something that unfolds or varies through time. The continuant entity is further divided into two subcategories, namely independent continuant and dependent continuant; a dependent continuant can be either a generically dependent or a specifically dependent continuant. There are four subclasses of occurrent entities: process, process boundary, temporal region, and spatiotemporal region. In this work, many concepts (entities) have been collected that are related to our domain of interest, i.e., VBDs and the NVBDCP programme. We then divided these ideas into nouns and verbs: nouns form the basis for classes, and verbs form the basis for object properties and occurrences. We then placed those concepts in the BFO structure at their appropriate positions, defined data properties and object properties for them, and finally made rules for the diagnosis and treatment of VBDs by relating those concepts using SWRL.
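The way such SWRL rules relate concepts can be sketched, purely as an illustration of the antecedent-consequent logic rather than of the actual rule base, with a small Python rule matcher; the symptom-to-disease associations used here are assumed examples.

```python
# Antecedent -> consequent pairs, mimicking SWRL rules of the form
# Patient(?p) ^ hasSymptom(?p, Fever) ^ hasSymptom(?p, Chills) -> suspectedDisease(?p, Malaria)
RULES = [
    ({"fever", "chills", "sweating"}, "suspected malaria"),
    ({"fever", "rash", "joint pain"}, "suspected dengue"),
    ({"fever", "weight loss", "enlarged spleen"}, "suspected kala-azar"),
]

def apply_rules(symptoms):
    """Return every consequent whose antecedent symptoms are all present."""
    observed = {s.lower() for s in symptoms}
    return [conclusion for antecedent, conclusion in RULES if antecedent <= observed]

print(apply_rules(["Fever", "Chills", "Sweating", "Headache"]))
```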
### The implementation of NVBDCP ideas as a Continuant entity
The organizational and operational structure of the NVBDCP is separated into several tiers, like state level, regional level, and national level. The state level is further divided into district level, sub-district level, and many more levels. Every level comprises actors responsible for implementing the NVBDCP. The Directorate of NVBDCP, Ministry of Health and Family Welfare (MoHFW, Govt. of India), Directorate General Health Services, additional directors, joint directors, research officers, and other staff members such as accountants, data entry operators, and others work at the national level. At the regional level, the regional director, entomologists, and other entomology staff are associated; at the state level, the state programme officer (SPO) (for VBDs), the deputy director, entomologists, secretaries, and other staff are involved.
Figure 4: BFO1 structure for VBDs
the Malaria Technical Supervisor (MTS), the Kala-azar Technical Supervisor (KTS), and other support staff are working. Apart from that, at the sub-district level and below, the MO-PHC and other health staff such as Accredited Social Health Activists (ASHA), Multi-Purpose Health Workers (MPHW), etc. are involved. Each of these posts is occupied by a specific person and is hence treated as a "role." "Role" comes under realizable entity because it can be realized (as visualized in figure 5(1)).
When it comes to combating vector-borne diseases in India, the NVBDCP works in tandem with the Ministry of Health and Family Welfare (MoHFW), an autonomous government agency. It employs a large number of people and organizations, so it can be represented as an **"object aggregate"** (figure 5(2)). Because an object aggregate is a material entity, it retains its identity even if some of its components are added or removed. In medical schools and hospitals, for example, if any staff or departments are added or removed, the institution retains its identity, so it can be considered an object aggregate. Any organization, such as NGOs (non-governmental organizations) and research institutes such as the Indian Council of Medical Research (ICMR) [41], the National Institute of Malaria Research (NIMR) [42], and others, can likewise be represented as an object aggregate. Considering our domain of interest, i.e., VBDs, the patient has particular importance; objects are those entities that have some special importance in the relevant area.
Various VBDs covered under the NVBDCP, such as malaria, dengue, filaria, chikungunya, and JE, are spread through mosquitoes, while kala-azar is transmitted through sand flies. So the patients, mosquitoes, and sand flies can be considered "objects," as shown in figure 5(3). Bed nets (ITNs/LLINs) for mosquito protection, DDT, medications, insecticides, diagnostic kits, blood smears (needed for detecting cases), and other items are also important in this study; these items are likewise considered "objects." Independent continuants can be categorized as immaterial entities or material entities. An immaterial entity comprises continuant fiat boundaries, sites, and spatial regions. The term "site" refers to a three-dimensional immaterial entity that is bounded by a physical entity; in other words, a "site" is reliant on material elements. PHCs (Primary Health Centers), CHCs (Community Health Centers), regional training centers, malaria clinics, and district training centers rely on the Ministry of Health and Family Welfare, Government
Figure 5: Classes of role, object aggregate, object, site, three-dimensional spatial region, generically dependent continuant, and disposition
of India, for funding, providing guidelines, and all other relevant resources. Hence all these can be recognized as **"sites"** (**as in** figure 5(4)). Laboratories, microscopy centers, and drug stores are materially dependent on hospitals and medical colleges.
A **'spatial region'** is an immaterial entity that is defined with regard to some reference frame and is a continuous portion of some space. For example, domestic areas, peri-urban areas, peri-domestic areas, rural regions, and residential blocks can be described in terms of a reference frame such as a district, state, or nation. These can all be recognized as a **'three-dimensional spatial region'** under the spatial region class, as depicted in figure 5(5). Various forms used in hospital or medical systems, such as the patient transfer form, laboratory form, malaria case investigation form, etc., require several data items like name, gender, and age.
Such a form depends on one or more concepts, so it can be considered a **'generically dependent continuant'** (as mentioned in figure 5(6)). Similarly, registers (spray registers, stock registers, etc.), records (laboratory records), reports (laboratory test reports, district annual planning reports, etc.), and results (laboratory test results) are all considered 'generically dependent continuants' under the continuant entity. Personal facts like the patient's name, gender, and age can be identified as **"quality"**, because no additional process is needed to realize them; for instance, anyone can identify a patient's gender simply by looking at them. The different VBDs, viz. malaria, dengue, chikungunya, filaria, JE, and kala-azar, are modelled as **'dispositions'**, because a disease cannot be identified just by looking at the physical appearance of a patient. A disease can be described as the state of an organism that, as a result, shows problems in one or more biological systems.
### 4.2 SWRL rules for diagnosis and treatment of VBDs under NVBDCP guidelines
SWRL (Semantic Web Rule Language) [43] is used to define rules and logic for the Semantic Web. In the proposed work, we have defined several concepts related to the NVBDCP and, following the NVBDCP guidelines, established relationships among those concepts using SWRL to define rules for the diagnosis and treatment of different vector-borne diseases. These rules are the core part of our work and support decision making: they help identify the symptoms of a particular disease and suggest suitable actions for diagnosis and treatment.
Patients are considered as objects, since the DSS developed in this work is for vector-borne diseases, and VBDs are modelled as dispositions because they can be realized and can change the physical appearance of a patient. Treating sickness as a disposition explains the fact that a disease can exist without any overt manifestation (i.e., without realization of the disposition) and that it can appear in a variety of ways (depending, for example, on the presence or absence of symptom-suppressant drugs). Each VBD has its own diagnosis and treatment plan defined by the NVBDCP. The proposed system makes heavy use of rules to direct the handler to perform the appropriate actions based on the patient's situation. The knowledge base that has been built cannot infer new information by itself; rules must be applied to the knowledge base to extract the relevant knowledge. The developed system focuses on patient care and management for the various VBDs, such as:
a) Lymphatic Filariasis
b) Chikungunya
c) Dengue
d) Malaria
e) Kala-azar
f) Japanese Encephalitis (JE)
The diagnosis process used by the NVBDCP utilizes the Semantic Web Rule Language (SWRL), which is built on the Web Ontology Language Description Logic (OWL-DL) [44] and Horn logic, to diagnose and treat patients in the aforementioned groups. The rules were first created in SWRL and then executed with the Pellet reasoner. The SWRL rules given in Table 2 display the NVBDCP criteria for detecting illnesses in people with suspected symptoms.
#### 4.2.1 Development of SWRL rules for identification of different VBDs
As is well known, chikungunya fever symptoms are quite similar to dengue fever symptoms; unlike dengue, however, chikungunya does not produce hemorrhagic manifestations, and the chikungunya virus does not cause shock. Malaria is diagnosed using a rapid diagnostic test (RDT) and microscopic analysis of blood samples. In villages where microscopic inspection is not possible within one day, RDTs are provided by health agencies and health workers such as ASHAs. As a result, treatment can be delivered based on the diagnosis. Tables 2 and 3 list the SWRL rules for diagnosis and drug selection for malaria treatment, which are based on the NVBDCP diagnosis and treatment model (i.e., the diagnosis and treatment guidelines for malaria, 2013) [45]. Figure 6 represents the procedure of microscopic diagnosis for malaria.
| S.No. | VBD detection using SWRL rules |
| --- | --- |
| 1 | patient(?p) ^ has_Fever_WithChills(?p, true) ^ has_Headache(?p, true) ^ has_Nausea(?p, true) -> has_SymptomOf_Malaria(?p, true) |
| 2 | patient(?p) ^ has_Fever(?p, true) ^ has_Headache(?p, true) ^ has_JointPains(?p, true) ^ has_Muscle_Pain(?p, true) ^ has_Vomiting(?p, true) ^ has_Hemorrhagic_Manifestations(?p, true) -> has_SymptomOf_Dengue(?p, true) |
| 3 | patient(?p) ^ has_Fever(?p, true) ^ has_Headache(?p, true) ^ has_MildInfection(?p, true) ^ has_Neck_Stiffness(?p, true) -> has_SymptomOf_JE(?p, true) |
| 4 | patient(?p) ^ has_Elephantiasis(?p, true) ^ has_Hydrocele(?p, true) ^ has_Lymphoedema(?p, true) -> has_Symptom_Of_Filaria(?p, true) |
| 5 | patient(?p) ^ has_Chills(?p, true) ^ has_Fever(?p, true) ^ has_Headache(?p, true) ^ has_Joint_Pains(?p, true) ^ has_Rash(?p, true) ^ has_Vomiting(?p, true) -> has_SymptomOf_Chikungunya(?p, true) |
| 6 | patient(?p) ^ has_Anaemia(?p, true) ^ has_Dry_Skin(?p, true) ^ has_Recurrent_Fever(?p, true) ^ has_Weakness(?p, true) ^ has_Weight_Loss(?p, true) -> has_Symptom_Of_Kalaazar(?p, true) |

Table 2: NVBDCP criteria for detecting illnesses in people with suspected symptoms
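As an illustration of how the criteria in Table 2 behave, the following is a simplified Python mirror of the rule logic (for illustration only; in the actual system these checks are carried out by the SWRL rules and the Pellet reasoner, not by hand-written code):

```python
# Each VBD is mapped to the set of symptom flags that must all hold for the
# corresponding Table 2 rule to fire (only three diseases shown here).
VBD_CRITERIA = {
    "Malaria": {"fever_with_chills", "headache", "nausea"},
    "Dengue": {"fever", "headache", "joint_pains", "muscle_pain",
               "vomiting", "hemorrhagic_manifestations"},
    "JE": {"fever", "headache", "mild_infection", "neck_stiffness"},
}

def suspected_vbds(patient_symptoms):
    """Return the diseases whose full symptom pattern is present."""
    return [vbd for vbd, needed in VBD_CRITERIA.items()
            if needed <= patient_symptoms]

print(suspected_vbds({"fever", "headache", "mild_infection", "neck_stiffness"}))
# -> ['JE']
```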
Figure 6: Flowchart of microscopic examination for Malaria [45]
The SWRL rules given in Table 4 provide guidelines for the diagnosis and treatment of malaria with a monovalent RDT.
| S.No. | When a microscopy result is received within one day, SWRL guidelines applied for diagnosis and treatment of malaria |
| --- | --- |
| 1 | Microscopic_Examination(?ME) ^ Patient(?p) ^ has_SymptomOf_Malaria(?p, true) -> undergoes(?p, ?ME) |
| 2 | patient(?p) ^ has_ME_Result(?p, ?v1) ^ is_Positive_For_PVivax(?p, true) ^ swrlb:equal(?v1, "positive") -> has_PVivax_Malaria(?p, true) |
| 3 | patient(?p) ^ has_ME_Result(?p, ?v1) ^ is_Positive_For_PFalciparum(?p, true) ^ swrlb:equal(?v1, "positive") -> has_Falciparum_Malaria(?p, true) |
| 4 | patient(?p) ^ has_ME_Result(?p, ?v1) ^ is_Positive_For_Mixed_Infection(?p, true) ^ swrlb:equal(?v1, "positive") -> has_Mixed_Infection(?p, true) |
| 5 | clinical_diagnosis(?cd) ^ patient(?p) ^ has_ME_Result(?p, ?v1) ^ swrlb:equal(?v1, "negative") -> undergoes(?p, ?cd) ^ has_Required_Malaria_Treatment(?p, false) |

Table 3: SWRL rules for guidelines for diagnosis and treatment of malaria
Figure 7: Treatment model for malaria using monovalent RDT [45]
The SWRL rules for screening and care of patients suspected of having Lymphatic Filariasis symptoms, as per the NVBDCP guidelines, are given in Figure 8.
For the treatment of Japanese encephalitis (JE), there is no particular course of action. Early case management is critical for reducing the risk of complications and mortality. If signs of JE are discovered, the patient is treated symptomatically. The Indian government has introduced the JE vaccination, which is administered to infants under
| S.No. | When microscopy results are not available within one day and a monovalent RDT is utilized, the SWRL guidelines applied for diagnosis and treatment of malaria |
| --- | --- |
| 1 | Monovalent_RDT(?v1) ^ rural_area(?ra) ^ is_ME_Result_Available_Within_One_Day(?ra, false) -> use(?ra, ?v1) |
| 2 | RDT(?rtd) ^ patient(?p) ^ has_Symptom_Of_Malaria(?p, true) ^ is_Prescribed_RDT(?p, true) -> undergoes(?p, ?rtd) ^ prepare_Slide(?p, true) |
| 4 | patient(?p) ^ has_RDT_Result(?p, "positive") ^ is_Positive_For_P_Falciparum(?p, true) -> has_Falciparum_Malaria(?p, true) |
| 5 | ACT-AL(?al) ^ Primaquine(?PQ) ^ patient(?p) ^ belongs_To_North_East_State(?p, true) ^ has_Falciparum_Malaria(?p, true) -> is_Prescribed(?p, ?PQ) ^ is_Prescribed(?p, ?al) ^ is_Prescribed_For_Duration(?PQ, 1) ^ is_Prescribed_For_Duration(?al, 3) ^ is_Prescribed_OnDay(?PQ, 2) |
| 6 | ACT-SP(?sp) ^ Primaquine(?PQ) ^ patient(?p) ^ belongs_To_Other_State(?p, true) ^ has_Falciparum_Malaria(?p, true) -> is_Prescribed(?p, ?PQ) ^ is_Prescribed_For_Duration(?PQ, 1) ^ is_Prescribed_For_Duration(?sp, 3) ^ is_Prescribed_OnDay(?PQ, 2) |
| 7 | Chloroquine(?cq) ^ patient(?p) ^ has_High_Suspicion_Of_Malaria(?p, true) ^ has_RDT_Result(?p, "Negative") ^ has_Slide_Result(?p, false) -> isPrescribed(?p, ?cq) ^ is_Prescribed_For_Duration(?cq, 3) |

Table 4: SWRL rules for diagnosis and treatment of malaria with a monovalent RDT
Figure 8: SWRL rules for Filaria with explanation
the age of two. Children are given one dosage when they are 9 months old and another when they are 16 to 24 months old.
Dengue fever has symptoms like fever, headache, muscle and joint pain, rash, nausea, and vomiting. Some infections result in dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS). DSS has all the symptoms of DHF, along with patients having a rapid and weak pulse, narrow pulse pressure, and cold skin.
As of now, no specific antiviral drug or vaccine against dengue is available. The only solution is to control Aedes aegypti mosquitoes. The treatment is based on the signs and symptoms of the disease and confirmed after blood tests. Chikungunya is diagnosed with enzyme-linked immunosorbent assay (ELISA) blood tests. Because the symptoms of chikungunya and dengue fever are so similar, laboratory testing is crucial. Chikungunya does not have a specific treatment. Getting lots of rest and receiving supportive counseling for symptoms may be beneficial. This newly designed system stores data on various activities performed for the diagnosis and treatment of VBDs.
### Framework view of diagnosis and treatment of VBDs
The diagram given in figure 9 shows a use case analysis of cases suffering from VBDs and the treatment provided to them with the help of the designed VBD ontology and SWRL rules.
The diagnosis and treatment provide suggestions to the patients for further precautions, health check-ups (i.e., blood test, X-ray), and medicines based on the diagnosed VBD. First, the patient data are collected in RDF format; then the VBDs ontology and SWRL rules are applied to these data, and diagnosis and treatment are performed for the corresponding disease as shown in figure 9. During the complete process, the Protege software [47] works in the background. It is a Java-based tool that provides a framework view of the ontology results. Protege has an inbuilt reasoner function that checks whether the ontology is consistent or not: if it is inconsistent, a warning is given; otherwise, the inference output is produced according to the SWRL rules. Many reasoners (e.g., jcel, HermiT, etc.) are built into Protege, but in this work Pellet [46] is used. The complete working process is shown in Figure 10.
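The same load-assert-reason cycle can also be scripted outside the Protégé GUI, for instance with the owlready2 Python library. The snippet below is only a sketch: the ontology file name is invented, the property names follow the paper's naming convention, and whether each data property is functional is an assumption made here for illustration.

```python
from owlready2 import get_ontology, sync_reasoner_pellet

# Load the developed VBDs ontology (file name is illustrative).
onto = get_ontology("file://vbd_ontology.owl").load()

# Assert the observed facts for a patient individual, mirroring step 2 of Figure 10.
patient1 = onto.patient("patient1")
patient1.has_Fever = [True]
patient1.has_Headache = [True]
patient1.has_MildInfection = [True]
patient1.has_Neck_Stiffness = [True]

# Run the Pellet reasoner so that the SWRL rules fire; the inferred values
# correspond to the entries highlighted in yellow in Protege.
with onto:
    sync_reasoner_pellet(infer_property_values=True,
                         infer_data_property_values=True)

print(patient1.has_SymptomOf_JE)   # e.g. [True] if the JE rule fired
```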
Figure 9: Use case analysis of VBDs patients based on VBDs ontology and SWRL rules.
In figure 10, step 1 shows the details of patient 1 before running the reasoner. After running the Pellet reasoner on the data asserted for patient 1 (the patient details given in step 2), the inferred results are shown in yellow. The output in step 3 depicts JE symptoms and recommends an ELISA and an HI test, which are shown in yellow. Step 4 includes the SWRL rules used for this output, and a complete description of both tests is shown in two separate boxes at the bottom of the figure.
Figure 11: Process of diagnosis and treatment for example patient named as RK
Figure 10: Framework view of Pellet working process, diagnosis, and treatment of patient1
Figure 11(1) shows a use case in which the data of patient RK, who has kala-azar, are asserted before running the reasoner, and Figure 11(2) shows which tests are required for further analysis (shown in yellow) and which SWRL rules were used for them. RK has three tests prescribed: aspiration, NAT, and the serological (normal infection check) test [47]. In Figure 11(3), the result of the aspiration test is positive and is asserted in RK's data; the reasoner is run again and the output (in yellow) shows that Leishmania donovani (L. donovani) is present in the body, also listing the SWRL rules used for this output. In Figure 11(4), the result of the NAT test is asserted as positive, and after running the reasoner the output indicates a three-month-old infection and again lists the SWRL rules responsible for this output. In Figure 11(5), the results of both tests are asserted, and the reasoner output (in yellow) shows that a liposomal amphotericin B injection and anti-kala-azar medicine are prescribed. The respective SWRL rules are also presented in Figure 11. All other VBDs can be diagnosed and treated in the same way.
## 5 Results and Discussion
The goal of this study is to collect biomedical text data on vector-borne diseases such as malaria, dengue, kala-azar, and others from various sources such as doctors' notes, medical mobile applications, and websites, and to convert these text data into RDF medical databases. Some of the VBDs' medical terms overlap because of identical medical check-up text terms or other factors. To resolve these overlapping terms, the PCD ontology is used, which gives the proper relationship between all the terms. We collected thousands of text words across the different VBDs and extracted meaningful words with the help of NLP, as summarized in Table 6.
A VBDs ontology is developed using BFO according to the NVBDCP guidelines. To make it more accurate, an RDF medical database is added to the ontology, along with different medical terms that are not available in the NVBDCP, and the ontology is made operational by adding individuals (i.e., patients) for diagnosis and treatment with the help of SWRL rules. A total of 72 SWRL rules were built for diagnosis and treatment. According to the RDF database, a total of 987 VBD patients have been diagnosed and treated, 767 of them successfully; the remaining cases were unsuccessful because test reports were not updated or because text prediction failed due to missing text. The complete accuracy results can be visualized as a graph in figure 12.
| VBD | Total text words collected after knowledge extraction model | Useful text words according to the PCD ontology | Accuracy % |
| --- | --- | --- | --- |
| Lymphatic Filariasis | 1987 | 1701 | 85% |
| Chikungunya | 2102 | 1986 | 94% |
| Dengue | 1806 | 1533 | 84% |
| Malaria | 2156 | 1987 | 92% |
| Kala-azar | 2400 | 2158 | 89% |
| Japanese Encephalitis | 1600 | 1561 | 97% |
| Total | 12051 | 10923 | |

Table 6: Collection of text data for different VBDs
Apart from this, the designed VBDs ontology can be evaluated using the available metrics counts [48], as shown in Table 7.
| Metrics | Value |
| --- | --- |
| Axiom | 6773 |
| Logical axiom count | 2604 |
| Declaration axioms count | 898 |
| Object property count | 153 |
| Data property count | 152 |
| Individual count | 987 |
| Annotation property count | 25 |
| SubClassOf | 407 |
| DisjointClasses | 13 |
| SubObjectPropertyOf | 111 |
| ObjectPropertyDomain | 168 |
| ObjectPropertyRange | 291 |
| DataPropertyDomain | 138 |

Table 7: Total metrics count available in ontology
Figure 12: VBDs diagnosis and treatment results
_Schema metrics:_ In terms of classes, attributes, relations, and individuals, the ontology could alternatively be described as a 5-tuple model.
\(O=\langle C,Dr,Sc,Re,Ind\rangle\), where \(C\) denotes classes, \(Dr\) data properties (attributes), \(Sc\) subclasses, \(Re\) relations between classes, and \(Ind\) individuals.
Metrics can be evaluated based on the Attribute Richness, Relationship Richness, Class Richness and Average Population [49].
_Relationship richness:_ Relationship Richness (RR) is a measure of the depth of connections between concepts in an ontology, and it is calculated with equation 1.

\[RR=\frac{|Prop|}{|Subclass|+|Prop|} \tag{1}\]

where \(|Prop|\) is the total number of properties, including data properties (attributes) and object properties (class relationships).
_Attribute richness:_ As shown in equation 2, Attribute Richness (AR) is the average number of attributes per class.

\[AR=\frac{|Attribute|}{|Class|} \tag{2}\]

where \(|Attribute|\) is the total number of data attributes.
_Class richness (CR):_ Class Richness (CR) is a sort of measurement that can be thought of as a knowledge metric because it indicates the whole amount of real-world knowledge conveyed through the created ontology. The CR is calculated with Equation 3 by dividing the number of classes with instances by the total number of classes.
\[CR=\frac{|Class\ with\ instance|}{|Class|} \tag{3}\]
_Average population (AP):_ It gives the average number of individuals (instances) per class, as shown in equation 4.

\[AP=\frac{|Individual|}{|Class|} \tag{4}\]
The ontology **quality score** is also checked following [49]; based on the represented knowledge, it reflects the qualities and relationships of the domain knowledge according to equation 5.

\[\text{Score\_rk}=\frac{(|Rel|\cdot|Class|\cdot 100)+(|Subclass|+|Rel|)\cdot|Prop|}{(|Subclass|+|Rel|)\cdot|Class|} \tag{5}\]
The score can also be computed based on how well the base knowledge has been extracted [48], as shown in equation 6.

\[\text{Score\_bk}=\frac{(|Class\ with\ instance|\cdot 100)+|Individual|}{|Class|} \tag{6}\]
The ontology scores Score_rk and Score_bk computed with these formulas are reported in Table 9.
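For reference, equations 1-6 can be coded in a few lines; the counts passed to these helpers would come from the Protégé metrics of Table 7, while the example numbers below are hypothetical and are not the ones that produced Table 9.

```python
def relationship_richness(n_prop, n_subclass):
    """Eq. (1): RR = |Prop| / (|Subclass| + |Prop|)."""
    return n_prop / (n_subclass + n_prop)

def attribute_richness(n_attribute, n_class):
    """Eq. (2): AR = |Attribute| / |Class|."""
    return n_attribute / n_class

def class_richness(n_class_with_instance, n_class):
    """Eq. (3): CR = |Class with instance| / |Class|."""
    return n_class_with_instance / n_class

def average_population(n_individual, n_class):
    """Eq. (4): AP = |Individual| / |Class|."""
    return n_individual / n_class

def score_rk(n_rel, n_class, n_subclass, n_prop):
    """Eq. (5): quality score based on represented knowledge."""
    return ((n_rel * n_class * 100) + (n_subclass + n_rel) * n_prop) / \
           ((n_subclass + n_rel) * n_class)

def score_bk(n_class_with_instance, n_individual, n_class):
    """Eq. (6): score based on extracted base knowledge."""
    return (n_class_with_instance * 100 + n_individual) / n_class

# Hypothetical counts, used only to show how the helpers are called.
print(score_bk(n_class_with_instance=180, n_individual=987, n_class=200))
```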
The diagnosis and treatment process suggested by the NVBDCP through its various guidelines has been implemented using SWRL. The diagnosis and treatment task, which lies in the hands of the NVBDCP actors, is likely to get valuable assistance from the proposed system. In addition, the VBDs ontology can be queried using DL queries and SPARQL [51] for information such as patient details, previous test reports of patients, precautionary guidelines, etc., which aids decision-making based on the patient's health.
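As an example, such a query could be issued programmatically with rdflib; the prefix IRI and the ontology file name below are placeholders, while the property names follow the paper's naming convention.

```python
from rdflib import Graph

g = Graph()
g.parse("vbd_ontology.owl", format="xml")   # file name is illustrative

# Retrieve every patient flagged with malaria symptoms together with any
# recorded RDT result.
query = """
PREFIX ex: <http://example.org/vbd#>
SELECT ?patient ?rdt
WHERE {
    ?patient ex:has_SymptomOf_Malaria true .
    OPTIONAL { ?patient ex:has_RDT_Result ?rdt . }
}
"""
for row in g.query(query):
    print(row.patient, row.rdt)
```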
Figure 13: The complete ontology framework view shown VOWL in Protege 5.0. [50].
| Evaluation parameter | Score |
| --- | --- |
| Score_rk | 86.05 |
| Score_bk | 92.03 |

Table 9: Ontology Score Computation
Execution time is also used to assess how well the SPARQL queries perform. Six queries were issued: Q1, Q2, Q3, Q4, Q5, and Q6; query Q5 is the hardest because it has many parts. Each query was run on the four RDF medical data sets separately, and the times reported in Table 10 show how well they perform. The RDF medical data were then merged and the same queries were run on the combined data. This is the most time-efficient approach, since running a query on each data set separately takes longer overall than running it once on all of the data. Table 10 shows that processing Q1 takes 2.1 s, 3 s, 3.9 s, and 4.8 s on RDF medical data sets 1, 2, 3, and 4, respectively; when these four processing times are added up and compared with the processing time on the merged RDF medical data, the merged run takes less time. This confirms that our models and performance-estimation steps behave as expected. Moreover, even when the query complexity is very high, our method still gives the best performance, as for all other time-based queries. Although a single merged run takes more time than any individual run, putting all the data together gives the best overall result, as shown in Figure 14.
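A sketch of how this timing comparison can be reproduced (dataset file names and the query are placeholders, not the actual data sets used in the experiment):

```python
import time
from rdflib import Graph

DATASETS = ["rdf_medical_1.ttl", "rdf_medical_2.ttl",
            "rdf_medical_3.ttl", "rdf_medical_4.ttl"]     # placeholder names
QUERY = "SELECT ?p WHERE { ?p a <http://example.org/vbd#patient> }"

def timed_query(graph):
    t0 = time.perf_counter()
    list(graph.query(QUERY))            # force full evaluation of the result set
    return time.perf_counter() - t0

# Run the query on each RDF medical data set separately ...
separate_times = []
for path in DATASETS:
    g = Graph()
    g.parse(path)
    separate_times.append(timed_query(g))

# ... and once on the merged data.
merged = Graph()
for path in DATASETS:
    merged.parse(path)

print("sum of separate runs:", sum(separate_times))
print("single merged run:   ", timed_query(merged))
```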
## 6 Conclusion
In this work, a VBDs ontology is developed that handles all decision-making related to the various VBDs. It also helps in the prevention and control of these diseases as per the NVBDCP guidelines. The created ontology can effectively represent the entire NVBDCP knowledge base, which contains the concepts behind the widely used NVBDCP programme, using ontology graphs. The formalism used in this research will be very fruitful in the decision-making process, e.g., case identification, diagnosis, prevention strategies, treatment, and recognizing the roles of the various NVBDCP actors. Moreover, the developed SWRL rules can improve the performance of the DSS. The OCR model is used here for text extraction from documents, and a spell checker is used for domain-specific meaning correction. The proposed DSS uses NLP for terminology extraction using NER, which is applied with the PCD ontology for converting text data into RDF. Our data extraction technology thus makes better use of patient information according to the PCD ontology. This can aid medical diagnostics and enhance healthcare delivery by allowing new knowledge to be inferred and reasoned over based on the VBDs ontology. It is also shown that the processing time of the system increases when dealing with large ontologies because of the massive amount of data, indicating that the scalability of the framework needs to be addressed. We are looking into adopting parallel processing methodologies in this setting because they allow us to use many processors to examine different areas of the ontologies simultaneously, which in turn decreases execution time.
In the future, if there is any change in the guidelines or policies of the NVBDCP regarding the diagnosis and treatment of VBDs, it could also be implemented through our developed system by making some modifications. This work can be extended further for the cases of other vector-borne diseases by designing separate rules for diagnosis and testing, and these rules can be applied to the treatment regimen of the respective diseases. We can also extend the model's performance through the fusion of big data technologies with artificial intelligence.
## Acknowledgments
This research is supported by "Extra Mural Research (EMR) Government of India Fund by Council of Scientific & Industrial Research (CSIR)," Sanction letter no. - 60(0120)/19/EMR-II. The authors are grateful to CSIR for giving them the tools they needed to do the research. The authors are also grateful to the people in charge of the "Indian Institute of Information Technology, Allahabad at Prayagraj," which gave us the infrastructure and help we needed.
|
2307.11799 | An easy tool for the Monte Carlo simulation of the passage of photons
and electrons through matter | A simple Monte Carlo (MC) algorithm for the simulation of the passage of
low-energy gamma rays and electrons through any material medium is presented.
The algorithm includes several approximations that accelerate the simulation
while maintaining reasonably accurate results. Systematic comparisons for both
photons and electrons have been made against the MC code PENELOPE and
experimental data to validate the algorithm, showing deviations in the
deposited energy smaller than or around 10% in the energy interval of 0.1 - 5
MeV in light media. The simulation is also valid for heavy media, but with less
accuracy at high energy. The algorithm has been implemented in an open-source
Python package called LegPy, which provides an easy-to-use framework for rapid
MC simulations aiming to be useful for applications that do not require the
level of detail of available well-established MC programs. | Víctor Moya, Jaime Rosado, Fernando Arqueros | 2023-07-21T11:06:21Z | http://arxiv.org/abs/2307.11799v2 | # An easy tool for the Monte Carlo simulation of the passage of photons and electrons through matter
###### Abstract
A simple Monte Carlo (MC) algorithm for the simulation of the passage of low-energy gamma rays and electrons through any material medium is presented. The algorithm includes several approximations that accelerate the simulation while maintaining reasonably accurate results. Systematic comparisons for both photons and electrons have been made against the MC code PENELOPE and experimental data to validate the algorithm, showing deviations in the deposited energy smaller than or around \(10\%\) in the energy interval of \(0.1-5\) MeV in light media. The simulation is also valid for heavy media, but with less accuracy at high energy. The algorithm has been implemented in an open-source Python package called LegPy, which provides an easy-to-use framework for rapid MC simulations aiming to be useful for applications that do not require the level of detail of available well-established MC programs.
keywords: Monte Carlo simulation, ionizing radiation +
Footnote †: journal: Radiation Measurements
## 1 Introduction
The study of the interaction of ionizing radiation with matter is of utmost importance in a wide range of applications. Monte Carlo (MC) simulations are extensively used to this purpose, and several excellent MC programs are available, such as EGS4 [1], PENELOPE [2], GEANT4 [3] and MCNP [4], to name a few. These programs provide frameworks for detailed simulations of any case of study, including complex geometrical forms and all the physical processes that ionizing particles may undergo in a broad energy range. Moreover, these programs typically offer multiple options for different physical models and corrections due to the specific features of the atomic composition of the media. Unfortunately, such accuracy comes at the cost of increasing technical complexity, which may require users to acquire expertise to obtain meaningful results. Additionally, the more detailed a simulation is, the more time and computing resources it demands, which could limit the use of these programs in some applications. As an alternative, analytical calculations or simple models can be used to obtain estimates of the desired results. However, this approach generally lacks the accuracy and level of detail provided by MC simulations.
In this paper we present a simplified MC algorithm for the simulation of the passage of low-energy gamma rays and electrons through any material medium. The algorithm is based on several approximations that enhance simulation speed and simplicity while maintaining reasonably accurate results. In particular, we developed a very simple model for electron transportation that accelerates the simulations significantly. In this work, we analyzed the range of validity and assessed the impact of these approximations by comparing them against both experimental data and results from well-established MC programs, especially PENELOPE [2]. We implemented the algorithm in a Python package called LegPy, released under an open-source license [5]. This tool aims to provide an easy-to-use framework for rapid simulations in applications where minor details are not necessary.
The paper is structured as follows. In section 2, the physical approximations employed in our MC algorithm and the main features of the package LegPy are described. The validation of the algorithm is presented in section 3. Lastly, in section 4, the conclusions drawn, the potential uses of our algorithm, and the improvement plans are discussed.
## 2 The algorithm
Our Monte Carlo algorithm was conceived to provide accurate enough results for a wide range of situations using a small amount of input data and computing resources. Under this approach, we neglected several effects on the transportation of photons and electrons that are only expected to be relevant at either high energy or very small scale. For the transportation of photons, pair production (energy threshold of 1.02 MeV) was ignored. Besides, several simplifications were made in photoelectric absorption, coherent scattering, and incoherent scattering. For the transportation of electrons, we developed a novel method to account for multiple scattering and collisional energy loss in a very simple and fast way. Bremsstrahlung was also ignored because it is only significant at high energy and heavy media.
We focused on situations where a beam of photons or electrons interacts with an object made of one or more homogeneous materials. The algorithm was designed to track all the individual particles inside this object, but only keep information
on the energy deposit in a voxel-based scheme. Histograms of other relevant parameters of the beam particles (e.g., the angle and energy of escaping particles, the absorbed energy, and the maximum depth of electrons) can also be computed.
Next, the approximations made in the transportation of photons and electrons are described. In the last subsection, we briefly report on the LegPy package [5].
### Photon transportation
The algorithm transports photons in random steps. The distance a photon travels before it undergoes its next interaction is randomly obtained from its mean free path, which is calculated from the total attenuation coefficient of the medium at the photon energy. The photon is transported this distance in the direction of its momentum vector to the next interaction point as long as it is inside the same medium, otherwise, the photon escapes the object or the medium changes. In the latter case, the photon is transported to the boundary between the two media to take another step in the new medium.
When a photon interacts, the type of interaction (i.e., photoelectric absorption, coherent and incoherent scattering) is determined at random from the relative attenuation coefficients of the different processes. Both total and relative attenuation coefficients are taken from the NIST Standard Reference Database XCOM [6]. If photoelectric absorption takes place, an electron is emitted with a kinetic energy and propagation direction equal to those of the absorbed photon. In the case of incoherent scattering, the momentum vector and energy of both the photon and the electron are sampled from the differential cross section, given by the Klein-Nishina formula [7], and the energy-momentum conservation laws. If the photon undergoes a coherent scattering, the Thompson scattering law [7] is used to randomly deviate its track.
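A minimal NumPy sketch of one photon step as described above (the partial attenuation coefficients would be interpolated from the XCOM tables; this is an illustration of the method, not LegPy's actual implementation):

```python
import numpy as np

rng = np.random.default_rng()

def photon_step(position, direction, mu_photo, mu_coherent, mu_incoherent):
    """Transport a photon to its next interaction point and pick the process.

    position, direction : 3-vectors (direction is a unit vector)
    mu_*                : partial attenuation coefficients (1/cm) of the medium
                          at the current photon energy
    """
    mu_total = mu_photo + mu_coherent + mu_incoherent
    # Free path sampled from the exponential attenuation law.
    step = -np.log(rng.random()) / mu_total
    new_position = position + step * direction
    # Interaction type chosen from the relative attenuation coefficients.
    process = rng.choice(["photoelectric", "coherent", "incoherent"],
                         p=[mu_photo / mu_total,
                            mu_coherent / mu_total,
                            mu_incoherent / mu_total])
    return new_position, process
```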
As already mentioned, pair production is not included in our simplified algorithm. Nevertheless, the attenuation coefficient corresponding to this process is added to that of photoelectric absorption so that the attenuation of a photon beam is properly simulated. Furthermore, the atomic effects on the above processes are ignored. In particular, the fluorescence subsequent to photoelectric absorption or incoherent scattering, i.e., the emission of X-rays by the excited atom, is not included. This is one of the main limitations of our approximation, because X-rays spread the energy deposit at larger distances than photoelectrons do. Neglecting this effect has a significant impact for heavy elements, as will be discussed later. However, our approximation is accurate enough for light elements.
### Electron transportation
The electron transportation is based on the continuous slowing down approximation (CSDA), that is, the rate of energy loss of an electron along its track is assumed to be determined by the total stopping power neglecting fluctuations. Therefore, the total path traveled by an electron is assumed to be equal to the CSDA range in the medium. Both the total stopping power and the CSDA range are taken from the NIST Standard Reference Database ESTAR [8]. The electron path is divided into a number of steps with length chosen adequately to the desired precision, e.g., equal to the voxel size used in the simulation. Since energy-loss fluctuations are neglected, the simulation uses pre-computed tables with the electron energies at the endpoints of all the steps in each constituent material of the object. When an electron is generated with a given initial energy in a medium, the energy loss and distance traveled in its first step are obtained by interpolation from the corresponding table. All the subsequent steps follow this table until the electron stops or the medium changes. In the latter case, the last step in the first medium is shortened to end at the boundary between the two media, and the energy loss is calculated accordingly. Then, the electron continues to be transported in the second media (if any) as if it had been generated at that point.
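A sketch of how such a step table can be pre-computed from the total stopping power (a toy stopping-power function stands in for the tabulated ESTAR data, and LegPy's actual table construction may differ in detail):

```python
import numpy as np

def build_step_table(e_initial, stopping_power, step_length, max_steps):
    """Electron energies at the endpoints of consecutive steps of fixed length,
    under the continuous slowing down approximation (no fluctuations).

    stopping_power(E) must return the total dE/ds (MeV/cm) of the medium.
    """
    energies = [e_initial]
    e = e_initial
    for _ in range(max_steps):
        e -= stopping_power(e) * step_length
        if e <= 0.0:
            energies.append(0.0)
            break
        energies.append(e)
    return np.array(energies)

# Toy stopping power (NOT the ESTAR data), used only to exercise the function.
table = build_step_table(1.0, lambda e: 2.0 + 0.5 * e, step_length=0.01, max_steps=200)
```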
In each step, the end position of the electron is determined by its starting position, its propagation direction, and the step length, i.e., the electron is assumed to travel in a straight line ignoring the lateral displacement due to multiple scattering. On the other hand, after the step is taken, the electron propagation direction is randomly deviated. The axial angle of the deviation is a random number in the \(0-2\pi\) interval and the scattering angle \(\theta\) is sampled from a Gaussian distribution [9]
\[P(\theta)=\frac{1}{\sqrt{2\pi}\theta_{0}}\exp\left(-\frac{\theta^{2}}{2\theta_ {0}^{2}}\right), \tag{1}\]
where the average scattering angle \(\theta_{0}\) is given by
\[\theta_{0}=\frac{E_{0}}{\beta cp}\sqrt{\frac{s}{X_{0}}}\left(1+0.038\ln\frac{ s}{X_{0}\beta^{2}}\right). \tag{2}\]
Here, \(s\) is the step length, \(X_{0}\) is the radiation length of the medium, \(\beta\) is the ratio of the electron velocity and the speed of light \(c\), \(p\) is the electron momentum and \(E_{0}\) is a model parameter. In the Gaussian approximation described in [9], \(E_{0}\) takes the value 13.6 MeV. However, we modified this parameter to compensate for the various simplifications made in electron transportation. First, we searched the values of \(E_{0}\) that make our algorithm reproduce approximately both the energy deposit distribution and the backscattering factor obtained by PENELOPE [2] for the energy range \(0.1-5.0\) MeV and a number of media. Then, from these fitted values, we obtained the following parameterization of \(E_{0}\) in MeV:
\[\begin{split} E_{0}=& 13.6\left(1.56+0.130x \right)\\ &\left[1-\left(0.0471-0.0182\ln\,x\right)\ln\,E\right]\end{split} \tag{3}\]
where \(x=1/\sqrt{X_{0}}\) for \(X_{0}\) expressed in cm and \(E\) in MeV.
The necessary \(\theta_{0}\) values are calculated at the beginning of the simulation and stored in the table of steps for each medium. The results obtained with this simple model of multiple scattering were checked to be almost independent of the step length as long as it is smaller than 10% of the CSDA range.
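The scattering model of Eqs. (1)-(3) amounts to only a few lines of code; the sketch below computes \(\theta_{0}\) for a given step and samples the deflection (illustrative only, with energies in MeV and lengths in cm):

```python
import numpy as np

M_E = 0.511  # electron rest energy (MeV)
rng = np.random.default_rng()

def theta_0(energy, step, x0):
    """Average multiple-scattering angle (rad) after a step, Eqs. (2)-(3).

    energy : electron kinetic energy (MeV)
    step   : step length (cm)
    x0     : radiation length of the medium (cm)
    """
    pc = np.sqrt(energy * (energy + 2.0 * M_E))            # momentum times c (MeV)
    beta = pc / (energy + M_E)
    x = 1.0 / np.sqrt(x0)
    e0 = 13.6 * (1.56 + 0.130 * x) * \
         (1.0 - (0.0471 - 0.0182 * np.log(x)) * np.log(energy))      # Eq. (3)
    return (e0 / (beta * pc)) * np.sqrt(step / x0) * \
           (1.0 + 0.038 * np.log(step / (x0 * beta ** 2)))           # Eq. (2)

def sample_deflection(energy, step, x0):
    """Random (scattering, axial) angles applied after a step, Eq. (1)."""
    theta = rng.normal(0.0, theta_0(energy, step, x0))
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return theta, phi
```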
As a further simplification, half of the energy lost by an electron in a step is deposited in the starting voxel and the other half in the end one. To avoid discontinuities in the energy deposit distribution, the step length should be smaller than the
voxel size. Besides, when an electron reaches the boundary between two media, the energy loss in that step is assumed to be fully deposited in the starting voxel to prevent artifacts just at the boundary. In our approximation, no secondary particles are simulated. In particular, Bremsstrahlung is ignored, which limits the applicability of our logarithm to moderate electron energies.
The use of all these approximations speeds up the simulation considerably while a reasonable accuracy is achieved, as will be shown in section 3. The main limitation is the underestimation of the maximum range of electrons because their total path length is imposed to equal the CSDA range.
### LegPy framework
LegPy stands for "low energy gamma-ray simulation in Python", but it allows the simulation of both photons and electrons using the approximations described above. The code is organized as a Python package containing classes to define the basic ingredients of a simulation: the beam geometry, the energy spectrum and type of the incident particles, the object geometry, the medium (or media) the object is made of, and the simulation settings. LegPy is designed to be used in Google Colab [10] or Jupyter Notebook, which are interactive environments that improve user experience. Several notebook examples are included in the last release of this software package.
The package includes a library of media containing all the necessary data taken from NIST databases, so the user only has to choose the media from this library. Adding new media to the library is also quite easy. Presently, only three object geometries (i.e., sphere, cylinder, and orthohedron) are supported. Any object can be divided into voxels in Cartesian coordinates, although spherical and cylindrical voxelizations are also available for objects having these symmetries. The user only needs to select the geometry, input the dimensions of the object, and give the number of voxels along each dimension. The beam of particles is assumed to enter the bottom surface of the object, which is always oriented along the z-axis. Both parallel and divergent beams can be used and a set of predefined energy spectra are available. For photon beams, the tracking of secondary electrons can be turned on and off. All these simplifications and default options provide a simple but flexible simulation framework.
In Fig. 1, a sample of code that executes a simulation with LegPy and a couple of output plots are shown. The object dubbed "result" in this example stores the results of the simulation and has several methods to plot the energy deposit distribution, the particle tracks and histograms of various relevant parameters. The package includes additional functions to analyze the simulation results. All the results shown in this paper have been obtained with LegPy.
## 3 Validation of the algorithm
In order to check the validity of the code, a number of tests were made to compare our results with experimental data and those from other MC codes. In particular, we carried out systematic comparisons with the well-established code PENELOPE.
depth is very weak in light elements even at energies of a few MeV.
Results for iron at 0.3, 1.0 and 5.0 MeV are shown in Fig. 3. Again, the simplifications made in LegPy do not seem to have a relevant impact in the calculation of the dose at these energies. At 0.1 MeV (not shown in the figure), LegPy also reproduces the result from PENELOPE up to about 4 mfp but underestimates it by 30% at 10 mfp. This deviation is attributed to the simplified treatment of the coherent scattering in LegPy.
The results for lead at 0.3, 1.0, and 5.0 MeV are shown in Fig. 4. LegPy is in agreement with PENELOPE up to about 4 mfp with larger deviations (up to 30%) at large depths. Again, the effect of ignoring pair creation and Bremsstrahlung is not very relevant in the depth dose calculation even at 5 MeV. However, we found very large deviations with respect to both PENELOPE and MCNP5 at 0.1 MeV, which are attributed to ignoring X-ray fluorescence. It is well known [19] that, for heavy atoms, X-ray fluorescence leads to a dramatic increase in buildup factors for photon energies slightly above the K edge of the photoelectric cross section. Indeed, the value of \(B(R)\) obtained by PENELOPE is larger than that obtained with LegPy by a factor of 6 (50) at a depth of 5 mfp (10 mfp).
Figure 1: Example of the use of LegPy. a) Sample of code that sets up and executes a simulation of a parallel beam of electrons that traverses a coaxial cylinder of water and lead. b) Tracks of 50 electrons. c) 2D distribution of deposited energy at several depths.
Figure 3: Same as figure 2 for iron.
Figure 2: In the upper plot the buildup factors versus depth (in mean free path units) in water obtained with LegPy (continuous lines) are compared with those obtained with PENELOPE (full circles) and MCNP5 (+). The deviations of LegPy results with respect to those from PENELOPE are shown in the lower plot.
We carried out another set of tests for the angular distribution of photons escaping the medium. We simulated pencil beams of various energies traversing a cylinder of water with both height and diameter equals to one mfp at the corresponding energy. The LegPy results for 0.2, 1.0, and 5.0 MeV are compared with those from PENELOPE in Fig. 5. We found that the agreement is very good up to about 2.0 MeV while LegPy results deviate significantly from those from PENELOPE at higher energies. From these simulations, we also calculated the distribution of energy absorbed by the medium. The results are shown in Fig. 6. The agreement is very good even at 5 MeV, although LegPy does not reproduce some spectral features associated with processes ignored in our algorithm, e.g., escape of annihilating photons subsequent to pair creation.
An interesting test was done for isotropic beams of 0.667 and 1.275 MeV (emulating radioactive point sources of \({}^{137}\)Cs and \({}^{22}\)Na) at a distance of 2 cm from a NaI cylinder with both diameter and height of one mean free path (4.34 cm for \({}^{137}\)Cs and 5.32 cm for \({}^{22}\)Na). As can be seen in Fig. 7, the spectrum of absorbed energy (i.e., the shape of the Compton profile and the height of the photopeak) obtained with LegPy is in very good agreement with the one obtained with PENELOPE. However, PENELOPE predicts a small peak shifted by about 30 keV from the photopeak due to the escape of X-rays produced after the photoelectric absorption in the K shell of Iodine. This feature, which is generally irrelevant for the characterization of a scintillator, cannot be reproduced by LegPy because X-ray fluorescence is ignored in our algorithm.
### Electron beams
For the validation of LegPy for the transportation of electrons, the dose in depth, the backscattering coefficient and the angular distribution of backscattered electrons were obtained for a pencil electron beam going along the axis of a semi-infinite cylinder (i.e., larger than the corresponding CSDA range). Different media were tested in the energy range 0.1 - 5.0 MeV. As in the previous section, we repeated the simulations with PENELOPE to compare with LegPy. The simulation results were also compared with experimental data on energy deposition of electron beams on different media and backscattering coefficients, which are available in the literature for a wide energy
Figure 4: Same as figure 2 for lead.
Figure 5: Angular distribution of outgoing photons for 0.2, 1.0 and 5.0 MeV pencil beams on a water cylinder of one mean free path size. Results from LegPy (points) are compared with those from PENELOPE (continuous line). See text for details.
Figure 6: Spectra of absorbed energy for the same simulation tests as used in Fig. 5.
Figure 7: Spectrum of absorbed energy for isotropic sources of 0.667 and 1.275 MeV on a NaI cylinder. Results from LegPy (points) are compared with those from PENELOPE (continuous line). See text for details.
range [20], [21] and have long been used to benchmark PENELOPE [22] and GEANT4 [23], [24].
In figures 8 and 9, the depth dose in water for electrons at several energies in the range \(0.1-5.0\) MeV obtained with LegPy is compared with the results from PENELOPE. As explained above, the fluctuations in energy deposition are ignored in our simplified algorithm in such a way that the path length of all electrons equals the CSDA range. As a consequence, the dose at depths beyond the CSDA range obtained with LegPy is zero, while the energy is spread to larger distances in the PENELOPE results. Apart from this discrepancy, LegPy reasonably reproduces the results from PENELOPE, with deviations at the maximum of the depth dose function smaller than or around 10%. These discrepancies are a consequence of the approximations made in the multiple scattering and collisional energy losses in our algorithm.
The results for aluminum are shown in Fig. 10. LegPy reproduces the PENELOPE results with an accuracy similar to that obtained for water. Available experimental data at 0.31, 0.52, and 1.03 MeV [20] are in very good agreement with PENELOPE, confirming the reasonable accuracy of LegPy. Similar results were found in the whole range between 0.1 and 5.0 MeV.
The results for copper are shown in Fig. 11. LegPy reproduces well the dose in depth at energies below 1 MeV. The discrepancy turns out significant at 5.0 MeV (16 % at the depth dose maximum), very likely due to ignoring the Bremsstrahlung production. For very heavy elements, such as uranium, this discrepancy exists even at lower energies, as expected (Fig. 12).
Electron backscattering in LegPy was also compared with the results from PENELOPE. In general, we found a good agreement in the shape of the angular distribution of backscattered electrons obtained with both simulation codes. As an example, we show the results for copper at 0.1 and 2.0 MeV in Fig. 13. On the other hand, LegPy tends to underestimate the backscattering coefficient \(\eta\) (i.e., the ratio between incoming and backscattered electrons) for light elements. For instance, in
Figure 11: Same as Fig. 10 for 0.1, 0.2, and 0.5 MeV electrons on copper.
Figure 8: Depth dose for electron beams of 0.1, 0.2, and 0.5 MeV on water. The results from LegPy (continuous lines) are compared with those from PENELOPE (full blue circles).
Figure 10: Depth dose for electron beams of 0.31, 0.52, and 1.03 MeV on aluminum. The results from LegPy (continuous lines) are compared with those from PENELOPE (full blue circles) and experimental data from [20] (red triangles).
Figure 9: Same as Fig. 8 for 1.0, 2.0 and 5.0 MeV.
water at 1.0 MeV, the PENELOPE result is \(\eta=3.0\%\) while it is half that value for LegPy. The agreement in the backscattering coefficient improves for heavier elements. For aluminum at 1.03 MeV, LegPy gives \(\eta=8.0\%\) to be compared with the value of \(\eta=9.5\%\) obtained with PENELOPE, which is in good agreement with the experimental value of 9.2% [21]. For copper at 2.0 MeV (Fig. 13), LegPy gives \(\eta=28\%\) while PENELOPE gives \(\eta=29.5\%\). For uranium at 1 MeV, the PENELOPE result is \(\eta=51\%\) versus \(\eta=44\%\) from LegPy.
### Beams in a two media object
Several tests were made to check the performance of LegPy for transporting photons and electrons through an object composed by two different media. As an example, in Fig. 14, we show the dose in depth for a photon beam of 1.0 MeV crossing along its axis a cylinder of 2.0 cm length with a 10.0 cm diameter (i.e., laterally infinite) to assure that all scattered radiation except the backscattered one is absorbed. The transportation of secondary electrons is included in the simulation. The cylinder is made of water and lead, the boundary between the two media being at a depth of 1.0 cm. As can be seen in the figure, the dose in depth obtained with LegPy is in good agreement with the result from PENELOPE. Note the relevant role of secondary electrons in the dose in depth. Electronic equilibrium is reached at about 2.5 mm and the backscattering in lead is very strong. We checked that similar agreements with PENELOPE are achieved at 0.3 and 5.0 MeV.
Tests for an electron beam traversing a two-media cylinder were also carried out. As an example, Fig. 15 shows the dose in depth for a pencil beam of 1.0 MeV electrons crossing a cylinder of 0.164 cm length and 0.20 cm diameter, made of aluminum and lead. Several cases were studied varying the depth \(z_{\mathrm{b}}\) of the boundary between the two media. The results in the figure correspond to \(z_{\mathrm{b}}\) values of 0.0411, 0.082, and 0.123 cm as well as to the aluminum-only case. The LegPy results are in very reasonable agreement with those from PENELOPE. Backscattering leads to significant features that LegPy adequately reproduces. Deviations with respect to PENELOPE are basically due to the approximations in the electron transportation, as already discussed in 3.2.
## 4 Conclusions
In this paper, we have presented a simplified algorithm for the simulation of the passage of low-energy electrons and gamma rays through any medium. The algorithm has been realized in the Python package called LegPy available under an open source license.
The algorithm has been validated by comparing a set of results with the code PENELOPE as well as with available experimental data. From the comparisons for photon beams, we can state that LegPy is able to calculate with reasonable accuracy (i.e., around 10%) the dose deposited by photons of up to 5 MeV in light media at depths of up to several mean free paths. This is particularly interesting for applications in medical physics. The algorithm can also reproduce accurately the
Figure 14: Dose in depth for a beam of 1.0 MeV photons on a water - lead cylinder along the z axis. The dashed line indicates the position of the boundary between the two media. See text for details. The results from LegPy (black points) are compared with those from PENELOPE (red points)
Figure 12: Same as Fig. 10 for 0.2, 0.5 and 1.0 MeV electrons on uranium
Figure 13: Angular distribution of backscattered electrons in copper. The results from LegPy (red points) are compared with those from PENELOPE (black points)
spectrum of absorbed energy in light elements as well as in typical scintillator materials used in gamma-ray spectrometry. The results of the angular distribution of escaping photons obtained with LegPy are also in generally good agreement with those from PENELOPE, but the results for water at 5 MeV show significant deviations, indicating that the scattering of photons is more sensitive to the approximations employed in our algorithm. We found that the main limitation of our algorithm for the simulation of low-energy gamma rays is that it is not valid at energies close to but above the K-shell energy of heavy elements. We plan to implement a simplified model for the X-ray fluorescence in our algorithm soon to solve this problem.
Regarding the transportation of electrons, we have presented a simple model that provides reasonable results for both the depth dose and backscattering. Our depth dose results in light elements deviate from those from PENELOPE by less than or around 10% at the maximum of the depth dose function. For heavy elements, deviations remain small up to a few MeV, but LegPy significantly underestimates the energy deposition at higher energies as a consequence of ignoring the Bremsstrahlung effect. In general, backscattering is properly simulated with LegPy. The most significant deviations with respect to PENELOPE were found for light elements, although they are expected to have a minor effect in most practical cases because backscattering is only relevant for heavy elements.
LegPy has also been tested for the simulation of photons and electrons around the boundary between two media. The expected effects are properly observed and results are in good agreement with those obtained with PENELOPE for both photon and electron beams.
The several simplifications of our algorithm greatly improve the simulation speed. For example, the computing times needed to get the results of the dose deposited by 5 MeV photons in lead shown in Fig. 4 were 130 times smaller for LegPy than for PENELOPE, despite the fact that the LegPy simulations were run using a Python interpreter. At lower photon energies and for lighter elements, the number of interactions is reduced and the speeds of both simulation codes become more similar. The increase in speed is also significant for electrons. For example, LegPy is faster than PENELOPE by a factor of 36 for 1.03 MeV electrons in aluminum (Fig. 10) when using a step length of 10% of the CSDA range in LegPy. For a step length of 1% of the CSDA range, the speed of LegPy decreases approximately by a factor of 10, but it is still faster than PENELOPE.
In summary, an easy-to-use tool for fast simulations of low energy gamma-rays and electrons is available. LegPy aims to be useful for researchers that need simulations on simple geometries with reasonable accuracy but without the technical complexity of other MC codes. As the tool is very easy to use, it may also be useful for teaching purposes in undergraduate or postgraduate degree programs as well as for the training of experts in the field of medical applications of ionizing radiation.
## Acknowledgements
We thank Francesc Salvat for providing help and guidance for the use of PENELOPE program. We gratefully acknowledge financial support from the Spanish Research State Agency (AEI) through the grant PID2019-104114RB-C32. V. Moya also acknowledges the research grant CT19/23-INVM-109 funded by NextGenerationEU.
|
2303.14316 | The Sums of Two Squares do not have Metric Poissonian Pair Correlation | We prove that the sequence of the sums of two squares do not have metric
Poissonian pair correlation. | Sharon Zilberhertz | 2023-03-25T00:59:34Z | http://arxiv.org/abs/2303.14316v1 | # No Poissonian pair correlation for the sum of two squares
###### Abstract.
We prove that the sums of two squares do not have the metric Poissonian pair correlation property.
Supported by the Israel Science Foundation grant No.1881/20
## 1. Introduction and Definitions
For a strictly increasing sequence of naturals \(a_{n}\) and some \(\alpha\in[0,1)\) we say that \(a_{n}\alpha\) is uniformly distributed if for any subinterval \(J\subset[0,1)\)
\[\frac{1}{N}\#\{\{a_{n}\alpha\}\in J\mid n\leq N\}\xrightarrow[N\to\infty]{}|J|\]
with \(\{\cdot\}\) denoting the fractional part of a real number.
Now we turn to a somewhat stronger notion than uniform distribution modulo \(1\): the Poissonian pair correlation of a sequence.
### Metric Poissonian Pair Correlation
For a strictly increasing sequence of positive reals \(a_{n}\) we say that it has metric Poissonian pair correlation (MPPC for short) if and only if for a.e \(\alpha\in\mathbb{R}\) and all \(s\geq 0\) we have
\[\frac{1}{N}\#\{\|(a_{m}-a_{n})\alpha\|\leq\frac{s}{N}\mid 1\leq m\neq n\leq N \}\xrightarrow[N\to\infty]{}2s.\]
Intuitively, we count the number of Diophantine approximations of \(\alpha\) by the set of differences of \((a_{n})_{n\in\mathbb{N}}\), i.e. the number of times \(\|(a_{m}-a_{n})\alpha\|\) is smaller than \(s/N\), a scaled multiple (by \(s>0\)) of \(1/N\), which is the average gap one expects among \(N\) points in the unit interval. The pair correlation function \(T\) defined below essentially measures how uniformly the points \((a_{m}-a_{n})\alpha\) are distributed at this scale.
In the last several decades, Poissonian pair correlation has been studied for various sequences, such as \(n^{k}\) for \(k=1\) and for integers \(k\geq 2\), and \(n^{\theta}\) with \(\theta>0\) non-integer; the latter has been shown to hold for all \(\alpha\neq 0\) when \(\theta<\frac{1}{3}\) in [12]. For various other sequences, such as \(n^{k}\alpha\) mentioned above, it turned out to be difficult to prove the property for specific \(\alpha\)'s. The difficulty of the deterministic problem led mathematicians to turn to the probabilistic problem instead, namely the MPPC formulated above. For this case there are several known results: the sequence \(n^{k}\) has been shown not to have this property for \(k=1\) and has been proven to have it for integers \(k\geq 2\) by Rudnick and Sarnak (see [14]); later the sequence \(n^{\theta}\) was proven to have MPPC as well for any non-integer \(\theta>1\) (see [15]) and then for \(0<\theta<1\) (see [1]), settling the case of powers of \(n\).
From now on we will consider the case of \(a_{n}\) being naturals only, and thus restrict ourselves to \(\alpha\in[0,1)\) instead of \(\mathbb{R}\).
We define the function \(R_{N}(v)\) on the naturals (\(v>0\)) to be
\[R_{N}(v)=\#\{a_{m}-a_{n}=v\mid 1\leq n<m\leq N\}\]
and obtain an alternative definition of pair correlation by considering the function
\[T(\alpha,N,s)=\frac{1}{N}\sum_{v\in\mathbb{N}}R_{N}(v)\mathbf{1}_{\{\|v\alpha \|\leq\frac{s}{N}\}}\]
and the definition shall then be that \(a_{n}\) has metric Poissonian pair correlation if and only if
\[T(\alpha,N,s)\xrightarrow[N\to\infty]{}s\]
for almost every \(\alpha\) and for any \(s>0\). Note that the limit changes from \(2s\) to \(s\) because the duplicated negative gaps are removed by taking \(a_{m}-a_{n}\) with \(m>n\).
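For readers who want to experiment numerically, the following is a minimal sketch (not part of the original argument) of how \(T(\alpha,N,s)\) can be estimated for the sums of two squares; the helper names and the cut-offs are illustrative only, and the direct pair count used here is equivalent to summing \(R_{N}(v)\mathbf{1}_{\{\|v\alpha\|\leq s/N\}}\) over \(v\).

```python
import math
import numpy as np

def sums_of_two_squares(limit):
    """Strictly increasing array of the 1 <= n <= limit representable as r^2 + t^2."""
    flags = np.zeros(limit + 1, dtype=bool)
    r = 0
    while r * r <= limit:
        for t in range(r, math.isqrt(limit - r * r) + 1):
            flags[r * r + t * t] = True
        r += 1
    flags[0] = False
    return np.flatnonzero(flags)

def pair_correlation(a, N, alpha, s):
    """Empirical T(alpha, N, s) = (1/N) * #{1 <= n < m <= N : ||(a_m - a_n) * alpha|| <= s/N}."""
    a = np.asarray(a[:N], dtype=float)
    diffs = (a[None, :] - a[:, None])[np.triu_indices(N, k=1)]   # all a_m - a_n with m > n
    dist = np.abs((diffs * alpha + 0.5) % 1.0 - 0.5)             # ||x||, the distance to the nearest integer
    return np.count_nonzero(dist <= s / N) / N

a = sums_of_two_squares(100_000)
alpha = np.random.default_rng(0).random()         # one "generic" alpha
for N in (500, 1000, 2000):
    print(N, pair_correlation(a, N, alpha, 1.0))  # the Poissonian prediction here would be s = 1
```

Nothing in this sketch is used in the proof; it only makes concrete the quantity that is bounded from below in Section 4.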
The technique frequently used to prove that a sequence of naturals has MPPC is based on the additive energy of the sequence, defined as

\[E(a_{n},N)=\#\{a_{m}-a_{n}=a_{k}-a_{l}\mid 1\leq n,m,k,l\leq N\}.\]

This quantity satisfies \(N^{2}\leq E(a_{n},N)\leq N^{3}\) for any given sequence \(a_{n}\) and is connected to the MPPC property through the following theorems.
**Theorem 1.1** (Theorem 6 of [5]).: _Let \(a_{n}\) be a sequence with \(E(a_{n},N)\ll\frac{N^{3}}{(\log N)^{C}}\) for some big enough \(C>1\); then \(a_{n}\) has MPPC._
**Theorem 1.2** (Theorem 1 of [11]).: _Let \(a_{n}\) be a sequence with \(E(a_{n},N)\gg N^{3}\), i.e. maximal order of additive energy; then \(a_{n}\) does not have MPPC and in fact has no PPC for almost all \(\alpha\)._
By these theorems we have a clear red zone and green zone for the additive energy in which the sequence will or will not have MPPC, but notice that a gap remains unknown between \(\frac{N^{3}}{(\log N)^{C}}\) and \(N^{3}\). This leads, of course, to the natural question of how much further these red and green zones can be expanded. There have been several existence proofs for sequences \(a_{n}\) with \(E=o(N^{3})\), beginning with Bourgain's construction of such a sequence (see the Appendix of [2]); later, in [6], the authors constructed a sequence for every additive energy of the order
\[E(a_{n},N)\asymp\frac{N^{3}}{\log N\log\log N...(\log...\log N)}.\]
In [17] Walker showed that the primes do not have MPPC. It is also known that the primes' additive energy satisfies \(E\asymp\frac{N^{3}}{\log N}\). The latter two papers raised the question of whether there is some kind of Khintchine limit on the order of the additive energy according to which the sequence is or is not MPPC; more precisely, for a sequence \(a_{n}\) with \(E(a_{n},N)\sim N^{3}f(N)\) and \(f(N)\) decreasing to \(0\), is the MPPC property for \(a_{n}\) determined by the convergence or divergence of the sum
\[\sum_{n=1}^{\infty}\frac{f(n)}{n}.\]
This question was answered in the negative in [3], where the authors constructed a sequence with \(E\gg\frac{N^{3}}{(\log N)^{3/4+\delta}}\) which has the MPPC property.
To summarize, the best known red zone (resp. green zone) is \(E\gg N^{3}\) (resp. \(E\ll\frac{N^{3}}{(\log N)^{C}}\) for \(C>1\) big enough), and the red zone cannot be extended below \(\frac{N^{3}}{(\log N)^{3/4}}\).
In this paper we focus on the sequence of sums of two squares in increasing order and show that it does not have MPPC. It is not difficult to show that this sequence has additive energy of order \(E\asymp\frac{N^{3}}{\sqrt{\log N}}\).
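As an aside, the order of magnitude of the additive energy is easy to probe numerically; the following sketch (with illustrative names and small cut-offs, and not part of the proof) counts \(E(a_{n},N)\) via the representation counts of the differences.

```python
import math
from collections import Counter

def sums_of_two_squares(limit):
    """Strictly increasing list of the 1 <= n <= limit representable as r^2 + t^2."""
    flags = [False] * (limit + 1)
    r = 0
    while r * r <= limit:
        for t in range(r, math.isqrt(limit - r * r) + 1):
            flags[r * r + t * t] = True
        r += 1
    return [n for n in range(1, limit + 1) if flags[n]]

def additive_energy(a, N):
    """E(a, N) = #{(n, m, k, l) in [1, N]^4 : a_m - a_n = a_k - a_l}."""
    a = a[:N]
    pos = Counter(a[m] - a[n] for m in range(N) for n in range(m))  # r(d) for the positive differences d
    # E equals the sum of r(d)^2 over all integer d, with r(0) = N and r(-d) = r(d).
    return N * N + 2 * sum(c * c for c in pos.values())

a = sums_of_two_squares(60_000)
for N in (250, 500, 1000):
    E = additive_energy(a, N)
    print(N, E, E * math.sqrt(math.log(N)) / N ** 3)  # roughly of constant order if E is of size N^3 / sqrt(log N)
```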
**Theorem 1.3** (Main Theorem).: _Let \(a_{n}\) be the sequence of the sum of two squares in a strictly increasing order, then \(a_{n}\) does not have MPPC._
The proof of this theorem will exploit the somewhat uniform distribution of the sums of two squares along arithmetic progressions with small moduli, i.e. if \(A(x)\) is the number of sums of two squares up to \(x\), then for appropriate \(a\) and \(q\leq\log x\)

\[A_{q,a}(x)\gg\frac{A(x)}{q}\]

with \(A_{q,a}(x)\) being the number of sums of two squares up to \(x\) which are congruent to \(a\) mod \(q\).
With some further work we can prove an analogous result when we replace the sum of two squares with any positive definite binary quadratic form, see [18].
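The arithmetic-progression statement above is also easy to visualise numerically; the sketch below (illustrative only, with a hypothetical choice of modulus) tabulates \(A_{q,a}(x)\) and compares it with \(A(x)/q\).

```python
import math

def s2s_indicator(limit):
    """flags[n] is True exactly when 1 <= n <= limit is a sum of two squares."""
    flags = [False] * (limit + 1)
    r = 0
    while r * r <= limit:
        for t in range(r, math.isqrt(limit - r * r) + 1):
            flags[r * r + t * t] = True
        r += 1
    flags[0] = False
    return flags

x, q = 200_000, 7          # a small modulus; the statement above wants q <= log x (here log 200000 is about 12.2)
flags = s2s_indicator(x)
A = sum(flags)             # A(x)
for a in range(q):
    A_qa = sum(flags[n] for n in range(a if a else q, x + 1, q))
    print(a, A_qa, round(q * A_qa / A, 2))   # a ratio near 1 means the class a mod q gets roughly its share A(x)/q
```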
## 2. Preliminaries
In preparation we will require two results
**Theorem 2.1** (Theorem 1 of [16]).: _Let \(f(n)\) be some positive arithmetic function with \(f(n)=O(n^{-1})\), denote by \(\phi(n)\) the Euler's totient function, then for almost every \(\alpha\) there are infinitely many \(n\)'s with_
\[\|n\alpha\|\leq f(n)\]
_and \(\|n\alpha\|=|r-n\alpha|\) for some \((r,n)=1\), if and only if the following sum diverges_
\[\sum_{n=1}^{\infty}f(n)\frac{\phi(n)}{n}.\]
Theorem 2.1, due to Vaaler (1978), is a weaker version of the Duffin-Schaeffer conjecture, which states a similar result but without the \(O(n^{-1})\) condition; in our case this condition does not affect us. The conjecture was recently proven by Dimitris Koukoulopoulos and James Maynard (see [9]).
**Theorem 2.2** (Theorem 14.7 of [8]).: _Let \(b^{\prime}(n)\) be the indicator of the odd naturals which are properly represented by sum of two squares, i.e. \(b^{\prime}(n)=1\) if and only if \(n=r^{2}+t^{2}\) with \((r,t)=1\), then for \(a\) with \((a,q)=1\) and \(a\equiv 1\mod(4,q)\), we have_
\[\sum_{\begin{subarray}{c}n\leq x\\ n\equiv a(q)\end{subarray}}b^{\prime}(n)=\frac{c_{q}x}{q\sqrt{\log x}}\left(1+O\left(\left(\frac{\log q}{\log x}\right)^{1/7}\right)\right)\]
_with \(c_{q}\gg 1\). If \(q\equiv 0(4)\) then \(c_{q}=2\kappa\prod_{\begin{subarray}{c}p|q\\ p\equiv 3(4)\end{subarray}}(1+\frac{1}{p})\) (with \(\kappa=0.7642\dots\) being Landau-Ramanujan constant). For \((q,4)=1\) it is the same but with factor of \(1/4\)._
## 3. Lemmas
**Lemma 3.1**.: _Let \(B\) be a set with the property that_
\[\#\{n\in[M,2M)\mid n\in B\}\gg\frac{M}{(\log\log M)^{2}}\]
_for any \(M\gg 1\). If we fix \(h<1\), then for almost all \(\alpha\) there exist infinitely many \(n\in B\) such that_
\[\|n\alpha\|\leq\frac{1}{n(\log n)^{h}}.\]
### Proof of Lemma 3.1
We apply Theorem 2.1 with
\[f(n)=\begin{cases}\frac{1}{n(\log n)^{h}}&n\in B\\ 0&\text{otherwise}\end{cases}\]
and divide the sum in the theorem into dyadic intervals; using \(\phi(n)\gg\frac{n}{\log\log n}\) we have
\[\sum_{n\in B}f(n)\frac{\phi(n)}{n}\gg\sum_{t=1}^{\infty}\sum_{ \begin{subarray}{c}n\in[2^{t},2^{t+1})\\ n\in B\end{subarray}}f(n)\frac{\phi(n)}{n}\gg\sum_{t=1}^{\infty}\sum_{ \begin{subarray}{c}n\in[2^{t},2^{t+1})\\ n\in B\end{subarray}}\frac{f(n)}{\log\log n}\\ \gg\sum_{t=1}^{\infty}\frac{1}{2^{t+1}(\log 2^{t+1})^{h}\log\log 2^{t+ 1}}\sum_{\begin{subarray}{c}n\in[2^{t},2^{t+1})\\ n\in B\end{subarray}}1,\]
and by the condition on \(B\) in the lemma, we have that the inner sum is
\[\gg\frac{2^{t}}{(\log\log 2^{t})^{2}}.\]
Therefore, we have
\[\sum_{n\in B}f(n)\frac{\phi(n)}{n}\gg\sum_{t=1}^{\infty}\frac{1}{(\log 2^{t})^{ h}(\log\log 2^{t})^{3}}\gg\sum_{t=1}^{\infty}\frac{1}{t^{h}(\log t)^{3}}=\infty.\]
Now apply Theorem 2.1 to conclude that for almost every \(\alpha\) there are infinitely many \(n\in B\) with \(\|n\alpha\|\leq f(n)\).
**Lemma 3.2**.: _Let \(b(n)\) be the indicator function of the sums of two squares; then for odd \(q\ll\log x\) and a natural \(a\) such that \((a,q)=1\) we have, for \(x\geq y\gg x\),_
\[\sum_{\begin{subarray}{c}x\leq n\leq x+y\\ n\equiv a(q)\end{subarray}}b(n)\gg\frac{y}{q\sqrt{\log x}}.\]
### Proof of Lemma 3.2
Let \(a,q,x,y\) be as in the lemma, and denote by \(b(n)\) (resp. \(b^{\prime}(n)\)) the indicator of the naturals represented by a sum of two squares (resp. odd and properly represented by a sum of two squares); then by Theorem 2.2 we have
\[\sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a(q)\end{subarray}}b(n)\geq\sum_{\begin{subarray}{c}x<n\leq x+y\\ n\equiv a(q)\end{subarray}}b^{\prime}(n)=\sum_{\begin{subarray}{c}n\leq x+y\\ n\equiv a(q)\end{subarray}}b^{\prime}(n)-\sum_{\begin{subarray}{c}n\leq x\\ n\equiv a(q)\end{subarray}}b^{\prime}(n)\\ =\frac{c_{q}(x+y)}{q\sqrt{\log(x+y)}}-\frac{c_{q}x}{q\sqrt{\log x}} +O(\frac{c_{q}x}{q\sqrt{\log x}}(\frac{\log q}{\log x})^{1/7})\\ \gg\frac{c_{q}y}{q\sqrt{\log x}}+O(\frac{c_{q}x}{q\sqrt{\log x}} (\frac{\log\log x}{\log x})^{1/7})\gg\frac{y}{q\sqrt{\log x}},\]
where in the last inequality we used the fact that \(y\gg x\) to absorb the error term, which contains the factor \((\log\log x/\log x)^{1/7}\); this proves the lemma.
## 4. Proof of The Main Theorem
We first recall the definition of \(T(\alpha,N,s)\) and \(R_{N}(v)\)
\[T(\alpha,N,s)=\frac{1}{N}\sum_{v\in\mathbb{N}}R_{N}(v)\mathbf{1}_{\{\|v\alpha \|\leq\frac{s}{N}\}}\]
with
\[R_{N}(v)=\#\{a_{m}-a_{n}=v\mid 1\leq n<m\leq N\}.\]
By the definition of metric Poissonian pair correlation it is enough to prove that for almost every \(\alpha\) there are infinitely many \(N\)'s such that
\[T(\alpha,10N,1)\gg_{\alpha}(\log N)^{1/10}\]
and this is what we aim to obtain; in fact, we will prove the above with the implied constant not depending on \(\alpha\) (though the sequence of such \(N\)'s does).
Let us now consider \(a_{10N}\), the \(10N\)-th member of the sequence of sums of two squares. Landau showed (1908) that the density of the sums of two squares is
\[\frac{N}{a_{N}}\sim\frac{\kappa}{\sqrt{\log N}}\]
with \(0<\kappa<1\), so for large enough \(N\) we have
\[a_{10N}\geq 10N\sqrt{\log N}.\]
Now take any \(v\leq 5N\sqrt{\log N}\) for this large enough \(N\), then we have
\[R_{10N}(v)=\#\{a_{m}-a_{n}=v\mid 1\leq n<m\leq 10N\}=\\ \sum_{m\leq a_{10N}-v}b(m)b(m+v)\geq\sum_{m\leq 5N\sqrt{\log N}}b(m)b (m+v).\]
Now we want to take \(n\sim\frac{N}{(\log N)^{1/8}}\) and to examine the sum over \(R_{10N}\) along the arithmetic progression \(nk\) for \(k\leq(\log n)^{5/8}\leq(\log N)^{5/8}\) (so \(nk\leq 5N\sqrt{\log N}\) for big enough \(N\))
\[\sum_{k\leq(\log n)^{5/8}}R_{10N}(nk) \geq\sum_{k\leq(\log n)^{5/8}}\sum_{m\leq 5N\sqrt{\log N}}b(m)b(m+nk)\] \[\geq\sum_{k\leq(\log n)^{5/8}}\sum_{m\leq n(\log n)^{5/8}}b(m)b (m+nk)\coloneqq G(n).\]
What we want to do now is to bound \(G(n)\) from below for a relatively dense set \(B\) of integers \(n\), dense enough to contain very good approximations, i.e. \(n\in B\) such that even for \(k\leq(\log n)^{5/8}\) we still have \(\|nk\alpha\|\leq\frac{1}{10N}\).
We denote by \(D_{u}\) the set of naturals all of whose prime divisors satisfy \(p\equiv u(4)\), and we set \(K=(\log M)^{5/8}\) and \(Q=M(\log M)^{5/8}\). Now we use Lemma 3.2 and the fact that \(D_{1}\) is contained in the set of sums of two squares to obtain
\[\sum_{n\in[M,2M)}G(n) =\sum_{M\leq n<2M}\sum_{k\leq(\log n)^{5/8}}\sum_{m\leq n(\log n )^{5/8}}b(m)b(m+nk)\] \[\geq\sum_{m\leq Q}b(m)\sum_{k\leq K}\sum_{M\leq n<2M}b(m+nk)\] \[\geq\sum_{\begin{subarray}{c}m\leq Q\\ m\in D_{1}\end{subarray}}\sum_{\begin{subarray}{c}\frac{1}{2}K\leq k\leq K\\ k\in D_{3}\end{subarray}}\sum_{\begin{subarray}{c}Mk+m\leq l<2Mk+m\\ l\equiv m(k)\end{subarray}}b(l).\]
Since the summation is over \(m\in D_{1}\) and \(k\in D_{3}\), we have that \((m,k)=1\) and \((m,4)=1\). Put \(x=Mk+m\) and \(y=Mk\); since \(\frac{K}{2}\leq k\leq K\) and \(m\leq Q=MK\) we have that \(y\gg x\) and \(k\ll\log x\sim\log M\). Therefore \(m,k,x,y\) satisfy the conditions of Lemma 3.2, so we can use it to estimate the innermost sum on the right-hand side of the above inequality
\[\sum_{\begin{subarray}{c}Mk+m\leq l<2Mk+m\\ l\equiv m(k)\end{subarray}}b(l) =\sum_{\begin{subarray}{c}x\leq l\leq x+y\\ l\equiv m(k)\end{subarray}}b(l)\] \[\gg\frac{y}{k\sqrt{\log x}}=\frac{Mk}{k\sqrt{\log(Mk+m)}}\gg\frac {M}{\sqrt{\log M}}.\]
Now we use it to see that
\[\sum_{n\in[M,2M)}G(n)\gg\sum_{\begin{subarray}{c}m\leq Q\\ m\in D_{1}\end{subarray}}\sum_{\begin{subarray}{c}\frac{1}{2}K\leq k\leq K\\ k\in D_{3}\end{subarray}}\sum_{\begin{subarray}{c}Mk+m\leq l<2Mk+m\\ l\equiv m(k)\end{subarray}}b(l)\] \[\gg\frac{M}{\sqrt{\log M}}\sum_{\begin{subarray}{c}m\leq Q\\ m\in D_{1}\end{subarray}}\sum_{\begin{subarray}{c}\frac{1}{2}K\leq k\leq K\\ k\in D_{3}\end{subarray}}1\gg\frac{M}{\sqrt{\log M}}\frac{K}{\sqrt{\log K}}\frac {Q}{\sqrt{\log Q}}\gg\frac{M^{2}(\log M)^{2/8}}{\log\log M}.\]
Here we use two facts about the densities of \(D_{1}\) and \(D_{3}\), i.e.
\[\sum_{\begin{subarray}{c}\frac{1}{2}K\leq k\leq K\\ k\in D_{3}\end{subarray}}1\gg\frac{K}{\sqrt{\log K}}\]
and
\[\sum_{\begin{subarray}{c}m\leq Q\\ m\in D_{1}\end{subarray}}1\gg\frac{Q}{\sqrt{\log Q}}\]
where in the inequality we took \(K=(\log M)^{5/8}\) and \(Q=M(\log M)^{5/8}\). These facts can be proven by considering the generating Dirichlet series of \(D_{1}\) and \(D_{3}\), i.e.
\[\sum_{n\in D_{1}}\frac{1}{n^{\mu}}\]
which is used to derive the asymptotic
\[\sum_{\begin{subarray}{c}n\leq x\\ n\in D_{1}\end{subarray}}1\sim\frac{\beta x}{\sqrt{\log x}}\]
with \(\beta=\frac{1}{2\sqrt{2}}>0\) and this yields the result for
\[\sum_{\begin{subarray}{c}\frac{1}{2}K\leq k\leq K\\ k\in D_{1}\end{subarray}}1\]
by subtracting the sum up to \(K/2\) from the sum up to \(K\). The treatment of \(D_{3}\) is similar, and both can be found in [13] exercise 22 (there \(D_{1}\) contains 2-divisors, but this makes no difference and the exact same method can be used to obtain the asymptotic for our \(D_{1}\)).
This implies that the average of \(G\) over \([M,2M)\) is
\[\gg\frac{M(\log M)^{2/8}}{\log\log M}.\]
Now we use the upper bound (see [7] exercise 2.8)
\[\sum_{m\leq x}b(m)b(m+nk)\ll\prod_{\begin{subarray}{c}p\mid nk\\ p\equiv 3(4)\end{subarray}}(1+\frac{1}{p})\frac{x}{\log x}\]
which holds uniformly for \(n\) and \(k\) and we apply it with \(x=n(\log n)^{5/8}\) to obtain
\[G(n)=\sum_{k\leq(\log n)^{5/8}}\sum_{m\leq x}b(m)b(m+nk)\ll\\ \sum_{k\leq(\log n)^{5/8}}\frac{x}{\log x}\prod_{p\mid nk}(1+\frac {1}{p})\ll n(\log n)^{2/8}\log\log n\]
where the \(\log\log n\) term on the RHS arises because \(k\leq\log n\); using Mertens' inequality and the fact that \(w(n)\ll\log n\) (\(w(n)\) being the number of distinct prime divisors of \(n\)) we see
\[\prod_{p\mid nk}(1+\frac{1}{p})\ll\log\log(nk)\ll\log\log n.\]
The lower bound on the average of \(G(n)\) over the interval \([M,2M)\) and the upper bound on all \(G(n)\) together imply that
\[\#\{n\in[M,2M)\mid G(n)\gg\frac{n(\log n)^{2/8}}{\log\log n}\}\gg\frac{M}{( \log\log M)^{2}}\]
which then implies by Lemma 3.1 that for a.e \(\alpha\) the set
\[B=\{n\mid G(n)\gg\frac{n(\log n)^{2/8}}{\log\log n}\}\]
has infinitely many \(n\)'s such that
\[\|n\alpha\|\leq\frac{1}{n(\log n)^{h}}\]
for any fixed \(h<1\). Now for the final step of the proof, we notice that for any \(n\) we can take \(N\) with \(n\sim\frac{N}{(\log N)^{1/8}}\) so for this choice of \(N\) we have for \(n\in B\)
\[G(n)\gg\frac{n(\log n)^{2/8}}{\log\log n}\gg N(\log N)^{1/10}.\]
Now we want to see that we can find infinitely many \(n\)'s (or equivalently \(N\)'s) such that \(\|nk\alpha\|\leq\frac{1}{10N}\) for all \(k\leq(\log n)^{5/8}\), and this is indeed the case because we can apply the above with \(h=0.9\) to get
\[\|nk\alpha\|\leq\frac{k}{n(\log n)^{h}}\leq\frac{1}{n(\log n)^{2/8}}\ll\frac{1} {N(\log N)^{1/8}}\]
and of course for big enough \(N\) this implies
\[\|nk\alpha\|\leq\frac{1}{10N}.\]
Now going back to our definition of \(T(\alpha,N,s)\) we get
\[T(\alpha,10N,1)=\frac{1}{10N}\sum_{v\in\mathbb{N}}R_{10N}(v)\mathbf{1}_{\{\|v \alpha\|\leq\frac{1}{10N}\}}\]
but for our choice of \(n\) and \(N\) and due to the computations we did for the sum
\[\sum_{k\leq(\log n)^{5/8}}R_{10N}(nk)\geq G(n)\]
we have that
\[T(\alpha,10N,1)=\frac{1}{10N}\sum_{v\in\mathbb{N}}R_{10N}(v) \mathbf{1}_{\{\|v\alpha\|\leq\frac{1}{10N}\}}\geq\\ \frac{1}{10N}\sum_{k\leq(\log n)^{5/8}}R_{10N}(nk)\geq\frac{G(n)} {10N}\gg(\log N)^{1/10}\]
and this completes the proof that the sums of two squares have no Poissonian pair correlation for almost every \(\alpha\).
|
2302.03543 | Attributing equity gaps to course structure in introductory physics | We add to a growing literature suggesting that demographic grade gaps should
be attributed to biases embedded in the courses themselves. Changes in the
structure of two different introductory physics classes were made while leaving
the topics covered and the level of coverage unchanged. First, a class where
conceptual issues were studied before doing any complicated calculations had
zero final exam grade gap between students from underrepresented racial/ethnic
groups and their peers. Next, four classes that offered students a retake exam
each week between the regular bi-weekly exams during the term had zero gender
gap in course grades. Our analysis indicates that demographic grade gaps can be
attributed to the course structure (a Course Deficit Model) rather than to
student preparation (a Student Deficit Model). | David J. Webb, Cassandra A. Paul | 2023-02-07T15:52:47Z | http://arxiv.org/abs/2302.03543v3 | # Equity gaps are attributed to course structure in introductory physics
###### Abstract
We add to a growing literature suggesting that demographic grade gaps should be attributed to biases embedded in the courses themselves. Changes in the structure of two different introductory physics classes were made while leaving the topics covered and the level of coverage unchanged. First, a class where conceptual issues were studied before doing any complicated calculations had zero final exam grade gap between students from underrepresented racial/ethnic groups and their peers. Next, four classes that offered students a retake exam each week between the regular biweekly exams during the term had zero gender gap in course grades. Our analysis indicates that demographic grade gaps can be attributed to the course structure (a Course Deficit Model) rather than to student preparation (a Student Deficit Model).
## I Overview
Recent research has shown that demographic gaps in introductory STEM courses correlate with demographic differences in the persistence of students pursuing their STEM majors [1; 2]. This implies that we should be especially striving for equity in introductory courses. However, there are still some who oppose these efforts based on the perception that closing equity gaps requires lowering expectations of students. In his July 2022 editorial, Editor-in-Chief of _Science_, H. Holden Thorp recognizes this opposition to efforts aimed at allowing more underrepresented students to be successful in the sciences on the basis that these 'accommodations' will diminish excellence in the field [3]. Thorp argues that "inclusion doesn't lower standards" by pointing out that there are many different kinds of teaching and learning methods that have been shown to allow students from different demographic backgrounds to be successful in their learning without sacrificing the quality of education. In this report, we provide additional evidence for this claim by sharing two examples of structural course changes that removed equity gaps without lowering course standards. Furthermore, we advance the discussion by providing new evidence indicating that equity gaps can't necessarily be explained by measurements of prior math and physics knowledge (i.e. a Student Deficit Model [4] may be inappropriate). Instead, we suggest the Course Deficit Model (first discussed by Cotner and Ballen [5]) as useful when considering equity gaps.
## II Research Framing
The underrepresentation of some demographic groups in many STEM fields (see Ref's [6], [7] etc.) shows that, in these fields, those groups are denied equity in terms of access, achievement, identity, and/or power [8]. In this paper we address equity in achievement, specifically achievement of underrepresented demographic groups in introductory college courses in physics. We show evidence that demographic achievement gaps are the result of biases built into the structure of a course and may be removed by changing some features of the course. Thus, we suggest using a Course Deficit model [5] to understand these differences rather than the more commonly used Student Deficit model [4]. Using the idea that an achievement gap arises from a mismatch between course and student, the Student Deficit model looks to the detailed characteristics of the students in trying to understand the mismatch while the Course Deficit model looks to characteristics of the course to understand and close the mismatch. In this paper we add to the growing evidence that one should look to changes in the courses themselves as a remedy for inequities in achievement between demographic groups.
The most commonly used and readily available measure of achievement is student grades and we will use such measures in this paper, using grade gaps in place of achievement gaps. There is a growing body of recent research suggesting that demographic grade gaps can be changed by changing the structure of the class in any of the following ways: 1) changing a lecture class to an active learning class [9], 2) changing the value of assessments in determining grades [5], and 3) changing the grade scale used to compute course grades [10]. This malleability of grade gaps under changes in the structure of the class argues against the sole use of a Student Deficit model and for the inclusion of the Course Deficit model in explaining these demographic gaps.
Our analysis will also provide support for an equity model that has been called Equity of Parity [11]. Following Gutierrez [12], we take demographic equity to mean that a student's achievement shouldn't be predictable from their demographic characteristics. Equity of Parity further includes the idea that a course should produce no demographic achievement gaps, even if there are demographic differences in measures purporting to represent the quality of a group's preparation for that course. That is, within this equity model, **the class should not perpetuate past inequities**.
At this point we also note that even finding an unbiased measure of student preparation may be difficult.
First, as noted by Salehi et al. [13], a seemingly straightforward measure of preparation, the level of a student's previous study of physics, does not explain demographic differences in college physics exam grades. Second, some other possible measures, such as SAT/ACT scores and/or FCI scores, that have been used to compare preparation for different demographic groups are themselves suspected of being biased against the very demographic groups that score lower on physics exams, so it becomes unclear whether they measure a quantity of racism/sexism in addition to measuring a quantity of preparation.
This does not mean that we don't find preparation to be important, but simply that measures of preparation are complex and should not necessarily be taken at face value. One example of this is that measures that are positively correlated with achievement for students **within** a group may be differently (and even negatively) correlated with achievement in comparisons **between** groups. Gutierrez [12] suggests that the factors causing within-group differences may not be the same as those causing between-group differences. This phenomenon is observed by Shafer et al. [14], who find that the way we group students together impacts the predictive power of different metrics of preparation.
We recognize that the particular context of each class is important and that the precise changes that yield Equity of Parity for one set of students and teachers may not result in Equity of Parity with different students and teachers. Indeed, we'll see this in our data. Nevertheless, our evidence suggests that Equity of Parity is a goal that is possible to achieve.
In this paper we describe two instances of eliminating demographic grade gaps and, importantly, also show that controlling for past preparation does not have the effects predicted by a Student Deficit model. Thus we suggest that demographic grade gaps are determined at the course level and not the student level. Our results provide evidence that a Student Deficit model is inappropriate, if the course can be changed, because the course organization controls essentially all of the demographic grade gap. Along with these general conclusions we share two particular course changes that closed demographic equity gaps and so resulted in Equity of Parity being fulfilled.
## III Concepts-first instruction
First we examine some of the results of a structural change to an introductory physics class where all of the concepts studied during a class were introduced and studied in detail in the first 60% of the term, with students working on complicated calculations only in the final 40% of the term. We call this a "concepts-first" class. We compare this with the more common introductory physics classes where the various topics to be learned are studied in the same order they are arranged in the textbook. Each chapter of the text includes a discussion of the relevant conceptual material and calculations ranging from simple single-step calculations to much more complicated multi-step calculations. In a regular class these chapters are covered sequentially through the term so that there are both new concepts and new complicated calculations to learn together throughout the term. The concepts-first class structure is discussed in more detail in a separate publication.
Four lecture sections of this introductory physics class for physical science majors were offered during the same term at a large public research university. The students from all four classes of this mechanics course took the same final exam at the same time and they were graded at the same time so we use those final exam scores to compare these two kinds of class structure. One of the chapter-by-chapter classes (Section II) and the concepts-first class (Section I) were taught by the same instructor using the same lecture slides, student activities, and homework problems but with a different timing of the various parts of the course to make one class chapter-by-chapter and one class concepts-first. Both of these classes can be considered active-learning classes in that much of the lecture time was spent in student-student discussions of conceptual ideas. The other two chapter-by-chapter classes (Sections III and IV) were taught by veteran instructors who had taught the course many times. The students registered for the various classes without any foreknowledge of how the classes would be organized.
We examine the grade gaps of the demographic groups that the American Physical Society identifies as underrepresented in physics: i) racial or ethnic background (we use the acronym URM to identify students with either African, Hispanic, Indigenous American, and/or Pacific Islander ancestry) and ii) gender (APS uses binary gender and identifies female students as underrepresented). We use the university-supplied data on the students' self-identified racial/ethnic and binary gender categories. At the time these data were collected, the university only recognized two genders that matched those assigned at birth. We regrettably have no means to collect more accurate gender information.
We normalize the final exam grades so that the average over all 633 students is zero with a standard deviation equal to one. When we compare the average final exam grade of URM students with the average grade of their peers in the same class, the units will be standard deviations.
We separately consider the results using a Course Deficit model and a Student Deficit model. For the latter model, we use two measures of student preparation as they entered the class: a survey of physics concepts (the Force Concept Inventory) and the students' normalized introductory calculus grades. The student demographics of our final database including all of these data are shown below in Table 1.
For our first analysis, we do not attempt to control for students' prior preparation, because we want to see the impact that the concepts-first course has on closing grade
gaps in general. The differences in the average final exam grades of URM and non-URM students are 0.17 \(\pm\) 0.25, -0.89 \(\pm\) 0.23, -0.71 \(\pm\) 0.18, -0.81 \(\pm\) 0.23 for lectures I, II, III, and IV, respectively and these are plotted in Figure 1 for each of the four classes. A negative gap means the URM average was lower than non-URM. There is a distinct negative grade gap for each of the three classes (II through IV) taught chapter-by-chapter with the URM students having lower average grades. These grade gaps are roughly equal to each other. In addition, they are also comparable to grade gaps published by several other US universities [13; 14] in that they are all negative and a fraction of a standard deviation, even though the actual exams given in these other schools are likely very different. On the other hand, in the concepts-first class (class number I) URM students had slightly higher final exam grades than their peers though the result is consistent with zero gap.
To analyze the differences seen in Fig. 1 we first group together the three traditional classes. From here on, each time we group more than one class in a single analysis we will do that using Hierarchical Linear Modeling (HLM) with STATA software. We use HLM to account for the fact that there are class-to-class differences in the exact material that students worked on and studied during the quarter and class differences such as these are expected to lead to class-level correlations on the final exam. For instance, students from section IV had seen two of the final exam problems (and their solutions) during the quarter as well as part of another question, students from section III had seen one final exam problem (and its solution) and also saw the same exam layout on two midterms as they had for the final, and there are likely other class differences that we don't know about but that can affect the exam results. HLM models each class by itself before assembling those results into the final coefficients so it should account for differences in the lecture sections because URM and non-URM students in the same section saw the same course materials. Nevertheless, as we show in Appendix A, essentially none of our results would change appreciably if we had instead simply used the more common ordinary least-square (OLS) fitting. For a discussion of HLM see Ref. [20].
In modeling the normalized exam grade (\(NFnlExam\)) we define two categorical variables. At the student level, \(URM=1\) if the student identified their ancestry as placing them in the URM category and \(URM=0\) if they didn't. At the class level, \(CncptFrst=1\) if a student is in the concepts-first class (Section I) and \(CncptFrst=0\) if they were enrolled in one of the other sections. First, we fit the following model separately for the two types of class:
\[NFnlExam=b_{0}+b_{URM}URM \tag{1}\]
This analysis yields a URM gap for the three chapter-by-chapter classes of \(b_{URM}=-0.79\pm 0.12\) and, of course, there is only one concepts-first class so that gap is the same one we found above. These numbers for the gaps uncontrolled for "preparation" are plotted in Figure 2.
Next we use HLM to give us a numerical comparison of the concepts-first class to the chapter-by-chapter classes. The model we fit includes both \(URM\) and \(CncptFrst\) and the interaction between them:
\[NFnlExam=b_{0}+\] \[b_{CncptFrst}CncptFrst+b_{URM}URM+\] \[b_{URM*CncptFrst}(URM*CncptFrst) \tag{2}\]
The results of our HLM fit to equation 2 are shown in Table 2. From \(b_{CncptFrst}\) we see that the non-URM students from the concepts-first class had final exam grades that were statistically indistinguishable from students from the regular classes (despite the fact that some regular lecture sections had seen some exam problems during
| Section | N | %URM | %Female |
|---|---|---|---|
| I | 152 | 12 | 25 |
| II | 160 | 13 | 24 |
| III | 163 | 22 | 31 |
| IV | 158 | 10 | 22 |

Table 1: Demographics of the four lecture sections of this introductory course in Newtonian mechanics included in the dataset.
Figure 1: The URM grade gap on the final exam is the URM average final exam score minus the average final exam score of their peers in the same class. The final exam distribution is normalized to standard deviation = 1, and the results for the four different classes taking the same final exam are shown. Classes II, III, and IV were taught chapter by chapter in the usual way and class I was taught concepts first. Classes I and II were taught by the same instructor using exactly the same materials (lecture slides, student activities, and homework) but just arranged differently in time in the two classes. The error bars are standard errors.
the term). Second, \(b_{URM*CncptFrst}\) is significantly different from zero, so the URM students from the concepts-first class did much better on the final exam than their URM peers in the regular classes. Finally, \(b_{URM}\) is the demographic grade gap found in the regular classes. So, the grade gap in the concepts-first class is about 3.5 standard errors above the background gap seen in the chapter-by-chapter classes. This suggests that there is less than one chance in a thousand that this difference is just a random fluctuation (i.e. \(P<10^{-3}\)). Because teaching concepts first removes the equity gap that exists in the chapter-by-chapter class, this is evidence in favor of using a Course Deficit model in understanding the URM gaps in this set of classes.
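The analysis above was run in STATA; as a rough illustration of the same model structure, here is a sketch of how equation (2) could be fit as a random-intercept mixed model in Python with statsmodels. The data frame is synthetic and the injected gap is arbitrary, so the printed numbers mean nothing; only the model specification mirrors the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the student-level records; only the model specification mirrors equation (2).
rng = np.random.default_rng(3)
df = pd.DataFrame({"section": np.repeat(["I", "II", "III", "IV"], 150),
                   "URM": rng.integers(0, 2, 600)})
df["CncptFrst"] = (df["section"] == "I").astype(int)
df["NFnlExam"] = rng.normal(0, 1, 600) - 0.8 * df["URM"] * (1 - df["CncptFrst"])  # arbitrary injected gap

# A random intercept per lecture section plays the role of the class level in the HLM;
# with only four sections the random-intercept variance is, of course, weakly constrained.
hlm = smf.mixedlm("NFnlExam ~ URM * CncptFrst", data=df, groups=df["section"]).fit()
print(hlm.summary())   # fixed effects correspond to b_URM, b_CncptFrst and b_URM*CncptFrst

# The plain OLS fit of the same fixed effects, for an Appendix-A style comparison.
print(smf.ols("NFnlExam ~ URM * CncptFrst", data=df).fit().params)
```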
If the URM gaps were explainable in terms of student preparation (a Student Deficit model) then controlling for that preparation should shrink each gap, and the difference between the two class types, to zero. We use the students' normalized calculus grades, \(Calc\), along with the Force Concept Inventory survey, \(FCIpre\), to control for their incoming math and physics preparation in an HLM analysis of the URM final grade gaps. In other words, we fit the normalized final exam scores with the following model:
\[NFnlExam=b_{0}+\\ b_{Calc}Calc+b_{FCIpre}FCIpre+\\ b_{URM}URM \tag{3}\]
The results of using this model on each of the two types of class are that the URM grade gaps, \(b_{URM}\), are \(-0.388\pm 0.089\) for the group of three chapter-by-chapter classes and \(0.38\pm 0.15\) for the concepts-first class. These numbers are also plotted in Figure 2. Neither gap is consistent with zero after using this Student Deficit model and the estimated gap for the concepts-first class has increased instead of decreasing.
We can put all four classes into the same model using:
\[NFnlExam=b_{0}+\\ b_{Calc}Calc+b_{FCIpre}FCIpre+\\ b_{CncptFrst}CncptFrst+b_{URM}URM+\\ b_{URM*CncptFrst}(URM*CncptFrst) \tag{4}\]
The results of our HLM fit to Equation 4 are shown in Table 3. Again, \(b_{CncptFrst}\) is small and statistically insignificant so, again, we see that the non-URM students performed essentially equally in the two kinds of class organizations. However, \(b_{URM*CncptFrst}\) again shows us that the URM students in the concepts-first class had final exam scores over 4 standard errors above the background (chapter-by-chapter) classes. This analysis shows that the Student Deficit model does not appear to help us at all in explaining the URM grade gap differences seen in the different class organizations. Controlling for preparation in the chapter-by-chapter classes does explain some of the gap, but the same preparation metric does not explain the gap in the concepts-first class. The metrics of student preparation are not always correlated with final exam grades in the same way.
Finally, we note that HLM analysis (see Appendix B) also shows that the concepts-first class and the traditional classes had about the same size gender gap. Again, all four courses covered the same material at the same level using the same textbook and taking the same final exam. Furthermore, this can not be an instructor effect because the instructor who taught the concepts first class also taught one of the chapter-by-chapter classes.
## IV Assessment Retakes Course
Second, we examine the results of a change in the assessment structure of an introductory series of physics courses for biological science students. All of the courses considered here are active-learning classes (these classes were offered at the same public research university as the concepts-first class) and are discussed in some detail in Ref. [21]. These classes generally have one 80 minute lecture and two 140 minute discussion/labs per week. The students in these classes usually take either one quiz on new material every lecture or one quiz on new material every two lectures, and a final exam at the end. The assessment structure of a class was changed in four classes over three terms. In these classes students had one quiz on new material every other lecture and in the intervening lectures an optional "retake" quiz was administered that covered the same material and could supplant the original grade if the retake score was higher [22]. No retake was possible for the final exam in either type of class. In both the non-retake classes and the retake classes the course grade was almost entirely determined by exam scores (quizzes+final) [23] because
| Coeff. | Value | Error | z-statistic | P-value |
|---|---|---|---|---|
| \(b_{CncptFrst}\) | -0.21 | 0.20 | -1.02 | 0.307 |
| \(b_{URM}\) | -0.79 | 0.12 | -6.42 | \(<10^{-3}\) |
| \(b_{URM*CncptFrst}\) | 0.96 | 0.27 | 3.55 | \(<10^{-3}\) |
| \(b_{0}\) | 0.15 | 0.10 | 1.45 | 0.147 |

Table 2: The coefficients from an HLM fit to equation 2 are shown along with their standard errors, z-statistics, and P-values. Included are \(N=633\) students in 4 classes. The interaction term suggests that the URM gap is significantly different (reduced) for the concepts-first class.
| Coeff. | Value | Error | z-statistic | P-value |
|---|---|---|---|---|
| \(b_{Calc}\) | 0.606 | 0.041 | 14.67 | \(<10^{-3}\) |
| \(b_{FCIpre}\) | 0.0691 | 0.004 | 16.76 | \(<10^{-3}\) |
| \(b_{CncptFrst}\) | -0.10 | 0.17 | -0.55 | 0.585 |
| \(b_{URM}\) | -0.378 | 0.087 | -4.36 | \(<10^{-3}\) |
| \(b_{URM*CncptFrst}\) | 0.77 | 0.19 | 4.12 | \(<10^{-3}\) |
| \(b_{0}\) | -1.19 | 0.11 | -11.00 | \(<10^{-3}\) |

Table 3: The coefficients from an HLM fit to equation 4 are shown along with their standard errors, z-statistics, and P-values. Included are \(N=633\) students in 4 classes.
these classes do not grade homework. This allows us to compare course grades as a proxy for exam scores. We did not measure student initial understandings of physics or math but we can use a student's incoming GPA as a control variable to serve as a proxy for their general academic ability.
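To make the retake rule described above concrete, the tiny sketch below (an illustrative helper, not the authors' actual grading code) computes the effective quiz scores when a higher retake supplants the original and a skipped retake leaves the original in place.

```python
def effective_quiz_scores(original, retake):
    """A retake supplants the original score only when it is higher; None marks a skipped optional retake."""
    return [o if r is None else max(o, r) for o, r in zip(original, retake)]

# One student's regular quiz scores, three of which they chose to retake:
print(effective_quiz_scores([6, 8, 5, 9, 7], [8, None, 7, None, 6]))  # -> [8, 8, 7, 9, 7]
```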
The course grade distributions in these courses have an average standard deviation of about one grade point so a course grade gap size in this course will have a meaning similar to the normalized final exam gap sizes discussed above for a different introductory physics series. We compare the 4 retake classes with a baseline computed from all 52 of the classes offered within this course series over the 2.5 years immediately previous to the first retake course. The demographics of our final database are shown below in Table 4.
We use HLM to find the average, over classes, of the difference between course grades of female students and course grades of male students in the same class. In practice this means that we fit
\[CourseGrade=b_{0}+b_{Female}Female \tag{5}\]
where \(Female=1\) if the student identified as female in our database and 0 if they identified as male and \(CourseGrade\) is the grade the student received in the course. Figure 3 shows the differences of these two average grades (\(b_{Female}\)) for 2.5 years of these classes, immediately preceding the first retake class, to identify the non-retake gender grade gap, and the gender grade gap in the four courses that allowed retakes. Female students had lower average course grades than male students in the non-retake courses. Again, these grade gaps are comparable to grade gaps published by other US universities [13] in that they are the negative of a fraction of a standard deviation.
On the other hand, in the retake classes female students had slightly higher course grades than male students. To quantify the comparison between the 52 non-retake classes and the 4 retake classes we define the categorical variable \(Retake=\)1 for the classes that offered retake exams and \(=0\) for the classes that did not offer retake exams. We use HLM to fit
\[CourseGrade=b_{0}+b_{Retake}Retake\\ +b_{Female}Female\\ +b_{Female*Retake}(Female*Retake) \tag{6}\]
The results of our HLM fit to equation 6 are shown in Table 5. From \(b_{Female}\) we find the gender gap we already knew about from the regular courses. From \(b_{Retake}\) we find that students identifying as male had about 1/3 of a grade point higher grades under the retake grading regime. Finally, from \(b_{FemalesRetake}\) we find that female students in the retake classes had an additional 1/4 of a grade point so that the gender gap is essentially gone.
The average grade gap in the retake classes is about 3.15 standard errors above the background gap seen in the regular classes (i.e. \(P=0.002\)). This suggests that there is only about one chance in five hundred that this difference is just a random fluctuation and is evidence that a Course Deficit model, again, is appropriate to use in understanding the demographic gender gap in the normal courses.
Now we control for the students' demonstrated academic abilities using their incoming GPA in case the classes giving retake exams had female students who were much better students than their male peers. First, for the third figure in the paper we fit the following model for the two groups of classes, retake and non-retake, separately to find gender gaps, \(b_{Female}\), of \(-0.025\pm 0.049\) for
| Coeff. | Value | Error | z-statistic | P-value |
|---|---|---|---|---|
| \(b_{Retake}\) | 0.31 | 0.14 | 2.18 | 0.029 |
| \(b_{Female}\) | -0.210 | 0.015 | -13.75 | \(<10^{-3}\) |
| \(b_{Female*Retake}\) | 0.229 | 0.073 | 3.15 | 0.002 |
| \(b_{0}\) | 3.076 | 0.036 | 84.67 | \(<10^{-3}\) |

Table 5: The coefficients from an HLM fit to equation 6 are shown along with their standard errors, z-statistics, and P-values. Included are \(N=12,884\) students in 52 non-retake classes and \(N=610\) students in 4 retake classes.
| Type | N | %URM | %Fem. |
|---|---|---|---|
| Retake | 610 | 23 | 66 |
| NonRet. | 12,884 | 18 | 63 |

Table 4: Demographics of the 56 lecture sections of this introductory physics series included in the dataset.
Figure 2: Comparing the concepts-first class with the three chapter-by-chapter classes grouped together. Again, the URM grade gap on the final exam is positive if URM students outperformed their peers. For each class organization the bare (uncontrolled) URM gap is shown as well as the URM gap after controlling for incoming math and physics understandings of the students. The error bars are standard errors.
the retake classes and \(-0.214\pm 0.012\) for the non-retake classes after controlling for incoming GPA.
\[CourseGrade=b_{0}+b_{GPA}GPA+b_{Female}Female \tag{7}\]
Now we put the two types of class into the same model to quantify the difference after controlling for GPA as follows:
\[CourseGrade=b_{0}+b_{GPA}GPA\\ +b_{Retake}Retake+b_{Female}Female\\ +b_{Female*Retake}(Female*Retake) \tag{8}\]
The results of our HLM fit to equation 8 are shown in Table 6. The difference, \(b_{Female*Retake}\), is still significantly different from \(0\) and continues to suggest that the gender gap is an artifact of the structure of the course.
The difference in the gaps between the two class types is not significantly decreased if one uses incoming GPA to control for the students' academic ability; with the error estimate substantially the same, the retake classes are about 3.02 standard errors above the background set by the regular classes (\(P=0.003\)), so it is not obvious that a Student Deficit model is of any use in understanding these differences. In other words, even though incoming GPA is a significant predictor of individual success in the course, controlling for this at the individual level does not significantly change the gender gap in either set of courses. These GPA-controlled gaps are also shown in Figure 3.
Finally, as we show in Appendix B, the retake classes and the non-retake classes had about the same size URM demographic gap, with URM students receiving lower average grades than non-URM students under each assessment regime. So this particular intervention does not appear to close the URM demographic gap. Also note, we found (see \(b_{Retake}\) in Table 5) that male students had higher average grades in the retake classes than they had in the regular classes. Again, we note that the retake classes and the non-retake classes covered the same material at the same level and with approximately the same course materials.
## V Discussion
Several recent PER papers make measurements similar to those above and may be viewed through a Student Deficit vs Course Deficit lens (even though the authors do not originally use those terms). A recent study by Salehi et al. [13] analyzes student performance with respect to preparation and concludes that _"when controlling for incoming preparation, there remain no_ [significant] _demographic performance gaps."_ Salehi et al. argue that average deficits in the preparation of some demographic groups of students explain a substantial portion of the exam achievement gap that these groups experience under the particular unspecified teaching/assessment regimes of three different universities. They also suggest _"It is possible that there is some unmeasured factor (e.g., test anxiety) that causes both lower scores on our measures of incoming preparation and lower final exam performance."_ This perspective can be viewed as a Student Deficit model - that individual student preparation (or some other student-level variable) is responsible for the equity gaps. We offer an alternative explanation using a Course Deficit model; since Salehi et al. quantify "preparation" using measurements that, as we have noted earlier, are themselves suspected of including biases against the relevant demographic groups, the course exams are potentially subject to those same biases. Therefore controlling for one bias removes the other. Alternatively, two other recent pa
| Coeff. | Value | Error | z-statistic | P-value |
|---|---|---|---|---|
| \(b_{GPA}\) | 1.108 | 0.012 | 89.12 | \(<10^{-3}\) |
| \(b_{Retake}\) | 0.38 | 0.15 | 2.53 | 0.011 |
| \(b_{Female}\) | -0.214 | 0.012 | -17.68 | \(<10^{-3}\) |
| \(b_{Female*Retake}\) | 0.174 | 0.058 | 3.02 | 0.003 |
| \(b_{0}\) | -0.320 | 0.055 | -5.81 | \(<10^{-3}\) |

Table 6: The coefficients from an HLM fit to equation 8 are shown along with their standard errors, z-statistics, and P-values. Included are \(N=12,884\) students in 52 non-retake classes and \(N=610\) students in 4 retake classes.
Figure 3: The gender gap for the course grade is the average course grade for female students minus the average course grade for male students in the same class. The course grade distribution already has a standard deviation of about one so it was not normalized. These classes all had course grades largely determined by exam grades and the classes that allowed retake exams had little or no gender grade gap. The error bars are standard errors.
pers, by Shafer et al. [14] and Stewart et al. [24], use similar metrics of student preparation in the same kinds of calculations used by Salehi et al. but conclude that student preparation does not explain the various achievement gaps they discuss. Shafer et al. find that preparation metrics do not predict student success equally across demographic groups. It's clear they are using a Course Deficit model as they conclude that _"There may be something about the physics course, the engineering program, or student culture that prevents Asian American and African American students, and to a lesser extent, Hispanic students, from realizing their full potential"_[14].
These papers may be suggesting that, because a Student Deficit model does not explain achievement gaps, a Course Deficit model is needed. Again, our view is that "preparation" is very difficult to measure when comparing different demographic groups. Finally, Burkholder et al. [25] reports that providing extra help to students who entered with purported "preparation deficits" did not close the achievement gaps. In our view adopting a Student Deficit model tends naturally to lead one to the idea of giving some students extra preparation to decrease the "preparation deficit". After pointing out that they tried this and it failed to decrease the gaps, Burkholder et al. [25] seem to adopt a Course Deficit model as most of the paper is concerned with changes they made to their courses and the results of these changes. Unfortunately, their measure of equity is not clearly related to the demographic achievement gaps that we have discussed above as their definition of equity addresses preparation gaps without examining demographic gaps that might exist. Nevertheless, their work agrees with our main conclusion - that there needs to be increasing focus on introductory courses, themselves, as the causes of demographic gaps.
Our work suggests that the Course Deficit model may be all that is needed in explaining the grade gaps and that a Student Deficit model may be inappropriate for these issues. We might reconcile these various ideas by suggesting that if i) introductory physics courses were unchangeable for some reason or ii) changing around the structure of introductory physics courses always led to the same rough demographic gaps then a Student Deficit model would be appropriate. However, the structure of physics courses may be changed and the data discussed in this paper shows that demographic gaps are not only changeable but may sometimes even change sign.
In our introduction we noted that there are good reasons to conclude that measures such as SAT/ACT math scores and FCI scores are biased against some demographic groups so that using these measures to compare different demographic groups may be inappropriate. Now we argue that the data in our paper together with the data in other published research are consistent with that conclusion, so that these measures of preparation should probably be used only for within demographic group comparisons. In many of the cases discussed in the literature [13; 14; 24] an underrepresented group received a lower average grade than their peers and this negative achievement gap is reduced (i.e. the gap changes in the positive direction) after controlling for SAT/ACT and FCIpre scores. This kind of change in a negative gap would occur whether the SAT/ACT and FCIpre scores measured bias against the underrepresented group or whether they measured poorer preparation of that group. So, most of these data don't help us decide whether the measures are controlling for bias or control for preparation. However, with the data from this paper there are now at least two results that differ from this norm of a negative gap becoming less negative after controlling for SAT/ACT and FCIpre. One uncommon result was shown above in Fig. 2 where a positive URM gap became even more positive after controlling for SAT/ACT and FCIpre. A surprising result like this is inconsistent with these measures acting as controls for preparation but is consistent with them acting as controls for bias against URM students. A second uncommon result was seen in Ref. [14] which showed that Asian-American students had a negative exam grade gap but that this negative gap became larger after controlling for SAT/ACT and FCI. This result is also inconsistent with the idea that these control variables measure preparation. These results suggest that perhaps these metrics should not be considered as proxies for preparation.
As a caution, we also find that any single change in course structure may differentially benefit one underrepresented demographic group and not benefit other demographic groups. Added to this caution is our personal teaching experience that these issues are at least somewhat dependent on the particular teacher and the particular group of students and, for each teacher and student, can change from term to term. Unfortunately, research done in support of physics education has primarily been done at institutions that have majority white students with above-average SAT scores [26], so it is hard to know what might be applicable in a particular class. In general we suggest that a teacher be conscious of their student populations and any existing equity gaps when making choices about their course design. The knowledge base that teachers can draw on in structuring their courses is, unfortunately, not very complete. There are likely many possible ways to restructure traditional courses beyond those discussed in the literature or in this paper, so more research is necessary on the differential demographic impacts of different course structures. Finally, there are also issues with the definitions of the demographic groups themselves; for instance, assuming gender is binary, aggregating several ethnicities into a single group [10; 14], or ignoring the intersectional nature inherent in the definitions of these groups. These are limitations in our own work, and they should be more broadly investigated.
When considering these results, it's also important to remember that eliminating achievement grade gaps does not necessarily eliminate all equity gaps from the course. As we note in our introduction, other equity gaps still
might exist - particularly those that are related to access, identity and power. Also, while we have relied on our data to advocate against using a Student Deficit model, we also note another argument against using such a model: it can potentially perpetuate the same racist and sexist perspectives [4] responsible for the gaps in the first place.
## VI Conclusions
We summarize three main conclusions from our analysis: 1) We find two examples of course changes that successfully eliminated some demographic grade gaps when compared with a control group: a) Teaching concepts first resulted in an equity gap consistent with zero for under-represented minority students, and b) allowing retake exams resulted in an equity gap consistent with zero for female students; 2) Equity of Parity was achieved for these demographic groups without controlling for any incoming inequities and (importantly) without changing the academic standards of the course; and 3) When we did control for incoming inequities (often discussed as "preparation" metrics in the literature), those metrics did not reduce the grade gaps in predictable ways. Because controlling for individual student "preparation" did not reduce the equity gaps, we argue that grade gaps are the result of the course (a course deficit model) and not the individual students (a student deficit model).
This paper adds to the growing literature showing that changes in the structure of a course, without changing the course content or the level of content expertise expected by the STEM community, may affect different demographic groups differently. These changes were, initially, done in an attempt to benefit **all** of the students in the class and in this paper, together with Ref. [18], they have been shown to do that. But we see that they also benefit some demographic groups more than others. Therefore, because demographic grade gaps seem to be quite changeable under changes in course structure, without applying interventions to address any existing student-level equity gaps present at the beginning of the course, it seems wisest to use such measures simply to judge one course structure against another rather than one group of students against another. Furthermore, because we find that controlling for metrics used to describe "preparation" can either decrease or increase the demographic grade gaps (as we see in Figure 2) depending on the course context, we argue that these metrics should perhaps not be used so readily to explain grade gaps. In other words, our data support using a Course Deficit model of demographic grade gaps rather than a Student Deficit model. Taken all together, these ideas also support the idea that Equity of Parity is an appropriate goal for all introductory physics classes and, perhaps, for all STEM classes. An Equity of Parity model also supports a goal that many teachers may have: the goal of **not perpetuating past inequities.**
We conclude that there are likely systemic biases in introductory physics classes that act against some under-represented demographic groups. These biases are easily seen by comparing outcomes between different systems of teaching and assessment, and they can likely be removed with appropriate structural changes at the level of a course that, importantly, do not impact the educational standards of the course.
###### Acknowledgements.
We thank the San Jose State University PER group for reviewing and providing feedback on an early draft of this paper. We are indebted to Wendell Potter (deceased Jan. 2017) who provided mentorship throughout both studies.
## Appendix A Ordinary Least Square Fits
In this appendix we show how the results can differ if we aggregate the data and fit a model using an ordinary least squares (OLS) procedure. We can use the model from Equation 2 to show how OLS fitting to the model mostly just reproduces the HLM results found in Table 2 but, in addition, incorrectly treats the lecture-level variable, \(ConcptFrst\). Using OLS to fit Equation 2 yields the coefficients shown in Table 7.
Comparing Table 2 with Table 7, we see that the estimated error values for \(b_{ConcptFrst}\) are smaller than when using HLM. OLS treats this variable as independently varying over students. In reality this variable only varies over classes, as all students in any particular lecture class have exactly the same value of \(ConcptFrst\). Treating this as a student-level variable will certainly lead to the error estimate being lower than it should be; we find it reduces the error estimate by about 50%. For our purposes, the important variables and their error estimates are basically unchanged. We find a similar result if we use OLS for the data concerned with retake exams.
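To make the comparison concrete, a minimal sketch of the two fits is given below. This is not the authors' code: the column names (`NFnlExam`, `URM`, `CncptFrst`, `class_id`) and the input file are hypothetical placeholders, and statsmodels stands in for whatever HLM software was actually used.

```python
# Illustrative only: contrast an OLS fit of Equation 2 with a mixed-effects
# (HLM-style) fit that lets the intercept vary by lecture section.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("final_exam_scores.csv")  # hypothetical data set

# OLS treats CncptFrst as if it varied independently across students,
# which understates its standard error (the point made in this appendix).
ols_fit = smf.ols("NFnlExam ~ CncptFrst * URM", data=df).fit()

# Mixed model with a random intercept for each lecture section, so the
# class-level variable CncptFrst is judged against class-to-class variation.
hlm_fit = smf.mixedlm("NFnlExam ~ CncptFrst * URM",
                      data=df, groups=df["class_id"]).fit()

print(ols_fit.params, ols_fit.bse)   # coefficients and standard errors
print(hlm_fit.params, hlm_fit.bse)
```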
## Appendix B Not all Achievement Gaps Changed
In this appendix we show the calculations leading to our two conclusions that i) the gender gap seemed unaffected by the concepts-first structure and ii) the URM gap seemed unchanged by offering retake exams.
\begin{table}
\begin{tabular}{c c c c c}
**Coeff.** & **Value** & **Error** & **t-statistic** & **P-value** \\ \hline \(b_{ConcptFrst}\) & -0.21 & 0.10 & -2.13 & 0.033 \\ \(b_{URM}\) & -0.79 & 0.12 & -6.38 & \(<10^{-3}\) \\ \(b_{URM*ConcptFrst}\) & 0.96 & 0.27 & 3.51 & \(<10^{-3}\) \\ \(b_{0}\) & 0.15 & 0.05 & 3.04 & 0.002 \\ \end{tabular}
\end{table}
Table 7: The coefficients from an OLS fit to equation 2 are shown along with their standard errors, t-statistics, and P-values. Included are \(N=633\) students in 4 classes.
First, we show that the class organization seems unrelated to the gender gap. We do this by adding a categorical variable for the students' self-identified (binary) gender to Equation 2. \(Female=1\) if the student identifies as female and \(=0\) if they identify as male. We also add in the appropriate interaction term to determine if any gender gap is different for the Concepts-first class. In other words, we fit the normalized final exam scores (\(NFnlExam\)) with the following model:
\[NFnlExam=b_{0}+b_{CncptFrst}CncptFrst\\ +b_{URM}URM\\ +b_{URM*CncptFrst}(URM*CncptFrst)\\ +b_{Female}Female\\ +b_{Female*CncptFrst}(Female*CncptFrst) \tag{11}\]
The results of our HLM fit to equation B1 are shown in Table 8. One sees that there is a gender gap (\(b_{Female}\)) of about 0.33 standard deviations and that the gap is not significantly different for the Concepts-first structured class, as \(b_{Female*CncptFrst}\) has \(P=0.8\). The other coefficients are essentially unchanged from their values in Table 2.
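A short sketch of how the interaction test behind Table 8 could be read off such a fit is given below; as before, the column names and the use of statsmodels are assumptions, not the authors' actual code.

```python
# Illustrative check of whether the gender gap differs for the Concepts-first
# class: fit Equation B1 with both interaction terms and inspect the
# CncptFrst:Female coefficient and its P-value. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("final_exam_scores.csv")  # hypothetical data set
fit_b1 = smf.mixedlm("NFnlExam ~ CncptFrst*URM + CncptFrst*Female",
                     data=df, groups=df["class_id"]).fit()

# A large P-value here corresponds to "the gender gap is not significantly
# different for the Concepts-first structure".
print(fit_b1.params["CncptFrst:Female"], fit_b1.pvalues["CncptFrst:Female"])
```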
Finally, we show that the retake exam organization seems unrelated to the URM racial/ethnic gap. We do this by adding a categorical variable for the students' self-identified ethnicity to equation 6. As before, \(URM=1\) if the student identifies as a member of a racial/ethnic group recognized by the APS as being underrepresented in physics and \(=0\) if they don't. We also add in the appropriate interaction term to determine if any URM gap is different for the retake classes. In other words, we fit the course grades (\(CourseGrade\)) with the following model:
\[CourseGrade=b_{0}+b_{Retake}Retake\\ +b_{Female}Female\\ +b_{FemaleRetake}(Female*Retake)\\ +b_{URM}URM\\ +b_{URM*Retake}(URM*Retake) \tag{12}\]
The results of our HLM fit to equation B2 are shown in Table 9. One sees that there is a URM gap (\(b_{URM}\)) of about 0.34 grade points and that the gap is not significantly different for the retake exam classes, \(b_{URM*Retake}\) has \(P=0.523\). The other coefficients are essentially unchanged from their values in Table 10.
|
2306.10976 | Empirical sandwich variance estimator for iterated conditional
expectation g-computation | Iterated conditional expectation (ICE) g-computation is an estimation
approach for addressing time-varying confounding for both longitudinal and
time-to-event data. Unlike other g-computation implementations, ICE avoids the
need to specify models for each time-varying covariate. For variance
estimation, previous work has suggested the bootstrap. However, bootstrapping
can be computationally intense. Here, we present ICE g-computation as a set of
stacked estimating equations. Therefore, the variance for the ICE g-computation
estimator can be consistently estimated using the empirical sandwich variance
estimator. Performance of the variance estimator was evaluated empirically with
a simulation study. The proposed approach is also demonstrated with an
illustrative example on the effect of cigarette smoking on the prevalence of
hypertension. In the simulation study, the empirical sandwich variance
estimator appropriately estimated the variance. When comparing runtimes between
the sandwich variance estimator and the bootstrap for the applied example, the
sandwich estimator was substantially faster, even when bootstraps were run in
parallel. The empirical sandwich variance estimator is a viable option for
variance estimation with ICE g-computation. | Paul N Zivich, Rachael K Ross, Bonnie E Shook-Sa, Stephen R Cole, Jessie K Edwards | 2023-06-19T14:35:49Z | http://arxiv.org/abs/2306.10976v3 | # Empirical sandwich variance estimator for iterated conditional expectation g-computation
###### Abstract
Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding for both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intense and sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations. Therefore, the variance for the ICE g-computation estimator can be estimated using the empirical sandwich variance estimator. Performance of the variance estimator was evaluated empirically with a simulation study. The proposed approach is also demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing run-times between the sandwich variance estimator and the bootstrap for the applied example, the sandwich estimator was substantially faster, even when bootstraps were run in parallel. The empirical sandwich variance estimator is a viable option for variance estimation with ICE g-computation.
## 1 Introduction
Causal inference with longitudinal and time-to-event data often must contend with time-varying confounding, whereby a covariate is both a confounding variable and is affected by prior treatment [1]. One approach to appropriately address time-varying confounding is the g-formula [2]. Two g-formula estimators are standard g-computation [2, 3, 4, 5], and iterated conditional expectation (ICE) g-computation [6, 7, 8, 9, 10]. To apply standard g-computation, one specifies models for the outcome and each time-varying covariate. ICE g-computation instead only requires specification of sequential models for the outcome [8, 10].
To consistently estimate the variance for ICE g-computation, previous work has suggested the nonparametric bootstrap [7, 8, 10]. However, the bootstrap is computationally demanding, as it requires repeating the analysis using resamples of the data [11]. This computational complexity can limit the scenarios considered by researchers in practical applications (e.g., exploring alternative treatment plans, varying functional form specifications, sensitivity analyses). The computational complexity also makes simulation experiments difficult. For example, some simulation studies forgo estimation of the variance by bootstrap in each iteration [8, 9]. While other work has used the bootstrap [10], the additional computational burden it requires may limit the sample sizes, number of iterations, or scenarios considered. An alternative variance estimator is based on the influence curve, but this estimator is not consistent for ICE g-computation (i.e., it underestimates the variance) [7].
Here, the ICE g-computation estimator is expressed as a set of estimating equations [12], which allows the asymptotic variance of the ICE estimator to be consistently estimated using the empirical sandwich variance estimator [8]. The primary benefit of this approach is that it provides a statistically consistent variance estimator that is more computationally efficient than the nonparametric bootstrap. Other benefits
include the ability to stack estimating equations together to easily estimate the variance for transformations of parameters and incorporate other nuisance models. Finally, the ICE g-computation estimator can be more easily implemented using existing software for general M-estimators [13, 14].
The structure of the paper is as follows. In section 2, the data and a sufficient set of identification assumptions for longitudinal data structures are reviewed. Section 3 reviews g-computation estimators in the setting of repeated measures and presents ICE g-computation as stacked estimating equations. The proposed M-estimator is assessed through a simulation study in section 4. In section 5, the proposed ICE g-computation procedure is demonstrated in an illustrative example of estimating the effect of cigarette smoking on prevalent hypertension. Finally, section 6 summarizes the key results and notes how an estimating equation approach for causal effect estimation with longitudinal data can be expanded upon in future work.
## 2 Observed data and identification
Let \(k\in\{1,...,\tau\}\) index discrete follow-up times. The potential outcome at \(\tau\) for unit \(i\) under the treatment plan (i.e., a defined sequence of treatments) \(\bar{a}_{\tau-1}^{*}=(a_{0}^{*},a_{1}^{*},...,a_{\tau-1}^{*})\) is denoted by \(Y_{i,\tau}(\bar{a}_{\tau-1}^{*})\). Only deterministic plans of binary treatments (i.e., units are assigned to specific treatments, as opposed to probabilities of treatments) are considered hereafter. A simple example is always treat, where \(\bar{a}_{\tau-1}^{*}=(1,1,...,1)\). The interest parameter is the mean potential outcome at \(\tau\) under a given plan, \(\mu_{\tau}=\mathbb{E}[Y_{i,\tau}(\bar{a}_{\tau-1}^{*})]\), where \(\mathbb{E}[.]\) is the expected value function.
For each unit \(i\) at time \(k\), the observed data consists of the treatment (\(A_{i,k}\)), a set of covariates (\(L_{i,k}\)), a loss to follow-up (i.e., censoring status, \(C_{i,k}\)) indicator, and the observed outcome for those uncensored (\(Y_{i,k}\)). Data are assumed to occur in a specific time-order, namely \(L_{0}\to A_{0}\to C_{1}\to Y_{1}\to L_{1}\to A_{1}\to...\to Y_{\tau}\). Here, \(Y_{i,k}\) is measured regardless of the value of \(Y_{i,k-1}\) (i.e., repeated measures). Overbars are used to indicate the history of a variable, e.g., \(\bar{A}_{i,k}=(A_{i,0},A_{i,1},...,A_{i,k})\). Lastly, units lost to follow-up are unobserved for all following time points (i.e., loss to follow-up is monotonic). The observed data consists of \(n\) iid units of \(O_{i}=(L_{i,0},A_{i,0},C_{i,1},Y_{i,1},L_{i,1},A_{i,1},...,Y_{i,\tau})\) from a random sample of the target population.
To identify \(\mu_{\tau}\), we proceed following the identification assumptions provided in Table 1. Comprehensive discussion of these assumptions can be found elsewhere [4, 8, 15, 16, 17, 18, 19]. Briefly, causal consistency provides a connection between the observed outcomes and time-varying covariates, and the potential outcomes and covariates under the treatment plans. Causal consistency, as expressed here, implies both no interference between units and that variations of treatment are irrelevant [20]. Treatment exchangeability stipulates that the treatment at time \(k\) is independent of the potential outcomes conditional on the history of previous treatment and covariates. Treatment positivity requires a non-zero probability of the treatment plan being observed for each unique combination of treatment and covariate history in the population, which ensures the treatment exchangeability condition is well-defined. Censoring exchangeability specifies that loss to follow-up at time \(k\) is non-informative conditional on \(\bar{L}_{k}\) and \(\bar{A}_{k}\). Again, censoring positivity ensures that the expression for censoring exchangeability is well-defined. In addition to these assumptions, we further assume no measurement error. Under these assumptions, the interest parameter can be written as
\[\mu_{\tau}=\int_{\bar{l}_{\tau-1}\in\bar{L}_{\tau-1}}\left[\mathbb{E}\left\{Y_ {\tau}|\bar{A}_{\tau-1}=\bar{a}_{\tau-1}^{*},\bar{L}_{\tau-1}=\bar{l}_{\tau-1} \right\}\prod_{k=1}^{\tau-1}f_{\bar{l}}\left(\bar{l}_{k}|\bar{A}_{k-1}=\bar{a }_{k-1}^{*},\bar{L}_{k-1}=\bar{l}_{k-1}\right)\right]d\bar{l} \tag{1}\]
where \(f_{\bar{l}}(.)\) is the probability density function for \(\bar{l}_{k}\)[2]. As the parameter of interest is expressed in terms of observables (i.e., identified), one can consider estimators based on this expression.
## 3 G-computation
### Standard g-computation
A common approach for estimation of \(\mu_{\tau}\) is standard g-computation, where each time-varying variable is simulated forward in time using a series of models [2]. Pooled regression models paired with a Monte Carlo procedure are commonly used to implement standard g-computation [3, 4, 21, 5]. Detailed descriptions of how to implement standard g-computation with pooled regression models are provided in the following
references [4, 7, 5]. Briefly, one converts their data into a 'long' data set, where each row corresponds to an individual for a single unit of time. The long data set is then used to fit pooled regression models for each of the time-varying variables [22]. Generally, the covariate is modeled as a function of time and covariates at previous time points. From the rows corresponding to units at baseline, observations are sampled with replacement. Often this step uses a large sample (e.g., \(10000\times n\)) of the observed data to reduce Monte Carlo error. Using the resampled baseline data, each of the time-varying covariates is simulated forward in time under the plan until the end of follow-up. These simulated outcomes under the plan can then be used to estimate \(\mu_{\tau}\) (e.g., mean at each time for longitudinal data, empirical distribution function for time-to-event data).
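As a rough, hedged sketch of the Monte Carlo step just described (a single binary time-varying covariate, an always-treat plan, and callables standing in for the fitted pooled models):

```python
# Illustrative Monte Carlo step of standard g-computation under an
# always-treat plan. `covariate_model` and `outcome_model` stand in for
# fitted pooled logistic regressions: each takes (time, L_previous, A) and
# returns a predicted probability. This is a sketch, not the ICE estimator
# developed later in the paper.
import numpy as np

rng = np.random.default_rng(2023)

def standard_gcomp(baseline_L, covariate_model, outcome_model, tau, copies=100):
    # Resample baseline covariates with replacement; a large resample
    # reduces Monte Carlo error.
    L_prev = rng.choice(baseline_L, size=len(baseline_L) * copies, replace=True)
    L_prev = L_prev.astype(float)
    a_plan = 1.0  # always treat
    for k in range(1, tau):
        # Simulate the time-varying covariate forward in time under the plan
        p_L = covariate_model(k, L_prev, a_plan)
        L_prev = rng.binomial(1, p_L).astype(float)
    # Predicted outcome risk at tau under the plan, averaged over the copies
    return np.mean(outcome_model(tau, L_prev, a_plan))
```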
As this procedure requires one to specify models for the outcome process and each time-varying confounder, standard g-computation is often considered to be both computationally intense and highly susceptible to model misspecification [4]. Specifically, the standard g-computation estimator is only consistent for \(\mu_{\tau}\) when each of the time-varying covariate models is correctly specified. This assumption becomes particularly questionable as the number of time-varying covariates increases. As the ability to correctly specify models for each time-varying covariate is often suspect, other algorithms for g-computation that reduce modeling assumptions have been proposed [8].
### ICE g-computation
To avoid specifying models for each time-varying covariate, the g-formula in Equation 1 can instead be written as a series of conditional expectations
\[\mu_{\tau}=\mathbb{E}\left\{...\mathbb{E}\left[\mathbb{E}(Y_{\tau}|\bar{A}_{ \tau-1}=\bar{a}_{\tau-1}^{*},\bar{L}_{\tau-1})|\bar{A}_{\tau-2}=\bar{a}_{\tau -2}^{*},\bar{L}_{\tau-2}\right]...|\bar{A}_{0}=\bar{a}_{0}^{*},\bar{L}_{0}\right\} \tag{2}\]
where the inner expectation is the outcome at \(\tau\) conditional on the plan and covariate history up to that time [2, 23]. Equation 2 leads to the ICE g-computation estimator, which can be implemented with a series of outcome regression models moving backwards through time. The following algorithm can be used to implement ICE g-computation:
1. Fit a regression model for \(Y_{i,\tau}\) conditional on \(\bar{A}_{i,\tau-1}\) and \(\bar{L}_{i,\tau-1}\) for all observations where \(C_{i,\tau}=0\).
2. Generate predicted values of the outcome, denoted by \(\tilde{Y}_{i,\tau}^{*}\), for the plan of interest, \(\bar{a}_{i,\tau-1}^{*}\), and the observed \(\bar{L}_{i,\tau-1}\) for all units uncensored at \(\tau-1\) (i.e., \(C_{i,\tau-1}=0\)).
3. Fit a regression model for \(\tilde{Y}_{i,\tau}^{*}\) conditional on \(\bar{A}_{i,\tau-2}\) and \(\bar{L}_{i,\tau-2}\) with all observations where \(C_{i,\tau-1}=0\)
\begin{table}
\begin{tabular}{l c c} \hline \hline Assumption name & Assumption expression & Condition \\ \hline Causal Consistency & \(Y_{i,k}=Y_{i,k}(\bar{a}_{k-1}^{*})\) & if \(\bar{a}_{k-1}^{*}=\bar{A}_{i,k-1}\) \\ & \(L_{i,k}=L_{i,k}(\bar{a}_{k-1}^{*})\) & \\ Treatment exchangeability & \(Y_{k}(\bar{a}_{k-1}^{*})\amalg\bar{A}_{k-1}|\bar{L}_{k-1},\bar{A}_{k-2}=\bar{a}_{k-2}^{*}\) & \(\text{for }\bar{a}_{k-1}^{*}\) \\ Treatment positivity & \(f(\bar{a}_{k}^{*}|\bar{a}_{k-1}^{*},\bar{l}_{k-1})>0\) & \(\text{for }\bar{a}_{k-1}^{*},\bar{l}_{k-1}\) \\ & & where \(f(\bar{a}_{k-1}^{*},\bar{l}_{k-1})>0\) \\ Censoring exchangeability & \(Y_{k}\amalg C_{k}|\bar{A}_{k-1},\bar{L}_{k-1},C_{k-1}=0\) & \(\text{for }C_{k}=0\) \\ Censoring positivity & \(\text{Pr}(C_{k}=0|\bar{a}_{k-1}^{*},\bar{l}_{k-1},C_{k-1}=0)>0\) & \(\text{for }\bar{a}_{k-1}^{*},\bar{l}_{k-1}\) \\ & & where \(f(\bar{a}_{k-1}^{*},\bar{l}_{k-1},C_{k-1}=0)>0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sufficient identification assumptions for time-varying treatments with informative censoring
4. Generate predicted values of the outcome, \(\tilde{Y}^{*}_{i,\tau-1}\), under \(\bar{a}^{*}_{i,\tau-2}\), and the observed \(\bar{L}_{i,\tau-2}\) for all units uncensored at \(\tau-2\) (i.e., \(C_{i,\tau-2}=0\)).
5. Repeat steps 3 and 4 for \(\tilde{Y}^{*}_{i,j}\) where \(j\in\{\tau-1,\tau-2,...,1\}\).
6. Take the arithmetic mean of \(\tilde{Y}^{*}_{i,1}\) across all \(n\) observations.
Step 6 provides the point estimate of \(\mu_{\tau}\), e.g., \(\hat{\mu}_{\tau}=n^{-1}\sum_{i=1}^{n}\tilde{Y}^{*}_{i,1}\). For causal contrasts, the preceding process is repeated for the alternative treatment plan and then the pair of point estimates are contrasted, e.g., \(\hat{\mu}^{1}_{\tau}-\hat{\mu}^{0}_{\tau}\), where the superscript indicates the designated treatment plan. For time-to-event data, the ICE g-computation algorithm is slightly modified (see Appendix 1).
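A minimal sketch of steps 1-6 for \(\tau=2\) is given below; it is illustrative only, with hypothetical column names (Y2, A0, A1, L0, L1, C1, C2), a single covariate per time, and the fractional pseudo-outcomes fit with a binomial GLM as a stand-in for fractional logistic regression.

```python
# Illustrative unstratified ICE g-computation for tau = 2 with one covariate
# per time and an always-treat plan. Column names are hypothetical;
# C_k = 0 indicates still under follow-up at time k.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def ice_gcomp(df, plan=(1, 1)):
    # Step 1: logistic model for Y_2 given (A0, A1, L0, L1) among the uncensored
    d2 = df.loc[df["C2"] == 0]
    X2 = sm.add_constant(d2[["A0", "A1", "L0", "L1"]])
    m2 = sm.GLM(d2["Y2"], X2, family=sm.families.Binomial()).fit()

    # Step 2: predict Y_2 under the plan with the observed covariate history,
    # for everyone uncensored at time 1
    d1 = df.loc[df["C1"] == 0].copy()
    X2_star = sm.add_constant(d1[["A0", "A1", "L0", "L1"]])
    X2_star["A0"], X2_star["A1"] = plan
    d1["Ytilde2"] = m2.predict(X2_star)

    # Steps 3-4: fractional model for the pseudo-outcome on (A0, L0), then
    # predict under the first element of the plan for all n observations
    X1 = sm.add_constant(d1[["A0", "L0"]])
    m1 = sm.GLM(d1["Ytilde2"], X1, family=sm.families.Binomial()).fit()
    X1_star = sm.add_constant(df[["A0", "L0"]])
    X1_star["A0"] = plan[0]
    ytilde1 = m1.predict(X1_star)

    # Step 6: the mean of the final pseudo-outcome estimates mu_2 under the plan
    return float(np.mean(ytilde1))
```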
The previous algorithm describes an unstratified version of ICE g-computation, meaning that the nuisance outcome models are fit to observations regardless of their observed treatment histories, \(\bar{a}_{i,k-1}\). Therefore, the functional form for treatment history in each iterative regression model must be correctly specified. To avoid these parametric modeling assumptions for treatment history, the data used to estimate the outcome models in steps 1 and 3 can instead be subset to those who followed the treatment plan \(\bar{a}^{*}_{i,k-1}\) up to that time [8]. We refer to this latter variation as stratified ICE g-computation. By estimating the models among only those following the plan, the stratified ICE g-computation estimator avoids any parametric constraints for the functional form of treatment history on the outcome. However, this comes at the cost of less data being available to fit the models.
For consistent estimation of the variance of \(\hat{\mu}_{\tau}\), the nonparametric bootstrap has been suggested [7, 8]. While the nonparametric bootstrap provides a consistent variance estimator, it can be computationally demanding, as it requires re-estimating the series of regression models with resampled data. Further, the ICE procedure does not provide estimates of \(\mu_{\tau-1}\) or other time-points as by-products. Instead, the whole process, and thus also the bootstrap, must be repeated for each time point of interest. Therefore, we consider an alternative to the bootstrap for estimating the variance of \(\hat{\mu}_{\tau}\).
### ICE g-computation as an M-estimator
To avoid using the bootstrap to estimate the variance, we express the ICE g-computation estimator as an M-estimator [12]. Let \(\boldsymbol{\theta}\) be a \(v\)-dimensional vector, where \(v\) is finite. An M-estimator, \(\hat{\boldsymbol{\theta}}\), is the solution to the estimating equation
\[\sum_{i=1}^{n}\psi(O_{i};\hat{\boldsymbol{\theta}})=\boldsymbol{0}\]
where \(\psi(.)\) is an estimating function [12, 24]. The asymptotic variance of \(\hat{\boldsymbol{\theta}}\) can be consistently estimated via the empirical sandwich variance estimator,
\[\mathbb{V}_{n}(\hat{\boldsymbol{\theta}})=\mathbb{B}_{n}(\hat{\boldsymbol{ \theta}})^{-1}\mathbb{F}_{n}(\hat{\boldsymbol{\theta}})\left[\mathbb{B}_{n}( \hat{\boldsymbol{\theta}})^{-1}\right]^{T}\]
where the 'bread' is
\[\mathbb{B}_{n}(\hat{\boldsymbol{\theta}})=\frac{1}{n}\sum_{i=1}^{n}\left\{- \psi^{\prime}(O_{i};\hat{\boldsymbol{\theta}})\right\}\]
with \(\psi^{\prime}(O_{i};\hat{\boldsymbol{\theta}})\) denoting the Jacobian (i.e., all first-order partial derivatives) of the estimating functions and the 'meat' is
\[\mathbb{F}_{n}(\hat{\boldsymbol{\theta}})=\frac{1}{n}\sum_{i=1}^{n}\left\{ \psi(O_{i};\hat{\boldsymbol{\theta}})\psi(O_{i};\hat{\boldsymbol{\theta}})^{T}\right\}\]
The standard error for \(\hat{\boldsymbol{\theta}}\) can then be estimated by \(\left\{\text{diag}\left[\mathbb{V}(\hat{\boldsymbol{\theta}})\right]/n\right\} ^{0.5}\), where \(\text{diag}(.)\) denotes the major diagonal of the covariance matrix, and then be used to construct Wald-type confidence intervals (CI) [12, 24].
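The computation can be written generically. The sketch below is illustrative only (it uses simple forward differences rather than the automated routines in the software cited later); it takes an estimating function returning an \(n\times v\) array of per-observation contributions and returns the point estimates with their estimated covariance matrix.

```python
# Generic M-estimation sketch: solve the estimating equations and form the
# empirical sandwich variance with numerical derivatives. `psi` must return
# an n-by-v array of per-observation estimating-function contributions.
import numpy as np
from scipy import optimize

def m_estimate(psi, theta_init, eps=1e-6):
    theta_init = np.asarray(theta_init, dtype=float)
    n = psi(theta_init).shape[0]

    def total(theta):
        return psi(np.asarray(theta)).sum(axis=0)

    theta_hat = optimize.root(total, theta_init, method="lm").x

    # Bread: average of minus the Jacobian of the estimating functions,
    # approximated here with forward differences
    f0 = total(theta_hat)
    jac = np.zeros((len(f0), len(theta_hat)))
    for j in range(len(theta_hat)):
        step = theta_hat.copy()
        step[j] += eps
        jac[:, j] = (total(step) - f0) / eps
    bread = -jac / n

    # Meat: average outer product of the per-observation contributions
    contrib = psi(theta_hat)
    meat = contrib.T @ contrib / n

    bread_inv = np.linalg.inv(bread)
    covariance = bread_inv @ meat @ bread_inv.T / n  # covariance of theta_hat
    return theta_hat, covariance
```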
For unstratified ICE g-computation with a repeatedly measured binary outcome modeled using logistic regression, the corresponding estimating equations are
\[\sum_{i=1}^{n}\psi_{U}(O_{i};\hat{\mathbf{\theta}})=\sum_{i=1}^{n}\left[ \begin{array}{c}I(C_{i,\tau}=0)\left\{Y_{i,\tau}-\mathrm{expit}(X_{i,\tau-1}^{T }\hat{\beta}_{\tau-1})\right\}X_{i,\tau-1}\\ I(C_{i,\tau-1}=0)\left\{\tilde{Y}_{i,\tau}^{*}-\mathrm{expit}(X_{i,\tau-2}^{T} \hat{\beta}_{\tau-2})\right\}X_{i,\tau-2}\\ \vdots\\ I(C_{i,1}=0)\left\{\tilde{Y}_{i,2}^{*}-\mathrm{expit}(X_{i,0}^{T}\hat{\beta}_ {0})\right\}X_{i,0}\\ \tilde{Y}_{i,1}^{*}-\hat{\mu}_{\tau}\end{array}\right]=\mathbf{0} \tag{3}\]
where \(\hat{\mathbf{\theta}}=(\hat{\beta}_{\tau-1},\hat{\beta}_{\tau-2},...,\hat{\beta}_{0},\hat{\mu}_{\tau})\), \(\mathrm{expit}(.)\) is the inverse logit function, \(X_{i,k}\) is the \(i^{\mathrm{th}}\) row of a design matrix composed of user-specified functions of \(\bar{A}_{i,k}\) and \(\bar{L}_{i,k}\), \(\tilde{Y}_{i,k}^{*}=\mathrm{expit}(X_{i,k-1}^{*^{T}}\hat{\beta}_{k-1})\), \(X_{i,k}^{*}\) is the \(i^{\mathrm{th}}\) row of the design matrix with \(\bar{a}_{i,k}^{*}\) replacing \(\bar{A}_{i,k}\), and \(\hat{\mu}_{\tau}\) is the estimated mean at \(\tau\) under the specified plan. The first estimating function is the score function of a logistic regression model for the observed outcome at time \(\tau\). The second estimating function is the score of a fractional logistic regression model where the dependent variable is the predicted outcome at \(\tau\) under the plan [25]. The subsequent estimating functions are recursive fractional logistic regression models backwards through time until baseline. The final estimating function is for the mean of the outcome at \(\tau\) under the plan.
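As a hedged sketch of how Equation 3 might be set up for \(\tau=2\) with one binary covariate per time: the data below are simulated placeholders, and the `MEstimator` argument and attribute names reflect my reading of the delicatessen package cited below and may differ across versions.

```python
# Sketch of Equation 3 for tau = 2 under an always-treat plan, solved with a
# general-purpose M-estimation routine. Data are simulated placeholders;
# delicatessen's MEstimator expects psi(theta) to return a (v x n) array.
import numpy as np
from delicatessen import MEstimator

def expit(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
n = 2000
l0 = rng.binomial(1, 0.5, n)
a0 = rng.binomial(1, 0.3 + 0.2 * l0, n)
c1 = rng.binomial(1, 0.1, n)                     # 1 = censored by time 1
l1 = rng.binomial(1, 0.3 + 0.3 * a0, n) * (1 - c1)
a1 = rng.binomial(1, 0.3 + 0.2 * l1, n) * (1 - c1)
c2 = 1 - (1 - c1) * rng.binomial(1, 0.9, n)      # 1 = censored by time 2
y2 = rng.binomial(1, expit(-1 + 0.5*a0 + 0.5*a1 - 0.5*l1), n) * (1 - c2)

X1 = np.c_[np.ones(n), a0, a1, l0, l1]           # design matrix for the Y_2 model
X1_star = X1.copy(); X1_star[:, 1:3] = 1.0       # plan a* = (1, 1)
X0 = np.c_[np.ones(n), a0, l0]                   # design matrix for the iterated model
X0_star = X0.copy(); X0_star[:, 1] = 1.0

def psi(theta):
    beta1, beta0, mu = theta[0:5], theta[5:8], theta[8]
    # Score of the logistic model for Y_2 among those uncensored at time 2
    ef1 = ((1 - c2) * (y2 - expit(X1 @ beta1)))[:, None] * X1
    # Pseudo-outcome under the plan and score of the fractional logistic model
    ytilde2 = expit(X1_star @ beta1)
    ef0 = ((1 - c1) * (ytilde2 - expit(X0 @ beta0)))[:, None] * X0
    # Mean of the final pseudo-outcome is the target parameter
    ef_mu = expit(X0_star @ beta0) - mu
    return np.vstack([ef1.T, ef0.T, ef_mu[None, :]])

estr = MEstimator(psi, init=[0.0]*8 + [0.5])
estr.estimate()
mu_hat = estr.theta[-1]
# Per my reading, .variance holds the sandwich covariance of theta-hat
se_mu = np.sqrt(np.diag(estr.variance))[-1]
print(round(mu_hat, 3), round(se_mu, 3))
```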
For stratified ICE g-computation with a repeatedly measured binary outcome modeled using logistic regression, the estimating equations are modified to
\[\sum_{i=1}^{n}\psi_{S}(O_{i};\hat{\mathbf{\theta}})=\sum_{i=1}^{n}\left[ \begin{array}{c}I(C_{i,\tau}=0,\bar{A}_{i,\tau-1}=\bar{a}_{i,\tau-1}^{*}) \left\{Y_{i,\tau}-\mathrm{expit}(X_{i,\tau-1}^{T}\tilde{\beta}_{\tau-1})\right\} X_{i,\tau-1}\\ I(C_{i,\tau-1}=0,\bar{A}_{i,\tau-2}=\bar{a}_{i,\tau-2}^{*})\left\{\tilde{Y}_{i, \tau}^{*}-\mathrm{expit}(X_{i,\tau-2}^{T}\hat{\beta}_{\tau-2})\right\}X_{i, \tau-2}\\ \vdots\\ I(C_{i,1}=0,\bar{A}_{i,0}=\bar{a}_{i,0}^{*})\left\{\tilde{Y}_{i,2}^{*}- \mathrm{expit}(X_{i,0}^{T}\hat{\beta}_{0})\right\}X_{i,0}\\ \tilde{Y}_{i,1}^{*}-\hat{\mu}_{\tau}\end{array}\right]=\mathbf{0} \tag{4}\]
where \(\hat{\mathbf{\theta}}=(\tilde{\beta}_{\tau-1},\tilde{\beta}_{\tau-2},...,\tilde{\beta}_{0},\tilde{\mu})\). These estimating equations are nearly identical to Equation 3, except that the contributions are restricted to those who followed the treatment plan up to the corresponding time.
Note that the score functions of the logistic models in Equations 3 and 4 can be replaced with the score functions of other generalized linear models. Inference for \(\hat{\mu}_{\tau}\) and \(\tilde{\mu}_{\tau}\) can then be made using the empirical sandwich variance estimator [8], as described above. Importantly, software is available which automates computation of the point and variance estimates for a given set of estimating functions [13, 14]. Estimating equations for ICE g-computation with time-to-event data, as opposed to repeated measures, are provided in Appendix 1 and the following reference [9].
## 4 Simulation study
### Simulation setup
To explore the finite-sample performance of the empirical sandwich variance estimator against theoretical expectations, a simulation study was conducted. The interest parameters in the simulations were \(\mathbb{E}[Y_{i,3}(1,1,1)]\) and \(\mathbb{E}[Y_{i,3}(0,0,0)]\), which correspond to the mean had everyone been treated at all three time points and the mean had everyone not been treated at all three time points, respectively. Here, \(L_{i,k}\) consisted of a binary variable that depended on previous values and previous treatments. Loss to follow-up was informative based on treatment. Full details on the data generating mechanism are provided in Appendix 2.
For estimation of the interest parameters, unstratified and stratified ICE g-computation M-estimators were assessed (corresponding estimating equations provided in Appendix 2). M-estimators were evaluated at five difference sample sizes, \(n\in\{250,500,1000,2000,5000\}\), with 5000 iterations each. The following metrics were evaluated: bias, empirical standard error (ESE), average standard error (ASE), standard error ratio (SER), and 95% CI coverage [26]. Bias was defined as the mean of the estimated risk minus the true risk
under the plan, with the true risk determined by simulating 10 million observations under the plan. ESE was estimated by the standard deviation of the point estimates of the simulation. ASE was estimated by the mean of the estimated standard errors. SER was defined as the ASE divided by the ESE. CI coverage was estimated by the proportion of 95% CI that contained the true risk under the plan. Whether the M-estimator procedure failed to find the root within 10000 iterations was also tracked. Failures to converge were ignored for computation of the other metrics.
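For reference, these metrics can be computed from the collected simulation output with a short helper like the one below (illustrative only; 1.96 is used for the Wald-type 95% intervals).

```python
# Helper computing the simulation metrics from arrays of point estimates and
# estimated standard errors; `truth` is the true risk under the plan.
import numpy as np

def summarize(estimates, std_errors, truth):
    estimates = np.asarray(estimates, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    bias = estimates.mean() - truth
    ese = estimates.std(ddof=1)                  # empirical standard error
    ase = std_errors.mean()                      # average standard error
    lower = estimates - 1.96 * std_errors
    upper = estimates + 1.96 * std_errors
    coverage = np.mean((lower <= truth) & (truth <= upper))
    return {"Bias": bias, "ESE": ese, "ASE": ase,
            "SER": ase / ese, "Coverage": coverage}
```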
Simulations were conducted using Python 3.9.4 (Python Software Foundation, Beaverton, OR, USA) with the following packages: NumPy[27], SciPy[28], delicatessen[13], and pandas[29]. Code to replicate the simulation results is provided at github.com/pzivich/publications-code.
### Results
Simulation results are presented in Tables 2-3. As seen across sample sizes, both unstratified and stratified ICE g-computation M-estimators were approximately unbiased under correct model specification. In general, unstratified ICE g-computation had a smaller ESE, and thus was more precise, relative to stratified ICE g-computation. However, the difference between the ESE of the stratified and unstratified approaches diminished as sample sizes increased. For small sample sizes, like \(n=250\), stratified ICE g-computation occasionally failed to converge (up to 6%). Both the differences in precision and the failures to converge are likely due to data becoming sparse (i.e., few observations followed the plan of interest and thus the nuisance models either could not be estimated or had large variances). These issues were more pronounced for the always-treat parameter as treatment was less prevalent at all time points in the data generating mechanism.
Performance of the empirical sandwich variance estimator aligned with theoretical expectations. Across the varying sample sizes, the SER was near 1 and CI coverage was near 0.95 for both estimators. However, the sandwich variance estimator underperformed for the smallest sample size (\(n=250\)), most likely attributable to the data sparsity noted previously.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & Bias & ESE & ASE & SER & Coverage & Faileda \\ \hline \(n=250\) & & & & & & \\ Unstratified & -0.003 & 0.061 & 0.061 & 1.01 & 0.94 & 0 \\ Stratified & 0.005 & 0.081 & 0.074 & 0.91 & 0.91 & 288 \\ \(n=500\) & & & & & & \\ Unstratified & -0.002 & 0.043 & 0.043 & 1.00 & 0.94 & 0 \\ Stratified & 0.003 & 0.056 & 0.054 & 0.97 & 0.94 & 55 \\ \(n=1000\) & & & & & & \\ Unstratified & -0.004 & 0.031 & 0.031 & 0.99 & 0.95 & 0 \\ Stratified & 0.001 & 0.039 & 0.038 & 0.99 & 0.95 & 1 \\ \(n=2000\) & & & & & & \\ Unstratified & -0.004 & 0.021 & 0.022 & 1.01 & 0.95 & 0 \\ Stratified & 0.001 & 0.027 & 0.027 & 1.00 & 0.95 & 0 \\ \(n=5000\) & & & & & & \\ Unstratified & -0.004 & 0.014 & 0.014 & 0.99 & 0.94 & 0 \\ Stratified & 0.001 & 0.017 & 0.017 & 0.99 & 0.95 & 0 \\ \hline \hline \end{tabular} ESE: empirical standard error, ASE: average standard error, SER: standard error ratio (ASE/ESE), Coverage: 95% confidence interval (CI) coverage. The parameter of interest in the simulation was \(\mathbb{E}[Y_{3}(1,1,1)]\).
Bias was defined as the mean of the estimated risk minus the true causal risk. ESE was defined as the standard deviation of the simulation estimates. ASE was the mean of the estimated standard errors across all simulations. SER was the ASE divided by the ESE. 95% CI coverage was defined as the proportion of 95% CIs containing the true causal risk. Failed indicates whether the root-finding procedure failed to converge in 10,000 iterations. Failed iterations were ignored for calculation of other metrics. Results are for 5000 iterations.
Footnote a: Failed convergences were ignored when calculating the evaluation metrics.
\end{table}
Table 2: ICE g-formula always-treat simulation results
## 5 Example
As an illustration of ICE g-computation, we estimate the prevalence of hypertension had all cigarette smoking been prevented among adolescents enrolled in school for the 1994-1995 academic year. While preventing cigarette smoking is not a 'treatment', the framework of the previous sections also applies to exposures, interventions, and actions more generally. Data came from the Add Health public-use in-home questionnaire (\(n=6504\)), which removes some variables and constitutes a subsample of the full Add Health data to limit potential for deductive disclosures. Add Health is a school-based, nationally representative study of adolescents in grades 7-12 that began in 1994-1995. After wave I (1994-1995), additional waves have been conducted in 1996 (wave II), 2001-2002 (wave III), and 2008-2009 (wave IV). Details on the design of Add Health are available in Harris et al. (2019) [30]. To maintain similar lengths of time between follow-up visits (e.g., approximately seven years), only data from waves I, III, and IV were used. The following exclusion criteria were applied: no self-reported heart problem resulting in difficulty using your hands, arms, legs, or feet at wave I (underlying health conditions potentially related to elevated blood pressure that were not commonly observed); between 13-18 years old at wave I (to prevent data sparsity by age); in a high school with grade levels at wave I (for definition of education at wave I); and best described race was not 'other' at wave I ('other' was not an option for wave III).
The interest parameter for our analysis was the prevalence of hypertension at wave IV had none of the defined population been current cigarette smokers at waves I and III, \(\mu_{2}^{0}=\mathbb{E}[Y_{i,2}(0,0)]\). Additionally, we assessed the causal effect of this smoking ban relative to the natural course (i.e., observed patterns of smoking) [31], \(\mu_{2}^{d}=\mu_{2}^{0}-\mu_{2}^{n}\) where \(\mu_{2}^{n}=\mathbb{E}[Y_{i,2}]\). At wave IV, participants with type I (systolic between 140-159 or diastolic between 90-99) or type II (systolic 160+ or diastolic 100+) hypertension as measured by the interviewer were classified as having hypertension. For each visit, current cigarette smoking was defined as those who reported smoking cigarettes at least one day in the previous 30 days from the interview date. Those who reported zero days or never smoking were classified as not current cigarette smokers.
For identification, we assumed sequential exchangeability of treatment and censoring given the following time-varying and time-fixed covariates. Time-varying confounders included gender, race, ethnicity, height,
\begin{table}
\begin{tabular}{c c c c c c c} \hline & Bias & ESE & ASE & SER & Coverage & Failed \\ \hline \(n=250\) & & & & & & \\ Unstratified & -0.002 & 0.034 & 0.033 & 0.97 & 0.93 & 0 \\ Stratified & 0.000 & 0.047 & 0.046 & 0.98 & 0.92 & 13 \\ \(n=500\) & & & & & & \\ Unstratified & -0.001 & 0.023 & 0.023 & 1.00 & 0.94 & 0 \\ Stratified & 0.001 & 0.033 & 0.033 & 0.99 & 0.93 & 0 \\ \(n=1000\) & & & & & & \\ Unstratified & -0.001 & 0.017 & 0.017 & 1.00 & 0.94 & 0 \\ Stratified & 0.000 & 0.023 & 0.023 & 1.00 & 0.95 & 0 \\ \(n=2000\) & & & & & & \\ Unstratified & -0.001 & 0.012 & 0.012 & 1.00 & 0.94 & 0 \\ Stratified & 0.000 & 0.017 & 0.016 & 0.99 & 0.94 & 0 \\ \(n=5000\) & & & & & & \\ Unstratified & -0.001 & 0.007 & 0.007 & 1.01 & 0.95 & 0 \\ Stratified & 0.000 & 0.010 & 0.010 & 1.00 & 0.95 & 0 \\ \hline \end{tabular} ESE: empirical standard error, ASE: average standard error, SER: standard error ratio (ASE/ESE), Coverage: 95% confidence interval (CI) coverage. The parameter of interest in the simulation was \(\mathbb{E}[Y_{3}(0,0,0)]\).
Bias was defined as the mean of the estimated risk minus the true causal risk. ESE was defined as the standard deviation of the simulation estimates. ASE was the mean of the estimated standard errors across all simulations. SER was the ASE divided by the ESE. 95% CI coverage was defined as the proportion of 95% CIs containing the true causal risk. Failed indicates whether the root-finding procedure failed to converge in 10,000 iterations. Failed iterations were ignored for calculation of other metrics. Results are for 5000 iterations.
\end{table}
Table 3: ICE g-formula never-treat simulation results
weight, exercise, self-rated health, alcohol use, prior hypertension, and health insurance coverage. While gender, race, and ethnicity are not traditionally considered to be time-varying covariates, there has been increasing recognition that these characteristics can vary over time [32, 33, 34]. Time-fixed confounders include age and ever trying a cigarette at wave I. Further details on variable definitions are provided in Appendix 3. To construct the analytic data set, those with any missing covariates at wave I were excluded. To ensure missing data was monotonic across time, a participant's covariates were all set to missing if any of their covariates at previous waves were missing (i.e., those with missing covariates were censored).
For estimation of the mean under a smoking ban, the unstratified ICE g-computation estimator, \(\hat{\mu}_{2}^{0}\), was used. Rather than calculating the mean under the natural course directly, we instead used unstratified ICE g-computation, where \(\bar{a}_{i,2}^{*}=\bar{a}_{i,2}\). This approach has the advantage of accounting for informative loss to follow-up by \(\bar{A}_{i,k}\) and \(\bar{L}_{i,k}\) for the natural course mean. The ICE g-computation estimators for each plan were stacked together. Finally, an estimating function for the difference between the proportion under the smoking ban and natural course was stacked as well. The full set of stacked estimating equations for \(\hat{\mu}_{2}^{d}\) was
\[\sum_{i=1}^{n}\begin{bmatrix}I(C_{i,2}=0)\left\{Y_{i,2}-\text{expit}(X_{i,1}^{T}\hat{\beta}_{1})\right\}X_{i,1}\\ I(C_{i,1}=0)\left\{\tilde{Y}_{i,2}^{*}-\text{expit}(X_{i,0}^{T}\hat{\beta}_{0})\right\}X_{i,0}\\ \tilde{Y}_{i,1}^{*}-\hat{\mu}_{2}^{0}\\ I(C_{i,2}=0)\left\{Y_{i,2}-\text{expit}(X_{i,1}^{T}\hat{\beta}_{1})\right\}X_{i,1}\\ I(C_{i,1}=0)\left\{\tilde{Y}_{i,2}^{*^{\prime}}-\text{expit}(X_{i,0}^{T}\hat{\beta}_{0})\right\}X_{i,0}\\ \tilde{Y}_{i,1}^{*^{\prime}}-\hat{\mu}_{2}^{n}\\ (\hat{\mu}_{2}^{0}-\hat{\mu}_{2}^{n})-\hat{\mu}_{2}^{d}\end{bmatrix}=\mathbf{0}\]
where \(\tilde{Y}_{i,k}^{*}\) is the predicted hypertension probability under the cigarette smoking ban and \(\tilde{Y}_{i,k}^{*^{\prime}}\) is the predicted hypertension probability under the observed value of cigarette smoking at time \(k\). The first three estimating functions are for the smoking ban ICE g-computation, the next three are for the natural course ICE g-computation, and the last estimating function is for the difference between the smoking ban and the natural course.
The following specifications were used for outcome models. Height and weight were rescaled to be standard normal, and then modeled using restricted cubic splines with knots located at the 5\({}^{\text{th}}\), 33\({}^{\text{rd}}\), 67\({}^{\text{th}}\), and 95\({}^{\text{th}}\) percentiles. Age, exercise, alcohol, self-rated health, and education were modeled as disjoint indicator terms. The outcome model for hypertension at wave IV included baseline and time-varying variables from wave III only, except for current cigarette smoking, which included both wave I and wave III smoking status. No interaction terms were included in models besides an interaction term between wave I and wave III smoking status for hypertension at wave IV. The second model of each ICE g-computation only included variables from wave I.
Analyses were conducted with Python 3.9.4 and replicated in R 4.2.0 (Vienna, Austria) with numDeriv and rootSolve[35, 36]. Data are freely available from the University of North Carolina at Chapel Hill Dataverse hosted by the Odum Institute [37, 38, 39]. Code to preprocess the data set and replicate the analysis is provided at github.com/pzivich/publications-code. To compare runtimes, unstratified ICE g-computation was also implemented using a generalized linear model, with the variance estimated using a nonparametric bootstrap with 500 resamples. Bootstrap iterations were run in sequence and up to seven in parallel. Runtime results were reported for Python only.
### Results
After application of the exclusion criteria, 5657 (87%) observations remained in the analytic data set. Descriptive statistics for the analytic data set are provided in Tables 4-5. Between waves I and III, 1694 (30%) observations were censored. Between waves III and IV, an additional 594 (15%) observations were censored. Had all current smoking been prevented at waves I and III, the estimated prevalence of hypertension at wave IV would have been 0.175 (95% CI: 0.157, 0.193), which is 1.15 percentage points lower (95% CI: -2.5 to 0.2 percentage points) than under the natural course. The sandwich variance estimator provided a similar CI to the bootstrap but was substantially faster (Table 6). This result remained true even when up to seven bootstrap iterations were run in parallel.
\begin{table}
\begin{tabular}{l c} \hline & Wave I (\(n=5657\)) \\ \hline Current cigarette smokera & 1462 (26\%) \\ Age & 16 [15, 17] \\ Ever tried smoking a cigarette & 3141 (56\%) \\ Female & 2779 (49\%) \\ Race & \\ White & 3911 (69\%) \\ Black & 1434 (25\%) \\ Native American & 90 (2\%) \\ Asian or Pacific Islander & 222 (4\%) \\ Hispanic & 358 (6\%) \\ Current grade level & \\ 7th & 851 (15\%) \\
8th & 882 (16\%) \\
9th & 997 (18\%) \\
10th & 1027 (18\%) \\
11th & 1019 (18\%) \\
12th & 881 (16\%) \\ Height (inches) & 66 [63, 69] \\ Weight (pounds) & 135 [118, 160] \\ Alcohol use in prior 12 months & \\ Never & 3048 (54\%) \\
1-2 days total & 969 (17\%) \\
1-3 times a month & 1106 (20\%) \\
1-2 times a week & 343 (6\%) \\
3 or more times a week & 191 (3\%) \\ Exercise over previous seven days & \\ None & 917 (16\%) \\
1-2 times & 1797 (32\%) \\
3-4 times & 1400 (25\%) \\
5 or more times & 1543 (27\%) \\ Self-rated health & \\ Excellent & 1637 (29\%) \\ Very good & 2299 (41\%) \\ Good & 1369 (24\%) \\ Fair & 332 (6\%) \\ Poor & 20 (0\%) \\ \hline \end{tabular}
\end{table}
Table 4: Descriptive Statistics for Add Health Wave I
\begin{table}
\begin{tabular}{l c} \hline \hline & Wave III (\(n=3963\)) \\ \hline Current cigarette smoker & 1322 (33\%) \\ Female & 1882 (47\%) \\ Race & \\ White & 2764 (70\%) \\ Black & 970 (24\%) \\ Native American & 69 (2\%) \\ Asian or Pacific Islander & 160 (4\%) \\ Hispanic & 232 (6\%) \\ Highest grade completed & \\ Less than high school & 451 (11\%) \\ High school & 1220 (31\%) \\ At least some college & 2207 (56\%) \\ Pursuit of graduate degree & 85 (2\%) \\ Height (inches) & 67 [64, 70] \\ Weight (pounds) & 163 [138, 194] \\ Alcohol use in prior 12 months & \\ Never & 1048 (26\%) \\
1-2 days total & 444 (11\%) \\
1-3 times a month & 1293 (33\%) \\
1-2 times a week & 801 (20\%) \\
3 or more times a week & 377 (10\%) \\ Exercise over previous seven days & \\ None & 782 (20\%) \\
1-2 times & 664 (17\%) \\
3-4 times & 597 (15\%) \\
5 or more times & 1920 (48\%) \\ Self-rated health & \\ Excellent & 1317 (33\%) \\ Very good & 1662 (42\%) \\ Good & 812 (20\%) \\ Fair & 156 (4\%) \\ Poor & 16 (0\%) \\ No or unknown health insurance status & 890 (22\%) \\ Ever diagnosed with HPB or HTN & 231 (6\%) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Descriptive Statistics for Add Health Wave III
We note that this analysis should only be viewed as an illustration of ICE g-computation, as the identification assumptions are unlikely to be reasonably met in this example. Specifically, other uses of smoking tobacco were ignored. Socio-economic status and diet were not included in the adjustment set, a likely violation of treatment exchangeability. Measurement error of self-reported covariates is probable, particularly for self-reported cigarette use. The analysis also ignored the Add Health sampling weights, so inference to the stated Add Health target population is not appropriate. Finally, follow-up visits were every seven years. As described elsewhere [40], how data is discretized or coarsened over time can result in a loss of information regarding time-varying covariates, which can lead to bias. As such, the follow-up design of Add Health may produce bias in the estimate for the intervention and outcome examined in this analysis.
## 6 Conclusions
Here, we framed the ICE g-computation estimator as an M-estimator to reduce the computational burden of variance estimation with the nonparametric bootstrap. Performance of the empirical sandwich variance estimator in the simulation study aligned with expectations and provided notable reductions in runtimes in the applied example. As indicated in our simulations, stratified ICE g-computation may fail to converge or have poor performance with small sample sizes. In these cases, unstratified ICE g-computation may be preferred, under the additional assumption that the parametric constraints used in the models are deemed to be close approximations.
This paper focused on deterministic plans that did not depend on the natural course. However, ICE g-computation has been extended for plans that depend on the natural course or stochastic plans (i.e., plans where treatment is assigned probabilistically) [41]. Generalizations of the proposed M-estimators could also be developed for these extensions of ICE g-computation. Use of M-estimators is also not limited to g-computation. Inverse probability weighting estimators for longitudinal data can be expressed as estimating equations [9], thereby avoiding the conservative estimation of the variance via the 'robust' variance estimator in some settings [1, 42]. Multiply-robust estimators can also be expressed as estimating equations [6, 7, 9, 43, 44]. For some versions of these multiply-robust estimators, closed-form variance estimators based on the influence curve are available [43, 44]. For those without a simple or readily available closed-form expression, the empirical sandwich variance estimator remains an appealing option.
\begin{table}
\begin{tabular}{l c c} \hline \hline Prevent cigarette smoking & Risk (95\% CI) & Runtimea \\ Sandwich & 0.175 (0.157, 0.193) & 5.4 \\ Bootstrap in sequenceb & 0.175 (0.156, 0.194) & 45.7 \\ Bootstrap in parallelc & 0.175 (0.156, 0.194) & 29.4 \\ Effect of cigarette smoking ban & Ban effect (95\% CI) & Runtimea \\ Sandwich & -0.011 (-0.025, 0.002) & 14.3 \\ Bootstrap in sequenceb & -0.011 (-0.025, 0.002) & 89.6 \\ Bootstrap in parallelc & -0.011 (-0.025, 0.002) & 47.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results for the illustrative example of cigarette smoking on prevalent hypertension using data from Add Health
## Acknowledgments
Conflicts of Interest: None to declare. Financial Support: This work was supported in part by T32AI007001 (PNZ), R01AI157758 (SRC, JKE, BES). Data and Code: Data used for the illustrative example are publicly available from the University of North Carolina at Chapel Hill Dataverse hosted by the Odum Institute. Code to replicate the illustrative example and the simulation experiment is available at
[https://github.com/pzivich/publications-code](https://github.com/pzivich/publications-code).
|
2305.09977 | DRackSim: Simulator for Rack-scale Memory Disaggregation | Memory disaggregation has emerged as an alternative to traditional server
architecture in data centers. This paper introduces DRackSim, a simulation
infrastructure to model rack-scale hardware disaggregated memory. DRackSim
models multiple compute nodes, memory pools, and a rack-scale interconnect
similar to GenZ. An application-level simulation approach simulates an x86
out-of-order multi-core processor with a multi-level cache hierarchy at compute
nodes. A queue-based simulation is used to model a remote memory controller and
rack-level interconnect, which allows both cache-based and page-based access to
remote memory. DRackSim models a central memory manager to manage address space
at the memory pools. We integrate community-accepted DRAMSim2 to perform memory
simulation at local and remote memory using multiple DRAMSim2 instances. An
incremental approach is followed to validate the core and cache subsystem of
DRackSim with that of Gem5. We measure the performance of various HPC workloads
and show the performance impact for different nodes/pools configuration. | Amit Puri, John Jose, Tamarapalli Venkatesh, Vijaykrishnan Narayanan | 2023-05-17T06:17:06Z | http://arxiv.org/abs/2305.09977v2 | # DRackSim: Simulator for Rack-scale Memory Disaggregation
###### Abstract.
Memory disaggregation has emerged as an alternative to traditional server architecture in data centers. This paper introduces DRackSim, a simulation infrastructure to model rack-scale hardware disaggregated memory. DRackSim models multiple compute nodes, memory pools, and a rack-scale interconnect similar to GenZ. An application-level simulation approach simulates an x86 out-of-order multi-core processor with a multi-level cache hierarchy at compute nodes. A queue-based simulation is used to model a remote memory controller and rack-level interconnect, which allows both cache-based and page-based access to remote memory. DRackSim models a central memory manager to manage address space at the memory pools. We integrate community-accepted DRAMSim2 to perform memory simulation at local and remote memory using multiple DRAMSim2 instances. An incremental approach is followed to validate the core and cache subsystem of DRackSim with that of Gem5. We measure the performance of various HPC workloads and show the performance impact for different nodes/pools configuration.
Disaggregated systems, Remote memory, Data Centers
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
+
Footnote †: journal journal: Computer systems organization
+
Footnote †: journal: Computer systems organization
etc., for the remote memory address space. Lastly, a _rack-level interconnect_ ties the compute nodes and remote memory pools together and carries remote memory accesses as network packets. To match the latency of memory-semantic interconnects such as GenZ, it should model both cache-based and page-based access to remote memory.
In this paper, we introduce a simulation framework that models an environment similar to rack-scale memory disaggregation with the components mentioned above. DRackSim follows an application-level simulation approach that uses Intel's Pin platform (Hung et al., 2017) and introduces two simulation models: a trace-based and a cycle-level simulation model. The trace-based simulation uses a main-memory access trace with approximate timing information, produced by a pintool that simulates a multi-core cache hierarchy. Multiple traces are collected similarly, each representing one of the compute nodes in a rack. All the traces are processed in parallel for local and remote memory simulation, including an interconnect for remote memory access, representing a rack-scale model. Trace-based simulations usually lack modeling detail and restrict design-space exploration. Therefore, we also build a cycle-level simulation model that uses an instruction stream produced by instrumenting a workload with a pintool. Instruction execution is then simulated by a detailed x86-based out-of-order model at each compute node. The same approach is applied to model multiple compute nodes in a rack. Both simulation modes support single- to multi-core processor architectures and a multilevel cache hierarchy. DRackSim also models the system-level components necessary to explore the system research space for disaggregated memory. The compute node has a memory-management unit (MMU) for address translation and uses an address-space management unit, similar to an OS memory manager with 4-level page tables, for the allocation of memory pages in either local or remote memory. The interconnect is based on a queue simulation model and can be configured to meet the latency of the target interconnect hardware by mapping the correct network parameters. Finally, we integrate the open-source cycle-accurate memory simulator DRAMSim2 (Rams et al., 2017) to simulate DRAM locally at the compute nodes and at the remote memory pools. To maintain a rack-scale time ordering of global events, compute nodes, memory pools, and the interconnect are all synchronized with a global clock. As real hardware disaggregated memory systems are still in the prototype stage, we use an incremental approach to separately validate the different components of our simulation framework and perform rigorous testing for the reproducibility of results. Further, our simulator uses a multi-threaded approach, allowing it to perform fast and scalable simulations even with many multi-core nodes and memory pools. The main contributions of our work are as follows:
* We introduce DRackSim, an application-level simulation framework for rack-scale disaggregated memory systems that can model multiple compute nodes and memory pools with a global memory manager.
* We present two simulation modes with different levels of modeling details that can be used where appropriate.
* We model both the cache-based and page-based access to remote memory through an interconnect model whose latency is similar to GenZ.
* We perform incremental validation on the CPU core and cache subsystem with single and multi-threaded benchmarks. Finally, we compare the performance of large in-memory workloads on our rack-scale disaggregated memory simulator using various configurations that also show the impact of using remote memory and slowdown due to congestion and contention.
The rest of the paper is organized as follows. In the next section, we discuss the background and motivation behind our work. Section 3 discusses the design and operations of DRackSim. We discuss validation in Section 4 and use-case experiments in Section 5.
## 2. Background and Motivation
We use binary instrumentation to perform application-level simulation, avoid the complexities of simulating a full system, and focus more on disaggregation. Although dynamic instrumentation has some known limitations for modeling core subsystems, it has become a first choice of researchers due to its speed and ability to build scalable models (Bang et al., 2016; Chen et al., 2017).
**Dynamic Binary Instrumentation with Pin**: Intel Pin provides a framework for performing dynamic binary instrumentation of an executable on x86 platforms that can be used to analyze a workload under study. Pin provides its functionality through two primary routines named instrumentation and analysis. It allows binary instrumentation of an executable at different levels of granularity, such as instruction, basic block, routine, or complete application image. We use instruction-level instrumentation to produce traces of each executed instruction with its dependencies. Further, Pin also supports multi-threaded workload instrumentation and provides additional information about the instrumented instruction, such as instruction types, branch or not, branch target, etc. This information can be used to model lower-level architectural details for simulating a CPU core. We utilize the rich API provided by Pin to create pin tools that can analyze the workloads and perform out-of-order core simulations.
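For concreteness, the sketch below shows how such an instruction-level pintool might record memory operands. It follows the pattern of the memory-trace example shipped with Pin and requires the Pin kit to build; the output format, file name, and trace fields are illustrative rather than DRackSim's actual implementation.

```cpp
// Minimal pintool sketch that records the memory operands of executed instructions,
// in the spirit of the memory-trace example shipped with Pin.
#include <cstdio>
#include "pin.H"

static FILE* trace;

// Analysis routine: called before every instrumented memory operand.
static VOID RecordAccess(THREADID tid, VOID* ip, VOID* addr, UINT32 size, BOOL isWrite) {
    fprintf(trace, "%u %p %p %u %c\n", tid, ip, addr, size, isWrite ? 'W' : 'R');
}

// Instrumentation routine: insert a call for each memory operand of the instruction.
static VOID Instruction(INS ins, VOID*) {
    const UINT32 memOps = INS_MemoryOperandCount(ins);
    for (UINT32 op = 0; op < memOps; ++op) {
        const BOOL   isWrite = INS_MemoryOperandIsWritten(ins, op);
        const UINT32 size    = (UINT32)INS_MemoryOperandSize(ins, op);
        INS_InsertPredicatedCall(ins, IPOINT_BEFORE, (AFUNPTR)RecordAccess,
                                 IARG_THREAD_ID, IARG_INST_PTR,
                                 IARG_MEMORYOP_EA, op,
                                 IARG_UINT32, size,
                                 IARG_BOOL, isWrite,
                                 IARG_END);
    }
}

static VOID Fini(INT32, VOID*) { fclose(trace); }

int main(int argc, char* argv[]) {
    if (PIN_Init(argc, argv)) return 1;
    trace = fopen("memrefs.out", "w");
    INS_AddInstrumentFunction(Instruction, nullptr);
    PIN_AddFiniFunction(Fini, nullptr);
    PIN_StartProgram();  // never returns
    return 0;
}
```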
**DRAMSim2**: Accurate simulation of main memory is important for measuring the performance of the systems that focus on memory, such as in disaggregated memory systems. Therefore, we integrate a cycle-accurate memory system simulator DRAMSim2 (Rams et al., 2017) that can accurately model DRAM hardware and is accepted as a standard simulator by the research community. DRAMSim2 can accurately model DDR2/3 memory and provides a configurable programming interface. It also provides a simple _MemorySystem_ object interface for initializing a DRAM memory model, whereas _callback_ functions are triggered on completion of each memory request. To model DDR4 memory, we use hardware parameters of DDR4 (Rams et al., 2017) in the DRAMSim2 _device.ini_ file that initiates a memory instance for simulation.
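The following sketch illustrates how a node model can drive DRAMSim2, following the usage pattern of the example program bundled with the simulator. The ini file paths and class names here are placeholders, and exact header names and signatures may differ slightly between DRAMSim2 versions; treat this as illustrative rather than DRackSim's integration code.

```cpp
// Sketch of driving DRAMSim2 from a node model (assumed API, per DRAMSim2's bundled example).
#include <cstdint>
#include "DRAMSim.h"

class NodeMemory {
 public:
  void readComplete(unsigned id, uint64_t addr, uint64_t cycle)  { /* wake the waiting MSHR/LSQ entry */ }
  void writeComplete(unsigned id, uint64_t addr, uint64_t cycle) { /* retire the write-back */ }
};

int main() {
  NodeMemory node;
  // device.ini carries the DRAM timing parameters (DDR4 values in DRackSim's case),
  // system.ini the memory-controller configuration; both paths are placeholders.
  DRAMSim::MultiChannelMemorySystem* mem = DRAMSim::getMemorySystemInstance(
      "ini/DDR4_device.ini", "system.ini", ".", "node0", 16384 /* MB of memory */);

  auto* readCb = new DRAMSim::Callback<NodeMemory, void, unsigned, uint64_t, uint64_t>(
      &node, &NodeMemory::readComplete);
  auto* writeCb = new DRAMSim::Callback<NodeMemory, void, unsigned, uint64_t, uint64_t>(
      &node, &NodeMemory::writeComplete);
  mem->RegisterCallbacks(readCb, writeCb, nullptr);  // no power callback

  mem->addTransaction(false, 0x100000);          // enqueue a read (isWrite = false)
  for (int i = 0; i < 1000; ++i) mem->update();  // advance the DRAM clock; callbacks fire on completion
  return 0;
}
```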
**Motivation:** The first question that arises while building a new simulator is: why yet another simulator? Hardware memory disaggregation is a relatively new research area and an emerging memory architecture. Although software disaggregation and its real-world implementations (Bang et al., 2016; Chen et al., 2017) have existed for a while now, it is quite different from hardware disaggregation. The concept of remote memory pools (or memory blades) and their access over memory-semantic fabrics is new. It supports fine-grained cache-line
access to remote memory, as well as a page-swap mechanism of the kind also used in software-disaggregated memory systems. Some simulation environments for hardware memory disaggregation do exist, but they are limited to evaluation within a single node (Krishnan et al., 2017), and the hardware implementations are still in the prototype stage (Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019). Modifying open-source architectural simulators such as Gem5, MARSSx86, Sniper, etc., requires a huge effort to simulate disaggregated memory models, especially given their inability to model multiple compute nodes. Only by modeling disaggregated memory at rack scale is it possible to study both the higher- and lower-level factors that impact performance. With more compute nodes, memory access traffic will produce congestion on the network and contention in the queues at remote memory pools. Memory access latency is critical for application performance, and both network congestion and memory contention are significant for performance evaluation. To our knowledge, no other simulator fulfills these modeling requirements, which is the primary reason for building a new one.
## 3. DRackSim Design and Operations
The objective of DRackSim is to enable researchers to quickly explore new hardware structures and system-level designs on top of disaggregated memory systems and gain enough insight to translate them into real designs. Figure 2 gives an overview of the complete simulation process and shows the two major components of DRackSim: a front-end and a back-end. A Pin-based front-end performs the application analysis, whose output serves as input for the multi-node rack-scale simulation at the back-end. DRackSim supports two different modes of operation for the front end: a trace-based simulation model and a cycle-level detailed simulation model. The trace-based model depends on static main-memory access traces that are directly produced by our pintool and used for disaggregated memory simulation. In contrast, the cycle-level simulation consumes, on the fly, an instruction trace produced by the pintool to drive an x86-based out-of-order CPU simulation at each compute node. We first explain the front end of DRackSim, followed by a description of the back-end components.
### Trace-Based Model
The trace-based simulation allows fast simulation of disaggregated memory racks with multiple compute nodes and memory pools. The Pin package provides a tool named _Allcache_ that performs a functional simulation of a single-core cache hierarchy. We modify this tool to support a multi-core TLB and a 3-level cache hierarchy (private I/D-L1, L2, and shared L3) and also add support for multi-threaded workloads. The instrumentation is done at instruction-level granularity, and each thread is mapped to one of the cores based on its thread-id. Our pintool generates a reference whenever an instrumented instruction has a memory operand. A memory reference consists of the thread-id, the virtual memory address, an address space ID (when instrumenting multiple workloads simultaneously), its type (read/write/fetch), and the memory access size. The memory references are passed through the TLB/cache model, which generates an approximate cycle number for each last-level cache (LLC) miss. The cache latency determines the time it takes to hit or miss at any cache level. If the memory access is a miss at the LLC, it is recorded in a trace file with the current cycle number. An aggregate counter maintains the cycle number and is incremented on completing each memory access. Figure 3(a) shows an example of generating main memory traces with a single cache level of 4-cycle latency. The same process is followed to collect LLC misses on the other cores. Cache misses from all the cores are merged and sorted to produce a single trace file representing all the main memory accesses while maintaining the workload's multi-threaded nature, as shown in Figure 3(b). This is similar to an in-order processing model, and system performance (IPC, instructions per cycle) is calculated using the number of simulated instructions, the number of main memory accesses, and the average memory latency. However, the average memory access latency is known only after the traces are simulated for DRAM access at local or remote memory at the back end. Even though main memory traces are less accurate than a real CPU model driving the memory model, they are convenient to use. Main memory traces consume significantly less disk space than CPU-generated memory traces, and cache simulation during instrumentation makes them easy to produce, reuse, and port. Another limitation of this model is that it produces static traces for main-memory accesses and does not allow system-level optimizations that change the state of the TLBs and caches during the simulation, such as hot-page migration, page-swapping systems, etc. But it allows quick evaluation of rack-scale disaggregated memory systems, which generate extensive memory requests from multiple nodes that have to be simulated together with the remote memory and network interconnect.
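As an illustration of what such a trace might look like in code, the sketch below defines a hypothetical record layout and the per-node merge step; the field names and widths are assumptions for exposition, not DRackSim's actual on-disk format.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical layout of one main-memory (LLC-miss) trace record.
struct MemRef {
  uint64_t cycle;   // approximate cycle at which the LLC miss occurred
  uint64_t vaddr;   // virtual address of the access
  uint32_t tid;     // thread id (mapped to a core)
  uint32_t asid;    // address-space id when several workloads are instrumented together
  uint8_t  type;    // 0 = read, 1 = write, 2 = fetch
  uint8_t  size;    // access size in bytes
};

// Merge the per-core LLC-miss streams of one compute node into a single time-ordered trace.
std::vector<MemRef> mergeTraces(const std::vector<std::vector<MemRef>>& perCore) {
  std::vector<MemRef> merged;
  for (const auto& core : perCore) merged.insert(merged.end(), core.begin(), core.end());
  std::stable_sort(merged.begin(), merged.end(),
                   [](const MemRef& a, const MemRef& b) { return a.cycle < b.cycle; });
  return merged;
}
```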
### Cycle-Level Simulation Model
Due to some known limitations of the trace-based model, we present a cycle-level simulation model for multi-core out-of-order x86 processor architecture at the compute node. As with the trace-based model, the same technique is followed here for simulating the multi-node environment. While multiple input streams are produced at the front end in parallel (one for each compute node), they are consumed on-the-fly at the back end to simulate multiple nodes. In the detailed simulation, the pintool scope is restricted to producing an instruction trace stream by intercepting each executed instruction in the workload. The trace consists of instruction type (int/x87 ALU, int/x87 mul/div, SSE (vector), branch, nop, etc.), instruction address, its memory operands address/size, and register dependencies for that instruction. Although the core and cache subsystem modeling is performed at the back end in cycle-level simulation, we explain the details here and keep the back-end explanation focused on disaggregated memory modeling.
#### 3.2.1. Out-of-Order Core Modeling
Figure 4 shows the details of the OOO pipeline core modeling subsystem. The core architecture implements multiple pipeline stages (fetch, decode, issue, execute, write-back, and commit) at a higher level of abstraction with detailed modeling of hardware structures such as instruction queue, reservation stations, re-order buffer, architecture register file (ARF), register-alias table (RAT), and load-store queue. The instructions are read from the instruction stream generated by Pin, after which a fetch unit simulates the instruction fetch for multiple instructions within a cache line rather than fetching them individually. This is also pointed out in the previous work as many other academic simulators fetch each instruction separately (Bartner et al., 2017; Krishnan et al., 2017). This prevents the
fetch unit from performing false accesses to iTLB and iCache. The decode unit decodes the instruction, puts it into a buffer, and checks for the branch and its prediction result. The instruction waits in the decode buffer until the hardware resources are allocated. A limitation of performing binary instrumentation is that the execution of a process is decoupled, and it never goes down the wrong branch path in simulation. But the Pin API can tell whether an instruction is a branch. A branch predictor matches the prediction result with the information passed by the pin, and a penalty is added in case of misprediction, during which the CPU remains stalled.
After decoding, the issue unit allocates an entry for the instruction in the reservation station (RS) and re-order buffer (ROB), or stalls it if no free entry is available. The incoming instruction in the RS resolves its register dependencies either from the architectural register file (ARF) or from the register alias table (RAT), which points to a ROB entry. The instruction waits if a previous instruction has not yet freed a register. The memory read operands (if present) are sent to an address generation unit (AGU) to simulate effective address calculation, and the address is forwarded to the load-store queue for memory access. If the same load address is already present in the queue as a store, it is forwarded without waiting. Once the dependencies are clear, instructions are moved to a ready queue from which the dispatch unit selects them and allocates execution units based on their instruction type and opcode. The execution latency of each instruction type and operation can simply be configured to the number of cycles it takes in the target processor model. On completing execution, the result is broadcast to all the hardware structures: waiting RS entries clear their memory or register dependency, the instruction status changes to 'executed' in the ROB, and a write-back to memory is performed if there is a write operand. In the simulation, we save the RS and ROB indexes to be updated after instruction execution, saving simulation time by avoiding costly searches. Only in the commit stage is the ROB entry released, and updates to the register file are performed to make the result available globally. DRackSim allows the user to configure the width and latency of any stage as per the target hardware.
DRackSim's cache model consists of a multi-level cache hierarchy with private L1 I-D, L2, and shared L3 cache. The non-blocking
Figure 4. Out-of-Order core modeling subsystem
Figure 3. Trace Generation (a) Recording Main-memory access (b) Final Multi-threaded Trace
Figure 2. DRackSim infrastructure overview. Abbreviations: “LM” Local memory, “MMU” Memory management unit, “NIC” Network Interface
caches support multiple outstanding misses using miss-status handling registers (MSHRs) with a configurable number of entries. The memory access for instruction fetch or load/store queue starts at the TLB for virtual to physical address translation and uses 4KB fixed-size pages. The latency for different cache levels determines the time it takes for a cache hit or for a cache miss to reach the next memory level in the hierarchy. Once the memory access reaches LLC MSHR, it is also queued for the main memory access and DRAM simulation. On completing the memory access, the new block replaces an old cache block and writes it back to memory if the victim block is modified (like in a write-back cache). The caches can be configured to be either write-allocate or no-write-allocate. Finally, the cache subsystem notifies the corresponding entry in the load/store queue on completing a memory request, which is then broadcast to the waiting instructions in the pipeline.
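The sketch below illustrates the kind of MSHR bookkeeping described above for a non-blocking cache level: merging a miss to an already-outstanding block, stalling when entries run out, and notifying the waiting load/store-queue entries on a fill. The names and structure are illustrative, not the simulator's actual classes.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative MSHR for one non-blocking cache level with a configurable number of entries.
struct Mshr {
  explicit Mshr(std::size_t entries) : maxEntries_(entries) {}

  // Returns false when no entry is free and the access must stall (structural hazard).
  bool onMiss(uint64_t blockAddr, uint32_t lsqIndex) {
    auto it = pending_.find(blockAddr);
    if (it != pending_.end()) { it->second.push_back(lsqIndex); return true; }  // merged miss
    if (pending_.size() >= maxEntries_) return false;
    pending_[blockAddr] = {lsqIndex};
    return true;
  }

  // On fill, return the waiting load/store-queue entries to notify and free the MSHR entry.
  std::vector<uint32_t> onFill(uint64_t blockAddr) {
    std::vector<uint32_t> waiters;
    auto it = pending_.find(blockAddr);
    if (it != pending_.end()) { waiters = std::move(it->second); pending_.erase(it); }
    return waiters;
  }

  std::size_t maxEntries_;
  std::unordered_map<uint64_t, std::vector<uint32_t>> pending_;
};
```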
### Back-end Modeling
The back-end of DRackSim simulates an environment similar to rack-scale memory disaggregation, with multiple compute nodes simulated in parallel on different simulation threads (in both the trace-based and detailed modes). The memory accesses produced by the compute nodes drive the DRAM simulation at local or remote memory. Local memory requests are simulated at the node's local memory, whereas remote memory requests are passed through an interconnect model before being simulated at the remote memory pool. We explain here all the components simulated to model disaggregated memory behavior.
#### 3.3.1. Compute node memory manager
The memory manager at each compute node is an abstraction of the processor MMU for address translation and of an OS-like memory manager for address-space management and memory allocation at the compute node. A memory request reaches the MMU on a TLB miss and performs a page-table walk with a defined latency. If the page is not in memory, the request is forwarded to the page-fault handler, which allocates memory and creates a page-table entry (PTE). DRackSim models 4-level page tables for mapping virtual addresses to physical pages. The memory manager allocates a new page in either local or remote memory based on the allocation policy and the availability of free memory space. The page-fault service stalls the CPU and incurs a fixed latency, which can be configured to model the time an OS takes to service a page fault. The memory allocation policy at a compute node can be configured to allocate memory pages in any ratio between local and remote memory. Modeling these components allows exploring system-level optimizations for disaggregated memory, such as page migration from remote to local memory. Another direction is exploring remote memory allocation policies that avoid contention in the remote memory pool queues, which can be a source of high tail latency.
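A minimal sketch of such a configurable local/remote page-placement policy is shown below; the class and parameter names are hypothetical and only illustrate the ratio-based round-robin allocation described above. A 25:75 local-to-remote split, for instance, would correspond to shares of (1, 3).

```cpp
#include <cstdint>

// Illustrative page-placement policy: pages are assigned to local or remote memory in a
// configurable ratio, with a fallback when one side has no free space.
enum class Placement { Local, Remote };

class NodeAllocator {
 public:
  NodeAllocator(unsigned localShare, unsigned remoteShare)
      : localShare_(localShare), remoteShare_(remoteShare) {}

  Placement placeNextPage(bool localHasFree, bool remoteHasFree) {
    // Round-robin over a window of (localShare + remoteShare) page allocations.
    Placement want = (counter_ % (localShare_ + remoteShare_)) < localShare_
                         ? Placement::Local : Placement::Remote;
    ++counter_;
    if (want == Placement::Local && !localHasFree)   want = Placement::Remote;
    if (want == Placement::Remote && !remoteHasFree) want = Placement::Local;
    return want;
  }

 private:
  unsigned localShare_, remoteShare_;
  uint64_t counter_ = 0;
};
```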
#### 3.3.2. Remote Memory Organization
Remote memory in disaggregated memory systems can be managed either as shared memory or as distributed memory. In the shared memory approach, the remote memory address space is visible to the OS at the compute nodes, and a node can allocate a page anywhere in that address space; the owner node then acts as the home agent for that memory page. Such an organization generates significant coherency traffic to the remote memory, limiting the system's scalability. Multiple nodes allocating remote memory pages would also require frequent communication with a central authority (such as the global memory manager) to prevent address conflicts during remote page allocation, creating a bottleneck at the global memory manager. The alternative is a distributed organization, where the remote memory address space is not initially visible to compute nodes. Remote memory can be reserved in chunks of, say, 4 MB to 16 MB whenever a node requests it. DRackSim models the distributed approach and uses an additional layer of address mapping at compute nodes, as shown in Figure 5, to translate local physical addresses to remote physical addresses, as described in previous work (Han et al., 2017). This mapping is required to access allocated remote memory chunks and differs from the virtual address translation performed by the TLBs. Frequently used mappings can be kept in a cache at the remote memory controller, which adds a few extra cycles to each remote memory access. Linux provides a memory hot-plug service for increasing or decreasing system memory at run time, which can be used for normal page allocation after initialization. With the distributed memory approach, coherency traffic is limited to a single compute node and does not limit scalability. This approach still requires a global memory manager to reserve remote memory for a compute node, but allocation in larger chunks does not create a bottleneck.
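The extra mapping layer can be pictured as a chunk table indexed by the node-local physical address, as in the sketch below; the 4 MB chunk size and all identifiers are illustrative assumptions rather than the simulator's actual data structures.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <utility>

// Hypothetical local-physical -> remote-physical mapping for reserved remote chunks.
struct RemoteChunk { uint32_t poolId; uint64_t remoteBase; };

class RemoteAddressMap {
 public:
  static constexpr uint64_t kChunkSize = 4ull << 20;  // assumed 4 MB chunks

  void addChunk(uint64_t localBase, uint32_t poolId, uint64_t remoteBase) {
    chunks_[localBase / kChunkSize] = {poolId, remoteBase};
  }

  // Returns (pool-id, remote physical address) if the local address maps to a reserved chunk.
  std::optional<std::pair<uint32_t, uint64_t>> translate(uint64_t localPhysAddr) const {
    auto it = chunks_.find(localPhysAddr / kChunkSize);
    if (it == chunks_.end()) return std::nullopt;
    return std::make_pair(it->second.poolId,
                          it->second.remoteBase + (localPhysAddr % kChunkSize));
  }

 private:
  std::unordered_map<uint64_t, RemoteChunk> chunks_;
};
```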
#### 3.3.3. Global Memory Manager
The global memory manager manages the remote memory address space across all the memory pools and reserves remote memory from one of the pools upon receiving a request from a compute node. The global manager handles conflicts between remote memory reservations for different nodes and acts as a load balancer when choosing a memory pool for allocation. As pointed out by (Han et al., 2017), memory pools are bound to face contention in their queues when several compute nodes (with different memory access patterns) simultaneously access a remote memory pool. Memory pool selection should therefore aim to spread memory requests evenly across the pools and avoid contention as much as possible. After reservation, the global memory manager shares the chunk details (size, pool-id) with the requesting node, which creates an entry in its mapping table. DRackSim follows a round-robin memory pool selection when reserving a memory chunk from one of the pools.
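A minimal sketch of the round-robin chunk reservation performed by the global memory manager might look as follows; the data structures and names are assumptions made for illustration.

```cpp
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

// Illustrative round-robin chunk reservation across remote memory pools.
struct Pool { uint64_t freeBytes; uint64_t nextFreeAddr; };

class GlobalMemoryManager {
 public:
  explicit GlobalMemoryManager(std::vector<Pool> pools) : pools_(std::move(pools)) {}

  // Reserve `bytes` from the next pool in round-robin order that has room;
  // returns (pool-id, base remote physical address), or nothing if every pool is full.
  std::optional<std::pair<uint32_t, uint64_t>> reserveChunk(uint64_t bytes) {
    for (std::size_t tried = 0; tried < pools_.size(); ++tried) {
      const uint32_t id = static_cast<uint32_t>(next_++ % pools_.size());
      Pool& p = pools_[id];
      if (p.freeBytes >= bytes) {
        const uint64_t base = p.nextFreeAddr;
        p.nextFreeAddr += bytes;
        p.freeBytes    -= bytes;
        return std::make_pair(id, base);
      }
    }
    return std::nullopt;
  }

 private:
  std::vector<Pool> pools_;
  uint64_t next_ = 0;
};
```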
#### 3.3.4. Interconnect Model
The interconnect model in DRackSim is based on a queue simulation that reproduces the behavior of memory-semantic fabrics such as GenZ or other similar interconnects proposed for disaggregated memory. As shown in Figure 6(a), GenZ uses
Figure 5. Address translation for remote memory access
a media controller (similar to NIC) at the endpoints to interface with compute nodes (memory requestor) and memory pools (memory responder). Memory semantic fabrics like GenZ allow cache-based and page-based access to remote memory from compute nodes. The on-chip integration of the fabric and lightweight network protocol implementation in the hardware allows low-latency cache line access from remote memory during an LLC miss. Similarly, DMA-based remote memory access allows the memory-to-memory transfer of coarse-grained data (such as pages). All the remote memory accesses from multiple compute nodes pass through a rack-level interconnect (GenZ switch) before accessing the pooled memory. The cache-line access from remote memory on an LLC miss takes around 150-200 ns with moderate traffic, and page access takes around 1-1.5us with the GenZ design specifications [30].
DRackSim simulates both types of memory access from the compute nodes to remote memory pools. If an LLC miss refers to the remote memory address space, it accesses the local-to-remote address map (discussed above) at the media controller. The memory access is encapsulated into a network packet containing the destination memory pool-id and its remote physical address. The model uses fixed 64-byte packets for a memory request, as the payload consists of only a memory address. The packet is then pushed into the queue structure at the media controller's network interface after adding a packetization delay. The packet travels from the compute node to the input port of the rack switch; transmission and propagation delays determine its arrival time at the switch input port, where a processing delay is added. The interconnect model implements virtual queues at the switch ports to avoid head-of-line blocking, and a 2-stage switch arbitrator selects a packet for forwarding in each cycle. The first-stage arbitrator selects one of the input ports, and the second stage selects one of the virtual queues of the selected input port. The packet is added to the buffer at the destination output port after a switching delay and is then sent toward the network interface of the destination memory pool to simulate the remote memory access. The same path is followed in reverse at the memory pool to send the response back to a compute node using the source address of the memory access packet. The response packet holds a cache line of data as payload, and its size can be configured accordingly. Similarly, write-backs from compute nodes to remote memory use a packet size capable of storing a cache line.
Further, an RDMA request from a compute node can cover one or multiple pages of remote memory. Such memory accesses are simulated by sending one request packet that specifies the start memory address and the burst size of the transfer. The response from the memory pool is sent as multiple small packets whose size the user can configure. The response packet size should be chosen carefully so that subsequent cache-line remote memory accesses are not starved. Reassembly logic collects all the response packets at the compute node to form a memory page and issues a notification on receiving a complete page, which can be used to model a page migration system. The network interface and the ports at the switch support configurable-size buffers at both ends and implement back-pressure flow control in case a buffer fills up. The latency of DRackSim's interconnect model can be mapped to the target hardware (GenZ in our case) for both cache-line and page transfers. Figure 6(b) shows a complete view of the interconnect simulation model in DRackSim.
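To make the per-hop accounting concrete, the sketch below sums the delay components a single request packet accumulates on its way from a compute node to a memory pool; all delay values and queue backlogs are placeholders to be configured from the target fabric (e.g., GenZ), not measured numbers.

```cpp
// Illustrative per-hop latency accounting for one remote-memory request packet, mirroring
// the queue-based interconnect model described above.
struct NetDelays {
  double packetization;     // media controller packs the request into a 64-byte packet
  double transmission;      // serialization onto the link
  double propagation;       // time of flight on the link
  double switchProcessing;  // input-port processing at the rack switch
  double switching;         // 2-stage arbitration and crossbar traversal
};

double remoteRequestArrivalTime(double sendTime, const NetDelays& d,
                                double inputQueueBacklog, double outputQueueBacklog) {
  double t = sendTime + d.packetization;          // NIC / media controller at the compute node
  t += d.transmission + d.propagation;            // link to the rack switch
  t += inputQueueBacklog + d.switchProcessing;    // wait in the virtual input queue
  t += d.switching + outputQueueBacklog;          // arbitration + output buffering
  t += d.transmission + d.propagation;            // link to the memory pool's network interface
  return t;                                       // DRAM simulation at the pool starts here
}
```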
#### 3.3.5. Memory Simulation
We integrate the cycle-accurate DRAMSim2 for DDR4 simulation in the disaggregated memory system. We initialize multiple memory unit instances using its _MemorySystem_ interface, each representing either the local memory at a compute node or the remote memory at a memory pool. DRAMSim2 provides a _callback_ mechanism to notify the CPU model driving the memory model on the completion of every memory access. We modify the _MemorySystem_ interface and the _callback_ functionality so that each memory unit (at a node or a remote pool) has a separate identity. We further modify the _addtransaction_ function in DRAMSim2, which is used to send a memory request to a memory unit, to include a node-id, transaction-id, and some other metadata for statistics collection. All these modifications allow us to trace the completion of memory accesses at each memory unit instance and send a response back to the requesting node.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Element** & **Parameter** \\ \hline CPU & 3.6GHz, 8-width, 64-InsQ, 64-RS, 192-ROB, 128-LSQ \\ \hline L1 Cache & 32KB(I/D), 8-Way, 2-Cyc, 64B block \\ \hline L2 Cache & 256KB, 4-Way, 20-Cyc, 64B block \\ \hline L3 Cache & 2MB per core shared, 16-Way, 32-Cyc, 64B block \\ \hline \end{tabular}
\end{table}
Table 1. Simulation Parameters
Figure 6. (a) GenZ Interface with compute node and pooled memory (b) Interconnect Simulation Model detail
#### 3.3.6. Simulator Implementation
At the front end, the pintool can instrument multiple applications to create workload mixes on a single compute node. The user can skip any number of instructions to instrument only the region of interest in a workload. At the back end, the DRackSim model consists of multiple independent components such as compute nodes, memory pools, and the interconnect, and we synchronize all of them with a single global clock. This is necessary to maintain the time ordering of global events, such as simultaneous network and remote memory accesses from multiple compute nodes. However, the frequencies of individual components can be configured separately, and the global clock only provides a reference time for functional correctness in the simulation. We use _thread barriers_ for this purpose, which control the simulation flow of the multiple nodes, with each node running on a separate simulation thread.
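As an illustration of this synchronization scheme, the sketch below uses a C++20 `std::barrier` to keep per-node simulation threads aligned on a global cycle; `stepNode` is a hypothetical placeholder for advancing one node's core, caches, local DRAM, and NIC by one tick.

```cpp
#include <barrier>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Sketch of global-clock synchronization across per-node simulation threads.
void runRack(unsigned numNodes, uint64_t totalCycles) {
  std::barrier sync(static_cast<std::ptrdiff_t>(numNodes));
  auto nodeLoop = [&](unsigned nodeId) {
    for (uint64_t cycle = 0; cycle < totalCycles; ++cycle) {
      // stepNode(nodeId, cycle);  // hypothetical: advance this node's components one tick
      sync.arrive_and_wait();      // every node agrees on the global cycle before proceeding
    }
  };
  std::vector<std::thread> threads;
  for (unsigned n = 0; n < numNodes; ++n) threads.emplace_back(nodeLoop, n);
  for (auto& t : threads) t.join();
}
```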
## 4. Validation
It is important to cover different validation aspects while developing a new architectural simulator. The first one is the functional correctness of the program. In our case, application functionality is decoupled from the actual simulation process, where Pin runs it natively. Pintool only adds some instrumentation primitives and does not change the application's functionality or execution flow. The second aspect is the accuracy of performance metrics. While it is important to validate the simulator with actual hardware or with another standard open-source simulator, the vast performance gap between various community-accepted architectural simulators is also a reality and is pointed out in previous work (Brands et al., 2017). Ayaz et al. surveyed all the major architectural simulators, such as Gem5, MARSSx86, Sniper, etc., and observed their performance against real x86 hardware. It was found that a significant variation exists between the IPC values and LLC misses on all benchmarks. The study observes overestimated branch mispredictions, imprecise instruction decoding to micro-operations, inflated cache misses, and lack of modeling for all hardware optimizations as the main source of inaccuracies. However, the simulator can be calibrated to match the performance of real hardware and should provide enough insight into actual hardware performance.
Due to the unavailability of a full-scale hardware disaggregated system, we follow an incremental approach to validate the different components of our simulator. Integrated on-chip interconnects, such as GenZ, are either unavailable or have only been tested with small-scale FPGA prototypes (Han et al., 2016; Chen et al., 2017; Chen et al., 2017). In this section, we validate the core and cache subsystems of DRackSim against Gem5 system-emulation (SE) mode and show the impact of the network interconnect separately through our experiments in the next section. For a calibrated validation, we set the processor width and the instruction latency for each instruction type to the same values and use the same sizes for all the hardware structures (such as the InsQ, RS, ROB, and LSQ). We further fix our simulator's page-fault and TLB-miss latency as per Gem5, which only adds 1 or 2 cycles on each such event in SE mode. Table 1 shows the system configuration for CPU validation. We also extensively validate the cache subsystem
Figure 8. Normalized L3 Cache Misses for _Splash-3_ benchmarks
Figure 7. Normalized IPC values for _Splash-3_ benchmarks
using the last-level cache misses, which can represent the behavior of the complete cache hierarchy.
We perform the CPU validation for 1, 2, and 4 cores over the SPLASH-3 [26] benchmarks, spawning as many threads as the number of cores. Figure 7 shows the CPU validation results with the normalized IPC values of our simulator compared to the IPC values of Gem5. The IPC numbers of DRackSim are close to the Gem5 IPC for most of the benchmarks and show a mean absolute percentage error of 17% across all the core configurations. Such variation is common among different simulators, as shown by past validation efforts [1; 2]. It could be due to the lack of support for fused micro-operations (_\(\mu\)ops_), the pipeline depth, and the abstraction of the actions performed during branch misprediction. Similarly, we validate LLC misses for all the workloads, as shown in Figure 8. We only observe an unexpectedly large error in the case of _contiguous ocean_ with a 1-core CPU. Besides this, the mean absolute percentage error is around 3% over all the benchmarks on 1-, 2-, or 4-core CPUs. We further perform an in-depth validation of the LLC misses with _fft_, _fmm_, and _barnes_ over a range of L3 configurations on a 4-core CPU, shown in Figure 9. To maximize the misses at the LLC, we also reduce the L2 size to 64KB. Here too, we observe a slight variation in the LLC misses for DRackSim compared to Gem5, with a mean absolute percentage error of 11% over all the configurations. The LLC misses are slightly inflated or deflated for some configurations, which could be due to implementation details of the cache hierarchy. We do not implement a separate write buffer, so the caches must evict a block during write-backs. Another reason can be the use of separate load and store queues in Gem5, whereas DRackSim has a unified load/store queue, which can create a small difference in the total number of non-redundant loads and stores. These differences can generate variation in cache accesses, and inaccuracies can accumulate from the lower-level caches to the LLC.
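For reference, the error metric quoted above is assumed to be the standard mean absolute percentage error over the \(N\) benchmark/configuration pairs, e.g., for IPC:

\[\mathrm{MAPE}=\frac{100\%}{N}\sum_{k=1}^{N}\left|\frac{\mathrm{IPC}_{\mathrm{DRackSim}}^{(k)}-\mathrm{IPC}_{\mathrm{Gem5}}^{(k)}}{\mathrm{IPC}_{\mathrm{Gem5}}^{(k)}}\right|,\]

with the analogous expression used for the LLC-miss comparisons.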
## 5. Evaluation
In this section, we evaluate DRackSim for various disaggregated memory configurations to show the impact of the interconnect and of remote memory accesses on overall system performance, using IPC and average memory access delays. When more compute nodes access a shared memory pool, the network traffic on a particular link increases significantly, and the memory pool also faces contention in its queues. Similarly, when remote memory is used for a higher percentage of an application's memory pages, performance degrades because remote memory accesses take more time. Different workloads suffer different degrees of slowdown based on their memory access patterns and the number of main memory requests. As the benefits of memory disaggregation are targeted toward HPC data centers, we choose large in-memory workloads and HPC mini-apps to expose the performance differences, shown in Table 2. The WL-1 workload mix consists of 4 applications from the NAS Parallel Benchmark suite (NPB) [9]. The WL-2 mix consists of the _lulesh_[13], _miniFE_[7], _SimpleMOC_[11], and _XSBench_[29] mini-HPC applications simulating scientific workflows.
### Impact of Node-to-Pool Configurations
Firstly, we experiment separately with WL-1 and WL-2 in different node-to-pool configurations. We vary the number of compute nodes and memory pools to observe the performance difference due to the changing intensity of remote memory accesses. As shown in figure 10, we consider five configurations with 1N:1P (one compute node and one memory pool), 4N:1P, 4N:4P, 8N:1P, and 8N:2P. Each compute node executes one of the workloads (2 nodes execute the same benchmark in the case of 8N:1P and 8N:2P) and allocates 50% of the pages each at local and remote memory in a round-robin manner. We simulate 100M cycles for all the experiments in this section, with multiple nodes simultaneously running their workloads. We observe a significant performance degradation for all the benchmarks in all the multi-node configurations compared to 1N:1P. This performance gap is due to the in-memory computation requirements of the workloads and high average memory access latency while sharing a memory pool with multiple nodes. The slowdown is especially high in the case of 4N:1P and is highest for 8N:1P configuration. We also observe that for all workloads, the
\begin{table}
\begin{tabular}{|l|l|} \hline
**Description** & **Workloads** \\ \hline WL-1 mix & NAS:mg, NAS:sp, NAS:bt, NAS:ft \\ \hline WL-2 mix & lulesh, miniFE, SimpleMOC, XSBench \\ \hline \end{tabular}
\end{table}
Table 2. Benchmarks
Figure 9. Normalized L3 Cache Misses over different cache configurations
2N:1P configuration suffers only a slight performance slowdown compared to 1N:1P. Further, for each workload, the WL-2 mix suffers comparatively less performance degradation in the high-node configurations than WL-1. This is due to the comparatively smaller number of main memory requests for the workloads in WL-2. The IPC results are also corroborated by the average memory access delays of each workload in the different node-to-pool configurations, as shown in Figure 11. A significant portion of the memory access latency is spent accessing remote memory (shown in yellow), and it keeps increasing with an increasing number of nodes. The overall performance of 4N:1P and 8N:2P turns out to be very similar for most of the workloads. With more nodes and memory pools, pool allocation policies will need to be explored to load-balance the memory access traffic across all the memory pools. Round-robin selection cannot maintain a uniform memory request rate for all the memory pools and can result in high tail latency, requiring the exploration of more sophisticated algorithms.
### Impact of Memory Allocation Percentage
We further experiment with the 4N:1P configuration by changing the percentage of memory allocated locally and remotely. We consider a scenario in which 50% of memory is allocated from each of local and remote memory and compare it with 25% local and 75% local allocations. Pages are allocated in the respective memory in a round-robin manner according to the configuration (e.g., one page in local memory followed by three pages in remote memory for 25:75). Figure 12 shows the performance in terms of IPC for all the workloads in WL-1 and WL-2 under all three configurations. We observe a severe performance impact on all the workloads in WL-1 after decreasing the local allocation percentage. In contrast, the WL-2 workloads face only a small IPC degradation when the local memory allocation is reduced from 75% to 25%. The reason is straightforward and highlights the importance of a workload's memory request rate: workloads with few main memory requests suffer only a small performance degradation even with 75% remote memory allocation. This suggests another research direction in which a compute node running multiple workloads decides the percentage of local and remote memory allocated to a workload based on its memory access pattern and frequency of memory accesses.
In figure 13, we observe the average memory access delay for each workload in all the configurations with different local-to-remote allocation percentages. The trend justifies the IPC values observed before, and with WL-2 workloads, the lesser number of memory requests compared to WL-1 has a lesser impact on the
Figure 11. Average Memory Latency of benchmarks in workload mix WL1 and WL2 on a different Node-to-Pool configuration with a memory allocation ratio of 50:50 for local and remote
Figure 10. IPC of benchmarks in workload mix WL1 and WL2 on a different Node-to-Pool configuration with a memory allocation ratio of 50:50 for local and remote
performance slowdown. The average memory latency increases as the local-to-remote memory allocation ratio is decreased. All these experiments also validate the interconnect model, which faces more congestion as the number of remote memory requests increases and therefore takes more time to process the memory request packets. Some of the increased memory access latency is also due to contention in the queues at the remote memory pools.
## 6. Conclusion and Future Work
Memory disaggregation is an emerging alternative to the traditional server architecture that can overcome the issue of memory under-utilization. This paper presents DRackSim, a hardware disaggregated memory simulator for rack-scale simulations of multiple compute nodes and memory pools. We also discuss the possible memory organizations and follow a distributed memory model for allocating remote memory. A global memory manager is presented to manage the remote memory address space and allocate memory to requesting nodes. We model both cache-line and page-based remote memory accesses, which can be used to explore system optimizations such as occasional hot-page migration combined with frequent cache-based access to remote memory. As with any other simulator, we rely on community support for the incremental development of the simulation infrastructure and for modeling new features for design-space exploration. The simulator can be extended to model a shared memory organization that is transparent to the OS at compute nodes. Further, we did not model cache coherence in the interest of maintaining simulation speed. Although the number of LLC misses (validated earlier) largely determines system performance in a disaggregated memory system, we intend to implement these missing features in the future. We also make the source code of DRackSim available to the research community working in high-performance computing. We look forward to interesting research directions in disaggregated memory based on our simulation infrastructure.
|
2302.02885 | Run Time Assurance for Autonomous Spacecraft Inspection | As autonomous systems become more prevalent in the real world, it is critical
to ensure they operate safely. One approach is the use of Run Time Assurance
(RTA), which is a real-time safety assurance technique that monitors a primary
controller and intervenes to assure safety when necessary. As these autonomous
systems become more complex, RTA is useful because it can be designed
completely independent of the primary controller. This paper develops several
translational motion safety constraints for a multi-agent autonomous spacecraft
inspection problem, where all of these constraints can be enforced with RTA. A
comparison is made between centralized and decentralized control, where
simulations of the inspection problem then demonstrate that RTA can assure
safety of all constraints. Monte Carlo analysis is then used to show that no
scenarios were found where the centralized RTA cannot assure safety. While some
scenarios were found where decentralized RTA cannot assure safety, solutions
are discussed to mitigate these failures. | Kyle Dunlap, David van Wijk, Kerianne L. Hobbs | 2023-02-06T15:52:43Z | http://arxiv.org/abs/2302.02885v2 | # Run Time Assurance for Autonomous Spacecraft Inspection
###### Abstract
As autonomous systems become more prevalent in the real world, it is critical to ensure they operate safely. One approach is the use of Run Time Assurance (RTA), which is a real-time safety assurance technique that monitors a primary controller and intervenes to assure safety when necessary. As these autonomous systems become more complex, RTA is useful because it can be designed completely independent of the primary controller. This paper develops several translational motion safety constraints for a multi-agent autonomous spacecraft inspection problem, where all of these constraints can be enforced with RTA. A comparison is made between centralized and decentralized control, where simulations of the inspection problem then demonstrate that RTA can assure safety of all constraints. Monte Carlo analysis is then used to show that no scenarios were found where the centralized RTA cannot assure safety. While some scenarios were found where decentralized RTA cannot assure safety, solutions are discussed to mitigate these failures.
## Introduction
In order for autonomy to be used safely on mission-critical systems, assurance methods must be considered. For space systems in particular, a mistake or fault could result in damage to multi-million or multi-billion dollar equipment, as well as potential loss of invaluable data and space-based services. One method of assuring safety is with the use of Run Time Assurance (RTA)[1], which is an online safety assurance technique designed to filter potentially unsafe inputs from a primary controller and intervene to assure safety of the system when necessary. RTA systems decouple the task of safety assurance from all other control objectives, allowing it to scale well to increasingly complex systems. One type of RTA filter is the Active Set Invariance Filter (ASIF)[2], which utilizes Control Barrier Functions (CBF)[3] to minimize deviation from the primary controller while still assuring safety of the system. Additionally, the ASIF approach allows multiple constraints to be enforced at once, without needing complex switching or decision logic.
This paper focuses on assuring safety for an autonomous spacecraft inspection problem, where multiple active deputy spacecraft cooperate to inspect a passive, stationary chief spacecraft. A group of deputies inspecting a chief is shown in Figure 1. Several safety constraints on position and velocity are developed and enforced with RTA. While most of these constraints can be enforced through ASIF RTA, a specific case is shown where alternate RTA methods would better assure safety.
CBFs for online safety assurance are becoming increasingly popular, with applications to autonomous driving[4], segways[5], fixed-wing aircraft[6], and many more. For spacecraft related problems, CBFs have been used to assure safety for docking[7], rendezvous[8], proximity operations[9], and attitude control[10]. The safety
constraints in this work are adapted to an ASIF RTA approach from safety requirements for automatic spacecraft maneuvering [11, 12]. This work builds upon previous constraints developed for spacecraft docking [13] and inspection [14].
The main contributions of this paper are developing mathematical constraints for the spacecraft inspection problem, comparing centralized and decentralized RTA methods, performing Monte Carlo analysis to show that all constraints can be enforced simultaneously, and considering the case where ASIF RTA can be combined with a switching-based RTA approach.
## Run Time Assurance
This paper considers control affine dynamical systems, where a continuous-time system model is given by a system of ordinary differential equations,
\[\dot{\mathbf{x}}=f(\mathbf{x})+g(\mathbf{x})\mathbf{u}. \tag{1}\]
Here, \(\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{n}\) denotes the state vector, \(\mathbf{u}\in\mathcal{U}\subseteq\mathbb{R}^{m}\) denotes the control vector, and \(f:\mathcal{X}\rightarrow\mathbb{R}^{n}\) and \(g:\mathcal{X}\rightarrow\mathbb{R}^{n\times m}\) are locally Lipschitz continuous functions. Additionally, \(\mathcal{X}\) is the set of all possible state values and \(\mathcal{U}\) is the admissible control set.
As shown in Figure 2, a feedback control system with RTA is divided into a performance-focused primary controller and a safety-focused RTA filter. The primary controller first computes a desired control input, \(\mathbf{u}_{\mathrm{des}}\), based on the state \(\mathbf{x}\). The RTA filter evaluates \(\mathbf{u}_{\mathrm{des}}\) at the current state and modifies it as necessary to produce a safe control input, \(\mathbf{u}_{\mathrm{act}}\), which is then passed to the plant. In Figure 2, the primary controller is highlighted red to indicate low safety confidence, while the RTA filter is highlighted blue to indicate high safety confidence. This structure allows the designer to isolate unverified or unsafe components of the control system.
Figure 1: Inspection Problem.
Figure 2: Feedback control system with RTA.
For a dynamical system, safety can be defined by a set of \(M\) inequality constraints \(\varphi_{i}(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}\), \(\forall i\in\{1,...,M\}\), where \(\varphi_{i}(\mathbf{x})\geq 0\) when the constraint is satisfied. The set of states that satisfies all of these constraints is known as the _allowable set_ \(\mathcal{C}_{\text{A}}\), and is defined as,
\[\mathcal{C}_{\text{A}}:=\{\mathbf{x}\in\mathcal{X}\,|\,\varphi_{i}(\mathbf{x})\geq 0, \forall i\in\{1,...,M\}\}. \tag{2}\]
A state \(\mathbf{x}(t_{0})\) is then said to be safe if it lies in a _forward invariant_ subset of \(\mathcal{C}_{\text{A}}\) known as the _safe set_\(\mathcal{C}_{\text{S}}\), where,
\[\mathbf{x}(t_{0})\in\mathcal{C}_{\text{S}}\Longrightarrow\mathbf{x}(t)\in\mathcal{C} _{\text{S}},\forall t\geq t_{0}. \tag{3}\]
Note that the control input \(\mathbf{u}\) is bounded by the admissible control set \(\mathcal{U}\), and therefore \(\mathcal{C}_{\text{S}}\) is highly dependent on the controller. \(\mathcal{C}_{\text{S}}\) is then said to be _control invariant_ if there exists a control law \(\mathbf{u}\in\mathcal{U}\) that renders \(\mathcal{C}_{\text{S}}\) forward invariant. \(\mathcal{C}_{\text{S}}\) can be defined explicitly by a set of \(M\) control invariant inequality constraints, \(h_{i}(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}\), \(\forall i\in\{1,...,M\}\), where again \(h_{i}(\mathbf{x})\geq 0\) when the constraint is satisfied. \(\mathcal{C}_{\text{S}}\) is then defined as,
\[\mathcal{C}_{\text{S}}:=\{\mathbf{x}\in\mathcal{X}\,|\,h_{i}(\mathbf{x})\geq 0, \forall i\in\{1,...,M\}\}. \tag{4}\]
While there are many approaches to developing an RTA filter[15, 16], ASIF is an optimization-based algorithm designed to be minimally invasive towards the primary controller while still respecting multiple safety constraints. The ASIF algorithm uses CBFs to define safety, and a Quadratic Program (QP) to compute a safe control input. The ASIF algorithm used in this paper is defined as follows.
**Active Set Invariance Filter**
\[\mathbf{u}_{\text{act}}(\mathbf{x},\mathbf{u}_{\text{des}})=\underset{\mathbf{u }\in\mathcal{U}}{\text{argmin}}\left\|\mathbf{u}_{\text{des}}-\mathbf{u}\right\|_{2}^{2} \tag{5}\] \[\text{s.t.}\quad BC_{i}(\mathbf{x},\mathbf{u})\geq 0,\quad\forall i\in\{1,...,M\}\]
Here, \(BC\) represents one of \(M\) barrier constraints, which are satisfied when \(BC_{i}(\mathbf{x},\mathbf{u})\geq 0\). The objective of these barrier constraints is to render \(\mathcal{C}_{\text{S}}\) forward invariant under \(\mathbf{u}_{\text{act}}\). Nagumo's condition[17] is used to do this, where the boundary of \(\mathcal{C}_{\text{S}}\) is examined to ensure \(\dot{h}_{i}(\mathbf{x})\geq 0\), thus causing \(\mathbf{x}\) to never leave \(\mathcal{C}_{\text{S}}\). This is written as,
\[\dot{h}_{i}(\mathbf{x})=\nabla h(\mathbf{x})\dot{\mathbf{x}}=L_{f}h_{i}(\mathbf{x})+L_{g}h_{i}( \mathbf{x})\mathbf{u}\geq 0, \tag{6}\]
where \(L_{f}\) and \(L_{g}\) are Lie derivatives of \(h_{i}\) along \(f\) and \(g\) respectively. Note that this condition should only be enforced along the boundary of \(\mathcal{C}_{\text{S}}\), and therefore a strengthening function \(\alpha(x):\mathbb{R}\rightarrow\mathbb{R}\) is introduced to relax the constraint away from the boundary. \(\alpha(x)\) must be a continuous, strictly increasing class \(\mathcal{K}\) function and have the condition \(\alpha(0)=0\). The barrier constraint is therefore defined as,
\[BC(\mathbf{x},\mathbf{u}):=\nabla h(\mathbf{x})(f(\mathbf{x})+g(\mathbf{x})\mathbf{u})+\alpha(h(\mathbf{x} )). \tag{7}\]
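To make the filter concrete, the following Python sketch solves the QP of Eq. (5) for a toy one-dimensional double integrator with a single speed-limit constraint; the dynamics, constraint, strengthening function, and the use of `scipy.optimize.minimize` as a generic solver are illustrative assumptions rather than the implementation used in this work.

```python
import numpy as np
from scipy.optimize import minimize

# Toy double-integrator dynamics x = [position, velocity] (illustrative, not the
# spacecraft model used later in the paper).
def f(x):
    return np.array([x[1], 0.0])

def g(x):
    return np.array([[0.0], [1.0]])

# Example control-invariant constraint h(x) = v_max^2 - velocity^2 >= 0.
def h(x):
    return 1.0 - x[1] ** 2

def grad_h(x):
    return np.array([0.0, -2.0 * x[1]])

def alpha(r):
    return 5.0 * r  # strictly increasing, with alpha(0) = 0

def asif_filter(x, u_des, u_max=1.0):
    """Solve Eq. (5): minimize ||u_des - u||^2 subject to the barrier constraint of Eq. (7)."""
    objective = lambda u: float(np.sum((u_des - u) ** 2))
    barrier = lambda u: float(grad_h(x) @ (f(x) + g(x) @ u) + alpha(h(x)))
    res = minimize(objective, x0=np.clip(u_des, -u_max, u_max),
                   bounds=[(-u_max, u_max)],
                   constraints=[{"type": "ineq", "fun": barrier}],
                   method="SLSQP")
    return res.x

x = np.array([0.0, 0.95])     # already close to the speed limit
u_des = np.array([1.0])       # primary controller wants to accelerate further
print(asif_filter(x, u_des))  # filtered input is reduced so that BC(x,u) >= 0
```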
**SPACECRAFT INSPECTION PROBLEM**
This paper focuses on the task of spacecraft inspection, where multiple active deputy spacecraft are inspecting a passive chief spacecraft. The analysis takes place in Hill's frame[18], as shown in Figure 3. The origin of the frame, \(\mathcal{O}_{H}\), is located at the center of mass of the chief, the unit vector \(\hat{x}\) points away from the center of the Earth, \(\hat{y}\) points in the direction of motion of the chief, and \(\hat{z}\) is normal to \(\hat{x}\) and \(\hat{y}\). The linearized relative motion dynamics between a deputy and the chief are given by the Clohessy-Wiltshire equations[19],
\[\dot{\mathbf{x}}=A\mathbf{x}+B\mathbf{u}, \tag{8}\]
where the state \(\mathbf{x}=[x,y,z,\dot{x},\dot{y},\dot{z}]^{T}\in\mathcal{X}=\mathbb{R}^{6}\), the control \(\mathbf{u}=[F_{x},F_{y},F_{z}]^{T}\in\mathcal{U}=[-u_{\max},u_{\max}]^{3}\), and
\[A=\begin{bmatrix}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 3n^{2}&0&0&0&2n&0\\ 0&0&0&-2n&0&0\\ 0&0&-n^{2}&0&0&0\end{bmatrix},\quad B=\begin{bmatrix}0&0&0\\ 0&0&0\\ 0&0&0\\ \frac{1}{m}&0&0\\ 0&\frac{1}{m}&0\\ 0&0&\frac{1}{m}\end{bmatrix}. \tag{9}\]
Here, \(m\) is the mass of the deputy and \(n\) is the mean motion of the chief's orbit. Each deputy is independent of all others, where all follow the same dynamics.
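As a concrete illustration, the matrices of Eq. (9) can be assembled and the relative motion of a single deputy propagated as in the sketch below; the explicit-Euler integrator and the initial state are illustrative assumptions.

```python
import numpy as np

def cw_matrices(n=0.001027, m=12.0):
    """Clohessy-Wiltshire A and B matrices from Eq. (9)."""
    A = np.zeros((6, 6))
    A[0:3, 3:6] = np.eye(3)
    A[3, 0], A[3, 4] = 3 * n**2, 2 * n
    A[4, 3] = -2 * n
    A[5, 2] = -n**2
    B = np.zeros((6, 3))
    B[3:6, :] = np.eye(3) / m
    return A, B

def euler_step(x, u, A, B, dt=1.0):
    """One explicit-Euler step of Eq. (8) (illustrative integrator choice)."""
    return x + dt * (A @ x + B @ u)

A, B = cw_matrices()
x = np.array([100.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # 100 m from the chief along x, at rest
for _ in range(100):
    x = euler_step(x, np.zeros(3), A, B)
print(x[:3])   # relative position after 100 s of unforced motion
```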
For this analysis, the attitude of each deputy is not modeled, and therefore it is assumed that each deputy is always pointing a sensor towards the chief. Letting \(\boldsymbol{p}\) define the position \([x,y,z]^{T}\) of the deputy in Hill's frame, the unit vector defining the orientation of the deputy's sensor boresight, \(\hat{r}_{b}\), is given by,
\[\hat{r}_{b}=-\frac{\boldsymbol{p}}{\|\boldsymbol{p}\|_{2}}. \tag{10}\]
Since the direction of Earth's center and chief spacecraft location are fixed in Hill's reference frame, the Sun is considered to rotate around the spacecraft. For this analysis, it is assumed that the Sun rotates at a constant rate in the \(x-y\) plane. The unit vector pointing to the Sun, \(\hat{r}_{s}\), is defined as,
\[\hat{r}_{s}=[\cos\theta_{s},\sin\theta_{s},0], \tag{11}\]
where \(\theta_{s}\) is the angle of the Sun with respect to the \(x\)-axis, and \(\dot{\theta}_{s}=-n\). \(\hat{r}_{b}\) and \(\hat{r}_{s}\) are shown in Figure 4. This example assumes each deputy is modeled as a 6U CubeSat in Low Earth Orbit (LEO), where \(n=0.001027\) radians per second and \(m=12\) kg.
Figure 4: Sensor boresight and Sun vectors.
Figure 3: Hill’s Frame.
#### Safety Constraints
While many safety constraints could be developed for this task, the following constraints define \(\mathcal{C}_{\mathrm{A}}\) for this analysis. The constraints are defined for \(N\) deputies, \(\forall i\in\mathbb{Z}_{1:N}\), \(\forall j\in\mathbb{Z}_{1:N}\), \(i\neq j\).
_Safe Separation._ Each deputy spacecraft shall not collide with the chief. This constraint is defined as,
\[\varphi_{1}(\mathbf{x}):=\|\mathbf{p}_{i}\|_{2}-(r_{\mathrm{d}}+r_{\mathrm{c}})\geq 0, \tag{12}\]
where \(r_{\mathrm{d}}\) is the collision radius of the deputy and \(r_{\mathrm{c}}\) is the collision radius of the chief. Additionally, each deputy spacecraft shall not collide with another deputy. This constraint is defined as,
\[\varphi_{2}(\mathbf{x}):=\|\mathbf{p}_{i}-\mathbf{p}_{j}\|_{2}-2r_{\mathrm{d}}\geq 0. \tag{13}\]
_Dynamic Speed Constraint._ The speed of each deputy shall decrease as it moves closer to the chief. This reduces the risk of a high-speed collision, as well as risk in the event of a fault.[9] Additionally, each deputy should be moving slowly enough to appropriately inspect the chief. This constraint is defined as,
\[\varphi_{3}(\mathbf{x}):=\nu_{0}+\nu_{1}\|\mathbf{p}_{i}\|_{2}-\|\mathbf{v}_{i}\|_{2}\geq 0, \tag{14}\]
where \(\nu_{0}\) is a minimum allowable docking speed, \(\nu_{1}\) is the constant rate at which the allowable speed decreases as \(\|\mathbf{p}_{i}\|_{2}\) decreases, and \(\mathbf{v}_{i}=[\dot{x},\dot{y},\dot{z}]^{T}\).
_Keep Out Zone._ Each deputy shall not align its sensor with the Sun. This constraint is defined as,
\[\varphi_{4}(\mathbf{x}):=\theta_{b_{i}}-\frac{\alpha_{FOV}}{2}\geq 0, \tag{15}\]
where \(\theta_{b_{i}}\) is the angle between the deputy's sensor boresight and the Sun, and \(\alpha_{FOV}\) is the sensor's field of view, as shown in Figure 5. \(\theta_{b_{i}}\) is found using the dot product,
\[\theta_{b_{i}}=\arccos{(\hat{r}_{b_{i}}\cdot\hat{r}_{s})}. \tag{16}\]
Additionally, each deputy shall not align its sensor with the Sun if it were to point its sensor at another deputy, in the event that it must change its inspection target. This constraint is defined as,
\[\varphi_{5}(\mathbf{x}):=\theta_{b,d_{i}}-\frac{\alpha_{FOV}}{2}\geq 0, \tag{17}\]
where \(\theta_{b,d_{i}}\) is defined as,
\[\theta_{b,d_{i}}=\arccos{\left(\frac{\mathbf{p}_{i}-\mathbf{p}_{j}}{\|\mathbf{p}_{i}-\bm {p}_{j}\|_{2}}\cdot\hat{r}_{s}\right)}. \tag{18}\]
Figure 5: Sun keep out zone.
_Keep In Zone._ Each deputy shall not travel too far from the chief, such that it remains within a specified proximity. This constraint is defined as,
\[\varphi_{6}(\mathbf{x}):=r_{\max}-\|\mathbf{p}_{i}\|_{2}\geq 0, \tag{19}\]
where \(r_{\max}\) is the maximum relative distance.
_Passively Safe Maneuvers._ Each deputy shall not collide with the chief in the event of a fault or loss of power, where it may not be able to use its thrusters. That is, if \(\mathbf{u}=0\) for an extended period of time, \(\varphi_{1}(\mathbf{x})\) shall be enforced for the entire time period. The closed form solution to the Clohessy-Wiltshire equations can be used to determine \(\mathbf{p}_{i}\) for any point in time, where,
\[x(t)=(4-3\cos nt)x_{0}+\frac{\sin nt}{n}\dot{x}_{0}+\frac{2}{n}( 1-\cos nt)\dot{y}_{0},\] \[y(t)=6(\sin nt-nt)x_{0}+y_{0}-\frac{2}{n}(1-\cos nt)\dot{x}_{0}+ \frac{4\sin nt-3nt}{n}\dot{y}_{0}, \tag{20}\] \[z(t)=z_{0}\cos nt+\frac{\dot{z}_{0}}{n}\sin nt.\]
This trajectory is known as a Free Flight Trajectory (FFT). Letting \(\mathbf{p}_{i}=[x_{0},y_{0},z_{0}]^{T}\) and \(\mathbf{p}_{i}(t)=[x(t),y(t),z(t)]^{T}\), the constraint is defined as,
\[\varphi_{7}(\mathbf{x}):=\inf_{t\in[t_{0},t_{0}+T]}\|\mathbf{p}_{i}(t)\|_{2}-(r_{ \mathrm{d}}+r_{\mathrm{c}})\geq 0, \tag{21}\]
where \(T\) is the time period to evaluate over starting at \(t_{0}\). Note that safety is only guaranteed for all time if \(T=\infty\), but for practical implementation \(T\) is a finite value. Additionally, each deputy shall not collide with another deputy during a FFT. This constraint is defined as,
\[\varphi_{8}(\mathbf{x}):=\inf_{t\in[t_{0},t_{0}+T]}\|\mathbf{p}_{i}(t)-\mathbf{p}_{j}(t)\| _{2}-2r_{\mathrm{d}}\geq 0. \tag{22}\]
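In practice, the infimum in Eqs. (21)-(22) must be evaluated numerically; one simple option, sketched below, is to sample the closed-form free flight trajectory of Eq. (20) on a uniform time grid (the grid spacing is an assumption, as the paper does not state how the infimum is computed).

```python
import numpy as np

def fft_position(x0, t, n=0.001027):
    """Closed-form Clohessy-Wiltshire free-flight position at time t, Eq. (20)."""
    px, py, pz, vx, vy, vz = x0
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * px + (s / n) * vx + (2 / n) * (1 - c) * vy
    y = (6 * (s - n * t) * px + py - (2 / n) * (1 - c) * vx
         + ((4 * s - 3 * n * t) / n) * vy)
    z = pz * c + (vz / n) * s
    return np.array([x, y, z])

def phi_7(x0, r_d=5.0, r_c=5.0, T=500.0, dt=1.0):
    """Sampled version of Eq. (21): minimum chief distance along the FFT minus the collision radii."""
    dists = [np.linalg.norm(fft_position(x0, t)) for t in np.arange(0.0, T + dt, dt)]
    return min(dists) - (r_d + r_c)

x0 = np.array([50.0, 20.0, 0.0, -0.05, 0.02, 0.0])
print(phi_7(x0) >= 0)   # True if the unforced trajectory stays clear of the chief
```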
_Structural Damage Threshold._ Each deputy shall not maneuver aggressively with high velocities. This also ensures each deputy is moving slow enough to appropriately inspect the chief. This is defined in terms of three separate constraints,
\[\begin{split}\varphi_{9}(\mathbf{x}):=v_{\max}^{2}-\dot{x}_{i}^{2} \geq 0,\quad\varphi_{10}(\mathbf{x}):=v_{\max}^{2}-\dot{y}_{i}^{2}\geq 0,\\ \varphi_{11}(\mathbf{x}):=v_{\max}^{2}-\dot{z}_{i}^{2}\geq 0,\end{split} \tag{23}\]
where \(v_{\max}\) is the maximum allowable velocity. In addition, each deputy shall remain within the bounds of its actuation limits. This is enforced through the admissible control set \(\mathcal{U}\) such that \(\mathbf{u}_{i}\in[-u_{\max},u_{\max}]^{3}\).
_Fuel Limit._ The deputy shall adhere to a maximum cumulative fuel use limit, which is considered in terms of \(\Delta v\). This constraint is defined as,
\[\varphi_{12}(\mathbf{x}):=\Delta v_{\max}-\Delta v_{i}\geq 0, \tag{24}\]
where \(\Delta v_{\max}\) is the maximum \(\Delta v\) use, either for the mission or for the life of the spacecraft. At a given time \(t\), \(\Delta v_{i}\) is defined as,
\[\Delta v_{i}=\int_{t_{0}}^{t}\frac{F_{\mathrm{total}_{i}}}{m}\,dt, \tag{25}\]
where \(F_{\mathrm{total}}=|F_{x}|+|F_{y}|+|F_{z}|\) is the total thrust of the spacecraft.
_Values._ The values used for this problem are defined in Table 1.
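Several of the constraints above are direct to evaluate for a given deputy state; the sketch below implements \(\varphi_{1}\), \(\varphi_{3}\), \(\varphi_{4}\), and \(\varphi_{6}\) using the values of Table 1 (the helper names and the sample state are illustrative assumptions).

```python
import numpy as np

# Values from Table 1.
N = 0.001027                  # mean motion n [rad/s]
R_D, R_C = 5.0, 5.0           # collision radii [m]
NU_0, NU_1 = 0.2, 2 * N       # dynamic speed constraint parameters
R_MAX = 1000.0                # keep-in radius [m]
ALPHA_FOV = np.deg2rad(60.0)  # sensor field of view [rad]

def phi_1(p):
    """Safe separation from the chief, Eq. (12)."""
    return np.linalg.norm(p) - (R_D + R_C)

def phi_3(p, v):
    """Dynamic speed constraint, Eq. (14)."""
    return NU_0 + NU_1 * np.linalg.norm(p) - np.linalg.norm(v)

def phi_4(p, theta_s):
    """Sun keep-out zone, Eqs. (15)-(16), built from Eqs. (10)-(11)."""
    r_b = -p / np.linalg.norm(p)
    r_s = np.array([np.cos(theta_s), np.sin(theta_s), 0.0])
    theta_b = np.arccos(np.clip(r_b @ r_s, -1.0, 1.0))
    return theta_b - ALPHA_FOV / 2

def phi_6(p):
    """Keep-in zone, Eq. (19)."""
    return R_MAX - np.linalg.norm(p)

p = np.array([100.0, -50.0, 20.0])
v = np.array([0.10, 0.00, -0.05])
print(phi_1(p) >= 0, phi_3(p, v) >= 0, phi_4(p, 0.3) >= 0, phi_6(p) >= 0)
```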
**Control Invariant Constraints**
Due to the admissible control set \(\mathcal{U}\), the constraints that define \(\mathcal{C}_{\mathrm{A}}\) are not all suitable to explicitly define safety. Instead, control invariant safety constraints must be developed that define \(\mathcal{C}_{\mathrm{S}}\). These constraints are defined as follows. Note that the fuel limit is not considered to be part of \(\mathcal{C}_{\mathrm{S}}\) because fuel use is directly tied to actuation, so limiting fuel use would prevent the RTA from enforcing all other constraints. An alternate solution to enforcing the fuel limit will be discussed in the next section.
_Safe Separation._ To maintain safe separation, each deputy spacecraft must consider when to start slowing down to avoid a collision.[14] First, consider the projection of the deputy's velocity onto its position vector,
\[\mathbf{v}_{pr_{i}}=\frac{\langle\mathbf{v}_{i},\mathbf{p}_{i}\rangle}{\|\mathbf{p}_{i}\|_{2}}. \tag{26}\]
To avoid a collision, the deputy must slow \(\mathbf{v}_{pr_{i}}\) to zero. Therefore, the following constraint must be satisfied,
\[\|\mathbf{p}_{i}\|_{2}+\int_{t_{0}}^{t_{0}+T_{\rm b}}\left(\mathbf{v}_{pr_{i}}+a_{\rm max}t\right)dt\geq r_{\rm d}+r_{\rm c}, \tag{27}\]
where \(a_{\rm max}\) is the maximum acceleration of the deputy and \(T_{\rm b}=(0-\mathbf{v}_{pr_{i}})/a_{\rm max}\) is the time for \(\mathbf{v}_{pr_{i}}\) to reach zero. \(a_{\rm max}\) is found by considering the worst-case acceleration due to natural motion from the system dynamics in Eq. (8). This occurs when \(\mathbf{x}=[-\|\mathbf{r}_{\rm H}\|,0,0,-v_{\rm max},-v_{\rm max},-v_{\rm max}]^{T}\), which results in,
\[a_{\rm max}=\frac{u_{\rm max}}{m}-3n^{2}\|\mathbf{r}_{\rm H}\|-2nv_{\rm max}. \tag{28}\]
By computing the integral in Eq. (27) and noting that the constraint only needs to be enforced when the deputy is moving towards the chief (\(\mathbf{v}_{pr_{i}}<0\)), this becomes,
\[h_{1}(\mathbf{x}):=\sqrt{2a_{\rm max}[\|\mathbf{p}_{i}\|_{2}-(r_{\rm d}+r_{\rm c})]}+ \mathbf{v}_{pr_{i}}\geq 0. \tag{29}\]
Similarly, each deputy must consider when to slow down to avoid collision with another deputy. This constraint is defined as,
\[h_{2}(\mathbf{x}):=\sqrt{4a_{\rm max}(\|\mathbf{p}_{i}-\mathbf{p}_{j}\|_{2}-2r_{\rm d})}+ \mathbf{v}_{pr_{ij}}\geq 0, \tag{30}\]
where,
\[\mathbf{v}_{pr_{ij}}=\frac{\langle\mathbf{v}_{i}-\mathbf{v}_{j},\mathbf{p}_{i}-\mathbf{p}_{j} \rangle}{\|\mathbf{p}_{i}-\mathbf{p}_{j}\|_{2}}. \tag{31}\]
Note that in Eq. (30), \(a_{\rm max}\) is multiplied by 4 because both deputies can slow down to avoid a collision.
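A direct transcription of Eqs. (29)-(31) is shown below; the value of \(a_{\rm max}\) would come from Eq. (28), and the representative number used in the usage line is only an assumption.

```python
import numpy as np

def h_1(p, v, a_max, r_d=5.0, r_c=5.0):
    """Control-invariant chief-separation constraint, Eq. (29)."""
    v_pr = (v @ p) / np.linalg.norm(p)                 # projection of velocity onto position, Eq. (26)
    return np.sqrt(2.0 * a_max * (np.linalg.norm(p) - (r_d + r_c))) + v_pr

def h_2(p_i, v_i, p_j, v_j, a_max, r_d=5.0):
    """Control-invariant deputy-deputy constraint, Eqs. (30)-(31); the factor of 4
    reflects that both deputies can decelerate."""
    dp, dv = p_i - p_j, v_i - v_j
    v_pr = (dv @ dp) / np.linalg.norm(dp)
    return np.sqrt(4.0 * a_max * (np.linalg.norm(dp) - 2.0 * r_d)) + v_pr

# Example: a deputy 100 m from the chief, closing at 0.5 m/s, with an assumed a_max.
print(h_1(np.array([100.0, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0]), a_max=0.08) >= 0)
```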
_Dynamic Speed Constraint._ From Reference [13], it can be shown that \(\varphi_{3}\) is control invariant if \(u_{\rm max}\) adheres to the following limit,
\[\nu_{1}\sqrt{3}v_{\rm max}+3n^{2}\frac{\sqrt{3}v_{\rm max}-\nu_{0}}{\nu_{1}}+ 2nv_{\rm max}\leq\frac{u_{\rm max}}{m}. \tag{32}\]
This limit ensures that at the point of worst-case acceleration, \(u_{\rm max}\) is large enough to overcome the natural motion, thus preventing the deputy's speed from increasing and violating \(\varphi_{3}\). If the constraint holds at this point, then it will hold for all other points. Given the values from Table 1, the limit holds and therefore \(h_{3}=\varphi_{3}\).
| Parameter | Value |
| --- | --- |
| \(m\) | 12 kg |
| \(n\) | 0.001027 rad/s |
| \(r_{\rm d}\) | 5 m |
| \(r_{\rm c}\) | 5 m |
| \(\nu_{0}\) | 0.2 m/s |
| \(\nu_{1}\) | \(2n\) rad/s |
| \(\Delta v_{\rm max}\) | 20 m/s |
| \(\alpha_{FOV}\) | 60 degrees |
| \(r_{\rm max}\) | 1000 m |
| \(T\) | 500 s |
| \(v_{\rm max}\) | 1 m/s |
| \(u_{\rm max}\) | 1 N |

Table 1: Safety Constraint Values
_Keep Out Zone._ To maintain safe separation from the conical keep out zone, it is considered in terms of position rather than angles.[14] In this case, the unit vector of the cone, \(\hat{r}_{c}\), is assumed to align with the \(-\hat{r}_{s}\) vector and rotate in the \(x-y\) plane at the rate \(n\). The vector pointing from \(\mathbf{p}\) to its projection on \(\hat{r}_{c}\) is given by,
\[\mathbf{p}_{\hat{r}_{c}}=\mathbf{p}-\langle\mathbf{p},\hat{r}_{c}\rangle\hat{r}_{c}. \tag{33}\]
The projection of \(\mathbf{p}\) onto the cone is then,
\[\mathbf{p}_{c}=\mathbf{p}+\sin\theta\cos\theta\left(\|\mathbf{p}_{\hat{r}_{c}}\|_{2}\hat{r }_{c}+\frac{\langle\mathbf{p},\hat{r}_{c}\rangle\mathbf{p}_{\hat{r}_{c}}}{\|\mathbf{p}_{ \hat{r}_{c}}\|_{2}}\right)-\cos^{2}\theta\mathbf{p}_{\hat{r}_{c}}-\sin^{2}\theta \langle\mathbf{p},\hat{r}_{c}\rangle\hat{r}_{c}, \tag{34}\]
where \(\theta=\alpha_{FOV}/2\). This defines the closest position on the cone to the deputy. Similar to the safe separation constraint, the deputy must slow down to avoid entering the cone. This can be written as,
\[h_{KOZ}(\mathbf{x}):=\sqrt{2a_{\max}\|\mathbf{p}-\mathbf{p}_{c}\|_{2}}+\mathbf{v}_{pr,c}\geq 0, \tag{35}\]
where \(\mathbf{v}_{pr,c}=\langle\mathbf{v}-\mathbf{v}_{c},\mathbf{p}-\mathbf{p}_{c}\rangle/\|\mathbf{p}-\bm {p}_{c}\|_{2}\) and \(\mathbf{v}_{c}=[0,0,n]\times\mathbf{p}_{c}\). For the case of each deputy pointing towards the chief, \(h_{4}=h_{KOZ}\) where \(\mathbf{p}=\mathbf{p}_{i}\) and \(\mathbf{v}=\mathbf{v}_{i}\). For the multi-agent keep out zone, \(h_{5}=h_{KOZ}\) where \(\mathbf{p}=\mathbf{p}_{i}-\mathbf{p}_{j}\) and \(\mathbf{v}=\mathbf{v}_{i}-\mathbf{v}_{j}\) when \(\theta_{b,d_{i}}>\frac{\pi}{2}\), and otherwise \(\mathbf{p}=\mathbf{p}_{j}-\mathbf{p}_{i}\) and \(\mathbf{v}=\mathbf{v}_{j}-\mathbf{v}_{i}\). This sign change allows each deputy to recognize when one is pointing towards the Sun.
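For reference, Eqs. (33)-(35) can be transcribed as follows; the cone axis is taken as \(-\hat{r}_{s}\) as described above, and the state passed in is assumed to lie off the cone axis so that \(\|\mathbf{p}_{\hat{r}_{c}}\|_{2}\neq 0\).

```python
import numpy as np

def h_koz(p, v, theta_s, a_max, alpha_fov=np.deg2rad(60.0), n=0.001027):
    """Control-invariant Sun keep-out-zone constraint, Eqs. (33)-(35)."""
    theta = alpha_fov / 2.0
    r_c = -np.array([np.cos(theta_s), np.sin(theta_s), 0.0])   # cone axis, opposite the Sun
    p_rc = p - (p @ r_c) * r_c                                  # Eq. (33)
    p_c = (p                                                    # projection onto the cone, Eq. (34)
           + np.sin(theta) * np.cos(theta) * (np.linalg.norm(p_rc) * r_c
                                              + (p @ r_c) * p_rc / np.linalg.norm(p_rc))
           - np.cos(theta) ** 2 * p_rc
           - np.sin(theta) ** 2 * (p @ r_c) * r_c)
    v_c = np.cross(np.array([0.0, 0.0, n]), p_c)                # velocity of the cone point
    d = p - p_c
    v_pr_c = ((v - v_c) @ d) / np.linalg.norm(d)
    return np.sqrt(2.0 * a_max * np.linalg.norm(d)) + v_pr_c    # Eq. (35)

print(h_koz(np.array([200.0, 50.0, 30.0]), np.zeros(3), theta_s=0.0, a_max=0.08) >= 0)
```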
_Keep In Zone._ Again similar to the safe separation constraint, the deputy must slow down to avoid reaching \(r_{\max}\).[14] This can be written as,
\[h_{6}(\mathbf{x}):=\sqrt{2a_{\max}(r_{\max}-\|\mathbf{p}_{i}\|_{2})}-\mathbf{v}_{pr_{i}} \geq 0. \tag{36}\]
_Passively Safe Maneuvers._ This constraint was constructed in a way that already considers the trajectory of the deputy spacecraft. Because of this, safety is guaranteed for all time if \(T=\infty\), and therefore \(h_{7}=\varphi_{7}\) and \(h_{8}=\varphi_{8}\).
_Structural Damage Threshold._ From Reference [13], it can be shown that \(\varphi_{9}\), \(\varphi_{10}\), and \(\varphi_{11}\) are respectively control invariant if \(u_{\max}\) adheres to the following limits,
\[3n^{2}r_{\max}+2nv_{\max}\leq\frac{u_{\max}}{m},\quad 2nv_{\max}\leq\frac{u_{ \max}}{m},\quad n^{2}r_{\max}\leq\frac{u_{\max}}{m}. \tag{37}\]
These limits again ensure that at the point of worst-case acceleration, \(u_{\max}\) is large enough to overcome the natural motion, thus preventing the deputy's speed from increasing and violating the constraints. If the constraints hold at this point, then they will hold for all other points. Given the values from Table 1, the limits hold and therefore \(h_{9}=\varphi_{9}\), \(h_{10}=\varphi_{10}\), and \(h_{11}=\varphi_{11}\).
**Centralized vs. Decentralized RTA**
The control invariant constraints defined in the previous section can be enforced using ASIF in two different ways. First, the control of all deputies can be decentralized, where each deputy controls itself and uses a separate RTA filter. Each deputy has knowledge of the state of all other deputies, but they do not have knowledge of the control of all others. Decentralized RTA is beneficial because it allows each agent to operate independently. Second, the control of all deputies can be centralized, where all deputies are controlled by a central controller and there is then only one RTA filter. This central controller has knowledge of the state and control of all deputies. Centralized RTA is beneficial because it allows all constraints for all agents to be considered and enforced at the same time.
**Switching-Based Fuel Limit**
If a deputy violates the fuel limit, the RTA should guide the deputy spacecraft to a safe state where no fuel can be used. In this case, closed elliptical Natural Motion Trajectories (eNMT)[20] are used because they are
"parking orbits" where the deputy can orbit the chief without using any fuel. Closed eNMTs centered at the origin of Hill's frame satisfy the following,
\[\dot{y}(0)=-2nx(0),\quad\dot{x}(0)=\frac{n}{2}y(0). \tag{38}\]
While not all of the constraints in the previous section can be enforced on an eNMT, the deputy will at minimum be able to maintain safe separation from the chief. To enforce the fuel limit, the RTA uses a switching-based approach that switches control to a backup controller when the constraint is violated. This is sometimes referred to as a _latched_ RTA approach, as the system does not switch back and forth between the primary and backup controllers, but rather remains latched to the backup controller until a specified condition is met. The switching filter used for this analysis can be defined as follows.
**Switching Filter**
\[\mathbf{u}_{\rm act}(\mathbf{x})=\begin{cases}\mathbf{u}_{\rm des}(\mathbf{x})&\mathrm{if} \quad\varphi_{12}(\phi^{\mathbf{u}_{\rm b}}(t,\mathbf{x}))\geq 0,\quad\forall t\in[t_{0},t_ {0}+T]\\ \mathbf{u}_{\rm b}(\mathbf{x})&\mathrm{if}\quad otherwise\end{cases} \tag{39}\]
Here, \(\phi^{\mathbf{u}_{\rm b}}\) represents a prediction of the state \(\mathbf{x}\) at time \(t\) under the backup control law \(\mathbf{u}_{\rm b}\). Note that for practical implementation, this trajectory is only simulated for a finite time period \(T\). The backup controller used for this analysis is an LQR tracking controller that guides the deputy to the nearest eNMT. The backup control law is,
\[\mathbf{u}_{\rm b}=-K_{1}\mathbf{e}-K_{2}\mathbf{z}, \tag{40}\]
where \(K_{1}\) and \(K_{2}\) are LQR gains, \(\mathbf{e}=\mathbf{x}-\mathbf{x}_{\rm des}\) is the error between the current and desired states, \(\mathbf{z}\) tracks the integral of the error over time, and \(\mathbf{x}_{\rm des}\) is defined as,
\[\mathbf{x}_{\rm des}=[x,y,z,ny/2,-2nx,\dot{z}]^{T}. \tag{41}\]
Note that a traditional LQR controller cannot be used because \(\mathbf{x}_{\rm des}\) is not a stationary point.
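The following sketch combines the eNMT target state of Eq. (41) with the switching logic of Eq. (39); `backup_control` and `propagate` stand in for the LQR tracker of Eq. (40) and a one-step integrator of Eq. (8), and the simple forward-Euler bookkeeping of \(\Delta v\) is an assumption.

```python
import numpy as np

def enmt_target(x, n=0.001027):
    """Desired state on a closed eNMT, Eq. (41), obtained by imposing Eq. (38)."""
    px, py, pz, vx, vy, vz = x
    return np.array([px, py, pz, n * py / 2.0, -2.0 * n * px, vz])

def switching_filter(x, dv_used, u_des, backup_control, propagate,
                     dv_max=20.0, m=12.0, T=500.0, dt=1.0):
    """Sketch of Eq. (39): keep u_des only if the backup trajectory started at x
    satisfies the fuel limit phi_12 over the next T seconds; otherwise latch onto
    the backup controller."""
    xk, dv = np.array(x, dtype=float), dv_used
    for _ in range(int(T / dt)):
        u_b = backup_control(xk)
        dv += dt * np.sum(np.abs(u_b)) / m        # Delta-v accumulation, Eq. (25)
        if dv > dv_max:                           # phi_12 would be violated
            return backup_control(x)
        xk = propagate(xk, u_b, dt)
    return u_des
```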
## Results
This section presents and discusses simulation results of the centralized and decentralized ASIF RTA, Monte Carlo simulations, and simulation results of the switching-based fuel limit RTA.
### Simulation Results
To evaluate the ability of the RTA to enforce all constraints, a simulation of the inspection task with centralized RTA is shown in Figure 6. In this Figure, safe regions are shaded green and unsafe regions are shaded red. The primary controller for this simulation is an aggressive LQR controller designed to violate the safety constraints. Note that the purpose of this primary controller is not to complete the inspection task, but rather to show that RTA assures safety of all constraints. The simulation is run with 5 deputies for 3,000 seconds, where for the first 1,000 seconds the primary controller attempts to move each deputy to the origin, and for the last 2,000 seconds the primary controller attempts to move each deputy to a distance greater than \(r_{\rm max}\). Figure 6 shows that the centralized RTA assures safety of each constraint simultaneously.
The same simulation was run again with decentralized RTA, where the results are shown in Figure 7. In this case, the decentralized RTA is also able to assure safety of each constraint simultaneously using a slightly different behavior.
Figure 6: Simulation results for centralized ASIF RTA.
Figure 7: Simulation results for decentralized ASIF RTA.
### Monte Carlo
In order for human operators to trust an RTA filter, it must be shown to assure safety for all possible scenarios. One way to achieve this is through the use of Monte Carlo analysis, which runs a large number of simulations to cover a substantial amount of the state space. While not every state can be tested, this does allow the designer to verify that RTA can assure safety for almost any state. For this analysis, 2,000 simulations with 5 deputies were run for 500 seconds each. No primary control input was used during these simulations. Latin hypercube sampling was used to compute initial conditions, where initial conditions that violated any safety constraint were removed and resampled. The same set of 2,000 initial conditions was used to simulate the centralized and decentralized RTA filters.
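A minimal version of this sampling procedure is sketched below using `scipy.stats.qmc.LatinHypercube`; the state bounds and the simple safety predicate are illustrative assumptions, since the paper does not list the sampling ranges.

```python
import numpy as np
from scipy.stats import qmc

def sample_initial_conditions(n_samples, is_safe, seed=0):
    """Latin hypercube sampling of deputy states, resampling draws that violate
    the given safety predicate."""
    lo = np.array([-800.0, -800.0, -800.0, -0.5, -0.5, -0.5])  # assumed bounds
    hi = -lo
    sampler = qmc.LatinHypercube(d=6, seed=seed)
    accepted = []
    while len(accepted) < n_samples:
        raw = sampler.random(n=n_samples - len(accepted))
        for x in qmc.scale(raw, lo, hi):
            if is_safe(x):
                accepted.append(x)
    return np.array(accepted)

# Example predicate: outside the collision radius and below the speed limit.
safe = lambda x: np.linalg.norm(x[:3]) > 10.0 and np.all(np.abs(x[3:]) < 1.0)
print(sample_initial_conditions(2000, safe).shape)   # (2000, 6)
```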
Figure 8 shows the results of the Monte Carlo simulations for both centralized and decentralized RTA, where failure refers to any initial condition where RTA failed to assure safety for at least one deputy or the QP failed to find a safe control input that satisfied all constraints. Overall, the centralized RTA was able to assure safety of all constraints for 100% of the points, while the decentralized RTA was able to assure safety of all constraints for 90.95% of the points.
The points where the decentralized RTA failed were all due to the multi-agent keep out zone, where each deputy shall not align its sensor with the Sun if it were to point its sensor at another deputy. These failures were due to the design of the constraint, where each deputy is assumed to accelerate at the maximum rate to avoid entering the exclusion zone. In the event that more than one deputy is attempting to adhere to conflicting constraints at the same time, it may not be able to accelerate at the maximum rate. For the case of decentralized RTA, the other deputies are not aware of the conflicting constraints, causing the RTA to eventually fail. For the case of centralized RTA, the centralized controller is aware of all constraints, and can adjust the control of each deputy to avoid this scenario. Multiple failure mitigation strategies can be used for decentralized RTA, including adjusting the strengthening function \(\alpha(x)\) or removing the multi-agent keep out zone constraint when necessary. The Monte Carlo simulation was run again using the same 2,000 initial conditions, but removing the multi-agent keep out zone, where the decentralized RTA was then able to assure safety of all constraints for 100% of the points.
Figure 8: Monte Carlo simulation results. Note that all centralized RTA test cases were successful so there are no blue or yellow stars in the figures. 90.95% of decentralized cases were successful (both succeeded, in cyan), leaving 9.05% of stars red.
**Switching-Based Fuel Limit**
To evaluate the ability of the switching-based RTA to enforce the fuel limit, another simulation of the inspection task is shown in Figure 9 with only one deputy. The primary controller for this simulation is an aggressive LQR controller designed to use as much fuel as possible. The simulation is run for 7,000 seconds.
The trajectory of the deputy, shown in Figure 9(a), maneuvers to an eNMT around the chief before the fuel limit is violated, where the position of the deputy changes from cyan to magenta. Figure 9(b) shows the fuel use constraint, and Figure 9(c) shows the control limits. It can be seen that before the total \(\Delta v\) exceeds the limit, the system switches to the backup controller, which guides the deputy to the nearest eNMT, where the deputy orbits the chief and \(\Delta v\) does not increase for the remainder of the simulation.
Figure 9: Simulation results with switching-based fuel limit.
## Conclusion
This paper developed several safety constraints for an autonomous spacecraft inspection problem. Most of the constraints were enforced using an ASIF RTA approach, which is an optimization-based approach that is minimally invasive towards the primary controller, where individual simulations were used to show that the ASIF RTA is able to enforce all constraints simultaneously. A comparison was made between centralized and decentralized control, where Monte Carlo analysis was used to show that centralized RTA was able to assure safety for all test points, and decentralized RTA was able to assure safety for most test points. Failure mitigation strategies were also presented, including modifying the constraint strengthening function and removing low priority constraints. While ASIF is a beneficial approach that is able to enforce most constraints, a scenario was also presented where a switching-based RTA approach more effectively enforced a fuel constraint. This analysis shows that as the system and constraint complexity increases, RTA methods can be combined to achieve the best safety assurance technique.
|
2303.12773 | The Complexity of Why-Provenance for Datalog Queries | Explaining why a database query result is obtained is an essential task
towards the goal of Explainable AI, especially nowadays where expressive
database query languages such as Datalog play a critical role in the
development of ontology-based applications. A standard way of explaining a
query result is the so-called why-provenance, which essentially provides
information about the witnesses to a query result in the form of subsets of the
input database that are sufficient to derive that result. To our surprise,
despite the fact that the notion of why-provenance for Datalog queries has been
around for decades and intensively studied, its computational complexity
remains unexplored. The goal of this work is to fill this apparent gap in the
why-provenance literature. Towards this end, we pinpoint the data complexity of
why-provenance for Datalog queries and key subclasses thereof. The takeaway of
our work is that why-provenance for recursive queries, even if the recursion is
limited to be linear, is an intractable problem, whereas for non-recursive
queries is highly tractable. Having said that, we experimentally confirm, by
exploiting SAT solvers, that making why-provenance for (recursive) Datalog
queries work in practice is not an unrealistic goal. | Marco Calautti, Ester Livshits, Andreas Pieris, Markus Schneider | 2023-03-22T17:37:39Z | http://arxiv.org/abs/2303.12773v1 | # The Complexity of Why-Provenance for Datalog Queries
###### Abstract
Explaining why a database query result is obtained is an essential task towards the goal of Explainable AI, especially nowadays where expressive database query languages such as Datalog play a critical role in the development of ontology-based applications. A standard way of explaining a query result is the so-called why-provenance, which essentially provides information about the witnesses to a query result in the form of subsets of the input database that are sufficient to derive that result. To our surprise, despite the fact that the notion of why-provenance for Datalog queries has been around for decades and intensively studied, its computational complexity remains unexplored. The goal of this work is to fill this apparent gap in the why-provenance literature. Towards this end, we pinpoint the data complexity of why-provenance for Datalog queries and key subclasses thereof. The takeaway of our work is that why-provenance for recursive queries, even if the recursion is limited to be linear, is an intractable problem, whereas for non-recursive queries is highly tractable. Having said that, we experimentally confirm, by exploiting SAT solvers, that making why-provenance for (recursive) Datalog queries work in practice is not an unrealistic goal.
## 1 Introduction
Datalog has emerged in the 1980s as a logic-based query language from Logic Programming and has been extensively studied since then [1]. The name Datalog reflects the intention of devising a counterpart of Prolog for data processing. It essentially extends the language of unions of conjunctive queries, which corresponds to the select-project-join-union fragment of relational algebra, with the important feature of recursion, much needed to express some natural queries. Among numerous applications, Datalog has been heavily used in the context of ontological query answering. In particular, for several important ontology languages based on description logics and existential rules, ontological query answering can be reduced to the problem of evaluating a Datalog query (see, e.g., [1, 1]), which in turn enables the exploitation of efficient Datalog engines such as DLV [1] and Clingo [1].
As for any other query language, explaining why a result to a Datalog query is obtained is crucial towards explainable and transparent data-intensive applications. A standard way for providing such explanations to query answers is the so-called _why-provenance_[1]. Its essence is to collect all the subsets of the input database that are sufficient to derive a certain answer. More precisely, in the case of Datalog queries, the why-provenance of an answer tuple \(\bar{t}\) is obtained by considering all the possible proof trees \(T\) of the fact \(\operatorname{Ans}(\bar{t})\), with \(\operatorname{Ans}\) being the answer predicate of the Datalog query in question, and then collecting all the database facts that label the leaves of \(T\). Recall that a proof tree of a fact \(\alpha\) w.r.t. a database \(D\) and a set \(\Sigma\) of Datalog rules forms a tree-like representation of a way for deriving \(\alpha\) by starting from \(D\) and executing the rules occurring in \(\Sigma\)[1].
There are recent works that studied the concept of why-provenance for Datalog queries. In particular, there are theoretical studies on computing the why-provenance [1, 1], attempts to under-approximate the why-provenance towards an efficient computation [1], studies on the restricted setting of non-recursive Datalog queries [1], attempts to compute the why-provenance by transforming the grounded Datalog rules to a system of equations [1], and attempts to compute the why-provenance on demand via transformations to existential rules [1].
Despite the above research activity on the concept of why-provenance for Datalog queries, to our surprise, there is still a fundamental question that remains unexplored:
_Main Research Question: What is the exact computational complexity of why-provenance for Datalog queries?_
The goal of this work is to provide an answer to the above question. To this end, for a Datalog query \(Q\), we study the complexity of the following algorithmic problem, dubbed Why-Provenance\([Q]\): given a database \(D\), an answer \(\bar{t}\) to \(Q\) over \(D\), and a subset \(D^{\prime}\) of \(D\), is it the case that \(D^{\prime}\) belongs to the why-provenance of \(\bar{t}\) w.r.t. \(D\) and \(Q^{\prime}\)? Pinpointing the complexity of the above decision problem will let us understand the inherent complexity of why-provenance for Datalog queries w.r.t. the size of the database, which is precisely what matters when using why-provenance in practice.
**Our Contribution.** The takeaway of our complexity analysis is that explaining Datalog queries via why-provenance is,
in general, an intractable problem. In particular, for a Datalog query \(Q\), we show that \(\mathsf{Why\mbox{-}Provenance}[Q]\) is in NP, and there are queries for which it is NP-hard. We further analyze the complexity of the problem when \(Q\) is linear (i.e., the recursion is restricted to be linear) or non-recursive, with the aim of clarifying whether the feature of recursion affects the inherent complexity of why-provenance. We show that restricting the recursion to be linear does not affect the complexity, namely the problem is in NP and for some queries it is even NP-hard. However, completely removing the recursion significantly reduces the complexity; in particular, we prove that the problem is in \(\mathrm{AC}_{0}\).
It is clear that the notion of why-provenance for Datalog queries, and hence the problem \(\mathsf{Why\mbox{-}Provenance}[Q]\), heavily rely on the notion of proof tree. However, as already discussed in the literature (see, e.g., the recent work [1]), there are proof trees that are counterintuitive since they represent unnatural derivations (e.g., a fact is used to derive itself, or a fact is derived in several different ways), and this also affects the why-provenance. With the aim of overcoming this conceptual limitation of proof trees, we propose the class of unambiguous proof trees. All occurrences of a fact in such a proof tree must be proved via the same derivation. We then study the problem \(\mathsf{Why\mbox{-}Provenance}[Q]\) focusing on unambiguous proof trees, and show that its complexity remains the same. This should be perceived as a positive outcome as we can overcome the limitation of arbitrary proof trees without increasing the complexity.
We finally verify that unambiguous proof trees, apart from their conceptual advantage, also help to exploit off-the-shelf SAT solvers towards an efficient computation of the why-provenance for Datalog queries. In particular, we discuss a proof-of-concept implementation that exploits the state-of-the-art SAT solver Glucose (see, e.g., [1]), and present encouraging results based on queries and databases that are coming from the Datalog literature.
_An extended version with further details, as well as the experimental scenarios and the source code, can be found at [https://gitlab.com/mcalautti/datalog-why-provenance._](https://gitlab.com/mcalautti/datalog-why-provenance._)
## 2 Preliminaries
We consider the disjoint countably infinite sets \(\mathbf{C}\) and \(\mathbf{V}\) of _constants_ and _variables_, respectively. We may refer to constants and variables as _terms_. For brevity, given an integer \(n>0\), we may write \([n]\) for the set of integers \(\{1,\ldots,n\}\).
**Relational Databases.** A _schema_\(\mathbf{S}\) is a finite set of relation names (or predicates) with associated arity. We write \(R/n\) to say that \(R\) has arity \(n\geq 0\); we may also write \(\mathsf{ar}(R)\) for \(n\). A _(relational) atom_\(\alpha\) over \(\mathbf{S}\) is an expression of the form \(R(\bar{t})\), where \(R/n\in\mathbf{S}\) and \(\bar{t}\) is an \(n\)-tuple of terms. By abuse of notation, we may treat tuples as the _set_ of their elements. A _fact_ is an atom that mentions only constants. A _database_ over \(\mathbf{S}\) is a finite set of facts over \(\mathbf{S}\). The _active domain_ of a database \(D\), denoted \(\mathsf{dom}(D)\), is the set of constants in \(D\).
**Syntax and Semantics of Datalog Programs.** A _(Datalog) rule_\(\sigma\) over a schema \(\mathbf{S}\) is an expression of the form
\[R_{0}(\bar{x}_{0})\mathbin{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}-R_ {1}(\bar{x}_{1}),\ldots,R_{n}(\bar{x}_{n})\]
for \(n\geq 1\), where \(R_{i}(\bar{x}_{i})\) is a (constant-free) relational atom over \(\mathbf{S}\) for \(i\in\{0,\ldots,n\}\), and each variable in \(\bar{x}_{0}\) occurs in \(\bar{x}_{k}\) for some \(k\in[n]\). We refer to \(R_{0}(\bar{x}_{0})\) as the _head_ of \(\sigma\), denoted \(\mathsf{head}(\sigma)\), and to the expression that appears on the right of the \(\mathbin{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}-\) symbol as the _body_ of \(\sigma\), denoted \(\mathsf{body}(\sigma)\), which we may treat as the set of its atoms.
A _Datalog program_ over a schema \(\mathbf{S}\) is defined as a finite set \(\Sigma\) of Datalog rules over \(\mathbf{S}\). A predicate \(R\) occurring in \(\Sigma\) is called _extensional_ if there is no rule in \(\Sigma\) having \(R\) in its head, and _intensional_ if there exists at least one rule in \(\Sigma\) with \(R\) in its head. The _extensional (database) schema_ of \(\Sigma\), denoted \(\mathsf{edb}(\Sigma)\), is the set of all extensional predicates in \(\Sigma\), while the _intensional schema_ of \(\Sigma\), denoted \(\mathsf{idb}(\Sigma)\), is the set of all intensional predicates in \(\Sigma\). The _schema_ of \(\Sigma\), denoted \(\mathsf{sch}(\Sigma)\), is the set \(\mathsf{edb}(\Sigma)\cup\mathsf{idb}(\Sigma)\), which is in general a subset of \(\mathbf{S}\) since some predicates of \(\mathbf{S}\) may not appear in \(\Sigma\).
There are interesting fragments of Datalog programs that somehow limit the recursion and have been extensively studied in the literature. A Datalog program \(\Sigma\) is called _linear_ if, for each rule \(\sigma\in\Sigma\), there exists at most one atom in \(\mathsf{body}(\sigma)\) over \(\mathsf{idb}(\Sigma)\), namely \(\mathsf{body}(\sigma)\) mentions at most one intensional predicate. Roughly, linear Datalog programs can have only linear recursion. Another key fragment is the one that completely forbids recursion. A Datalog program \(\Sigma\) is called _non-recursive_ if its predicate graph, which encodes how the predicates of \(\mathsf{sch}(\Sigma)\) depend on each other, is acyclic. Recall that the nodes of the predicate graph of \(\Sigma\) are the predicates of \(\mathsf{sch}(\Sigma)\), and there is an edge from \(R\) to \(P\) if there is a rule of the form \(P(\bar{x})\mathbin{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}-\ldots,R( \bar{y}),\ldots\) in \(\Sigma\).
An elegant property of Datalog programs is that they have three equivalent semantics: model-theoretic, fixpoint, and proof-theoretic [1]. We proceed to recall the proof-theoretic semantics of Datalog programs since it is closer to the notion of why-provenance. To this end, we need the key notion of proof tree of a fact, which will anyway play a crucial role in our work. For a database \(D\) and a Datalog program \(\Sigma\), let \(\mathsf{base}(D,\Sigma)=\{R(\bar{t})\mid R\in\mathsf{sch}(\Sigma)\mbox{ and }\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\}\), the set of all facts that can be formed using predicates of \(\mathsf{sch}(\Sigma)\) and terms of \(\mathsf{dom}(D)\).
**Definition 1** (Proof Tree).: _Consider a Datalog program \(\Sigma\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\). A proof tree of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) is a finite labeled rooted tree \(T=(V,E,\lambda)\), with \(\lambda:V\to\mathsf{base}(D,\Sigma)\), such that:_
1. _If_ \(v\in V\) _is the root, then_ \(\lambda(v)=\alpha\)_._
2. _If_ \(v\in V\) _is a leaf, then_ \(\lambda(v)\in D\)_._
3. _If_ \(v\in V\) _is a node with_ \(n\geq 1\) _children_ \(u_{1},\ldots,u_{n}\)_, then there is a rule_ \(R_{0}(\bar{x}_{0})\mathbin{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}-R_{1}(\bar{x}_{1}),\ldots,R_{n}(\bar{x}_{n})\in\Sigma\) _and a function_ \(h:\bigcup_{i\in[n]}\bar{x}_{i}\to\mathbf{C}\) _such that_ \(\lambda(v)=R_{0}(h(\bar{x}_{0}))\)_, and_ \(\lambda(u_{i})=R_{i}(h(\bar{x}_{i}))\) _for each_ \(i\in[n]\)_._
Essentially, a proof tree of a fact \(\alpha\) w.r.t. \(D\) and \(\Sigma\) indicates that we can prove \(\alpha\) using \(D\) and \(\Sigma\), that is, we can derive \(\alpha\) starting from \(D\) end executing the rules of \(\Sigma\). An example, which will also serve as a running example throughout the paper, that illustrates the notion of proof tree follows.
**Example 1**.: _Consider the Datalog program \(\Sigma\) consisting of_
\[A(x)\ \text{:-}\ S(x)\] \[A(x)\ \text{:-}\ A(y),A(z),T(y,z,x)\]
_that encodes the path accessibility problem Cook (1974). The predicate \(S\) represents source nodes, \(A\) represents nodes that are accessible from the source nodes, and \(T\) represents accessibility conditions, that is, \(T(y,z,x)\) means that if both \(y\) and \(z\) are accessible from the source nodes, then so is \(x\). We further consider the database_
\[D\ =\ \{S(a),T(a,a,b),T(a,a,c),T(a,a,d),T(b,c,a)\}.\]
_A simple proof tree of the fact \(A(d)\) w.r.t. \(D\) and \(\Sigma\) has the root labeled \(A(d)\) with children labeled \(A(a)\), \(A(a)\), and \(T(a,a,d)\) (via the second rule), where each node labeled \(A(a)\) has a single child labeled \(S(a)\) (via the first rule)._
_Another, slightly more complex, proof tree of \(A(d)\) w.r.t. \(D\) and \(\Sigma\) instead derives one of the nodes labeled \(A(a)\) via the recursive rule from \(A(b)\), \(A(c)\), and \(T(b,c,a)\), with \(A(b)\) and \(A(c)\) in turn derived from \(A(a)\), \(A(a)\), and \(T(a,a,b)\) (respectively \(T(a,a,c)\)), so that every fact of \(D\) labels some leaf._
_Note that the above are only two out of the many proof trees of \(A(d)\) w.r.t. \(D\) and \(\Sigma\). In fact, there exist infinitely many as one can build larger and larger such proof trees: whenever we encounter a node labeled by \(A(a)\), we can choose to apply the recursive rule instead of the rule \(A(x)\ \coloneqq\ S(x)\)._
Now, given a Datalog program \(\Sigma\) and a database \(D\) over \(\operatorname{sch}(\Sigma)\), the _semantics of \(\Sigma\) on \(D\)_, denoted \(\Sigma(D)\), is the set
\[\Sigma(D)\ =\ \{\alpha\ |\ \text{ there is a proof tree of $\alpha$ w.r.t. $D$ and $\Sigma$}\},\]
that is, the set of facts that can be proven using \(D\) and \(\Sigma\).
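For intuition, \(\Sigma(D)\) for the program and database of Example 1 can be computed bottom-up with a naive fixpoint procedure; the Python sketch below hard-codes the two rules of that program and is only an illustration, not the proof-theoretic machinery used in the sequel.

```python
def naive_eval(database):
    """Naive bottom-up evaluation of the Example 1 program:
    A(x) :- S(x)   and   A(x) :- A(y), A(z), T(y,z,x)."""
    facts = set(database)
    while True:
        new = set()
        for (pred, args) in facts:
            if pred == "S":                              # first rule
                new.add(("A", args))
        for (p1, a1) in facts:
            for (p2, a2) in facts:
                for (p3, a3) in facts:                   # second (recursive) rule
                    if (p1 == p2 == "A" and p3 == "T"
                            and a3[0] == a1[0] and a3[1] == a2[0]):
                        new.add(("A", (a3[2],)))
        if new <= facts:
            return facts
        facts |= new

D = {("S", ("a",)), ("T", ("a", "a", "b")), ("T", ("a", "a", "c")),
     ("T", ("a", "a", "d")), ("T", ("b", "c", "a"))}
print(sorted(f for f in naive_eval(D) if f[0] == "A"))
# -> A(a), A(b), A(c), A(d) are all provable from D and Sigma
```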
**Datalog Queries.** Having the syntax and the semantics of Datalog programs in place, it is now straightforward to recall the syntax and the semantics of Datalog queries. A _Datalog query_ is a pair \(Q=(\Sigma,R)\), where \(\Sigma\) is a Datalog program and \(R\) a predicate of \(\operatorname{idb}(\Sigma)\). We further call \(Q\)_linear_ (resp., _non-recursive_) if the program \(\Sigma\) is linear (resp., non-recursive). Now, for a database \(D\) over \(\operatorname{edb}(\Sigma)\), the _answer_ to \(Q\) over \(D\) is defined as the set of tuples
\[Q(D)\ =\ \{\bar{t}\in\mathsf{dom}(D)^{\boldsymbol{\mathrm{ar}}(R)}\mid R(\bar{t })\in\Sigma(D)\},\]
i.e., the tuples \(\bar{t}\) such that the fact \(R(\bar{t})\) can be proven using \(D\) and \(\Sigma\). The class that collects all the Datalog queries is denoted \(\mathsf{Dat}\). We also write \(\mathsf{LDat}\) and \(\mathsf{NRDat}\) for the classes of linear and non-recursive Datalog queries, respectively.
## 3 Why-Provenance for Datalog Queries
As already discussed in the Introduction, why-provenance is a standard way of explaining why a query result is obtained. It essentially collects all the subsets of the database (without unnecessary atoms) that allow us to prove (or derive) a query result. We proceed to formalize this simple idea, and then introduce the main problem of interest.
Given a proof tree \(T=(V,E,\lambda)\) (of some fact w.r.t. some database and Datalog program), the _support_ of \(T\) is the set
\[\mathsf{support}(T)\ =\ \{\lambda(v)\mid v\in V\text{ is a leaf of $T$}\}\,,\]
which is essentially the set of facts that label the leaves of the proof tree \(T\). Note that \(\mathsf{support}(T)\) is a subset of the underlying database since, by definition, the leaves of a proof tree are labeled with database atoms. The formal definition of why-provenance for Datalog queries follows.
**Definition 2** (Why-Provenance for Datalog).: _Consider a Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\operatorname{edb}(\Sigma)\), and a tuple \(\bar{t}\in\mathsf{dom}(D)^{\boldsymbol{\mathrm{ar}}(R)}\). The why-provenance of \(\bar{t}\) w.r.t. \(D\) and \(Q\) is defined as the family of sets of facts_
\[\{\mathsf{support}(T)\mid T\text{ is a proof tree of $R(\bar{t})$ w.r.t. $D$ and $\Sigma$}\}\]
_which we denote by \(\mathsf{why}(\bar{t},D,Q)\)._
Intuitively speaking, a set of facts \(D^{\prime}\subseteq D\) that belongs to \(\mathsf{why}(\bar{t},D,Q)\) should be understood as a "real" reason why the tuple \(\bar{t}\) is an answer to the query \(Q\) over the database \(D\), i.e., \(D^{\prime}\) explains why \(\bar{t}\in Q(D)\). By "real" we mean that all the facts of \(D^{\prime}\) are really used in order to derive the tuple \(\bar{t}\) as an answer. Here is a simple example of why-provenance.
**Example 2**.: _Let \(Q=(\Sigma,A)\), where \(\Sigma\) is the program that encodes the path accessibility problem as in Example 1, and let \(D\) be the database from Example 1. It can be verified that the why-provenance of the unary tuple \((d)\) w.r.t. \(D\) and \(Q\) consists of \(\{S(a),T(a,a,d)\}\) and the database \(D\) itself. The former set is actually the support of the first proof tree given in Example 1, while \(D\) is the support of the second proof tree. Recall that \(A(d)\) has infinitely many proof trees w.r.t. \(D\) and \(\Sigma\), whereas \(\mathsf{why}((d),D,Q)\) contains only two sets. Thus, in general, there is no 1-1 correspondence between proof trees of a fact \(R(\bar{t})\) and members of the why-provenance of \(\bar{t}\)._
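The two members of \(\mathsf{why}((d),D,Q)\) from Example 2 can be recovered by brute force, enumerating the supports of proof trees of \(A(d)\) up to a fixed depth; the sketch below hard-codes the rules of Example 1, and the depth bound of 4 (which happens to suffice for this database) is an assumption that does not generalize, in line with the hardness results of the next section.

```python
# Database and active domain of Example 1.
D = {("S", ("a",)), ("T", ("a", "a", "b")), ("T", ("a", "a", "c")),
     ("T", ("a", "a", "d")), ("T", ("b", "c", "a"))}
DOM = {"a", "b", "c", "d"}

def supports(fact, depth):
    """Yield the leaf-sets (supports) of proof trees of `fact`, up to `depth`."""
    if fact in D:
        yield frozenset([fact])
    if depth == 0 or fact[0] != "A":
        return
    (x,) = fact[1]
    if ("S", (x,)) in D:                                  # rule A(x) :- S(x)
        yield frozenset([("S", (x,))])
    for y in DOM:
        for z in DOM:
            t = ("T", (y, z, x))
            if t in D:                                    # rule A(x) :- A(y), A(z), T(y,z,x)
                for s1 in supports(("A", (y,)), depth - 1):
                    for s2 in supports(("A", (z,)), depth - 1):
                        yield s1 | s2 | {t}

why_d = set(supports(("A", ("d",)), depth=4))
print(frozenset({("S", ("a",)), ("T", ("a", "a", "d"))}) in why_d)   # True
print(frozenset(D) in why_d)                                         # True, cf. Example 2
print(len(why_d))                                                    # 2
```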
We would like to pinpoint the inherent complexity of the problem of computing the why-provenance of a tuple w.r.t. a database and a Datalog query. To this end, we need to study the complexity of recognizing whether a certain subset of the database belongs to the why-provenance, that is, whether a candidate explanation is indeed an explanation. This leads to the following algorithmic problem parameterized by a class \(\mathsf{C}\) of Datalog queries; \(\mathsf{C}\) can be, e.g., \(\mathsf{Dat}\), \(\mathsf{LDat}\), or \(\mathsf{NRDat}\):
\[\begin{array}{ll}\boxed{\mathsf{PROBLEM}:}&\mathsf{Why-Provenance[C]}\\ \text{INPUT}:&\text{A Datalog query $Q=(\Sigma,R)$ from $\mathsf{C}$,}\\ &\text{a database $D$ over $\operatorname{edb}(\Sigma)$,}\\ &\text{a tuple $\bar{t}\in\mathsf{dom}(D)^{\boldsymbol{\mathrm{ar}}(R)}$, and $D^{\prime}\subseteq D$.}\\ \text{QUESTION}:&\text{Does $D^{\prime}\in\mathsf{why}(\bar{t},D,Q)$?}\end{array}\]
Our goal is to study the above problem and pinpoint its complexity. We are actually interested in the _data complexity_ of \(\mathsf{Why-Provenance[C]}\), where the query \(Q\) is fixed, and only the database \(D\), the tuple \(\bar{t}\), and \(D^{\prime}\) are part of the input, i.e., for each \(Q=(\Sigma,R)\) from \(\mathsf{C}\), we consider the problem:
\[\begin{array}{ll}\boxed{\mathsf{PROBLEM}:}&\mathsf{Why\mbox{-}Provenance}[Q]\\ \text{INPUT}:&\text{A database $D$ over $\mathsf{edb}(\Sigma)$,}\\ &\text{a tuple $\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}$, and $D^{\prime}\subseteq D$.}\\ \text{QUESTION}:&\text{Does $D^{\prime}\in\mathsf{why}(\bar{t},D,Q)$?}\end{array}\]
By the typical convention, the problem \(\mathsf{Why-Provenance[C]}\) is in a certain complexity class \(C\) in data complexity if, for every query \(Q\) from \(\mathsf{C}\), \(\mathsf{Why-Provenance[Q]}\) is in \(C\). On the other hand, \(\mathsf{Why-Provenance[C]}\) is hard for a certain complexity class \(C\) in data complexity if there exists a query \(Q\) from \(\mathsf{C}\) such that \(\mathsf{Why-Provenance[Q]}\) is hard for \(C\).
## 4 Data Complexity of Why-Provenance
The goal of this section is to pinpoint the data complexity of \(\mathsf{Why-Provenance[C]}\), for each \(\mathsf{C}\in\{\mathsf{Dat},\mathsf{LDat},\mathsf{NRDat}\}\). As we shall see, the main outcome of our analysis is that for recursive queries, even if the recursion is linear, the problem is in general intractable, whereas for non-recursive queries it is highly tractable. We first focus on recursive queries.
### Recursive Queries
We show the following complexity result:
**Theorem 3**.: \(\mathsf{Why-Provenance[C]}\) _is \(\mathsf{NP}\)-complete in data complexity, for each class \(\mathsf{C}\in\{\mathsf{Dat},\mathsf{LDat}\}\)._
Note that there is a striking difference between the problem of why-provenance and the problem of query evaluation, which is known to be in PTIME in data complexity; in fact, for linear Datalog queries it is in NL (Dantsin et al. 2001). To prove Theorem 3, it suffices to show that:
* \(\mathsf{Why-Provenance[Dat]}\) is in NP in data complexity.
* \(\mathsf{Why-Provenance[LDat]}\) is NP-hard in data complexity.
The lower bound is established via a reduction from \(\mathsf{3SAT}\). We actually devise a linear Datalog query \(Q\), and provide a reduction from \(\mathsf{3SAT}\) to \(\mathsf{Why-Provenance[Q]}\). Let us now discuss the key ingredients underlying the upper bound. The central property is that whenever there is a proof tree \(T\) that witnesses the fact that the given subset of the input database belongs to the why-provenance, then there is always a way to compactly represent \(T\) as a polynomially-sized directed acyclic graph. This in turn leads to an easy guess-and-check algorithm that runs in polynomial time. We proceed to give further details for the above crucial property.
**Proof DAG.** We first introduce the notion of proof directed acyclic graph (DAG) of a fact, which is essentially a generalization of the notion of proof tree. Recall that a DAG \(G\) is _rooted_ if it has exactly one node, the _root_, with no incoming edges. A node of \(G\) is a _leaf_ if it has no outgoing edges.
**Definition 4** (Proof DAG).: _Consider a Datalog program \(\Sigma\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\). A proof DAG of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) is a finite labeled rooted DAG \(G=(V,E,\lambda)\), with \(\lambda:V\rightarrow\mathsf{base}(D,\Sigma)\), such that:_
1. _If_ \(v\in V\) _is the root, then_ \(\lambda(v)=\alpha\)_._
2. _If_ \(v\in V\) _is a leaf, then_ \(\lambda(v)\in D\)_._
3. _If_ \(v\in V\) _has_ \(n\geq 1\) _outgoing edges_ \((v,u_{1}),\ldots,(v,u_{n})\)_, then there is a rule_ \(R_{0}(\bar{x}_{0})\mathrel{\mathop{:}}-R_{1}(\bar{x}_{1}),\ldots,R_{n}(\bar{x}_{n})\in\Sigma\) _and a function_ \(h:\bigcup_{i\in[n]}\bar{x}_{i}\to\mathbf{C}\) _such that_ \(\lambda(v)=R_{0}(h(\bar{x}_{0}))\)_, and_ \(\lambda(u_{i})=R_{i}(h(\bar{x}_{i}))\) _for_ \(i\in[n]\)_._
The key difference between a proof tree and a proof DAG is that a proof DAG might reuse nodes to compactly represent a proof tree. This is shown by the following example.
**Example 3**.: _Let \(Q=(\Sigma,A)\), where \(\Sigma\) is the program given in Example 1, and let \(D\) be the database from Example 1. A simple proof DAG of the fact \(A(d)\) w.r.t. \(D\) and \(\Sigma\) is_
_[proof DAG diagram: root \(A(d)\) with children labeled \(A(a)\), \(A(a)\), and \(T(a,a,d)\), and a single shared leaf labeled \(S(a)\)]_ _which compactly represents the first proof tree given in Example 1. The following is another, slightly more complex, proof DAG of the fact \(A(d)\) w.r.t. \(D\) and \(\Sigma\):_
_[proof DAG diagram whose leaves are the facts of \(D\) and whose internal nodes are labeled \(A(a)\), \(A(b)\), \(A(c)\), and \(A(d)\)]_ _It clearly represents the second proof tree from Example 1._
**Compact Representation of Proof Trees.** Given a proof DAG \(G\) (of some fact w.r.t. some database and Datalog program), we define its _support_, denoted \(\mathsf{support}(G)\), as the set of facts that label the leaves of \(G\). The key result follows:
**Proposition 5**.: _For a Datalog program \(\Sigma\), there is a polynomial \(f\) such that, for every database \(D\) over \(\mathsf{edb}(\Sigma)\), fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\), and \(D^{\prime}\subseteq D\), the following are equivalent:_
1. _There exists a proof tree_ \(T\) _of_ \(\alpha\) _w.r.t._ \(D\) _and_ \(\Sigma\) _such that_ \(\mathsf{support}(T)=D^{\prime}\)_._
2. _There exists a proof DAG_ \(G=(V,E,\lambda)\) _of_ \(\alpha\) _w.r.t._ \(D\) _and_ \(\Sigma\) _such that_ \(\mathsf{support}(G)=D^{\prime}\) _and_ \(|V|\leq f(|D|)\)_._
It is easy to show that \((2)\) implies \((1)\) by "unravelling" the proof DAG \(G\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) into a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=\mathsf{support}(G)\). Now, the direction \((1)\) implies \((2)\) is rather non-trivial and requires a careful construction that converts a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) into a compact proof DAG \(G\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) such that \(\mathsf{support}(T)=\mathsf{support}(G)\). This construction proceeds in three main steps captured by Lemmas 6, 7, and 8.
\(\bullet\) The _first step_ is to show that a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\) can be converted into a proof tree \(T^{\prime}\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T^{\prime})=D^{\prime}\) that has "small" depth. Let us recall that the _depth_ of a rooted tree \(T\), denoted \(\mathsf{depth}(T)\), is the length of the longest path from its root to a leaf node. The corresponding lemma follows:
**Lemma 6**.: _For each Datalog program \(\Sigma\), there is a polynomial \(f\) such that, for every database \(D\) over \(\mathsf{edb}(\Sigma)\), fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\), and \(D^{\prime}\subseteq D\), if there exists a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\), then there exists also such a proof tree \(T^{\prime}\) with \(\mathsf{depth}(T^{\prime})\leq f(|D|)\)._
\(\bullet\) The _second step_ consists of proving that a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\) of "small" depth can be converted into a proof tree \(T^{\prime}\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T^{\prime})=D^{\prime}\) of "small" subtree count. Roughly speaking, the subtree count of a proof tree \(T\) is the maximum number of different (w.r.t. node-labels) subtrees of \(T\) rooted at nodes with the same label. Let us formalize this notion.
Two rooted trees \(T=(V,E,\lambda)\) and \(T^{\prime}=(V^{\prime},E^{\prime},\lambda^{\prime})\) are _isomorphic_, denoted \(T\approx T^{\prime}\), if there is a bijection \(h:V\to V^{\prime}\) such that, for each node \(v\in V\), \(\lambda(v)=\lambda^{\prime}(h(v))\), and for each two nodes \(u,v\in V\), \((u,v)\in E\) iff \((h(u),h(v))\in E^{\prime}\). It is clear that \(\approx\) is an equivalence relation over the set of all rooted trees. We further write \(T[\alpha]\), for a fact \(\alpha\), to denote the set of all subtrees of \(T\) whose root is labeled with \(\alpha\), i.e., \(T[\alpha]=\{T[v]\mid v\in V\text{ and }\lambda(v)=\alpha\}\) with \(T[v]\) being the subtree of \(T\) rooted at \(v\). Let \(T[\alpha]_{/\approx}\) be the quotient set of \(T[\alpha]\) w.r.t. \(\approx\), i.e., the set of all equivalence classes of \(T[\alpha]\) w.r.t. \(\approx\). In other words, each member of \(T[\alpha]_{/\approx}\) is a maximal set of trees of \(T[\alpha]\) that are labeled in exactly the same way. Then, the _subtree count_ of \(T\), denoted \(\mathsf{scount}(T)\), is \(\max_{\alpha\in\{\lambda(v)\mid v\in V\}}\{|T[\alpha]_{/\approx}|\}\).
**Lemma 7**.: _For each Datalog program \(\Sigma\) and a polynomial \(f\), there is a polynomial \(g\) such that, for every database \(D\) over \(\mathsf{edb}(\Sigma)\), fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\), and \(D^{\prime}\subseteq D\), if there exists a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) such that \(\mathsf{support}(T)=D^{\prime}\) and \(\mathsf{depth}(T)\leq f(|D|)\), then there exists also such a proof tree \(T^{\prime}\) with \(\mathsf{scount}(T^{\prime})\leq g(|D|)\)._
\(\bullet\) The _third step_ shows that a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\) of "small" subtree count can be converted into a compact proof DAG \(G\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(G)=D^{\prime}\). Here is the corresponding lemma:
**Lemma 8**.: _For each Datalog program \(\Sigma\) and a polynomial \(f\), there is a polynomial \(g\) such that, for every database \(D\) over \(\mathsf{edb}(\Sigma)\), fact \(\alpha\), and \(D^{\prime}\subseteq D\), if there is a proof tree \(T\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\) and \(\mathsf{scount}(T)\leq f(|D|)\), then there exists a proof DAG \(G=(V,E,\lambda)\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(G)=D^{\prime}\) and \(|V|\leq g(|D|)\)._
It is now clear that the direction (1) implies (2) of Proposition 5 is an immediate consequence of Lemmas 6, 7 and 8.
### Non-Recursive Queries
We now focus on non-recursive Datalog queries, and show the following about the data complexity of why-provenance:
**Theorem 9**.: \(\mathsf{Why}\text{-}\mathsf{Provenance}[\mathsf{NRDat}]\) _is in \(\mathrm{AC}_{0}\) in data complexity._
The above result is shown via _first-order rewritability_, i.e., given a non-recursive Datalog query \(Q=(\Sigma,R)\), we construct a first-order query \(Q_{FO}\) such that, for every input instance of \(\mathsf{Why}\text{-}\mathsf{Provenance}[Q]\), namely a database \(D\) over \(\mathsf{edb}(\Sigma)\), a tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\), and a subset \(D^{\prime}\) of \(D\), the fact that \(D^{\prime}\) belongs to \(\mathsf{why}(\bar{t},D,Q)\) is equivalent to the fact that \(\bar{t}\) is an answer to the query \(Q_{FO}\) over \(D^{\prime}\). Since first-order query evaluation is in \(\mathrm{AC}_{0}\) in data complexity (Vardi 1995), Theorem 9 follows. Before delving into the details, let us first recall the basics about first-order queries.
**First-Order Queries.** A _first-order (FO)_ query \(Q\) is an expression of the form \(\varphi(\bar{x})\), where \(\varphi\) is an FO formula, \(\bar{x}\) is a tuple of (not necessarily distinct) variables, and the set of variables occurring in \(\bar{x}\) is precisely the set of free variables of \(\varphi\). The _answer_ to \(Q\) over a database \(D\) is the set of tuples \(Q(D)=\{\bar{t}\in\mathsf{dom}(D)^{|\bar{x}|}\mid D\models\varphi[\bar{x}/\bar{t }]\}\), where \(|\bar{x}|\) denotes the length of \(\bar{x}\), \(\varphi[\bar{x}/\bar{t}]\) is the sentence obtained after replacing the variables of \(\bar{x}\) with the corresponding constants of \(\bar{t}\), and \(\models\) denotes the standard FO entailment. Let \(\mathsf{var}(\varphi)\) be the set of variables occurring in \(\varphi\). A _conjunctive query (CQ)_ is an FO query \(\varphi(\bar{x})\), where \(\varphi\) is of the form \(\exists\bar{y}\left(R_{1}(\bar{x}_{1})\wedge\cdots\wedge R_{n}(\bar{x}_{n})\right)\) with \(\bar{x}\cap\bar{y}=\emptyset\) and \(\bar{x}_{i}\subseteq\bar{x}\cup\bar{y}\).
**Some Preparation.** Towards the construction of the desired first-order query, we need some auxiliary notions. The _canonical form_ of a fact \(\alpha\), denoted \(\mathsf{can}(\alpha)\), is the atom obtained by replacing each constant \(c\) in \(\alpha\) with a variable \(\langle c\rangle\), i.e., the name of the variable is uniquely determined by the constant \(c\). Given a Datalog query \(Q=(\Sigma,R)\), we say that a labeled rooted tree \(T=(V,E,\lambda)\) is a _Q-tree_ if it is the proof tree of some fact \(R(\bar{t})\) w.r.t. some database \(D\) over \(\mathsf{edb}(\Sigma)\) and \(\Sigma\). The notion of the induced CQ by a \(Q\)-tree follows:
**Definition 10** (**Induced CQ)**.: _Consider a Datalog query \(Q=(\Sigma,R)\) and a \(Q\)-tree \(T=(V,E,\lambda)\), where \(v\in V\) is the root node and \(\lambda(v)=R(c_{1},\ldots,c_{n})\). The CQ induced by \(T\), denoted \(\mathsf{cq}(T)\), is the CQ \(\varphi_{T}(\langle c_{1}\rangle,\ldots,\langle c_{n}\rangle)\) with_
\[\varphi_{T}\;=\;\exists\bar{x}\left(\bigwedge_{\alpha\in\mathsf{support}(T)} \mathsf{can}(\alpha)\right),\]
_where \(\bar{x}\) consists of all the variables \(\langle c\rangle\) occurring in \(\varphi_{T}\) other than \(\langle c_{1}\rangle,\ldots,\langle c_{n}\rangle\)._
We write \(\mathsf{cq}(Q)\) for the set of all CQs induced by \(Q\)-trees, and we call two CQs equivalent, denoted \(\approx\), if they coincide up to renaming of variables. It is easy to verify that
\(\approx\) is an equivalence relation over the set of CQs. For a Datalog query \(Q\), \(\mathsf{cq}(Q)_{/\approx}\) is the quotient set of \(\mathsf{cq}(Q)\) w.r.t. \(\approx\), i.e., the set of all equivalence classes of \(\mathsf{cq}(Q)\) w.r.t. \(\approx\). Let \(\mathsf{cq}^{\approx}(Q)\) be the set of CQs that keeps one arbitrary representative from each member of \(\mathsf{cq}(Q)_{/\approx}\). Then:
**Lemma 11**.: _For every non-recursive Datalog query \(Q\), it holds that \(\mathsf{cq}^{\approx}(Q)\) is finite._
**First-Order Rewriting.** Having \(\mathsf{cq}^{\approx}(Q)\) in place for a non-recursive Datalog query \(Q=(\Sigma,R)\), we can now proceed with the construction of the desired FO query \(Q_{FO}\).
We start by constructing, for a CQ \(\varphi(\bar{y})\in\mathsf{cq}^{\approx}(Q)\), an FO query \(Q_{\varphi(\bar{y})}=\psi_{\varphi(\bar{y})}(x_{1},\ldots,x_{\mathsf{ar}(R)})\), where \(x_{1},\ldots,x_{\mathsf{ar}(R)}\) are distinct variables that do not occur in any of the CQs of \(\mathsf{cq}^{\approx}(Q)\), with the following property: for every database \(D\) and tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\), \(\bar{t}\in Q_{\varphi(\bar{y})}(D)\) iff \(\bar{t}\) is an answer to \(\varphi(\bar{y})\) over \(D\), and, in addition, _all_ the atoms of \(D\) are used in order to entail the sentence \(\varphi[\bar{y}/\bar{t}]\), i.e., there are no other facts in \(D\) besides the ones that have been used as witnesses for the atoms occurring in \(\varphi[\bar{y}/\bar{t}]\). Assume that \(\varphi\) is of the form \(\exists\bar{z}\left(R_{1}(\bar{w}_{1})\wedge\cdots\wedge R_{n}(\bar{w}_{n})\right)\). The formula \(\psi_{\varphi(\bar{y})}\), with free variables \(x_{1},\ldots,x_{\mathsf{ar}(R)}\), is of the form
\[\exists\bar{y}\exists\bar{z}\left(\varphi_{1}\;\wedge\;\varphi_{2}\;\wedge\; \varphi_{3}\right),\]
where each conjunct is defined as follows. We write \(\bar{x}\) for the tuple \((x_{1},\ldots,x_{\mathsf{ar}(R)})\) and \(\bar{u}_{P}\), where \(P\) is a predicate, for the tuple of variables \((u_{1},\ldots,u_{\mathsf{ar}(P)})\). Furthermore, for two tuples of variables \(\bar{u}=(u_{1},\ldots,u_{k})\) and \(\bar{v}=(v_{1},\ldots,v_{k})\), \((\bar{u}=\bar{v})\) is a shortcut for \(\bigwedge_{i=1}^{k}(u_{i}=v_{i})\). The formula \(\varphi_{1}\) is
\[\bigwedge_{i\in[n]}\;R_{i}(\bar{w}_{i})\;\wedge\;(\bar{x}=\bar{y})\;\wedge\; \bigwedge_{\begin{subarray}{c}u,v\in\mathsf{var}(\varphi),\\ u\neq v\end{subarray}}\neg(u=v)\]
which states that each atom in \(\varphi\) should be satisfied by assigning different values to different variables of \(\varphi\). The formula \(\varphi_{2}\) is defined as
\[\bigwedge_{P\in\{R_{1},\ldots,R_{n}\}}\neg\left(\exists\bar{u}_{P}\left(P( \bar{u}_{P})\;\wedge\;\bigwedge_{\begin{subarray}{c}i\in[n],\\ R_{i}=P\end{subarray}}\neg(\bar{w}_{i}=\bar{u}_{P})\right)\right)\]
which essentially states that, for each predicate \(P\) occurring in \(\varphi\), the only atoms in the underlying database with predicate \(P\) are those used as witnesses for the atoms of \(\varphi\). Finally, the formula \(\varphi_{3}\) is defined as
\[\bigwedge_{P\in\mathsf{edb}(\Sigma)\backslash\{R_{1},\ldots,R_{n}\}}\neg \left(\exists\bar{u}_{P}\,P(\bar{u}_{P})\right)\]
which expresses that there are no atoms in the underlying database with a predicate that does not appear in \(\varphi\).
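To make the construction concrete, consider a small illustrative instance of our own (it does not appear in the paper): the query \(Q=(\Sigma,R)\), where \(\Sigma\) consists of the single rule \(R(x)\leftarrow P(x,y)\), and the \(Q\)-tree \(T\) whose root is labeled \(R(a)\) and has a single child labeled \(P(a,b)\). Then \(\mathsf{cq}(T)\) is the CQ \(\varphi(\langle a\rangle)\) with \(\varphi=\exists\langle b\rangle\,P(\langle a\rangle,\langle b\rangle)\), and the FO query \(Q_{\varphi(\langle a\rangle)}=\psi_{\varphi(\langle a\rangle)}(x_{1})\) is

\[\exists\langle a\rangle\exists\langle b\rangle\Big(\underbrace{P(\langle a\rangle,\langle b\rangle)\wedge(x_{1}=\langle a\rangle)\wedge\neg(\langle a\rangle=\langle b\rangle)}_{\varphi_{1}}\ \wedge\ \underbrace{\neg\exists u_{1}\exists u_{2}\big(P(u_{1},u_{2})\wedge\neg((\langle a\rangle=u_{1})\wedge(\langle b\rangle=u_{2}))\big)}_{\varphi_{2}}\Big),\]

where \(\varphi_{3}\) is the empty conjunction since \(P\) is the only predicate of \(\mathsf{edb}(\Sigma)\) and it occurs in \(\varphi\). Over a database \(D^{\prime}\), this disjunct of \(Q_{FO}\) returns the answer \(a\) precisely when \(D^{\prime}=\{P(a,b^{\prime})\}\) for some constant \(b^{\prime}\neq a\); the \(Q\)-tree whose leaf is \(P(a,a)\) induces a different CQ and is covered by a separate disjunct.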
With the FO query \(Q_{\varphi(\bar{y})}\) for each CQ \(\varphi(\bar{y})\in\mathsf{cq}^{\approx}(Q)\) in place, it should be clear that the desired FO query \(Q_{FO}\) is defined as \(\Phi(x_{1},\ldots,x_{\mathsf{ar}(R)})\), where \(\Phi=\bigvee_{\varphi(\bar{y})\in\mathsf{cq}^{\approx}(Q)}\psi_{\varphi(\bar{y})}\), and the next technical result follows:
**Lemma 12**.: _Given a non-recursive Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\), and \(D^{\prime}\subseteq D\), it holds that \(D^{\prime}\in\mathsf{why}(\bar{t},D,Q)\) iff \(\bar{t}\in Q_{FO}(D^{\prime})\)._
### Refined Proof Trees
The standard notion of why-provenance relies on arbitrary proof trees without any restriction. However, as already discussed in the literature (see, e.g., the recent work [1]), there are proof trees that are counterintuitive. Such a proof tree, for instance, is the second one in Example 1 as the fact \(A(a)\) is derived from itself. Now, a member \(D^{\prime}\) of \(\mathsf{why}(\bar{t},D,Q)\), witnessed via such an unnatural proof tree, might be classified as a counterintuitive explanation of \(\bar{t}\) as it does not correspond to an intuitive derivation process, which can be extracted from the proof tree, that leads from \(D^{\prime}\) to the fact \(R(\bar{t})\). This leads to the need of considering refined classes of proof trees that overcome the conceptual limitations of arbitrary proof trees. Two well-justified notions considered in the literature are _non-recursive proof trees_ and _minimal-depth proof trees_[1]. Roughly, a non-recursive proof tree is a proof tree that does not contain two nodes labeled with the same fact and such that one is the descendant of the other, whereas a minimal-depth proof tree is a proof tree that has the minimum depth among all the proof trees of a certain tuple. We analyzed the data complexity of why-provenance focusing only on proof trees from those refined classes, and proved that it remains unchanged. Due to space constraints, we omit the details that can be found in the extended version of the paper.
## 5 Unambiguous Proof Trees
Although non-recursive and minimal-depth proof trees form central classes that deserve our attention, there are still proof trees from those classes that can be classified as counterintuitive. More precisely, we can devise proof trees that are both non-recursive and minimal-depth, but they are ambiguous concerning the way some facts are derived.
**Example 4**.: _Let \(Q=(\Sigma,A)\), where \(\Sigma\) is the Datalog program that encodes the path accessibility problem as in Example 1. Consider also the database_
\[D\;=\;\{S(a),S(b),T(a,a,c),T(b,b,c),T(c,c,d)\}.\]
_The following is a proof tree of the fact \(A(d)\) w.r.t. \(D\) and \(\Sigma\) that is both non-recursive and minimal-depth, but suffers from the ambiguity issue mentioned above:_
_Indeed, there are two nodes labeled with the fact \(A(c)\), but their subtrees differ, and thus, it is ambiguous how \(A(c)\) is derived. Hence, the database \(D\), which belongs to the why-provenance of \((d)\) w.r.t. \(D\) and \(Q\) relative to non-recursive and minimal-depth proof trees due to the above proof tree, might be classified as a counterintuitive explanation since it does not correspond to an intuitive derivation process where each fact is derived once due to an unambiguous reason._
The above discussion leads to the novel class of unambiguous proof trees, where all occurrences of a fact in such a tree must be proved via the same derivation.
**Definition 13** (Unambiguous Proof Tree).: _Consider a Datalog program \(\Sigma\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a fact \(\alpha\) over \(\mathsf{sch}(\Sigma)\). An unambiguous proof tree of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) is a proof tree \(T=(V,E,\lambda)\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) such that, for all \(v,u\in V\), \(\lambda(v)=\lambda(u)\) implies \(T[v]\approx T[u]\)._
Considering again Example 4, we can construct an unambiguous proof tree of \(A(d)\) w.r.t. \(D\) and \(\Sigma\) by simply replacing the subtree of the second child of \(A(d)\) with the subtree of its first child (or vice versa). Now, why-provenance relative to unambiguous proof trees is defined as expected: for a Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\), the _why-provenance of \(\bar{t}\) w.r.t. \(D\) and \(Q\) relative to unambiguous proof trees_ is the family
\[\{\mathsf{support}(T)\mid T\text{ is an unambiguous proof tree of }R(\bar{t})\text{ w.r.t. }D\text{ and }\Sigma\}\]
denoted \(\mathsf{why}_{\mathsf{UN}}(\bar{t},D,Q)\). Considering again Example 4, \(\mathsf{why}_{\mathsf{UN}}((d),D,Q)\) consists of \(\{S(a),T(a,a,c),T(c,c,d)\}\) and \(\{S(b),T(b,b,c),T(c,c,d)\}\), which is what one expects as conceptually intuitive explanations for the tuple \((d)\), unlike the whole database \(D\). The algorithmic problems
\[\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[\mathsf{C}]\quad\text{ and}\quad\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[Q]\]
are defined in the expected way. We can show that the data complexity of why-provenance remains unchanged.
**Theorem 14**.: _The following hold:_
1. \(\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[\mathsf{C}]\) _is_ NP_-complete in data complexity, for each class_ \(\mathsf{C}\in\{\mathsf{Dat},\mathsf{LDat}\}\)_._
2. \(\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[\mathsf{NRDat}]\) _is in_ \(\mathrm{AC}_{0}\) _in data complexity._
For item (1), we show that \(\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[\mathsf{Dat}]\) is in NP and \(\mathsf{Why}\text{-}\mathsf{Provenance}_{\mathsf{UN}}[\mathsf{LDat}]\) is NP-hard. The latter is established via a reduction from the problem of deciding whether a directed graph has a Hamiltonian cycle. The NP upper bound relies on a characterization of the existence of an unambiguous proof tree \(T\) of a fact \(\alpha\) w.r.t. a database \(D\) and a Datalog program \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\subseteq D\) via the existence of a so-called _unambiguous proof DAG_ \(G\) of \(\alpha\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(G)=D^{\prime}\) of polynomial size. Interestingly, unlike arbitrary proof trees, we can directly go from an unambiguous proof tree \(T\) to a polynomially-sized unambiguous proof DAG with the same support as \(T\), without applying any intermediate steps for reducing the depth or the subtree count of \(T\). This is because an unambiguous proof tree has, by definition, "small" depth and subtree count (in fact, the subtree count is one). The \(\mathrm{AC}_{0}\) upper bound in item (2) is shown via FO rewritability. The target FO query is obtained as in the proof of Theorem 9, but considering only unambiguous proof trees in the definition of \(\mathsf{cq}(Q)\).
### Computing Why-Provenance via SAT Solvers
We proceed to discuss how off-the-shelf SAT solvers can be used to efficiently compute the why-provenance of a tuple relative to unambiguous proof trees. We then discuss a proof-of-concept implementation and report encouraging results of a preliminary experimental evaluation. Let us stress that focusing on unambiguous proof trees was crucial towards these encouraging results, as it is unclear how a SAT-based implementation can be made practical for proof trees that are not unambiguous. This is mainly because unambiguous proof trees, unlike other classes of proof trees, always have subtree count one, which is crucial for keeping the size of the Boolean formula manageable.
Consider a Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\). We construct in polynomial time in \(D\) a Boolean formula \(\phi_{(\bar{t},D,Q)}\) such that the why-provenance of \(\bar{t}\) w.r.t. \(D\) and \(Q\) relative to unambiguous proof trees can be computed from the truth assignments that make \(\phi_{(\bar{t},D,Q)}\) true. This relies on the characterization mentioned above of the existence of an unambiguous proof tree of \(R(\bar{t})\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(T)=D^{\prime}\subseteq D\) via the existence of an unambiguous proof DAG \(G\) of \(R(\bar{t})\) w.r.t. \(D\) and \(\Sigma\) with \(\mathsf{support}(G)=D^{\prime}\). The formula \(\phi_{(\bar{t},D,Q)}\) is of the form \(\phi_{graph}\wedge\phi_{acyclic}\wedge\phi_{root}\wedge\phi_{proof}\), where \(\phi_{graph}\) verifies that a truth assignment corresponds to a syntactically correct labeled directed graph \(G\), \(\phi_{acyclic}\) verifies that \(G\) is acyclic, \(\phi_{root}\) verifies that \(R(\bar{t})\) is the unique root of \(G\), and \(\phi_{proof}\) verifies that \(G\) is an unambiguous proof DAG.
The key ingredient in the construction of \(\phi_{(\bar{t},D,Q)}\) is the so-called _downward closure of \(R(\bar{t})\) w.r.t. \(D\) and \(\Sigma\)_, taken from (Elhalawati, Krotzsch, and Mennicke 2022), which, intuitively speaking, is a hypergraph that encodes all possible proof DAGs of \(R(\bar{t})\) w.r.t. \(D\) and \(\Sigma\). We first construct this hypergraph \(H\), which can be done in polynomial time in the size of \(D\), and then guided by \(H\) we build the formula \(\phi_{(\bar{t},D,Q)}\), which essentially searches for an unambiguous proof DAG inside the hypergraph \(H\). Now, a truth assignment \(\tau\) to the variables of \(\phi_{(\bar{t},D,Q)}\) naturally gives rise to a database denoted \(\mathsf{db}(\tau)\). Let \(\llbracket\phi_{(\bar{t},D,Q)}\rrbracket\) be the family
\[\left\{\mathsf{db}(\tau)\mid\tau\text{ is a satisfying assignment of }\phi_{(\bar{t},D,Q)}\right\}.\]
We can then show the next technical result:
**Proposition 15**.: _Consider a Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\). It holds that \(\mathsf{why}_{\mathsf{UN}}(\bar{t},D,Q)=\llbracket\phi_{(\bar{t},D,Q)}\rrbracket\)._
The above proposition provides a way for computing the why-provenance of a tuple relative to unambiguous proof trees via off-the-shelf SAT solvers. But how does this machinery behave when applied in a practical context? In particular, we are interested in the incremental computation of the why-provenance by enumerating its members instead of computing the whole set at once. The rest of the section is devoted to providing a preliminary answer to this question.
### Some Implementation Details
Before presenting our experimental results, let us first briefly discuss some interesting aspects of the implementation. In what follows, fix a Datalog query \(Q=(\Sigma,R)\), a database \(D\) over \(\mathsf{edb}(\Sigma)\), and a tuple \(\bar{t}\in\mathsf{dom}(D)^{\mathsf{ar}(R)}\).
**Constructing the Downward Closure.** Recall that the construction of \(\phi_{(\bar{t},D,Q)}\) relies on the downward closure of \(R(\bar{t})\)
w.r.t. \(D\) and \(\Sigma\). It turns out that the hyperedges of the downward closure can be computed by executing a slightly modified Datalog query \(Q_{\downarrow}\) over a slightly modified database \(D_{\downarrow}\). In other words, the answers to \(Q_{\downarrow}\) over \(D_{\downarrow}\) coincide with the hyperedges of the downward closure. Hence, to construct the downward closure we exploit a state-of-the-art Datalog engine, that is, version 2.1.1 of DLV [1]. Note that our approach based on evaluating a Datalog query differs from the one in [1], which uses an extension of Datalog with set terms.
**Constructing the Formula.** Recall that \(\phi_{(\vec{t},D,Q)}\) consists of four conjuncts, where each one is responsible for a certain task. As it might be expected, the heavy task is to verify that the graph in question is acyclic (performed by the formula \(\phi_{acyclic}\)). Checking the acyclicity of a directed graph via a Boolean formula is a well-studied problem in the SAT literature. For our purposes, we employ the technique of _vertex elimination_[11]. The advantage of this approach is that the number of Boolean variables needed for the encoding of \(\phi_{acyclic}\) is of the order \(O(n\cdot\delta)\), where \(n\) is the number of nodes of the graph, and \(\delta\) is the so-called _elimination width_ of the graph, which, intuitively speaking, is related to how connected the graph is.
**Incrementally Constructing the Why-Provenance.** Recall that we are interested in the incremental computation of the why-provenance, which is more useful in practice than computing the whole set at once. To this end, we need a way to enumerate all the members of the why-provenance without repetitions. This is achieved by adapting a standard technique from the SAT literature for enumerating the satisfying assignments of a Boolean formula, called _blocking clause_. We initially collect in a set \(S\) all the facts of \(D\) occurring in the downward closure of \(R(\vec{t})\) w.r.t. \(D\) and \(\Sigma\). Then, after asking the SAT solver for an arbitrary satisfying assignment \(\tau\) of \(\phi_{(\vec{t},D,Q)}\), we output the database \(\mathsf{db}(\tau)\), and then construct the "blocking" clause \(\vee_{\alpha\in S}\ell_{\alpha}\), where \(\ell_{\alpha}=\neg x_{\alpha}\) if \(\alpha\in\mathsf{db}(\tau)\), and \(\ell_{\alpha}=x_{\alpha}\) otherwise. We then add this clause to the formula, which expresses that no other satisfying assignment \(\tau^{\prime}\) should give rise to the same member of the why-provenance. This will exclude the previously computed explanations from the computation. We keep adding such blocking clauses each time we get a new member of the why-provenance until the formula is unsatisfiable.
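For illustration, the following schematic Python sketch (ours, not part of the actual C++/Glucose implementation) shows the enumeration loop with blocking clauses, using the python-sat package; the CNF clauses encoding \(\phi_{(\bar{t},D,Q)}\) and the mapping from the facts of \(S\) to their Boolean variables are assumed to be given and are not constructed here.

```python
from pysat.solvers import Glucose4

def enumerate_why_provenance(cnf_clauses, fact_var, limit=10_000):
    """Enumerate members of the why-provenance encoded by a CNF formula.

    cnf_clauses: list of clauses (lists of signed ints) encoding phi_(t,D,Q).
    fact_var:    dict mapping each fact alpha of S to its Boolean variable x_alpha.
    """
    members = []
    with Glucose4(bootstrap_with=cnf_clauses) as solver:
        while len(members) < limit and solver.solve():
            model = set(solver.get_model())          # literals set to true
            # db(tau): facts of S whose variable is true in the assignment.
            member = frozenset(a for a, v in fact_var.items() if v in model)
            members.append(member)
            # Blocking clause: l_alpha = -x_alpha if alpha is in the member,
            # and x_alpha otherwise, so the same support cannot reappear.
            solver.add_clause([-v if a in member else v for a, v in fact_var.items()])
    return members
```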
### Experimental Evaluation
We now proceed to experimentally evaluate the SAT-based approach discussed above. To this end, we consider a variety of scenarios from the literature consisting of a Datalog query \(Q=(\Sigma,R)\) and a family of databases \(\mathcal{D}\) over \(\mathsf{edb}(\Sigma)\).
**Experimental Scenarios.** All the considered scenarios are summarized in Table 1. Here is a brief description:
* **TransClosure.** This scenario computes the transitive closure of a graph and asks for connected nodes (see the sketch after this list). The database \(D_{\mathsf{bitcoin}}\) stores a portion of the Bitcoin network [12], whereas \(D_{\mathsf{facebook}}\) stores different "social circles" from Facebook [13].
* **Doctors.** The scenarios \(\mathsf{Doctors}\)-\(i\), for \(i\in[7]\), were used in [1] and represent queries obtained from a well-known data-exchange benchmark involving existential rules (the existential variables have been replaced with fresh constants). All such scenarios share the same database with 100K facts.
* **Galen.** This scenario, used in [1], implements the ELK calculus [10] and asks for all pairs of concepts that are related with the subClassOf relation. The various databases contain different portions of the Galen ontology (The Oxford Library 2007).
* **Andersen.** This scenario, used in [11], implements the classical Andersen "points-to" algorithm for determining the flow of data in procedural programs and asks for all the pairs of a pointer \(p\) and a variable \(v\) such that \(p\) points to \(v\). The databases are encodings of program statements of different length.
* **CSDA.** This scenario (Context-Sensitive Dataflow Analysis), used in [11], is similar to Andersen but asks for null references in a program. The databases \(D_{\mathsf{httpd}}\), \(D_{\mathsf{postgresql}}\), and \(D_{\mathsf{linux}}\) store the statements of the httpd web server, the PostgreSQL DBMS, and the Linux kernel, respectively.
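The TransClosure scenario can be pictured with a tiny self-contained Python sketch (ours); the two Datalog rules in the comment are only an illustrative guess at the shape of a 2-rule linear recursive reachability program, and the semi-naive loop mimics how a bottom-up Datalog engine such as DLV evaluates such programs.

```python
def transitive_closure(edges):
    """Semi-naive evaluation of:  A(x,y) <- E(x,y);  A(x,z) <- A(x,y), E(y,z)."""
    adj = {}
    for x, y in edges:
        adj.setdefault(x, set()).add(y)
    reach = set(edges)   # facts derived by the non-recursive rule
    delta = set(edges)   # facts newly derived in the last round
    while delta:
        new = {(x, z) for (x, y) in delta for z in adj.get(y, ())} - reach
        reach |= new
        delta = new
    return reach

print(transitive_closure({(1, 2), (2, 3), (3, 1)}))  # all 9 pairs of the 3-cycle
```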
**Experimental Setup.** For each scenario \(s\) consisting of the query \(Q=(\Sigma,R)\) and the family of databases \(\mathcal{D}\), and for each \(D\in\mathcal{D}\), we have computed \(Q(D)\) using DLV, and then selected five tuples \(\vec{t}^{1}_{s,D},\ldots,\vec{t}^{5}_{s,D}\) from \(Q(D)\) uniformly at random. Then, for each \(i\in[5]\), we constructed the downward closure of \(R(\vec{t}^{i}_{s,D})\) w.r.t. \(D\) and \(\Sigma\) by first computing the adapted query \(Q_{\downarrow}\) and database \(D_{\downarrow}\) via a Python 3 implementation and then using DLV for the actual computation of the downward closure; then we constructed the Boolean formula \(\phi_{(\vec{t}^{i}_{s,D},D,Q)}\) via a C++ implementation, and finally we ran the state-of-the-art SAT solver Glucose (see, e.g., [1]), version 4.2.1, with the above formula as input to enumerate the members of \(\mathsf{why}_{\mathsf{UN}}(\vec{t}^{i}_{s,D},D,Q)\). All the experiments have been conducted on a laptop with an Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz, and 32GB of RAM, running Fedora Linux 37. The Python code is executed with Python 3.11.2, and the C++ code has been compiled with g++ 12.2.1, using the -O3 optimization flag.
**Experimental Results.** Due to space constraints, we are going to present only the results based on the Andersen scenario. Nevertheless, the final outcome is aligned with what we have observed based on all the other scenarios.
Concerning the construction of the downward closure and the Boolean formula, we report in Figure 1 the total running time for each database of the Andersen scenario (recall that there are five databases of varying size, and thus we have five plots). Furthermore, each plot consists of five bars that correspond to the five randomly chosen tuples. Each such bar shows the time for building the downward closure plus the time for constructing the Boolean formula. We have observed that almost all the time is spent for computing the downward closure, whereas the time for building the formula is negligible. Hence, our efforts should concentrate on
improving the computation of the downward closure. Moreover, for the reasonably sized databases (68K, 340K, and 680K facts) the total time is in the order of seconds, which is quite encouraging. Now, for the very large databases that we consider (3.4M and 6.8M facts), the total time is between half a minute and a minute, which is also encouraging taking into account the complexity of the query, the large size of the databases, and the limited power of our machine.
For the incremental computation of the why-provenance, we give in Figure 2, for each database of the Andersen scenario, the times required to build an explanation, that is, the time between the current member of the why-provenance and the next one (this time is also known as the delay). Each of the five plots collects the delays of constructing the members of the why-provenance (up to a limit of 10K members or 5 minutes timeout) for each of the five randomly chosen tuples. We use box plots, where the bottom and the top borders of the box represent the first and third quartile, i.e., the delay under which 25% and 75% of all delays occur, respectively, and the orange line represents the median delay. Moreover, the bottom and the top whisker represent the minimum and maximum delay, respectively. All times are expressed in milliseconds and we use logarithmic scale. As we can see, most of the delays are below 1 millisecond, with the median in the order of microseconds. Therefore, once we have the Boolean formula in place, incrementally computing the members of the why-provenance is extremely fast.
## 6 Conclusions
The takeaway of our work is that for recursive queries the why-provenance problem is, in general, intractable, whereas for non-recursive queries it is highly tractable in data complexity. With the aim of overcoming the conceptual limitations of arbitrary proof trees, we considered the new class of unambiguous proof trees and showed that it does not affect the data complexity of the why-provenance problem. Interestingly, we have experimentally confirmed that unambiguous proof trees help to exploit off-the-shelf SAT solvers towards an efficient computation of the why-provenance. Note that we have performed a preliminary comparison with (Elhalawati, Krotzsch, and Mennicke 2022) by focusing on a setting that both approaches can deal with. In particular, we used the scenarios \(\mathsf{Doctors}\)-\(i\), for \(i\in[7]\), and measured the end-to-end runtime of our approach (not the delays). For the simple scenarios, the two approaches are comparable in the order of a second. For the demanding scenarios (\(\mathsf{Doctors}\)-\(i\) for \(i\in\{1,5,7\}\)), our approach is generally faster.
It would be extremely useful to provide a complete classification of the data complexity of the why-provenance problem in the form of a dichotomy result. It would also provide further insights to pinpoint the combined complexity of the problem, where the Datalog query is part of the input. Finally, it is crucial to perform a more thorough experimental evaluation of our SAT-based machinery in order to understand better whether it can be applied in practice.
| **Scenario** | **Databases** | **Query Type** | **Number of Rules** |
| --- | --- | --- | --- |
| TransClosure | \(D_{\text{bitcoin}}\) (235K), \(D_{\text{facebook}}\) (88.2K) | linear, recursive | 2 |
| \(\mathsf{Doctors}\)-\(i\), \(i\in[7]\) | \(D_{1}\) (100K) | linear, non-recursive | 6 |
| Galen | \(D_{1}\) (26.5K), \(D_{2}\) (30.5K), \(D_{3}\) (67K), \(D_{4}\) (82K) | non-linear, recursive | 14 |
| Andersen | \(D_{1}\) (68K), \(D_{2}\) (340K), \(D_{3}\) (680K), \(D_{4}\) (3.4M), \(D_{5}\) (6.8M) | non-linear, recursive | 4 |
| CSDA | \(D_{\text{httpd}}\) (10M), \(D_{\text{postgresql}}\) (34.8M), \(D_{\text{linux}}\) (44M) | linear, recursive | 2 |

Table 1: Experimental scenarios.
Figure 1: Building the downward closure and the Boolean formula.
Figure 2: Incremental computation of the why-provenance. |
2307.11955 | Implicit Interpretation of Importance Weight Aware Updates | Due to its speed and simplicity, subgradient descent is one of the most used
optimization algorithms in convex machine learning algorithms. However, tuning
its learning rate is probably its most severe bottleneck to achieve consistent
good performance. A common way to reduce the dependency on the learning rate is
to use implicit/proximal updates. One such variant is the Importance Weight
Aware (IWA) updates, which consist of infinitely many infinitesimal updates on
each loss function. However, IWA updates' empirical success is not completely
explained by their theory. In this paper, we show for the first time that IWA
updates have a strictly better regret upper bound than plain gradient updates
in the online learning setting. Our analysis is based on the new framework,
generalized implicit Follow-the-Regularized-Leader (FTRL) (Chen and Orabona,
2023), to analyze generalized implicit updates using a dual formulation. In
particular, our results imply that IWA updates can be considered as approximate
implicit/proximal updates. | Keyi Chen, Francesco Orabona | 2023-07-22T01:37:52Z | http://arxiv.org/abs/2307.11955v1 | # Implicit Interpretation of Importance Weight Aware Updates
###### Abstract
Due to its speed and simplicity, subgradient descent is one of the most used optimization algorithms in convex machine learning algorithms. However, tuning its learning rate is probably its most severe bottleneck to achieve consistent good performance. A common way to reduce the dependency on the learning rate is to use implicit/proximal updates. One such variant is the Importance Weight Aware (IWA) updates, which consist of infinitely many infinitesimal updates on each loss function. However, IWA updates' empirical success is not completely explained by their theory. In this paper, we show for the first time that IWA updates have a strictly better regret upper bound than plain gradient updates in the online learning setting. Our analysis is based on the new framework by Chen & Orabona (ICML 2023) to analyze generalized implicit updates using a **dual formulation**. In particular, our results imply that IWA updates can be considered as approximate implicit/proximal updates.
## 1 Introduction
In this paper, we are interested in studying variants of gradient updates in the Online Convex Optimization (OCO) setting (Cesa-Bianchi & Lugosi, 2006; Cesa-Bianchi & Orabona, 2021; Orabona, 2019). In the OCO setting, the learner receives an arbitrary sequence of convex loss functions, selects points before knowing the loss functions, and is evaluated on the values of the loss functions on the points it selects. More in detail, at round \(t\) the learner outputs a point \(\mathbf{x}_{t}\) in a convex feasible set \(V\subseteq\mathbb{R}^{d}\). Then, it receives a loss function \(\ell_{t}:V\rightarrow\mathbb{R}\) and it pays the value \(\ell_{t}(\mathbf{x}_{t})\). Given the arbitrary nature of the losses, the learner cannot guarantee to have a small cumulative loss, \(\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t})\). On the other hand, it is possible to minimize the _regret_, that is the difference between the cumulative loss of the algorithm and the one of any arbitrary comparator \(\mathbf{u}\in V\):
\[\operatorname{Regret}_{T}(\mathbf{u})\triangleq\sum_{t=1}^{T}\ell_{t}(\mathbf{x}_{t}) -\sum_{t=1}^{T}\ell_{t}(\mathbf{u})\;.\]
In particular, a successful OCO algorithm must guarantee a regret that grows sublinearly in time for any \(\mathbf{u}\in V\). In this way, its average performance approaches the one of the best comparator in hindsight.
While in the OCO setting we do not assume anything on how the losses are generated, in the case that the losses are i.i.d. from some fixed (but unknown) distribution we can easily convert a regret guarantee into a convergence rate, using the so-called online-to-batch conversion (Cesa-Bianchi et al., 2001). Moreover, most of the time the convergence guarantees we obtain in this way are optimal. Hence, the OCO framework allows us to analyze many algorithms in the stochastic, batch, and adversarial setting with a single analysis.
The simplest OCO algorithm is Online Gradient Descent (OGD), that, starting from a point \(\mathbf{x}_{1}\in V\), in each step updates with
\[\mathbf{x}_{t+1}=\Pi_{V}[\mathbf{x}_{t}-\eta\mathbf{g}_{t}],\]
where \(\Pi_{V}\) is the Euclidean projection onto \(V\), \(\mathbf{g}_{t}\) is a gradient of \(\ell_{t}\) in \(\mathbf{x}_{t}\), and \(\eta>0\) is the learning rate. Theoretically and practically, the setting of the learning rate is critical to obtain good performance. In fact, while a learning rate \(\eta=O(\frac{1}{\sqrt{T}})\) will guarantee an optimal \(O(\sqrt{T})\) regret on Lipschitz losses, the constant hidden in the big-O notation can be arbitrarily high. Moreover, things get even worse when each loss function has an _importance weight_ \(h_{t}>0\). This is the case, for example, when each loss function is the loss of the predictor on a classification dataset and we have different classification costs. In this case, the update becomes \(\mathbf{x}_{t+1}=\Pi_{V}[\mathbf{x}_{t}-\eta h_{t}\mathbf{g}_{t}]\) and it should be very intuitive that a very large \(h_{t}\) will constrain the learning rate to be small, which in turn will hinder the performance of the algorithm.
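As a minimal sketch (ours; the feasible set, radius, and loss are illustrative choices), the importance-weighted OGD step with Euclidean projection onto an \(\ell_{2}\) ball looks as follows, and already shows how a large \(h_{t}\) makes the raw gradient step overshoot:

```python
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd_step(x, grad, eta, h=1.0, radius=1.0):
    """One importance-weighted OGD step: x <- Pi_V[x - eta * h * grad]."""
    return project_l2_ball(x - eta * h * grad, radius)

# One sample with hinge loss l(x) = h * max(1 - y * <q, x>, 0).
q, y, h, eta = np.array([1.0, -2.0]), 1.0, 100.0, 0.1
x = np.zeros(2)
grad = -y * q if 1.0 - y * q.dot(x) > 0.0 else np.zeros(2)  # a subgradient
x_raw = x - eta * h * grad          # before projection: <q, x_raw> = 50,
x_new = ogd_step(x, grad, eta, h)   # far beyond the point where the loss is zero
```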
In this view, a number of algorithms have been proposed to reduce the sensitivity of OGD to the setting of the learning rate. One of the first successful variants of OGD is the
Importance Weight Aware (IWA) updates (Karampatziakis & Langford, 2011). The basic idea is to make an infinite number of gradient updates on the loss function \(\ell_{t}\), each of them with an infinitesimal learning rate. This update can be written as the solution of an ODE and it has a closed form for linear predictors and common loss functions. In other words, the IWA updates follow the gradient flow on each loss function.
While the IWA updates are not so known by the machine learning community, they work extremely well in practice. In fact, they are part of the default optimizer used in the large-scale machine learning library Vowpal Wabbit.1 However, while the IWA updates are very natural and intuitive, the best theoretical guarantee is that their regret upper bound will not be too much worse than the one of plain online gradient descent.
Footnote 1: [https://vowpalwabbit.org/](https://vowpalwabbit.org/)
**Contributions.** In this paper, for the first time we show that the IWA updates have a regret bound that is _better_ than the one of plain OGD. We use the very recently proposed framework of Generalized Implicit Follow-the-Regularized-Leader (FTRL) (Chen & Orabona, 2023), which allows us to design and analyze more general updates than the classic gradient one. In particular, we show that IWA updates can be seen as approximate implicit/proximal updates because they approximately minimize a certain dual function.
**Related Work.** While IWA updates (Karampatziakis & Langford, 2011) were motivated by the use of importance weights, they end up being similar to the implicit (Kivinen & Warmuth, 1997; Kulis & Bartlett, 2010; Campolongo & Orabona, 2020) and proximal (Rockafellar, 1976) updates. In particular, for some losses like the hinge loss, the IWA update and the implicit/proximal update coincide. In this view, they are also similar to the aProx updates (Asi & Duchi, 2019), which take an implicit/proximal step on truncated linear models. However, as far as we know, no explicit relationship between implicit/proximal updates and IWA updates was known till now. Moreover, the best guarantee for IWA updates just shows that in some restricted cases the regret upper bound is not too much worse than the one of OGD (Karampatziakis & Langford, 2011). Finally, all the previous analyses of implicit updates in online learning are conducted in the primal space, while our analysis is done completely in the dual space.
## 2 Definitions and Tools
We define here some basic concepts and tools of convex analysis, we refer the reader to, e.g., Rockafellar (1970); Bauschke & Combettes (2011) for a complete introduction to this topic. We will consider extended value function that can assume infinity values too. A function \(f\) is _proper_ if it is nowhere \(-\infty\) and finite somewhere. A function \(f:V\subseteq\mathbb{R}^{d}\rightarrow[-\infty,+\infty]\) is _closed_ if \(\{\mathbf{x}:f(\mathbf{x})\leq\alpha\}\) is closed for every \(\alpha\in\mathbb{R}\). For a proper function \(f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\), we define a _subgradient_ of \(f\) in \(\mathbf{x}\in\mathbb{R}^{d}\) as a vector \(\mathbf{g}\in\mathbb{R}^{d}\) that satisfies \(f(\mathbf{y})\geq f(\mathbf{x})+\langle\mathbf{g},\mathbf{y}-\mathbf{x}\rangle,\;\forall\mathbf{y}\in \mathbb{R}^{d}\). We denote the set of subgradients of \(f\) in \(\mathbf{x}\) by \(\partial f(\mathbf{x})\). The _indicator function of the set_\(V\), \(i_{V}:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\), has value \(0\) for \(\mathbf{x}\in V\) and \(+\infty\) otherwise. We denote the _dual norm_ of a norm \(\|\cdot\|\) by \(\|\cdot\|_{\star}\). A proper function \(f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\) is \(\mu\)_-strongly convex_ over a convex set \(V\subseteq\operatorname{int}\operatorname{dom}f\) w.r.t. \(\|\cdot\|\) if \(\forall\mathbf{x},\mathbf{y}\in V\) and \(\forall\mathbf{g}\in\partial f(\mathbf{x})\), we have \(f(\mathbf{y})\geq f(\mathbf{x})+\langle\mathbf{g},\mathbf{y}-\mathbf{x}\rangle+\frac{\mu}{2}\|\mathbf{ x}-\mathbf{y}\|^{2}\). For a function \(f:\mathbb{R}^{d}\rightarrow[-\infty,\infty]\), we define the _Fenchel conjugate_\(f^{\star}:\mathbb{R}^{d}\rightarrow[-\infty,\infty]\) as \(f^{\star}(\mathbf{\theta})=\sup_{\mathbf{x}\in\mathbb{R}^{d}}\,\langle\mathbf{\theta}, \mathbf{x}\rangle-f(\mathbf{x})\). From this definition, we immediately have the Fenchel-Young inequality: \(f(\mathbf{x})+f^{\star}(\mathbf{\theta})\geq\langle\mathbf{\theta},\mathbf{x}\rangle,\;\forall \mathbf{x},\mathbf{\theta}\). We will also make use of the following properties of Fenchel conjugates.
**Theorem 2.1** ((Orabona, 2019, Theorem 5.7)).: _Let \(f:\mathbb{R}^{d}\rightarrow(-\infty,+\infty]\) be proper. Then, the following conditions are equivalent:_
1. \(\mathbf{\theta}\in\partial f(\mathbf{x})\)_._
2. \(\langle\mathbf{\theta},\mathbf{y}\rangle-f(\mathbf{y})\) _achieves its supremum in_ \(\mathbf{y}\) _at_ \(\mathbf{y}=\mathbf{x}\)_._
3. \(f(\mathbf{x})+f^{\star}(\mathbf{\theta})=\langle\mathbf{\theta},\mathbf{x}\rangle\)_._
_Moreover, if \(f\) is also convex and closed, we have an additional equivalent condition_
4. \(\mathbf{x}\in\partial f^{\star}(\mathbf{\theta})\)_._
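As a simple instance that will be handy later, for the quadratic regularizer \(\psi(\mathbf{x})=\frac{1}{2\eta}\|\mathbf{x}\|_{2}^{2}\) a direct computation of the supremum gives

\[\psi^{\star}(\mathbf{\theta})=\sup_{\mathbf{x}\in\mathbb{R}^{d}}\ \langle\mathbf{\theta},\mathbf{x}\rangle-\frac{1}{2\eta}\|\mathbf{x}\|_{2}^{2}=\frac{\eta}{2}\|\mathbf{\theta}\|_{2}^{2},\]

attained at \(\mathbf{x}=\eta\mathbf{\theta}\); equivalently, writing \(\lambda=1/\eta\), \(\psi^{\star}(\mathbf{\theta})=\frac{1}{2\lambda}\|\mathbf{\theta}\|_{2}^{2}\), which is the form used in the proofs below.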
### Generalized Implicit FTRL
In this section, we summarize the generalized formulation of the implicit FTRL algorithm from Chen & Orabona (2023). The main idea is to depart from the implicit or linearized updates, and directly design updates that improve the upper bound on the regret. More in detail, the basic analysis of most of the online learning algorithms is based on the definition of subgradients:
\[\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{u})\leq\langle\mathbf{g}_{t},\mathbf{x}_{t}-\mathbf{u} \rangle,\;\forall\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\;. \tag{1}\]
This allows to study the regret on the linearized losses as a proxy for the regret on the losses \(\ell_{t}\). Instead, Chen & Orabona (2023) introduce a new fundamental and more general strategy: using the Fenchel-Young inequality, we have
\[\ell_{t}(\mathbf{x}_{t})-\ell_{t}(\mathbf{u})\leq\ell_{t}(\mathbf{x}_{t})-\langle\mathbf{z}_{t},\mathbf{u}\rangle+\ell_{t}^{\star}(\mathbf{z}_{t}),\;\forall\mathbf{z}_{t}\;.\]
In particular, the algorithm will choose \(\mathbf{z}_{t}\) from the dual space to make a certain upper bound involving this quantity to be tighter. This is a better inequality than (1) because when we select \(\mathbf{z}_{t}=\mathbf{g}_{t}\in\partial\ell_{t}(\mathbf{x}_{t})\), using Theorem 2.1, we recover (1). So, this inequality subsumes the standard one
for subgradients, but, using \(\mathbf{z}_{t}\in\partial\ell_{t}(\mathbf{x}_{t+1})\), it also subsumes the similar inequality used in the implicit case.
The analysis in Chen and Orabona (2023) shows that the optimal setting of \(\mathbf{z}_{t}\) is the one that minimizes the sum of two conjugate functions:
\[H_{t}(\mathbf{z})\triangleq\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{z})+\ell_{t}^{ \star}(\mathbf{z}), \tag{2}\]
where \(\psi_{t,V}\) is the restriction of the regularizer used at time \(t\) on the feasible set \(V\), i.e., \(\psi_{t,V}\triangleq\psi_{t}+i_{V}\). However, we can show that any setting of \(\mathbf{z}_{t}\) that satisfies \(H(\mathbf{z}_{t})<H(\mathbf{g}_{t})\) guarantees a strict improvement in the worst-case regret w.r.t. using the linearized losses. The presence of conjugate functions should not be surprising because we are looking for a surrogate gradient \(\mathbf{z}_{t}\) that lives in the dual space.
Once we have the \(\mathbf{z}_{t}\), we treat them as the gradient of surrogate linear losses. So, putting it all together, Algorithm 1 shows the final algorithm. Chen and Orabona (2023) prove the following general theorem for it.
**Theorem 2.2**.: _Let \(V\subseteq\mathbb{R}^{d}\) be closed and non-empty and \(\psi_{t}:V\to\mathbb{R}\). With the notation in Algorithm 1, define by \(F_{t}(\mathbf{x})=\psi_{t}(\mathbf{x})+\sum_{i=1}^{t-1}(\mathbf{z}_{i},\mathbf{x})\), so that \(\mathbf{x}_{t}\in\operatorname*{argmin}_{\mathbf{x}\in V}\,F_{t}(\mathbf{x})\). Finally, assume that \(\operatorname*{argmin}_{\mathbf{x}\in V}\,F_{t}(\mathbf{x})\) and \(\partial\ell_{t}(\mathbf{x}_{t})\) are not empty for all \(t\). For any \(\mathbf{z}_{t}\in\mathbb{R}^{d}\) and any \(\mathbf{u}\in\mathbb{R}^{d}\), we have_
\[\operatorname*{Regret}_{T}(\mathbf{u})\leq\psi_{T+1}(\mathbf{u})-\min_{ \mathbf{x}\in V}\,\psi_{1}(\mathbf{x})\] \[+\sum_{t=1}^{T}[\psi_{t+1,V}^{\star}(\mathbf{\theta}_{t}-\mathbf{g}_{t})- \psi_{t,V}^{\star}(\mathbf{\theta}_{t})+\langle\mathbf{x}_{t},\mathbf{g}_{t}\rangle-\delta _{t}]\] \[+F_{T+1}(\mathbf{x}_{T+1})-F_{T+1}(\mathbf{u}),\]
_where \(\delta_{t}\triangleq H_{t}(\mathbf{g}_{t})-H_{t}(\mathbf{z}_{t})\)._
The Theorem 2.2 is stated with very weak assumptions to show its generality, but it is immediate to obtain concrete regret guarantees just assuming, for example, strongly convex regularizers and convex and Lipschitz losses and using well-known methods, such as Orabona (2019, Lemma 7.8) However, we can already understand why this is an interesting guarantee. Let's first consider the case that \(\mathbf{z}_{t}=\mathbf{g}_{t}\) and the constant regularizer \(\frac{1}{2\eta}\|\mathbf{x}\|_{2}^{2}\). In this case, we recover the OGD algorithm. Even the guarantee in the Theorem exactly recovers the best known one (Orabona, 2019, Corollary 7.9), with \(\delta_{t}=0\). Instead, if we set \(\mathbf{z}_{t}\) to be the minimizer of \(H_{t}\), Chen and Orabona (2023) shows that we recover the implicit/proximal update. Finally, if we set \(\mathbf{z}_{t}\) such that \(H_{t}(\mathbf{z}_{t})<H_{t}(\mathbf{g}_{t})\) we will have that \(\delta_{t}>0\). Hence, in each single term of the sum we have a negative factor that makes the regret bound smaller and we can interpret the resulting update as an _approximate implicit/proximal update_.
## 3 Importance Weight Aware Updates
The IWA updates were motivated by the failure of OGD to deal with arbitrarily large importance weights. In fact, the standard approach to use importance weights in OGD is to simply multiply the gradient by the importance weight. However, when the importance weight is large, we might have an update that is far beyond what is necessary to attain a small loss on it. Karampatziakis and Langford (2011) proposed IWA, a computationally efficient way to use importance weights without damaging the convergence properties of OGD. In particular, IWA updates are motivated by the following invariance property: an example with importance weight \(h\in\mathbb{N}\) should be treated as if it is an unweighted example appearing \(h\) times in the dataset.
More formally, IWA updates are designed for importance weighted convex losses over linear predictors. So, let \(\mathbf{q}_{t}\in\mathbb{R}^{d}\) be the \(t^{\text{th}}\) sample and \(h_{t}\in\mathbb{R}_{+}\) its importance weight. Each loss function \(\ell_{t}:\mathbb{R}^{d}\to\mathbb{R}\) is defined as \(\ell_{t}(\mathbf{x})\triangleq\hat{\ell}_{t}(\langle\mathbf{q}_{t},\mathbf{x}\rangle)\), where \(\mathbf{x}\) is the predictor, \(\langle\mathbf{q}_{t},\mathbf{x}\rangle\) is the forecast on sample \(\mathbf{q}_{t}\) of the linear predictor \(\mathbf{x}\), and \(\hat{\ell}_{t}:\mathbb{R}\to\mathbb{R}\) is the \(h_{t}\)-weighted convex loss function. For example \(\hat{\ell}_{t}(p)=\frac{h_{t}}{2}(p-y_{t})^{2}\) for linear regression with square loss, \(\hat{\ell}_{t}(p)=h_{t}\ln(1+e^{-y_{t}p})\), and \(\hat{\ell}_{t}(p)=h_{t}\max(1-py_{t},0)\) for linear classification with hinge loss.
The key idea of Karampatziakis and Langford (2011) is performing a sequence of \(N\) updates on each loss function \(\ell_{t}\), each of them with learning rate \(\eta/N\), and take \(N\to\infty\). Given the particular shape of the loss functions, all the gradients for a given sample \(\mathbf{q}_{t}\) points in the same direction: \(\nabla\ell_{t}(\mathbf{x})=\hat{\ell}_{t}^{\prime}(\langle\mathbf{q}_{t},\mathbf{x} \rangle)\mathbf{q}_{t}\). Therefore, the cumulative effect of performing \(N\) consecutive updates in a row on each sample \(\mathbf{q}_{t}\) amounts to a single update in the direction of \(\mathbf{q}_{t}\) rescaled by a single scalar. Hence, we just have to find this scalar. More in details, the effect of doing a sequence of infinitesimal updates can be modelled by an ordinary differential equation (ODE), as detailed in the following theorem.
**Theorem 3.1** ((Karampatziakis and Langford, 2011, Theorem 1)).: _Let \(\hat{\ell}\) be continuously differentiable. Then, the limit for \(N\to\infty\) of the OGD update with \(N\) updates on the same loss function with learning rate \(\eta/N\) is equal to the update_
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}-s_{t}(1)\mathbf{q}_{t},\]
_where the scaling function \(s_{t}:\mathbb{R}\to\mathbb{R}\) satisfies \(s_{t}(0)=0\) and the differential equation_
\[s^{\prime}_{t}(h)=\eta\hat{\ell}^{\prime}_{t}(\langle\mathbf{q}_{t},\mathbf{x}_{t}-s_{ t}(h)\mathbf{q}_{t}\rangle)\;.\]
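As a quick numerical illustration of Theorem 3.1 (a sketch of ours, not from the paper), take the weighted squared loss \(\hat{\ell}_{t}(p)=\frac{h}{2}(p-y)^{2}\): solving the ODE gives the closed form \(s_{t}(1)=(\langle\mathbf{q}_{t},\mathbf{x}_{t}\rangle-y)\big(1-e^{-\eta h\|\mathbf{q}_{t}\|_{2}^{2}}\big)/\|\mathbf{q}_{t}\|_{2}^{2}\), and the snippet below checks that \(N\) explicit gradient steps with learning rate \(\eta/N\) approach the corresponding iterate as \(N\) grows.

```python
import numpy as np

def iwa_squared_loss_scaling(x, q, y, eta, h):
    """Closed-form s(1) for the weighted squared loss, obtained by solving the ODE."""
    p, qq = q.dot(x), q.dot(q)
    return (p - y) * (1.0 - np.exp(-eta * h * qq)) / qq

def n_step_update(x, q, y, eta, h, n):
    """n explicit gradient steps on l(x) = (h/2)(<q,x> - y)^2, each with rate eta/n."""
    x = x.copy()
    for _ in range(n):
        grad = h * (q.dot(x) - y) * q
        x = x - (eta / n) * grad
    return x

x0 = np.array([0.5, 0.2, -0.3])
q, y, eta, h = np.array([1.0, -1.0, 0.5]), 1.0, 0.1, 5.0
x_iwa = x0 - iwa_squared_loss_scaling(x0, q, y, eta, h) * q
for n in (1, 10, 100, 10_000):
    err = np.linalg.norm(n_step_update(x0, q, y, eta, h, n) - x_iwa)
    print(n, err)  # the error shrinks toward zero as n grows
```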
**IWA Updates are Generalized Implicit Updates.** As we said above, IWA updates do not have strong theoretical guarantees. Indeed, we do not even know if they give the same performance as the plain gradient updates in the worst case. Here, we show that IWA is an instantiation of the generalized implicit FTRL. This implies that its regret upper bound is better than the one of online gradient descent. Moreover, this also gives a way to interpret IWA updates as approximate proximal/implicit updates.
Denote by \(p_{t}\triangleq\langle\mathbf{q}_{t},\mathbf{x}_{t}\rangle\), \(\mathbf{x}_{t}(h)\triangleq\mathbf{x}_{t}-s_{t}(h)\mathbf{q}_{t}\), \(p_{t}(h)\triangleq\langle\mathbf{q}_{t},\mathbf{x}_{t}(h)\rangle\), and \(\mathbf{g}_{t}(h)\triangleq\hat{\ell}^{\prime}_{t}(p_{t}(h))\mathbf{q}_{t}\). Consider the generalized implicit FTRL with regularization \(\psi_{t}(\mathbf{x})=\frac{1}{2\eta}\|\mathbf{x}\|_{2}^{2}\) and \(V=\mathbb{R}^{d}\). Set \(\mathbf{z}_{t}\) as
\[\mathbf{z}_{t} \triangleq\int_{0}^{1}\!\mathbf{g}_{t}(h)\,\mathrm{d}h\] \[=\frac{1}{\eta}\left(\int_{0}^{1}\eta\hat{\ell}^{\prime}_{t}( \langle\mathbf{q}_{t},\mathbf{x}_{t}-s_{t}(h)\mathbf{q}_{t}\rangle)\,\mathrm{d}h\right) \mathbf{q}_{t}\] \[=\frac{1}{\eta}s_{t}(1)\mathbf{q}_{t}\;. \tag{3}\]
Then, the iterates of Algorithm 1 are the same as the iterates of IWA updates
\[\mathbf{x}_{t+1}=\frac{\mathbf{\theta}_{t}-\mathbf{z}_{t}}{1/\eta}=\mathbf{x}_{t}-\frac{\mathbf{z} _{t}}{1/\eta}=\mathbf{x}_{t}-s_{t}(1)\mathbf{q}_{t}\;.\]
In words, we can now analyze IWA updates as an instantiation of generalized implicit updates.
In particular, Theorem 3.2 shows sufficient conditions on the loss \(\hat{\ell}_{t}\) to guarantee that IWA updates are as good as using the subgradient \(\mathbf{g}_{t}\), by proving that \(\mathbf{z}_{t}\) satisfies \(H_{t}(\mathbf{z}_{t})\leq H_{t}(\mathbf{g}_{t})\).
**Theorem 3.2**.: _Assume \(s^{\prime}_{t}(h)\) to be continuous in \([0,1]\). If \(\forall h\in[0,1]\), \(\hat{\ell}^{\prime}_{t}(p_{t}(h))\) satisfies one of the following requirements:_
* \(\hat{\ell}^{\prime}_{t}(p_{t}(h))\geq 0\)_,_ \(\hat{\ell}^{\prime\prime\prime}(p_{t}(h))\geq 0\)__
* \(\hat{\ell}^{\prime}_{t}(p_{t}(h))\leq 0\)_,_ \(\hat{\ell}^{\prime\prime\prime}_{t}(p_{t}(h))\leq 0\)
_then \(\mathbf{z}_{t}=\int_{0}^{1}\!\mathbf{g}_{t}(h)\,\mathrm{d}h\) satisfies_
\[H_{t}(\mathbf{z}_{t})\leq H_{t}(\mathbf{g}_{t})\;. \tag{4}\]
Before proving it, we will need the following technical lemmas.
**Lemma 3.3**.: _Let \(\hat{\ell}_{t}:\mathbb{R}\to\mathbb{R}\) be three times differentiable._
* _If_ \(\hat{\ell}^{\prime}(p_{t}(h))\geq 0,\hat{\ell}^{\prime\prime\prime}(p_{t}(h))\geq 0\)_, then_ \(s^{\prime}_{t}(h)\) _is non-negative, non-increasing, convex._
* _If_ \(\hat{\ell}^{\prime}_{t}(p_{t}(h))\leq 0,\hat{\ell}^{\prime\prime\prime}_{t}(p_{t}(h))\leq 0\)_, then_ \(s^{\prime}_{t}(h)\) _is non-positive, non-decreasing, concave._
Proof.: First, observe that
\[s^{\prime}_{t}(h) =\eta\hat{\ell}^{\prime}(\langle\mathbf{x}_{t}-s_{t}(h)\mathbf{q}_{t}, \mathbf{q}_{t}\rangle)=\eta\hat{\ell}^{\prime}(p_{t}(h))\] \[s^{\prime\prime}(h) =\eta\hat{\ell}^{\prime\prime}(p_{t}(h))(-\|\mathbf{q}_{t}\|^{2})s^{ \prime}_{t}(h)\] \[s^{\prime\prime\prime}(h) =\eta\hat{\ell}^{\prime\prime\prime}(p_{t}(h))\|\mathbf{q}_{t}\|^{4} (s^{\prime}_{t}(h))^{2}\] \[\quad+\eta\hat{\ell}^{\prime\prime}(p_{t}(h))(-\|\mathbf{q}_{t}\|_{2} ^{2})s^{\prime\prime}(h)\;.\]
Case 1: \(\hat{\ell}^{\prime}(p)\geq 0,\hat{\ell}^{\prime\prime\prime}(p)\geq 0\). In this case, \(s^{\prime}_{t}(h)\geq 0\), \(s^{\prime\prime}(h)\leq 0\), \(s^{\prime\prime\prime}(h)\geq 0\). That is, \(s^{\prime}_{t}(h)\) is non-negative, non-increasing, and convex.
Case 2: \(\hat{\ell}^{\prime}(p)\leq 0,\hat{\ell}^{\prime\prime\prime}(p)\leq 0\). In this case, \(s^{\prime}_{t}(h)\leq 0\), \(s^{\prime\prime}(h)\geq 0\), \(s^{\prime\prime\prime}(h)\leq 0\). That is, \(s^{\prime}_{t}(h)\) is non-positive, non-decreasing, and concave.
**Lemma 3.4**.: _Let \(s^{\prime}_{t}(h)\) to be continuous in \([0,1]\)._
* _If_ \(s^{\prime}_{t}(h)\) _is convex and non-negative, then_ \(\forall h\in[0,1]\) _we have_ \[\frac{1}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\geq\frac{h}{2}(s^{\prime}_{t}(0)+s ^{\prime}_{t}(h))\geq s_{t}(h)\;.\]
* _If_ \(s^{\prime}_{t}(h)\) _is concave and non-positive, then_ \(\forall h\in[0,1]\) _we have_ \[\frac{1}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\leq\frac{h}{2}(s^{\prime}_{t}(0) +s^{\prime}_{t}(h))\leq s_{t}(h)\;.\]
Proof.: Given that \(s^{\prime}_{t}(h)\) is non-negative, we have \(\frac{1}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\geq\frac{h}{2}(s^{\prime}_{t}(0) +s^{\prime}_{t}(h))\). Now, observe that \(\frac{h}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\) is the area of the trapezium with first base \(s^{\prime}_{t}(0)\), second base \(s^{\prime}_{t}(h)\), and height \(h\). Given that the function is convex and non-negative, this area is bigger than the integral of \(s^{\prime}_{t}\) between 0 and \(h\), that is equal to \(s_{t}(h)\), that proves the statement.
We can prove the other case in a similar way.
We can now prove Theorem 3.2.
Proof of Theorem 3.2.: The left hand side of (4) is equal to
\[\psi^{\star}\left(\int_{0}^{1}\!\mathbf{\theta}_{t}-\mathbf{g}_{t}(h)\,\mathrm{d}h \right)+\ell^{\star}_{t}\left(\int_{0}^{1}\!\mathbf{g}_{t}(h)\,\mathrm{d}h\right)\;.\]
Since \(\psi^{\star}\) and \(\ell^{\star}_{t}\) are convex, applying Jensen's inequality, the left hand side of (4) is upper bounded by
\[\int_{0}^{1}\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t}(h))\, \mathrm{d}h+\int_{0}^{1}\ell^{\star}_{t}(\boldsymbol{g}_{t}(h))\,\mathrm{d}h\;.\]
Moreover, the right hand side of (4) is equal to
\[\int_{0}^{1}\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t})\,\mathrm{ d}h+\int_{0}^{1}\ell^{\star}_{t}(\boldsymbol{g}_{t})\,\mathrm{d}h\;.\]
So, if we can prove that
\[\int_{0}^{1}\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t}(h))\, \mathrm{d}h+\int_{0}^{1}\ell^{\star}_{t}(\boldsymbol{g}_{t}(h))\,\mathrm{d}h\] \[\leq\int_{0}^{1}\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol {g}_{t})\,\mathrm{d}h+\int_{0}^{1}\ell^{\star}_{t}(\boldsymbol{g}_{t})\, \mathrm{d}h,\]
then (4) is proved.
For this, it is sufficient to prove that \(\forall h\in[0,1]\), we have
\[\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t}(h))+\ell^{\star}_{t}( \boldsymbol{g}_{t}(h))\leq\psi^{\star}(\boldsymbol{\theta}_{t}-\boldsymbol{g }_{t})+\ell^{\star}_{t}(\boldsymbol{g}_{t})\;.\]
Given that \(\psi^{\star}_{t}(\boldsymbol{\theta})=\frac{1}{2\lambda}\|\boldsymbol{\theta} \|_{2}^{2}\), where \(\lambda=1/\eta\), and by using the fact that \(\langle\boldsymbol{g},\boldsymbol{x}\rangle=\ell_{t}(\boldsymbol{x})+\ell^{ \star}_{t}(\boldsymbol{g})\), for any pair of \(\boldsymbol{x}\), \(\boldsymbol{g}\) satisfying \(\boldsymbol{g}\in\partial\ell_{t}(\boldsymbol{x})\), the inequality above can be written as
\[\frac{1}{2\lambda}\|\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t}(h )\|_{2}^{2}+\langle\boldsymbol{g}_{t}(h),\boldsymbol{x}_{t}(h)\rangle-\ell_{ t}(\boldsymbol{x}_{t}(h))\] \[\leq\frac{1}{2\lambda}\|\boldsymbol{\theta}_{t}-\boldsymbol{g}_{t }\|_{2}^{2}+\langle\boldsymbol{g}_{t},\boldsymbol{x}_{t}\rangle-\ell_{t}( \boldsymbol{x}_{t})\;.\]
Simplifying this inequality, we have
\[\frac{1}{2\lambda}\|\boldsymbol{g}_{t}(h)\|_{2}^{2}-\frac{1}{2 \lambda}\|\boldsymbol{g}_{t}\|_{2}^{2}\] \[\leq\ell_{t}(\boldsymbol{x}_{t}(h))-\ell_{t}(\boldsymbol{x}_{t})- \langle\boldsymbol{g}_{t}(h),\boldsymbol{x}_{t}(h)-\boldsymbol{x}_{t}\rangle\;.\]
Since \(\ell_{t}(\boldsymbol{x})\) is convex, we obtain
\[\ell_{t}(\boldsymbol{x}_{t}(h))-\ell_{t}(\boldsymbol{x}_{t})- \langle\boldsymbol{g}_{t}(h),\boldsymbol{x}_{t}(h)-\boldsymbol{x}_{t}\rangle\] \[\geq\langle\boldsymbol{g}_{t},\boldsymbol{x}_{t}(h)-\boldsymbol{x }_{t}\rangle-\langle\boldsymbol{g}_{t}(h),\boldsymbol{x}_{t}(h)-\boldsymbol{x }_{t}\rangle\] \[=\langle\boldsymbol{g}_{t}(h)-\boldsymbol{g}_{t},\boldsymbol{x}_{ t}-\boldsymbol{x}_{t}(h)\rangle\;.\]
So, we just need to prove that
\[\frac{1}{2\lambda}\|\boldsymbol{g}_{t}(h)\|_{2}^{2}-\frac{1}{2\lambda}\| \boldsymbol{g}_{t}\|_{2}^{2}\leq\langle\boldsymbol{g}_{t}(h)-\boldsymbol{g}_ {t},\boldsymbol{x}_{t}-\boldsymbol{x}_{t}(h)\rangle\;.\]
Using \(\|\boldsymbol{a}\|_{2}^{2}-\|\boldsymbol{b}\|_{2}^{2}=\langle\boldsymbol{a}- \boldsymbol{b},\boldsymbol{a}+\boldsymbol{b}\rangle\), the inequality above can be rewritten as
\[\frac{1}{2\lambda}\langle\boldsymbol{g}_{t}(h)-\boldsymbol{g}_{t },\boldsymbol{g}_{t}(h)+\boldsymbol{g}_{t}\rangle \leq\langle\boldsymbol{g}_{t}(h)-\boldsymbol{g}_{t},\boldsymbol{x }_{t}-\boldsymbol{x}_{t}(h)\rangle\] \[=\langle\boldsymbol{g}_{t}(h)-\boldsymbol{g}_{t},s_{t}(h) \boldsymbol{q}_{t}\rangle\;.\]
That is,
\[\frac{1}{2}\left(\hat{\ell}^{\prime}_{t}(p_{t}(h))-\hat{\ell}^{ \prime}_{t}(p_{t})\right)\left(\frac{1}{\lambda}\hat{\ell}^{\prime}_{t}(p_{t }(h))+\frac{1}{\lambda}\hat{\ell}^{\prime}_{t}(p_{t})\right)\|\boldsymbol{q}_ {t}\|_{2}^{2}\] \[\leq\left(\hat{\ell}^{\prime}_{t}(p_{t}(h))-\hat{\ell}^{\prime}_{t }(p_{t})\right)s_{t}(h)\|\boldsymbol{q}_{t}\|_{2}^{2}\;.\]
Since \(s^{\prime}_{t}(h)=\frac{1}{\lambda}\hat{\ell}^{\prime}_{t}(p_{t}(h))\), multiplying both sides by \(1/\lambda\), the above inequality becomes
\[\frac{1}{2}(s^{\prime}_{t}(h)-s^{\prime}_{t}(0))(s^{\prime}_{t}(0) +s^{\prime}_{t}(h))\] \[\leq(s^{\prime}_{t}(h)-s^{\prime}_{t}(0))s_{t}(h)\;. \tag{5}\]
Now, we consider two cases.
**Case 1:**\(\hat{\ell}^{\prime}_{t}(p_{t}(h))\geq 0\) and \(\hat{\ell}^{\prime\prime\prime}_{t}(p_{t}(h))\geq 0\). By Lemma 3.3, \(s^{\prime}_{t}(h)\) is non-negative, non-increasing, and convex. So, in particular we have \(s^{\prime}_{t}(h)-s^{\prime}_{t}(0)\leq 0\). In this case, by Lemma 3.4, we have \(\frac{1}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\geq s_{t}(h)\).
**Case 2:**\(\hat{\ell}^{\prime}_{t}(p_{t}(h))\leq 0\), and \(\hat{\ell}^{\prime\prime\prime}_{t}(p_{t}(h))\leq 0\). By Lemma 3.3, \(s^{\prime}_{t}(h)\) is non-positive, non-decreasing, concave. So, in particular, we have \(s^{\prime}_{t}(h)-s^{\prime}_{t}(0)\geq 0\). In this case, by Lemma 3.4, we have \(\frac{1}{2}(s^{\prime}_{t}(0)+s^{\prime}_{t}(h))\leq s_{t}(h)\).
Combining the two cases, we conclude that (5) holds, which implies that (4) holds as well.
### Examples of Losses and IWA Updates
Now, we present some examples of loss functions that satisfy the requirements of Theorem 3.2 and their corresponding IWA updates from Karampatziakis & Langford (2011). In all the following examples, the prediction of the algorithm on sample \(\boldsymbol{q}\) is \(\langle\boldsymbol{q},\boldsymbol{x}\rangle\).
**Logistic loss:**\(\hat{\ell}(p)=h\ln(1+e^{-yp})\).
The IWA update is \(\frac{W(e^{h\eta\|\boldsymbol{q}\|_{2}^{2}+yp+e^{yp}})-h\eta\|\boldsymbol{q}\|_{2}^{2}-e^{yp}}{y\|\boldsymbol{q}\|_{2}^{2}}\) for \(y\in\{-1,1\}\), where \(W(x)\) is the Lambert \(W\) function. We have that
\[\hat{\ell}^{\prime}(p)=\frac{-yh}{1+e^{py}},\quad\hat{\ell}^{\prime\prime \prime}(p)=hy^{3}(-e^{-py-1})\;.\]
When \(y\geq 0\), \(\hat{\ell}^{\prime}(p)\leq 0\) and \(\hat{\ell}^{\prime\prime\prime}(p)\leq 0\). When \(y\leq 0\), \(\hat{\ell}^{\prime}(p)\geq 0\) and \(\hat{\ell}^{\prime\prime\prime}(p)\geq 0\).
**Exponential loss:**\(\hat{\ell}(p)=e^{-yp}\).
The IWA update is \(\frac{py-\ln(h\|\boldsymbol{q}\|_{2}^{2}+e^{py})}{\|\boldsymbol{q}\|_{2}^{2}y}\) for \(y\in\{-1,1\}\). We have that
\[\hat{\ell}^{\prime}(p)=y(-e^{-py}),\quad\hat{\ell}^{\prime\prime\prime}(p)=y^{ 3}(-e^{-py})\;.\]
When \(y\geq 0\), \(\hat{\ell}^{\prime}(p)\leq 0\) and \(\hat{\ell}^{\prime\prime\prime}(p)\leq 0\). When \(y\leq 0\), \(\hat{\ell}^{\prime}(p)\geq 0\) and \(\hat{\ell}^{\prime\prime\prime}(p)\geq 0\).
**Logarithmic loss:**\(\hat{\ell}(p)=y\ln(y/p)+(1-y)\ln((1-y)/(1-p))\).
The IWA update is \(\frac{p-1+\sqrt{(p-1)^{2}+2h\eta\|\mathbf{q}\|_{2}^{2}}}{\|\mathbf{q}\|_{2}^{2}}\) for \(y=0\), and \(\frac{p-\sqrt{p^{2}+2h\eta\|\mathbf{q}\|_{2}^{2}}}{\|\mathbf{q}\|_{2}^{2}}\) for \(y=1\).
* if y=0 \[\hat{\ell}^{\prime}(p)=\frac{1}{1-p},\quad\hat{\ell}^{\prime\prime\prime}(p)= -\frac{2}{(p-1)^{3}}\;.\]
* if y=1 \[\hat{\ell}^{\prime}(p)=-\frac{1}{p},\quad\hat{\ell}^{\prime\prime\prime}(p)= -\frac{2}{p^{3}}\;.\]
For both cases, \(\hat{\ell}^{\prime}(p)\) and \(\hat{\ell}^{\prime\prime\prime}(p)\) will have the same sign.
**Squared loss:**\(\hat{\ell}(p)=\frac{1}{2}(y-p)^{2}\)**.** The IWA update is \(\frac{p-y}{\|\mathbf{q}\|_{2}^{2}}(1-e^{-h\eta\|\mathbf{q}\|_{2}^{2}})\).
\[\hat{\ell}^{\prime}(p)=p-y,\quad\hat{\ell}^{\prime\prime\prime}(p)=0\;.\]
In this case, the sign of the first derivative can change. However, according to Section 4.2 of Karampatziakis and Langford (2011), for the squared loss IWA will not overshoot the minimum. This means that for any \(h\in[0,1]\), \(p_{t}(h)-y\) will always have the same sign, so the conditions are verified.
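As an illustration (ours, not the authors' code), the following NumPy sketch applies the squared-loss IWA update above and shows the no-overshoot behavior: the residual \(p-y\) shrinks toward zero without changing sign, even for very large \(\eta\).

```
import numpy as np

def iwa_update_squared(x, q, y, h, eta):
    # Squared-loss IWA step from the closed form above:
    # s = (p - y) / ||q||^2 * (1 - exp(-h * eta * ||q||^2)),  x <- x - s * q.
    p = np.dot(q, x)
    qq = np.dot(q, q)
    s = (p - y) / qq * (1.0 - np.exp(-h * eta * qq))
    return x - s * q

rng = np.random.default_rng(0)
q, x, y = rng.normal(size=5), rng.normal(size=5), 1.0
for eta in [0.1, 1.0, 100.0]:
    x_new = iwa_update_squared(x, q, y, h=1.0, eta=eta)
    # The new residual equals (p - y) * exp(-eta * ||q||^2): it shrinks toward
    # 0 but never changes sign, unlike a plain gradient step with a large eta.
    print(eta, np.dot(q, x_new) - y)
```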
## 4 Empirical Evaluation
IWA, as an instantiation of generalized implicit updates, is guaranteed in the worst-case scenario to be at least as good as FTRL with linear models. Generalized implicit FTRL is a flexible framework in that \(\mathbf{z}_{t}\) admits many choices. In this section, we compare the performance of IWA updates with different choices of \(\mathbf{z}_{t}\) in Algorithm 1, when \(\psi_{t}=\frac{\lambda_{t}}{2}\|\mathbf{x}\|_{2}^{2}\) and \(\lambda_{t}\) is set as in Chen and Orabona (2023, Corollary 4.3). In particular, we consider:
* FTRL with linearized losses: \(\mathbf{z}_{t}=\mathbf{g}_{t}\);
* Implicit FTRL with aProx updates: \(\mathbf{z}_{t}=\min\left\{1,\frac{\lambda_{t}\ell_{t}(\mathbf{x}_{t})}{\|\mathbf{g}_{t} \|^{2}}\right\}\mathbf{g}_{t}\);
* Implicit FTRL with IWA updates: \(\mathbf{z}_{t}=\frac{1}{\eta}s_{t}(1)\mathbf{q}_{t}\);
* Implicit FTRL with proximal updates.
We conduct linear prediction experiments on datasets from LibSVM (Chang and Lin, 2011). We show experiments on classification tasks using the logistic loss, and regression tasks with the squared loss. We normalize the datasets and add a constant bias term to the features. Given that in the online learning setting we do not have separate training and validation data to tune the initial learning rate, we plot the averaged loss, \(\frac{1}{t}\sum_{i=1}^{t}\ell_{i}(\mathbf{x}_{i})\), versus different choices of the initial learning rate \(\eta_{0}\); this simultaneously shows the algorithms' sensitivity to the hyperparameter \(\eta_{0}\) and their best achievable performance. We consider \(\eta_{0}\in[10^{-3},10^{3}]\). Each algorithm is run 10 times with different shufflings of the data and we plot the average of the averaged losses.
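The following is a minimal sketch (ours) of this protocol for the squared loss on synthetic data, comparing the linearized and IWA choices of \(\mathbf{z}_{t}\) under a fixed quadratic regularizer; the aProx variant, the LibSVM data, and the exact \(\lambda_{t}\) schedule of Chen and Orabona (2023) are omitted for brevity.

```
import numpy as np

def run_ftrl(X, y, eta, update="linear"):
    # FTRL with psi(x) = ||x||^2 / (2*eta), so x_t = -eta * (sum of past z_i).
    # update="linear": z_t = g_t.  update="iwa": z_t = s_t(1) * q_t / eta,
    # with the squared-loss IWA closed form (importance weight h = 1).
    theta = np.zeros(X.shape[1])        # holds -(sum of the z_i's)
    total = 0.0
    for q, target in zip(X, y):
        x = eta * theta
        p = q @ x
        total += 0.5 * (p - target) ** 2
        qq = q @ q + 1e-12
        if update == "linear":
            z = (p - target) * q
        else:
            z = (p - target) / qq * (1 - np.exp(-eta * qq)) * q / eta
        theta -= z
    return total / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=2000)
for eta in [0.001, 0.01, 0.1, 1.0]:   # the IWA variant stays stable as eta grows
    print(eta, run_ftrl(X, y, eta, "linear"), run_ftrl(X, y, eta, "iwa"))
```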
Figure 1 shows the averaged loss versus different selections of the hyperparameter \(\eta_{0}\) for regression tasks with the squared loss. The figure demonstrates that FTRL with linearized updates is very sensitive to the choice of the hyperparameter \(\eta_{0}\), while the implicit FTRL updates are robust to different settings of the hyperparameter. The range of viable learning rates is much broader than that of FTRL with linear models.
Figure 2 shows the averaged loss versus different selections of the hyperparameter \(\eta_{0}\) for classification tasks with the logistic
Figure 1: Squared loss, averaged loss vs. hyperparameter \(\eta_{0}\).
loss. In the experiments, implicit FTRL with IWA updates improves upon FTRL with linearized models. Both IWA updates and aProx updates allow a broader range of learning rates. This is in line with previous results of Asi and Duchi (2019) in the offline setting. Note that in this case the proximal operator does not have a closed-form solution. Yet, IWA provides a way to approximate proximal updates efficiently and, in some sense, to enjoy the stability of proximal updates.
## 5 Conclusion
Generalized implicit FTRL, as a flexible framework, allows the design of new algorithms and their immediate theoretical analysis. By proving that IWA is a concrete instantiation of this new framework, we show that IWA updates have a regret bound that is _better_ than that of plain OGD, and we provide a perspective that interprets IWA updates as approximate implicit/proximal updates.
## Acknowledgements
Francesco Orabona is supported by the National Science Foundation under the grant no. 2046096 "CAREER: Parameter-free Optimization Algorithms for Machine Learning".
|
2307.00680 | CLIMAX: An exploration of Classifier-Based Contrastive Explanations | Explainable AI is an evolving area that deals with understanding the decision
making of machine learning models so that these models are more transparent,
accountable, and understandable for humans. In particular, post-hoc
model-agnostic interpretable AI techniques explain the decisions of a black-box
ML model for a single instance locally, without the knowledge of the intrinsic
nature of the ML model. Despite their simplicity and capability in providing
valuable insights, existing approaches fail to deliver consistent and reliable
explanations. Moreover, in the context of black-box classifiers, existing
approaches justify the predicted class, but these methods do not ensure that
the explanation scores strongly differ as compared to those of another class.
In this work we propose a novel post-hoc model agnostic XAI technique that
provides contrastive explanations justifying the classification of a black box
classifier along with a reasoning as to why another class was not predicted.
Our method, which we refer to as CLIMAX which is short for Contrastive
Label-aware Influence-based Model Agnostic XAI, is based on local classifiers .
In order to ensure model fidelity of the explainer, we require the
perturbations to be such that it leads to a class-balanced surrogate dataset.
Towards this, we employ a label-aware surrogate data generation method based on
random oversampling and Gaussian Mixture Model sampling. Further, we propose
influence subsampling in order to retain effective samples and hence ensure
sample complexity. We show that we achieve better consistency as compared to
baselines such as LIME, BayLIME, and SLIME. We also depict results on textual
and image based datasets, where we generate contrastive explanations for any
black-box classification model where one is able to only query the class
probabilities for an instance of interest. | Praharsh Nanavati, Ranjitha Prasad | 2023-07-02T22:52:58Z | http://arxiv.org/abs/2307.00680v1 | # CLIMAX: An exploration of Classifier-Based Contrastive Explanations
###### Abstract
Explainable AI is an evolving area that deals with understanding the decision making of machine learning models so that these models are more transparent, accountable, and understandable for humans. In particular, post-hoc model-agnostic interpretable AI techniques explain the decisions of a black-box ML model for a single instance locally, without the knowledge of the intrinsic nature of the ML model. Despite their simplicity and capability in providing valuable insights, existing approaches fail to deliver consistent and reliable explanations. Moreover, in the context of black-box classifiers, existing approaches justify the predicted class, but these methods do not ensure that the explanation scores strongly differ as compared to those of another class. In this work we propose a novel post-hoc model agnostic XAI technique that provides contrastive explanations justifying the classification of a black box classifier along with a reasoning as to why another class was not predicted. Our method, which we refer to as CLIMAX which is short for Contrastive Label-aware Influence-based Model Agnostic XAI, is based on local classifiers. In order to ensure model fidelity of the explainer, we require the perturbations to be such that it leads to a class-balanced surrogate dataset. Towards this, we employ a label-aware surrogate data generation method based on random oversampling and Gaussian Mixture Model sampling. Further, we propose influence subsampling in order to retain effective samples and hence ensure sample complexity. We show that we achieve better consistency as compared to baselines such as LIME, BayLIME, and SLIME. We also depict results on textual and image based datasets, where we generate contrastive explanations for any black-box classification model where one is able to only query the class probabilities for an instance of interest.
## 1 Introduction
As AI technology deployment is increasing, especially in safety-critical domains, it has become necessary that ML models be interpretable and trustworthy while being accurate. Trustworthiness of an AI system is possible if the target users understand the _how and why_ of ML model predictions. Interpretability is also essential owing to severe biases that are induced in the decision-making process of deep neural networks (DNNs) when subject to adversaries [35, 25, 10]. Governments across the world have introduced regulations towards the ethical use of AI. For instance, the General Data Protection Regulation (GDPR) passed in Europe requires businesses to provide understandable justifications to their users for decisions of AI systems that directly affect them [2].
Popular categorization of existing XAI methods is based on XAI models being local [19, 13, 22, 33, 27, 20] or global [13], model agnostic [19, 13] or model specific [23], in-hoc or post-hoc, perturbation or saliency-based [22], concept-based or feature-based [2], etc. The simplest among them are the well-established post-hoc, perturbation-based techniques such as LIME [19] and KernelSHAP [13]. Perturbation-based post-hoc explainers offer a model agnostic means of interpreting black-box ML models while requiring query-level access for a single instance. These methods define a data generating process to obtain weighted perturbations (surrogate data) in the neighborhood of the index sample, and subsequently employ an easy-to-explain linear regression model to obtain per-feature importance weights. Despite the widespread usage of these techniques, subsequent works have pointed out various issues. For instance, LIME leads to inconsistent explanations on a given sample [28, 34, 33, 22, 27], which hampers its use in safety-critical systems. Although KernelSHAP partially counters the stability issue, it employs training data for explanations. However, more importantly, these methods use feature attribution to explain the prediction of black-box models and do not produce contrastive explanations.
More recently, contrastive [4, 5] and counterfactual approaches [26] have been proposed. The goal of a contrastive explanation is not only to justify the output class of an input, but also to indicate what should be absent to maintain the original classification, while counterfactual explanations specify necessary minimal changes in the input so that an alternate output is obtained. In this work, we are interested in a label-aware, post-hoc technique for providing model-agnostic contrastive explanations in the locality of a given instance (which we refer to as the index sample).
Studies in philosophy and social science point out that, in general, humans prefer contrastive explanations [12]. Let us suppose that the predicted class of the black-box model for the \(i\)-th instance is \(c_{i}\), and the alternative class-label is \(c_{-i}\). Here, answering the question, _Why_\(c_{i}\)? leads to just explaining the predicted class, as done in most of the post-hoc model agnostic techniques such as LIME, BayLIME, Unravel and KernelSHAP. However, it is natural to seek a contrastive explanation where queries are of the form _why \(c_{i}\) and not \(c_{-i}\)?_. As pointed out in [4], contrastive explanations highlight what is minimally sufficient in an input to justify its classification, and identify contrastive features that should be minimally present and critically absent to distinguish it from another input that is seemingly close but would be classified differently. Most of the available contrastive explainers require the original training samples, are model-aware, or use a complex data generation procedure which leads to opacity in
explainer models [30, 4].
Alternatively, we propose a contrastive explainer which is model-agnostic and perturbation-based. In the context of a classification based black-box model, a regression based explanation model provides explanations based on the surrogate dataset generated using the pre-defined data generating process. Note that the data generating process does not mandate samples from all classes since a regression based explainer does not require a balanced dataset. Essentially, this implies that class-based feature attribution scores may be provided even when there is no information about that class in the surrogate dataset. We question the basic paradigm in post-hoc perturbation based methods which advocates the use of a local linear regression model and, instead, we focus on a local logistic regression model. A classifier based explanation model necessitates that the surrogate samples form a balanced dataset, i.e., there is an approximately equal number of samples from the different classes. Essentially, this implies that the class-based attribution scores are obtained after ensuring that surrogate samples carrying information about all classes are present in the surrogate dataset. This leads to contrastive explanations and improved stability of the explainer method.
Contributions:In this work, we propose a contrastive label-aware sample-efficient post-hoc explainable AI (XAI) technique called CLIMAX. Briefly, our contributions are as follows:
* We propose two variants of the logistic regression (LR) based explainer and generation of a label-wise balanced surrogate dataset. Similar to LIME, the per-feature weight obtained from the LR model provides the contrastive feature attribution scores. Essentially this allows us to exploit the classification boundary of the black-box model and explain each instance, from a dual point of view, i.e.,
* _Why point 'a' must lie in class \(c_{i}\)_ and
* _Why point 'a' must not lie in classes \(c_{-i}\)_
* Influence functions are a classic technique from robust statistics which trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. We use this module within our surrogate data generator as it helps reduce the sample complexity. We observe that the performance of the model after subsampling stays at par with the original model, and sometimes even surpasses it.
## 2 Related Works and Novelty
In this work, we are interested in label-aware model-agnostic post-hoc locally interpretable models. We discuss the related works by highlighting the critical aspects in comparison to the proposed method as in the sequel.
**Instability**: The instability or inconsistency of the explanation scores of LIME over several iterations [33, 22] is well-known. This inconsistency occurs due to random perturbation-based surrogate datasets. A deterministic hierarchical clustering approach for consistent explanations was proposed in DLIME [33], and its biggest drawback is that it requires training data for clustering. To avoid the additional task of 'explaining the explainer,' techniques like ALIME [22, 21] are not preferred. A parametric Bayesian method was proposed in [34], where a weighted sum of the prior knowledge and the estimates based on new samples obtained from LIME is used to obtain explanations in a Bayesian linear regression framework. Both LIME and BayLIME employ hyperparameters (kernel-width) that need to be tuned. Recently, [24] proposed a technique known as focused sampling, which utilizes uncertainty estimates of explanation scores. In [20], authors propose a Gaussian process based active learning module to abate issues of instability. In this work, we use a sampling strategy that ensures balance with respect to the labels. This ensures that we obtain good stability and low inconsistency in explanation scores.
**Sample Complexity**: Sample efficiency in post-hoc models is a crucial factor in efficiently obtaining reliable explanations, and there is consensus in the community that explainable models must use as few samples for an explanation as possible [24]. Approaches such as LIME, KernelSHAP, and BayLIME do not provide any guidance on choosing the number of perturbations, although this issue has been acknowledged [34]. Influence functions [11] are known to reduce the sampling complexity and the reduced sample set can be used for providing robust explanations. We exploit influence functions to achieve fidelity and sample complexity goals simultaneously via our surrogate dataset.
**Classifier-based Explainers**: Techniques like LIME [19] and KernelSHAP [13] fit a linear regression model on classification probabilities.
Figure 1: Comparisons between CLIMAX and LIME for an instance of the (a) Quora Insincerity Dataset, and (b) the MNIST dataset. The results of CLIMAX are more contrastive as compared to LIME. In the case of textual results we can clearly see the confident nature of CLIMAX, and in the case of images, we can see that CLIMAX provides more reliable results, as it gives a more precise region and clearly highlights the ambiguous regions. The same region may play a large positive role for other digits. This is portrayed later.
This leads to a separate set of explanation scores for each class, where the scores try to explain why a class is predicted. Intuitively, regression black-box models are well-explained by linear regression explanation models and black-box classifiers are better explained by linear classifier explainers. In particular, classifier based explainers are expected to provide a robust set of explanations as they can exploit the classification boundary explicitly to provide information about why a point lies in a class \(c_{i}\), and why not in the other classes, \(c_{-i}\). This problem has been acknowledged in [15], and the authors propose explanation scores based on confident item sets. In [29], authors approximate the local decision boundary, but use a variational autoencoder for surrogate data generation, leading to opaque data generation. We propose a classifier based explainer which makes use of the probabilities of all classes, and hence is more contrastive.
**Contrastive Explainers**: Contrastive explanations clarify why an event occurred in contrast to another. They are inherently intuitive for humans to both produce and comprehend. There are a few techniques that already exist in the literature, such as the Contrastive Explanations Method, which makes use of pertinent positives and pertinent negatives to identify those features that are important and those that are not, respectively [7, 9, 32, 4]. In [30], authors propose a framework that converts an existing back-propagation explanation method into one that builds class-contrastive explanations, especially in the context of DNNs. However, these methods are not model agnostic, and often assume access to training data. In [18], authors repurpose Shapley values to generate counterfactual and contrastive global explanations. In [5], authors propose the Model Agnostic Contrastive Explanations Method (MACEM) to generate contrastive explanations for any classification model where one is able to only query the class probabilities for a desired input, restricted to structured tabular data.
**Novelty:** In comparison, CLIMAX is novel in the following ways:
* CLIMAX provides feature importances by explaining as to why the index sample belongs to a specific class and in the process, it also provides strong justification about why other classes were not predicted. This effect is brought about in CLIMAX using local classifiers for explanations, without explicitly solving for pertinent positives and negatives.
* CLIMAX is a perturbation-based technique, which implies that it does not require any access to training data.
* CLIMAX explains the decision boundary of the black-box classifier, which is the most relevant characteristic of classifiers that are optimized for accuracy.
## 3 Mathematical Preliminaries
In this section, we describe the mathematical preliminaries of the popular local explainer namely LIME, for classifier models.
Local explainer models are interpretable models that are used to explain individual predictions of black box machine learning models. Among several methods, Local Interpretable Model-agnostic Explanations (LIME) provides a concrete implementation of a local explainer model. These models are trained to approximate the predictions of the underlying black box model locally, in the neighborhood of the sample of interest; hence, these models may or may not be valid explainers globally [14].
**Notation** Let \(f:\mathbb{R}^{d}\rightarrow[0,1]\) denote a black-box binary classifier, that takes a data point \(\mathbf{x}\in\mathbb{R}^{d}\) (\(d\) features) and returns the probability that \(\mathbf{x}\) belongs to a certain class. Our goal is to explain individual predictions of \(f\) locally. Let \(\mathcal{Z}\) be a set of \(n^{\prime}\) randomly sampled instances (perturbations) around \(\mathbf{x}\). The proximity between \(\mathbf{x}\) and any \(\mathbf{z}\in\mathcal{Z}\) is given by \(\pi_{\mathbf{x}}(\mathbf{z})\in\mathbb{R}\). We denote the vector of these distances over the \(n^{\prime}\) perturbations in \(\mathcal{Z}\) as \(\Pi_{\mathbf{x}}(\mathbf{Z})\in\mathbb{R}^{n^{\prime}}\). Let \(\boldsymbol{\phi}\in\mathbb{R}^{d}\) denote the explanation in terms of feature importances for the prediction \(f(\mathbf{x})\).
Let \(\boldsymbol{y}_{1},\boldsymbol{y}_{0}\in\mathbb{R}^{n^{\prime}}\) be the black-box predictions for \(n^{\prime}\) surrogate samples corresponding to class-1 and class-0, respectively, such that for the \(i\)-th instance in \(\mathcal{Z}\), \(y_{1}(i)=f(\mathbf{z}_{i})\) and \(y_{0}(i)=1-f(\mathbf{z}_{i})\), and since they are probabilities, \(y_{1}(i),y_{0}(i)\in[0,1]\). LIME explains the predictions of the classifier \(f\) by learning a linear model locally around each prediction. Hence, in the case of LIME, the coefficients of the linear model, denoted \(\boldsymbol{\phi}\), are treated as the feature contributions to the black box prediction [24].
\[\operatorname*{arg\,min}_{\boldsymbol{\phi}}\sum_{\mathbf{z}\in\mathcal{Z}}[f( \mathbf{z})-\boldsymbol{\phi}^{T}\mathbf{z}]^{2}\pi_{\mathbf{x}}(\mathbf{z}), \tag{1}\]
which has a closed-form solution for class \(c\in\{0,1\}\) given by:
\[\boldsymbol{\hat{\phi}}_{c}=(\mathbf{Z}^{T}\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))\mathbf{Z}+\boldsymbol{I})^{-1}\mathbf{Z}^{T}\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))\boldsymbol{y}_{c}. \tag{2}\]
LIME assigns different importance scores to different classes since, by design, it is not possible to incorporate the information about the probabilities of both classes into a single linear regression framework. As mentioned earlier, this is sufficient as long as the question is 'why \(c\)', as this question does not seek explanations about the other classes. Furthermore, a challenge in LIME arises in selecting a valid neighborhood or locality for surrogate sampling. LIME uses random sampling where the samples are chosen heuristically: \(\pi_{\mathbf{x}}(\mathbf{z})\) is computed as the cosine or \(l_{2}\) distance.
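For concreteness, the closed form in (2) can be computed as in the following NumPy sketch (ours); the stand-in black box \(f\), the Gaussian perturbations, and the proximity kernel are illustrative assumptions rather than the exact choices of LIME.

```
import numpy as np

def lime_weights(Z, y_c, pi):
    # Weighted ridge closed form of Eq. (2); rows of Z are surrogate samples,
    # y_c are the black-box probabilities of class c, pi the proximities.
    W = np.diag(pi)
    A = Z.T @ W @ Z + np.eye(Z.shape[1])
    b = Z.T @ W @ y_c
    return np.linalg.solve(A, b)      # feature attributions phi_c

def f(Z):                              # hypothetical black box returning P(c=1)
    return 1.0 / (1.0 + np.exp(-(Z[:, 0] - 0.5 * Z[:, 1])))

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
Z = x0 + 0.3 * rng.normal(size=(500, 4))            # perturbations around x0
pi = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.75)  # proximity weights
phi_1 = lime_weights(Z, f(Z), pi)                   # explains "why class 1"
phi_0 = lime_weights(Z, 1.0 - f(Z), pi)             # explains "why class 0"
```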
## 4 Proposed Techniques and Algorithms
We propose a classifier-based explainer, which we refer to as Contrastive Label-aware Influence-based Model-Agnostic XAI (CLIMAX), to understand and exploit the classification boundary as dictated by the black-box model so as to explain each instance, from dual points of view as stated before. Essentially, our method's reasoning is based on why a given point must lie in class \(c_{i}\) and not in classes \(c_{-i}\). This is possible as unlike LIME, at the time of assigning scores, CLIMAX has access to all the class probabilities and the local classifier fits its boundary according to that.
CLIMAX explains the predictions of the binary classifier \(f(\cdot)\) by learning a logistic regression model locally around each prediction where, the probability of class-1 and class-0 according to the explainer is given by \(\sigma(\boldsymbol{\phi}^{T}\mathbf{z})\) and \((1-\sigma(\boldsymbol{\phi}^{T}\mathbf{z}))\), respectively, where \(\sigma(\cdot)\) is the sigmoid function. We now define two different variants of the CLIMAX method.
### L-Climax
In this section, we propose a local classifier explainer that results in logistic outputs, and we formally refer to this as Logistic CLIMAX, or L-CLIMAX. In order to derive the loss function, we state the following lemma.
**Lemma 1**: _Given a dataset \(\mathcal{D}\) with the \(i\)-th instance such that \(\{\mathbf{z}_{i},\mathbf{y}_{i}\}\in\mathcal{D}\) where \(\mathbf{z}_{i}\in\mathbb{R}^{d}\) are the covariates and \(\mathbf{y}_{i}\in\mathbb{R}^{|\mathcal{C}|}\)
represents the class-probabilities, a linear model on the logistic outputs can be obtained by solving_
\[\arg\min_{\mathbf{\phi}}(\ell-\mathbf{Z}^{T}\mathbf{\phi})^{T}\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))(\ell-\mathbf{Z}^{T}\mathbf{\phi}), \tag{3}\]
_where the \(i\)-th entry of \(\ell\in\mathbb{R}^{n^{\prime}}\) is given by \(\ell(i)=\log\left(\frac{y_{i}}{1-y_{i}}\right)\) obtained from the black-box model, the \(i\)-th column in \(\mathbf{Z}\in\mathbb{R}^{d\times n^{\prime}}\) is given by the surrogate sample \(\mathbf{z}_{i}\), and \(\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))\) is a diagonal matrix whose \((i,i)\)-th entry is given by \(\pi_{\mathbf{x}}(\mathbf{z}_{i})\)._
**Proof 1**: _The output of the logistic explainer model is given as_
\[y_{i}=\sigma(\mathbf{\phi}^{T}\mathbf{z}_{i})=\frac{1}{1+e^{-\mathbf{\phi}^{T}\mathbf{ z}_{i}}}. \tag{4}\]
_The above expression can be rewritten in terms of the log-odds representation of the logistic output as_
\[\mathbf{\phi}^{T}\mathbf{z}_{i}=\log\left(\frac{y_{i}}{1-y_{i}}\right)\triangleq \ell(\mathbf{z}_{i}). \tag{5}\]
_The above formulation allows us to model the black box prediction of each perturbation \(\mathbf{z}_{i}\) as a linear combination of the corresponding feature values (\(\mathbf{\phi}^{T}\mathbf{z}_{i}\)) plus an error term, i.e.,_
\[\ell(\mathbf{z}_{i})=\mathbf{\phi}^{T}\mathbf{z}_{i}+\epsilon_{i}, \tag{6}\]
_where we model \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\). Here, \(l(\mathbf{z}_{i})\) is obtained from the black box classifier. Incorporating the objective function of LIME in the context of (6) leads to_
\[\operatorname*{arg\,min}_{\mathbf{\phi}}\sum_{\mathbf{z}_{i}\in\mathbf{Z}}[\ell( \mathbf{z}_{i})-\mathbf{\phi}^{T}\mathbf{z}_{i}]^{2}\pi_{\mathbf{x}}(\mathbf{z}_{ i}). \tag{7}\]
_Rewriting (7) in terms of vector and matrices namely \(\ell\), \(\mathbf{Z}\) and \(\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))\), we obtain_
\[\operatorname*{arg\,min}_{\mathbf{\phi}}(\ell-\mathbf{Z}^{T}\mathbf{\phi})^{T}\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z}))(\ell-\mathbf{Z}^{T}\mathbf{\phi}). \tag{8}\]
Solving (8) by including a regularizer of the form \(\lambda\|\mathbf{\phi}\|_{2}^{2}\), we obtain the closed-form solution for \(\mathbf{\phi}\) as
\[\mathbf{\hat{\phi}}=(\mathbf{Z}\operatorname{diag}(\Pi_{\mathbf{x}}(\mathbf{Z})) \mathbf{Z}^{T}+\lambda\mathbf{I})^{-1}\mathbf{Z}\operatorname{diag}(\Pi_{\mathbf{ x}}(\mathbf{Z}))\ell \tag{9}\]
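A minimal NumPy sketch (ours) of the closed form in (9) is given below; clipping the black-box probabilities away from \(\{0,1\}\) is our addition to keep the log-odds targets finite.

```
import numpy as np

def l_climax_weights(Z, probs, pi, lam):
    # Closed form of Eq. (9). Columns of Z (d x n') are surrogate samples,
    # probs are black-box class-1 probabilities, pi the proximity weights.
    p = np.clip(probs, 1e-6, 1 - 1e-6)
    ell = np.log(p / (1 - p))                  # log-odds targets, Eq. (5)
    W = np.diag(pi)
    A = Z @ W @ Z.T + lam * np.eye(Z.shape[0])
    b = Z @ W @ ell
    return np.linalg.solve(A, b)               # contrastive scores phi
```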
### Ce-Climax
The second variant of CLIMAX constructs an explanation that approximates the behavior of the black box accurately in the vicinity (neighborhood) of the index sample \(\mathbf{x}_{0}\) by directly optimizing the log-loss, i.e., we obtain feature importance values by solving the following:
\[\operatorname*{arg\,min}_{\mathbf{\phi}}-\sum_{\mathbf{z}_{i}\in\mathbf{Z}}\left[f(\mathbf{z}_{i})\log y_{i}+(1-f(\mathbf{z}_{i}))\log\left(1-y_{i}\right)\right], \tag{10}\]
where \(y_{i}=\sigma(\mathbf{\phi}^{T}\mathbf{z}_{i})\). We call this variant Cross-Entropy CLIMAX, or CE-CLIMAX. Note that, unlike LIME, we do not explicitly weigh each surrogate sample using \(\pi_{\mathbf{x}}(\mathbf{z})\) in this second variant.
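One possible realization of (10) with off-the-shelf tools (ours, not necessarily the authors' implementation) duplicates every surrogate sample with labels 1 and 0 and weights \(f(\mathbf{z}_{i})\) and \(1-f(\mathbf{z}_{i})\), so that a weighted logistic regression minimizes exactly the cross-entropy against the black-box soft labels.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def ce_climax_weights(Z, probs, C=1.0):
    # Each surrogate appears twice: once with label 1 weighted by f(z_i),
    # once with label 0 weighted by 1 - f(z_i). The weighted log-loss then
    # equals the objective in (10) up to regularization.
    Z2 = np.vstack([Z, Z])
    y2 = np.concatenate([np.ones(len(Z)), np.zeros(len(Z))])
    w2 = np.concatenate([probs, 1.0 - probs])
    clf = LogisticRegression(C=C, fit_intercept=False, max_iter=1000)
    clf.fit(Z2, y2, sample_weight=w2)
    return clf.coef_.ravel()                    # contrastive scores phi
```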
Some of the salient aspects of both of the above formulations as compared to LIME are as follows:
* LIME-like methods that ask the question "why \(c_{i}\)?" provide explanations label-wise, i.e., they iterate over all labels, explain why the index sample should be a part of that class, and provide the scores. L-CLIMAX and CE-CLIMAX iterate over two sets of probabilities, one corresponding to the current class label and the other corresponding to the probabilities of all the remaining classes. This can be explicitly seen in the objectives, where we use both \(y_{i}\) and \(1-y_{i}\) together, as in (7) and (10).
* Interpretation of \(\mathbf{\phi}\): In the case of LIME, \(\mathbf{\phi}\) determines the feature importances according to the regressor values. In CLIMAX, \(\mathbf{\phi}\) has a slightly different interpretation. Here, \(\mathbf{\phi}\) is larger for those features that help in increasing the 'contrast' between explanations. Nevertheless, in both LIME and CLIMAX, it is safe to say that \(\mathbf{\phi}\) highlights important features.
* Both of the above formulations have the simplicity and elegance of LIME and related methods. They remain training-data agnostic and model-agnostic. Additionally, L-CLIMAX can be implemented with a slight change to the existing LIME framework.
### Imbalance-aware Surrogate Sampling
An important aspect in the realization of L-CLIMAX and CE-CLIMAX is the surrogate sampling required to form the set \(\mathcal{Z}\) of the previous subsection. In order to ensure the fidelity of the explainers, our sampling technique needs to be imbalance-aware since we use classifier-based local explainers.
We use the bootstrap sampling technique, where we repeatedly sample the neighborhood of \(\mathbf{x}\) with replacement. The main goal is to ensure that the surrogate dataset is balanced, i.e., it consists of at least a few samples belonging to all classes, albeit in different proportions. To achieve this, we perform Gaussian sampling similar to [19] and increase the standard deviation appropriately (to increase the neighborhood size) to obtain surrogate instances from all classes. In order to further reduce imbalance in \(\mathcal{Z}\), we do the following:
* **Random oversampling**: We oversample within the minority class in order to ensure that the classes are perfectly balanced.
* **Gaussian Mixture models**: A Gaussian mixture model (GMM) is a probabilistic model that assumes all the instances are generated from a mixture of a finite number of Gaussian distributions with unknown parameters [17]. We train a GMM consisting of \(c\) Gaussians using the bootstrapped samples, and later use it to appropriately oversample the minority classes to obtain a balanced surrogate dataset.
```
0: Imbalanced Surrogate dataset \(\mathcal{D}\), and corresponding labels \(\mathbf{y}_{\mathcal{D}}\), Number of classes \(c\)
1: Fit a GMM on \(\mathcal{D}\) to get cluster mean and variances.
2: Identify minority classes, based on the number of instances in each cluster.
3: Sample the required number of minority class instances.
4: Oversampled Data from the Gaussian Mixture Model
```
**Algorithm 1** GMM - Sampling from a Gaussian Mixture Model
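A minimal sketch of Algorithm 1 is given below (ours); for simplicity it fits a separate GMM to each minority class and samples the shortfall from it, whereas the algorithm above fits a single mixture on the whole surrogate set, so this should be read as an approximation rather than a faithful reimplementation.

```
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_oversample(Z, labels, n_components=2, seed=0):
    # Oversample every minority class up to the majority-class count by
    # fitting a GMM to that class's surrogate points and sampling from it.
    counts = np.bincount(labels)
    target = counts.max()
    Z_out, y_out = [Z], [labels]
    for c, n_c in enumerate(counts):
        need = target - n_c
        if need == 0 or n_c < 2:
            continue
        k = min(n_components, n_c)      # cannot use more components than points
        gmm = GaussianMixture(n_components=k, random_state=seed)
        gmm.fit(Z[labels == c])
        extra, _ = gmm.sample(need)
        Z_out.append(extra)
        y_out.append(np.full(need, c))
    return np.vstack(Z_out), np.concatenate(y_out)
```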
The above detailed sampling strategies may not necessarily improve the quality of samples. However, diminishing the imbalance in \(\mathcal{Z}\) helps us in maintaining local fidelity and a consistent contrastive nature in the explanation scores \(\mathbf{\phi}\). In the sequel, we also demonstrate the improved stability performance of CLIMAX as compared
to other perturbation based methods. Subsequently, we perform forward feature selection as proposed in LIME, to obtain the top \(k\) features, and then return the scores for the explanation. We have explained the entire algorithm in Algorithm 2.
```
1:Black-box binary classifier model \(f\), Instance \(\mathbf{x}\in\mathbb{R}^{d}\), Number of features \(d\), Number of surrogate samples \(n^{\prime}\)
2:Using \((x_{0},f_{p})\), generate \(n^{\prime}\) surrogate samples and \(\mathbf{y}_{i}=f_{p}(\mathbf{x}_{i})\forall i=[n^{\prime}]\).
3:Train explainer model \(f_{e}\), using the surrogate dataset \(\mathcal{D}\) and \(\mathbf{y}_{i}\) obtained in \(1\).
4:if Surrogate Sampling Style = 'GMM' then
5: Perform GMM Oversampling as mentioned in Algorithm 1
6:else
7: Identify the Minority Classes, \(c_{m}\) and perform Random Oversampling for all \(c_{m}\)
8:endif
9:if Influence Subsampling is True then
10: Perform influence subsampling using (12) and (13).
11:endif
12: Fit the logistic regression model in the locality of \(\mathbf{x}_{0}\) according to (8) or (10).
13:return Feature Importance Scores \(\phi\)
```
**Algorithm 2** CLIMAX - Contrastive Label-aware Influence-based Model-Agnostic XAI method
### Sample Complexity
Sample efficiency in post-hoc models is a crucial factor in obtaining reliable explanations, and there is consensus in the research community that explainable models must use as few samples for an explanation as possible [24]. In both variants of Climax, we oversample the surrogate samples in order to ensure a balanced surrogate dataset, and hence, we have some redundant information within the data. Approaches such as LIME, KernelSHAP, and BayLIME do not provide any guidance on choosing the number of perturbations, although this issue has been acknowledged in [34]. In [20], sample complexity is dictated by an acquisition function and sampling is achieved via Gaussian processes. Often, such methods turn out to be too complex.
We consider subsampling the surrogate samples using influence functions. Rooted in statistics, influence functions estimate how the model parameters change when a data point is upweighted by a small amount \(\mathbf{\epsilon}\). Using influence functions, Koh and Liang [11] proposed a method for estimating the impact of removing a data point from the training set (reducing its weight to \(0\)) on the model parameters. We use this method to perform subsampling within our surrogate dataset to improve its quality. Influence functions provide a tool to quantify each data point's quality, thereby keeping good examples and dropping bad examples to improve the model's generalization ability. Previous works focus on weighted subsampling, that is, trying to maintain the model performance when dropping several data points. The steps in the case of influence subsampling [31] are as follows:
* Train the explainer model on the full set of surrogate samples: \[\hat{\mathbf{\theta}}=\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta}}\frac{1}{n}\sum_{i=1}^{n}L(\mathbf{z}_{i},\mathbf{\theta}),\] (11) where \(\hat{\mathbf{\theta}}\) is the set of optimal parameters.
* Compute the influence function for each surrogate sample: \[\mathbf{\rho}=(\mathbf{\rho}(\mathbf{z}_{1},\hat{\mathbf{\theta}}),\mathbf{\rho}(\mathbf{z}_ {2},\hat{\mathbf{\theta}}),..,\mathbf{\rho}(\mathbf{z}_{n},\hat{\mathbf{\theta}})).\] (12) Here, \(\rho\) denotes the value of the influence function [11].
* Compute the sampling probability of each surrogate sample: \[\mathbf{\psi}=(\mathbf{\psi}(\mathbf{z}_{1},\hat{\mathbf{\theta}}),\mathbf{\psi}(\mathbf{z}_ {2},\hat{\mathbf{\theta}}),..,\mathbf{\psi}(\mathbf{z}_{n},\hat{\mathbf{\theta}})),\] (13) where \(\psi\) denotes the sampling probability of each surrogate sample as computed in [11]. Using these quantities, we obtain how influential a point is, and we can trim our surrogate dataset.
* Finally, we perform subsampling based on the influence scores and train a subset model using the reduced set of surrogate samples: \[\mathbf{\tilde{\theta}}=\operatorname*{argmin}_{\mathbf{\theta}\in\mathbf{\Theta}}\frac{1}{|\{i:\mathbf{o}_{i}=1\}|}\sum_{\mathbf{o}_{i}=1}L(\mathbf{z}_{i},\mathbf{\theta}),\] where \(\tilde{\mathbf{\theta}}\) gives the optimal parameters for the subsampled set and \(\mathbf{o}_{i}\) is an indicator which is \(1\) if the \(i^{th}\) point is included in the subsampled set and \(0\) otherwise. A simplified sketch of these steps is given below.
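The following sketch (ours) is a simplified stand-in for these steps for an \(\ell_{2}\)-regularized logistic explainer: it computes Koh and Liang-style parameter influences and converts them into sampling probabilities. The exact probability rule of [31] is not reproduced; the proportional rule used here is only a placeholder.

```
import numpy as np

def influence_scores(Z, y, theta, lam=1.0):
    # rho_i = || -H^{-1} grad L(z_i, theta) || for a logistic explainer,
    # where H is the Hessian of the regularized empirical risk at theta.
    p = 1.0 / (1.0 + np.exp(-(Z @ theta)))
    grads = (p - y)[:, None] * Z                        # per-sample gradients
    H = (Z * (p * (1 - p))[:, None]).T @ Z / len(Z) + lam * np.eye(Z.shape[1])
    return np.linalg.norm(np.linalg.solve(H, grads.T), axis=0)

def influence_subsample(Z, y, theta, keep_frac=0.8, seed=0):
    # Turn scores into sampling probabilities (placeholder: proportional to
    # the scores, cf. Eqs. (12)-(13)) and draw the reduced surrogate set.
    rho = influence_scores(Z, y, theta)
    psi = rho / rho.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(Z), size=int(keep_frac * len(Z)), replace=False, p=psi)
    return Z[idx], y[idx]
```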
## 5 Results and Discussions
In this section, we demonstrate the efficacy of the proposed CLIMAX framework on publicly-available datasets. In particular, we are interested in establishing the contrastive capability of CLIMAX and investigating attributes such as stability (consistency in repeated explanations) and sample efficiency. We employ tabular (structured data), textual, and image datasets, and consider different black-box models for an explanation. 1
Footnote 1: Source code available at [https://github.com/nifynans/CLIMAX](https://github.com/nifynans/CLIMAX)
### Datasets and Pre-processing
We chose four distinct datasets from the UCI Machine Learning repository [6] as well as Scikit-Learn [17] for the tabular data based experiments owing to their usage in the relevant literature for classification based prediction tasks. The description of the tabular datasets is as follows:
* **Breast Cancer**: This dataset consists of \(569\) instances, with \(30\) features computed from an image of a breast mass, describing characteristics of the cell nuclei [16]. Hence, the classification task is to predict if the cancer is malignant or not.
* **Parkinson's**: The Parkinson's classification dataset consists of \(195\) instances of patients suffering and free from Parkinson's disease [6]. With \(22\) unique features per recording, the task is to classify whether a given patient has Parkinson's or not.
* **Ionosphere**: This dataset consists of \(34\) features, and \(351\) instances of radar data that was collected in Goose Bay, Labrador [6]. The targets were free electrons in the ionosphere. The classification task was to label the instances as 'good' or 'bad'.
* **Diabetes**: This dataset consists of \(8\) attributes and \(768\) data points that describe the medical records of diabetes patients. It contains information about the pregnancy status, insulin levels, blood pressure, and other medical attributes of the patients [6].
For text, we use the Quora Insincere Questions dataset [1], where the classification task is to identify if a question is sincere or not. We also use the 20 News Groups dataset [16], where the classification task is between two groups: Atheism and Christianity. We determine whether a given paragraph is written by an atheist or a Christian.
Due to lack of space, we present the results for the 20News Groups dataset in the Supplementary.
For images, we use the MNIST dataset [3] in order to contrast the relevant regions that contribute to the prediction of each digit.
### Baselines
CLIMAX focuses only on classification-based tasks. It is a perturbation-based technique, i.e., we do not assume any knowledge of the training samples or an autoencoder that may be trained on original data; instead, we obtain surrogate samples in the vicinity of the index sample. Hence, we baseline CLIMAX against other perturbation-based methods that employ similar assumptions in their workflow. We use LIME and BayLIME [34] as our baselines primarily because they are perturbation based and require only the knowledge of the index sample and the variance of the features in the training data. Among the array of XAI methods, S-LIME [36], which uses the central limit theorem to obtain the optimal number of surrogate samples, performs well and is hence a good baseline. For simulating the black-box prediction model, we used a Random Forest Classifier for all the classification tabular datasets. A summary of the dataset and prediction model statistics can be found in Table 1. We used the open-source Scikit-Learn [17] implementation of the Random Forest classifier to simulate the black-box prediction models.
### Numerical Results
In this section, we numerically demonstrate the stability and the contrastive nature of variants of the CLIMAX algorithm. Our method works on data of different modalities such as tabular, text and image, and hence we showcase its performance for each modality.
#### 5.3.1 Stability in repeated explanations
For evaluating the inconsistency in explanations over multiple runs, we execute CLIMAX and the baselines using \(500\), \(1000\), \(1500\), \(2000\), and \(2500\) surrogate samples and collect \(20\) consecutive explanations for \(10\) randomly selected index samples for each of the four datasets described in Table 1. The Jaccard score \(J\) [20, 33] for measuring the consistency in explanations across the \(i\)-th and \(j\)-th runs can be computed as follows:
\[J(X_{i},X_{j})=\frac{|X_{i}\cap X_{j}|}{|X_{i}\cup X_{j}|}, \tag{14}\]
where \(X_{i}\) and \(X_{j}\) are sets consisting of the top-5 features for iterations \(i\) and \(j\). Intuitively, it can be observed that \(J(X_{i},X_{j})=1\) if \(X_{i}\) and \(X_{j}\) have the same features, and \(J(X_{i},X_{j})=0\) if they have no common features. Thus, a consistent explainer module will have a relatively higher value of this metric than a relatively inconsistent explainer module. We average this metric over all possible combinations of iterations and the \(10\) index samples. The results can be seen in Figure 2. We average the values over the sample sizes \(500\), \(1000\), \(1500\), \(2000\), and \(2500\). Across all datasets, incorporating the cross-entropy loss along with sampling from a Gaussian mixture model (CE-GMM-CLIMAX) improved the model stability and fidelity to a large extent. Hence, we take only that method and its influence-subsampling counterpart forward and compare them with the other baseline methods in Figure 3. It can be seen that, for various sample sizes, CLIMAX outperforms LIME, BayLIME, and S-LIME across all datasets. For S-LIME, we restrict the \(n_{max}\) parameter to be \(1.5\) times the original number of samples.
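For concreteness, the stability metric and its averaging over repeated runs can be computed as in the following sketch (ours); selecting the top-5 features by absolute score is an assumption on the ranking rule.

```
import numpy as np

def jaccard(top_i, top_j):
    # Jaccard score of Eq. (14) between two top-k feature index sets.
    a, b = set(top_i), set(top_j)
    return len(a & b) / len(a | b)

def mean_stability(explanations, k=5):
    # explanations: list of per-run feature-importance vectors (same sample).
    tops = [np.argsort(-np.abs(phi))[:k] for phi in explanations]
    pairs = [(i, j) for i in range(len(tops)) for j in range(i + 1, len(tops))]
    return float(np.mean([jaccard(tops[i], tops[j]) for i, j in pairs]))
```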
#### 5.3.2 CLIMAX Surrogate Data
To evaluate the quality of the surrogate dataset generated by CLIMAX, be it through GMM sampling or random oversampling, we collected the surrogate data generated during the stability experiment for twenty samples from all our datasets and calculated the macro-precision and recall scores. CLIMAX improves these scores through oversampling and subsequent influence-based subsampling. We depict this in Table 2, which shows how our explainer works with the surrogate data.
It can be seen that the bootstrapped samples obtained using the procedure of [19, 34] are highly imbalanced for all datasets (first row for each dataset). Further, we see that the imbalance is removed to a large extent using ROS and GMM under CLIMAX. Although ROS leads to an improvement in precision and recall scores, the information content in the data is the same as that of the bootstrapped samples. This necessitates a technique like GMM that also improves the quality of the data. In some cases, the IF-subsampled points lead to lower precision and recall scores. However, we believe that IF maintains the quality of the data, and hence, lower precision-recall scores may not translate to poor explanation quality.
#### 5.3.3 CLIMAX on Text Datasets
To showcase CLIMAX's ability to provide robust textual explanations, we employed the information retrieval based tf-idf (term frequency-inverse document frequency) framework. We first extract features from the data using the tf-idf method. We train the black-box model and choose a test sample as the index sample. We compare our method with LIME in Figure 1.
We see that explanations of CLIMAX agree with LIME on many words (as in the highlighted text). However, the contrast in scores is large mainly because these explanations provide reasoning as to why one class is chosen instead of the other. The explanation of CLIMAX as compared to LIME on a large paragraph is provided using the 20 News Group dataset. Due to lack of space, we have moved this result to the supplementary.
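A small self-contained sketch of the text pipeline described above is given below (ours); the toy corpus, the random forest black box, and the lambda wrapper are illustrative stand-ins for the actual Quora data and models.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

texts = ["why are people so rude online", "how do i learn calculus quickly",
         "are all politicians liars and thieves", "what is a good python book"]
labels = [1, 0, 1, 0]                        # 1 = insincere, 0 = sincere (toy)

vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()       # tf-idf features for the black box
black_box = RandomForestClassifier(random_state=0).fit(X, labels)

x0 = vec.transform(["why are engineers so arrogant"]).toarray()[0]  # index sample
f = lambda Z: black_box.predict_proba(Z)[:, 1]
# CLIMAX then perturbs x0 in tf-idf space (see the earlier sketches) and maps
# the learned scores phi back to words via vec.get_feature_names_out().
print(f(x0[None, :]))
```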
### CLIMAX on Image Datasets
In the case of the image data, we first preprocess the data by using a popular segmentation algorithm called quickshift within the Scikit-image module [19]. We depict the explanations provided by CLIMAX in Fig. 4. Due to space constraints, we provide a comparison between LIME, CLIMAX and CEM in the supplementary.
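The preprocessing step can be sketched as follows (ours); the quickshift parameters typically need tuning for small \(28\times 28\) digits, and zeroing out dropped segments is one possible perturbation convention rather than the only choice.

```
import numpy as np
from skimage.color import gray2rgb
from skimage.segmentation import quickshift

def superpixel_features(image, **qs_kwargs):
    # Segment the image into superpixels; each binary feature toggles one
    # segment, and render() maps a 0/1 vector back to a perturbed image.
    rgb = gray2rgb(image) if image.ndim == 2 else image
    segments = quickshift(rgb, **qs_kwargs)
    n_seg = int(segments.max()) + 1
    def render(bits):
        mask = np.isin(segments, np.flatnonzero(bits))
        return rgb * mask[..., None]        # zero out the dropped segments
    return segments, n_seg, render
```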
From Fig. 4, we see that an interesting benefit of the contrastive explanations in CLIMAX is the possibility of comparing explanations across classes. We show that regions in numerals provide explanations that are complementary to each other. For example, similar to several works that investigate explanations for \(3\) versus \(5\)[8], we see that the explainer is sure about class \(3\) due to the upper half, but neutral about the bottom half. Investigating digit \(5\), we see that the explainer
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Dataset** & \(p\) & \(n_{total}\) & Precision & Recall & AUC-ROC \\ \hline
**Breast Cancer** & 30 & 569 & 0.978 & 0.968 & 1.0 \\
**Parkinson’s** & 22 & 175 & 0.96 & 1.0 & 0.857 \\
**Ionosphere** & 34 & 351 & 0.936 & 0.976 & 0.917 \\
**Diabetes** & 8 & 768 & 0.719 & 0.672 & 0.725 \\ \hline \end{tabular}
\end{table}
Table 1: Description of datasets.
is neutral about the upper half, but sure about the bottom half. This shows that CLIMAX is not only contrastive within the same image, but consistent across images of different classes. We depict several such examples in the figure.
## 6 Conclusions and Future Work
CLIMAX (**C**ontrastive **L**abel-aware **I**nfluence-based **M**odel-**A**gnostic **X**AI) is a perturbation based explainer which exploits the classification boundary to provide contrastive results. CLIMAX perturbs the index sample to obtain surrogate samples and oversamples the instances of the minority class, using random oversampling or GMMs, in order to obtain a balanced dataset. It also employs influence subsampling in order to reduce the sample complexity.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Dataset** & **Surrogate Data Sampling** & **Precision** & **Recall** \\ \hline \multirow{5}{*}{Breast Cancer} & Original & 0.475 & 0.5 \\ \cline{2-4} & ROS & 0.954 & 0.947 \\ \cline{2-4} & IF-ROS & 0.954 & 0.947 \\ \cline{2-4} & GMM & 0.85 & 0.81 \\ \cline{2-4} & IF-GMM & 0.83 & 0.789 \\ \hline \multirow{5}{*}{Parkinson’s} & Original & 0.659 & 0.560 \\ \cline{2-4} & ROS & 0.882 & 0.897 \\ \cline{2-4} & IF-ROS & 0.881 & 0.899 \\ \cline{2-4} & GMM & 0.736 & 0.72 \\ \cline{2-4} & IF-GMM & 0.683 & 0.68 \\ \hline \multirow{5}{*}{Ionosphere} & Original & 0.6 & 0.6 \\ \cline{2-4} & ROS & 0.756 & 0.766 \\ \cline{2-4} & IF-ROS & 0.755 & 0.768 \\ \cline{2-4} & GMM & 0.68 & 0.66 \\ \cline{2-4} & IF-GMM & 0.647 & 0.631 \\ \hline \multirow{5}{*}{Diabetes} & Original & 0.75 & 0.75 \\ \cline{2-4} & ROS & 0.976 & 0.978 \\ \cline{2-4} & IF-ROS & 0.976 & 0.973 \\ \cline{1-1} \cline{2-4} & GMM & 0.93 & 0.914 \\ \cline{1-1} \cline{2-4} & IF-GMM & 0.924 & 0.902 \\ \hline \end{tabular}
\end{table}
Table 2: Understanding the imbalance within the surrogate samples and randomly oversampling the data to get a fully balanced dataset.
Figure 4: Understanding the contrastive nature of CLIMAX’s image explanations for the MNIST Dataset. In the first row, the upper region of the digit 2 is marked positive and the same region for 8 is marked ambiguous by CLIMAX. Such a pattern is followed in all of these cases. What we mean by this is, the regions where the characteristics of a particular digit can be explicitly seen are given positive weightage. The same region is given an ambiguous state for other digits due to the same reason. Hence, the explanations help us in determining why a point lies in class \(c_{i}\) and not in classes \(c_{-i}\).
Figure 3: The average Jaccard scores for \(10\) randomly sampled test instances across various numbers of surrogate samples of the best variant of CLIMAX (CE-GMM-CLIMAX) as compared to the three state-of-the-art methods.
Figure 2: Average Jaccard scores for \(10\) randomly sampled test instances across varying surrogate samples, for variants of CLIMAX.
Explanation scores are then provided using a logistic regression module, and we propose two variants in this direction. As compared to other perturbation-based techniques, CLIMAX explains why a point is in class \(c_{i}\) and also provides information about why it is not in the remaining classes \(c_{-i}\). CLIMAX gives the explainer access to all class probabilities, which helps in providing the contrastive scores. We observe that CLIMAX is able to produce more stable, faithful, and contrastive results as compared to LIME across different modalities of data. CLIMAX provides an important insight, as the inherent task being explained is classification-based. Hence, CLIMAX is a well-rounded extension of LIME for black-box classifiers. In the future, we would like to provide uncertainty estimates for our explanations. This would help in better checking the fidelity of the explanations.
## 7 Additional Results and Discussions
In this section, we demonstrate the efficacy of the proposed CLIMAX framework on publicly-available datasets. In particular, we are interested in establishing the contrastive capability and investigating the attributes such as stability (consistency) and sample efficiency in repeated explanations. We employ tabular, textual, and image datasets and consider different black-box models for an explanation.
#### 7.0.1 TSNE Plots: CLIMAX Surrogate Data
To evaluate the quality of the surrogate dataset generated by CLIMAX using GMM sampling, we show t-SNE plots for the MNIST image dataset. In Fig. 5, the index sample belongs to class 2 and we see that the GMM sampling occurs from 6 main clusters. This implies that the digit 2 is similar to six other digits, and that the sampling for the surrogate data is uniform. Similarly, in Fig. 6, the index sample belongs to class 3 and we see that the sampling occurs from seven classes. In both cases, the classes that are dissimilar to the digit in question are not sampled, while samples are drawn uniformly from each of the similar clusters. Hence, we conclude that the surrogate sampling via GMM is not only effective, but also explainable. In comparison to methods that incorporate autoencoders and other black-box data generation mechanisms [4, 30], we find our technique to be more transparent and trustworthy.
#### 7.0.2 CLIMAX on Text Datasets
To showcase CLIMAX's ability to provide robust textual explanations, we employed the information retrieval based tf-idf (term frequency-inverse document frequency) framework. We first extract features from the data using the tf-idf method. We compare our method with LIME in Figure 7. We see that even on longer paragraphs of text, CLIMAX maintains its contrastive capability as compared to LIME.
### CLIMAX for Images vs. CEM [4]
In the case of CEM, the classification boundary is exploited well in terms of the regions that are pertinently positive and pertinently negative. However, the ambiguity in the sub-parts of an image due to the overlap of the PP and the PN regions makes the classification of a digit uncertain. For instance, common regions in the digit 0 have been marked as both pertinent positive and pertinent negative. It is not clear how the end-user is supposed to interpret these areas. In particular, a joint analysis of the pertinent positive and pertinent negative regions
Figure 5: This image shows the tSNE plots for the GMM sampling to cover for the oversampling of minority classes while sampling for the Digit 2. As we can see, the GMM samples well across the classes where the digits look similar to 2, and doesn’t sample points from classes like 7, which are totally dissimilar. Hence, unlike Random Oversampling, where all classes get oversampled to the same sample size, GMM does this optimally. We reduce the imbalance, while maintaining the quality of the surrogate data, which is of utmost importance.
Figure 6: This image shows the tSNE plots for the GMM sampling to cover for the oversampling of minority classes while sampling for the Digit 3. As shown in 5, GMM samples smartly from the given \(y\) values. The only difference is that, for 2, the number of classes the GMM considered for oversampling were 6 and for 3 it creates 7 clusters. This is because 3, as a digit, is similar to 7 other digits from different angles.
First, to point out the obvious: While #4 would clearly be a highly subjective issue, one would be hard pressed to point to another book of the OT (or for that matter the NT) that doesn't, on some issues, in some way, fail one or more of the first three of these tests. Second, one factor the Deuterconconanicals share is the lateness of their composition. I don't recall the exact dating of all of the books, but most -if not all- were written after the latest of the canonical books (i.e. Daniel). Furthermore, while the Deuterconanonical may or may not have been originally written in Greek, they are clearly deeply _Hellenistic_ in nature. Both of these features probably figured heavily in the rejection of these books from the various canons.
These may not be strict and uniformly applicable criteria by which to judge the canonicity of these books, but, as these discussions have shown, I think the one thing we can see is that there _are_ no purely objective standards for determining canonicity.
Figure 7: Explanations for CLIMAX and LIME for the same instance of the 20 Newsgroups dataset.
Figure 8: We compare CLIMAX with LIME and the Contrastive Explanations Method (CEM). We do so as CLIMAX and CEM both aim to provide more contrastive results. If we look at the explanation masks of LIME, we can see that it attributes an unnecessarily large region for an explanation, even when it is provided with a larger surrogate sample size. In the case of CEM, the Pertinent Positive (PP) regions and the Pertinent Negative (PN) regions do bring out a contrastive flavour, but CLIMAX explicitly marks the ambiguity in regions where multiple digits look similar, making it more visually trustworthy.
is not possible. Moreover, comparison across different digits is also not possible. CLIMAX does not face such challenges. On the one hand, within the same digit, it clearly marks the regions about which it is certain (in pink for all digits) and uncertain (in grey for all digits). On the other hand, we can also analyse across digits: if one area is marked positive for a certain digit, then that same area would be marked neutral for many other digits. This nature is visible across all CLIMAX explanations.
### CLIMAX for images vs LIME [19]
We compare CLIMAX with LIME, the most popular post-hoc explainable AI method, which set the foundation for such methods, in Fig. 8. We see that LIME is often not able to distinguish between regions encapsulated within a digit and the digit boundary itself. This is mainly because LIME does not take all class probabilities as input, nor does it employ any decision-boundary-aware mechanism. The contrastive nature of the explanations is evident here as well, where CLIMAX tends to indicate neutral regions which it is not sure about. However, LIME does not capture such contrast.
|
2304.08954 | Plat closures of spherical braids in $\mathbb{R}P^3$ | We define plat closure for spherical braids to obtain links in
$\mathbb{R}P^3$ and prove that all links in $\mathbb{R}P^3$ can be realized in
this manner. Given a spherical braid $\beta$ of $2n$ strands in $\mathbb{R}P^3$
we associate a permutation $h_{\beta}$ on $n$ elements called \textit{residual
permutation}. We prove that the number of components of the plat closure link
of a spherical braid $\beta$ is the same as the number of disjoint cycles in
$h_{\beta}$. We also present a set of moves on spherical braids in the same
spirit as the classical Markov moves on braids. The completeness of this set of
moves to capture all isotopy classes of the plat closure links is still
to be explored. | Rama Mishra, Visakh Narayanan | 2023-04-18T12:46:18Z | http://arxiv.org/abs/2304.08954v3 | # Plat closures of spherical braids in \(\mathbb{R}P^{3}\)
###### Abstract.
We develop a method for constructing links in \(\mathbb{R}P^{3}\) as plat closures of spherical braids. This method is a generalization of the concept of "plats" in \(S^{3}\). We prove that any link in \(\mathbb{R}P^{3}\) can be constructed in this manner. We also develop a set of moves on spherical braids in the same spirit as the classical Markov moves on braids and show that two spherical braids can have isotopic plat closures if and only if they are related by a finite sequence of these moves. We introduce the notion of a new kind of permutation (called _residual permutation_) associated to a spherical braid in \(\mathbb{R}P^{3}\) and prove that the number of disjoint cycles in the residual permutation of a spherical braid is the same as the number of components of the plat closure link of this braid.
## 1. **Introduction**
The discovery of quantum invariants has revolutionized the study of low dimensional topology. For example, in case of classical knots, the long standing problem of the trefoil and its mirror image being not equivalent, was solved by the Jones polynomial, one of the first quantum invariants. The discovery and computation of these invariants have been facilitated by the intimate relationship between Artin's braid group and classical knots.
As knot theory grows, it becomes important to understand its different "ramifications", such as studying knots and links inside other three-manifolds. Since the real projective 3-space is one of the closest cousins of \(S^{3}\), it is natural to consider the knot theory it admits as the next candidate. But the important lesson we learn from classical knot theory is that a knot and the space around it cannot be separated from each other. Their topological features are closely connected. Hence the complexity of the ambient manifold raises natural challenges in its knot theory.
In this paper we introduce a braid theory for the knots and links in real projective 3-space. Joan Birman [1] introduced the notion of the braid group of an arbitrary manifold; in this framework, the standard Artin braid group is the braid group of the plane. The braid group of the 2-sphere is also of particular importance. We will refer to the elements of this group as "spherical braids" and to the elements of Artin's braid group as "classical braids". The \(n\)-string braid group of any manifold may be described as the group of motions of \(n\) special points in it. Birman then goes on to define the concept of plats in \(S^{3}\)[1] as a different "closure" of braids than the standard closure [2], and proves that every classical link is isotopic to a plat. Here we develop the notion of plats in \(\mathbb{R}P^{3}\) using spherical braids and show that _every link in \(\mathbb{R}P^{3}\) is isotopic to a plat (Theorem 3.2)_ defined in this way. We refer to this as the "plat closure" of a spherical braid. For the sake of completeness, clear definitions of these are provided in Section 2.
J.W. Alexander [2] proved that every classical link is isotopic to the closure of some classical braid. This representation is not unique, as many braids may close to give isotopic links. Andrei Markov defined a set of moves on classical braids, which are now known as "Markov moves", and showed that two braids can have isotopic closures if and only if they are related by finitely many Markov moves [1]. In Section 5, we develop some moves on spherical braids in the spirit of the classical Markov moves. These moves generate an equivalence relation on spherical braids, and belonging to the same equivalence class is a necessary and sufficient condition for two braids to have isotopic plat closure links (Theorem 5.1).
Another beautiful feature of a classical braid is the permutation that it defines. The number of disjoint cycles in the permutation of a classical braid is equal to the number of components in the closure link. At the end of Section 5, we define a permutation, which we refer to as the "residual permutation", associated to a spherical braid in \(\mathbb{R}P^{3}\). We prove that the number of disjoint cycles in the residual permutation of a spherical braid matches the number of components in its plat closure (Theorem 5.2).
**Organization of the paper:** Section 2 provides definitions and terminology which will be used throughout the paper. In Section 3, we include discussions on plat closures of spherical braids in \(\mathbb{R}P^{3}\) and prove that every link in \(\mathbb{R}P^{3}\) is isotopic to the plat closure of some spherical braid. In Section 4, we study some properties of the braid group of the 2-sphere and give a convenient presentation for the group \(B_{n}(S^{2})\). Section 5 introduces a set of moves on spherical braids (M-moves) and includes the proof of the result that two braids can have isotopic closures if and only if they are M-equivalent. Towards the end, we define the notion of the residual permutation of a spherical braid and prove that the number of components in the closure link is the same as the number of disjoint cycles in this permutation.
## 2. Spherical braids
Let \(\Sigma\) be a manifold of arbitrary dimension and let \(n\) be a positive integer. Consider a set of \(n\) distinct points \(X:=\{p_{1},p_{2},...,p_{n}\}\subset\Sigma\). These are thought of as special \(n\) points in space. Consider an isotopy,
\[F:\Sigma\times[0,1]\to\Sigma,\]
such that, \(F_{0}=Id_{\Sigma}\) and \(F_{1}\) is a diffeomorphism of \(\Sigma\) mapping \(X\) to itself. Then we may represent the isotopy in the space \(\Sigma\times I\) by considering the map,
\[\overline{F}:\Sigma\times I\to\Sigma\times I,\] \[(q,t)\mapsto(F_{t}(q),t)\]
defined by it. Notice that the image of the set \(X\times I\) under \(\overline{F}\) is a collection of paths each one starting at some \((p_{i},0)\) and ending at some \((p_{j},1)\).
**Definition 2.1**.: _The topological pair \((\Sigma\times I,\overline{F}(X\times I))\) is referred to as an \(n\) **braid of \(\Sigma\)**._
Consider the case when \(\Sigma=S^{2}\). Each \(n\)-braid of \(S^{2}\) is represented by a set of \(n\) paths in \(S^{2}\times I\) such that each path starts at a point of \(S^{2}\times\{0\}\), ends at \(S^{2}\times\{1\}\), and intersects each of the sections \(S^{2}\times\{t\}\) at a unique point transversally. We may interpret the strip \(S^{2}\times I\) as a 2+1 dimensional spacetime where the \(I\) direction represents the flow of time. If we do not allow points of \(S^{2}\) to move back in time, their world lines intersect each "spacelike" sphere in the strip, i.e., each sphere of the form \(S^{2}\times\{t\}\), at a unique point, just like the strings of braids. In other words, the projection map \(f:S^{2}\times I\to I\) is monotonic when restricted to each of the strings. Let \(\alpha\) and \(\beta\) be two such motions of points in a set \(X\subset S^{2}\). Then clearly we can define a new motion by composing them, that is, performing \(\beta\) after \(\alpha\), which we will call \(\alpha\beta\). As a braid, it is defined as the braid obtained by gluing the \(S^{2}\times\{1\}\) of the strip containing \(\alpha\) to the \(S^{2}\times\{0\}\) of the strip containing \(\beta\), matching the indices properly, and then rescaling the newly formed strip. This defines a multiplication of braids. The set of isotopy classes of braids forms a group under this operation. We describe this group in detail in Section 4.
Let \(B\) be the 3-ball. We know that, \(\partial(S^{2}\times I)\approx S^{2}\amalg S^{2}\). Notice that \(S^{3}\) can be obtained by gluing boundaries of \(B\amalg B\) and \(S^{2}\times I\). Classically the plats in \(S^{3}\) were constructed [1] by considering a spherical braid in this strip and certain simple tangles (which we discuss in the next section) in both the balls. We wish to discuss a generalization of this construction. Let \(M\) denote the mapping cylinder of the canonical two sheeted covering map \(S^{2}\to\mathbb{R}P^{2}\). Notice that, \(\partial M\approx S^{2}\). By gluing the boundaries of \(M\amalg B\) and \(S^{2}\times I\) we can obtain a copy of \(\mathbb{R}P^{3}\). We introduce a different set of tangles in \(M\). When we say "braids in \(\mathbb{R}P^{3}\)", we mean the braids in \(S^{2}\times I\) region in some splitting of \(\mathbb{R}P^{3}\) of this type. Now by considering a braid in \(S^{2}\times I\) and gluing its boundary with the special tangles in \(B\) and \(M\) we can form a collection of linked knotted curves. We would refer to these as the "projective plat closure", or simply, plat closure of the spherical braid in \(\mathbb{R}P^{3}\).
Without loss of generality, we may assume that the special points lie on an equatorial circle \(C\) of \(S^{2}\). Then we can project every braid into an annulus in a projective plane in \(\mathbb{R}P^{3}\). Here we are using the same projection as was used in [3] and [4]. Thus we can represent such braids by a diagram drawn on the annulus. Refer to Figure 1. The diagram of a composition \(\alpha\beta\) will appear as keeping the diagram of \(\beta\) "inside" the diagram of \(\alpha\).
## 3. Plat closures in \(\mathbb{R}P^{3}\).
Here, we consider "closures" of these braids in \(\mathbb{R}P^{3}\). Choose an equator \(C\) for \(\partial B\) and let \(D\) be the flat disk in \(B\) with boundary \(C\). Let \(A^{n}\) represent the tangle in \(B\) formed by \(n\) unknotted, unlinked arcs neatly embedded in \(B\) and lying on \(D\). We will call these internal tangles. Refer to Figure 2.
Notice that the boundary of the tangle \(A^{n}\) is formed by \(2n\) points on \(\partial B\). We refer to certain special tangles in \(M\), which appear in all links in \(\mathbb{R}P^{3}\), as "residual tangles". We may define them as follows: since \(H_{1}(M,\partial M)\approx\frac{\mathbb{Z}}{2\mathbb{Z}}\), every arc in \(M\) with its two boundary points in \(\partial M\) will represent a class in \(H_{1}(M,\partial M)\).
Residual tangles can be described as tangles formed by a collection of unknotted, unlinked arcs, each representing the class \(\bar{1}\) in \(H_{1}(M,\partial M)\). We also require that all the arcs in the tangle lie in a single flat Mobius band in \(M\). Figure 3 demonstrates the first few examples. More properties of these are studied in [4].
The boundary of the residual \(n-\)tangle is composed of \(2n\) points. Now consider gluing the boundaries of \(M\) with a residual \(n\)-tangle and \(B\) with an internal
Figure 3. Residual tangles
Figure 2. Internal tangles
\(n-\)tangle, using a diffeomorphism, \(f:\partial M\to\partial B\), which sends \(2n\) points on the boundary of \(T^{n}\) to the \(2n\) points on the boundary of \(A^{n}\). By identifying \(\partial M\) and \(\partial B\) with \(S^{2}\), we can find an isotopy \(H:S^{2}\times I\to S^{2}\), of \(f\) to the identity map of \(S^{2}\). By representing the image of \(\partial T^{n}\) under each of the maps, \(h_{t}(x):=H(x,t)\) on the sphere \(S^{2}\times\{t\}\) in \(S^{2}\times I\), we can obtain a braid in \(S^{2}\times I\). Then the arcs in the internal tangle and residual tangle will join the boundary points of the braid. Thus we get a link in \(\mathbb{R}P^{3}\). We will refer to this "closure" of braids as **projective plat closure**.
The following lemma will be used to prove our first theorem below. For the proof of the lemma, please refer to Corollary 3.3 in [4].
**Lemma 3.1**.: _Given any link \(K\) in \(\mathbb{R}P^{3}\), there exists a separating sphere, which will split \(\mathbb{R}P^{3}\) into two pieces a ball \(B\) and a mapping cylinder \(M\) such that \(K\cap M\) is a residual tangle._
**Theorem 3.2**.: _Every link in \(\mathbb{R}P^{3}\) is isotopic to the projective plat closure of a braid._
**Proof of the theorem:** Let \(K\) be a link in \(\mathbb{R}P^{3}\). Let \(S\subset\mathbb{R}P^{3}\) be the separating sphere provided by the lemma. Let \(B\) and \(M\) denote the ball and the mapping cylinder in the corresponding splitting, respectively. The part of \(K\) inside \(M\) is already a residual tangle, say \(T^{n}\). Let \(B^{\prime}\subset B\) be a smaller closed ball with the same center. Refer to Figure 5. Notice that the region outside \(B^{\prime}\) in \(B\) is homeomorphic to \(S^{2}\times I\) with \(S^{2}\times 0\) mapped to \(\partial B^{\prime}\) and \(S^{2}\times 1\) mapped to \(\partial B\). We will refer to this region as the "**strip**" in what follows.
We can move the knot isotopically, so that all the crossings appear in the projection of the strip in the diagram.
Now the tangle inside \(B^{\prime}\) is just a collection of untwisted, unlinked arcs, all of whose boundaries are on \(\partial B^{\prime}\). The tangle inside the strip now has many arcs, all of whose boundaries lie on \(\partial B\) and \(\partial B^{\prime}\). The arcs may also be knotted. Refer to Figure 6. There are three types of arcs in the strip, based on where their boundary points are placed. Both the boundary points of an arc may be on \(\partial B^{\prime}\); we will call these "type 1" arcs. The arcs with both boundary points on \(\partial B\) will be called "type 2" arcs. The arcs with one boundary point on \(\partial B\) and another on \(\partial B^{\prime}\) will be called
Figure 4. A plat representation of affine trefoil
"type 3" arcs.
The projection,
\[f:S^{2}\times I\to I\]
on the strip can be restricted to the arcs. We will call this restriction the "height function" on the tangle. We shall denote this function also by \(f\). Note that if \(f\) has any point of inflection on an arc, the arc can be isotopically moved in order to remove the inflection point and make \(f\) monotonic locally. Hence, in what follows we will always assume that \(f\) has no points of inflection on the tangle, and extrema will mean either maxima or minima. Clearly, on a type 1 arc there exists at least one maximum point for \(f\). Similarly, type 2 arcs have to have at least one minimum point. If any of these arcs are knotted, they will contain more extremum points of \(f\). Type 3 arcs which are not knotted may be isotopically moved so that \(f\) is monotonic on the modified arc. Now all the extremum points may be removed from the strip by
Figure 5. A residual tangle in \(M\) and a generic tangle inside \(B\)
Figure 6. After pushing all the crossings to the strip.
moving them inside \(B^{\prime}\). Refer to Figure 7. It is easy to see that these operations can be done isotopically in \(\mathbb{R}P^{3}\). And once all the extremum points are removed, \(f\) will be monotonic on all the arcs in the strip.
Let \(\gamma\) be an equator for \(\partial B^{\prime}\). It is easy to see that, we can isotopically move the link so that the tangle in \(B^{\prime}\) is an internal tangle with all the boundary points on \(\gamma\). That is, all the boundary points on \(\gamma\) are connected to their immediate neighbour. Refer to Figure 8.
Figure 8. An example.
Figure 7. Transfering the extremum points to \(B^{\prime}\).
Now it is easy to see that the tangle inside the strip is a braid. The residual tangle in \(M\) and the internal tangle in \(B^{\prime}\) are "closing" this braid into a projective plat closure. Hence we are done.
Figure 11. The corresponding braid
Figure 10. A plat representation for affine trefoil
Figure 9. The braid in the above example.
## 4. The braid group of \(S^{2}\)
It is easy to see that the composition of motions, as defined in Section 2, defines a product of braids of the 2-sphere. There is always an "identity braid", which corresponds to all points being at rest. For every motion, there is an "inverse" motion, which is obtained by reversing the direction of time. Now if we consider the set of all motions of a fixed finite set, say of \(n\) points, there is an equivalence relation induced by ambient isotopy relative to the boundary of the strip. The set of isotopy classes clearly forms a group, which is called the braid group of the sphere. The classical Artin braid group may be described similarly as the braid group of the plane. Refer to [1].
The braid group of \(S^{2}\) is very similar [1] to the Artin braid group, which is the braid group of \(\mathbb{R}^{2}\). Here, for the sake of simplicity of notation, we will denote the \(n\)-string braid group of \(S^{2}\) as \(B_{n}\). It should not be confused with the Artin braid group. Since, for the purposes of this paper, the only braids we use are from the braid group of \(S^{2}\), there is no chance of confusion.
Let \(C\) be an equator for \(S^{2}\) as before. We may assume that the boundary points of each string lie on the boundary circles of \(C\times I\). That is, we are thinking of every braid as a motion of finitely many special points on \(C\). If \(p_{1},p_{2},...,p_{n}\) are these special points on \(C\), we number both the points \((p_{i},0)\) and \((p_{i},1)\) by \(i\). We choose to index them in a clockwise order. Also, it is helpful to think of the indices as elements of \(\frac{\mathbb{Z}}{n\mathbb{Z}}\). For keeping the notation simple, we will denote the class of a number, say \(i+n\mathbb{Z}\), also as just \(i\). Refer to Figure 9.
We can describe the generators of \(B_{n}\) as follows. Consider the braid formed by a crossing between the \(i^{th}\) string and the \((i+1)^{th}\) string, connecting every other special point to the point with the same index in the trivial way. Refer to Figure 10. Clearly there are two such braids, as shown in the diagram, and they are inverses of each other in \(B_{n}\). We denote them as \(\sigma_{i}\) and \(\sigma_{i}^{-1}\). Notice that since we have \(n+1=1\), we also have \(\sigma_{n}\) and \(\sigma_{n}^{-1}\), which have a crossing between the \(n^{th}\) string and the \(1^{st}\) string. Clearly we have the following presentation of \(B_{n}\).
\[B_{n}\approx\left\langle\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\;\middle|\;\begin{array}{l}\sigma_{i}\sigma_{j}=\sigma_{j}\sigma_{i},\ |i-j|\geq 2,\\ \sigma_{i}\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1},\\ \sigma_{1}\sigma_{2}\ldots\sigma_{n-2}\sigma_{n-1}^{2}\sigma_{n-2}\ldots\sigma_{2}\sigma_{1}=1,\\ \sigma_{n}=\sigma_{1}^{-1}\sigma_{2}^{-1}\ldots\sigma_{n-2}^{-1}\sigma_{n-1}^{-1}\sigma_{n-2}^{-1}\ldots\sigma_{2}^{-1}\sigma_{1}^{-1}.\end{array}\right\rangle\]
## 5. Moves on braids
As usual in the classical case, the same link may be represented by plat closures of multiple braids. We will now explore the equivalence relation between the braids which have isotopic closures.
The diagrammatic moves in \(\mathbb{R}P^{3}\), as described in [3], are as shown in Figure 11. _Two links in \(\mathbb{R}P^{3}\) are isotopic if and only if their diagrams can be transformed from one to the other by a finite sequence of the moves \(\Omega_{1},\Omega_{2},\Omega_{3},\Omega_{4}\) and \(\Omega_{5}\)._ That is, as sets, the set of equivalence classes of diagrams induced by the above-described moves is in one-to-one correspondence with the set of isotopy classes of links.
Suppose \(\beta:=\sigma_{i_{1}}\sigma_{i_{2}}...\sigma_{i_{k}}\) is a braid. Notice that the move \(\Omega_{2}\) applied on \(\beta\) is
Figure 12. Generating braids of \(B_{n}\)
Figure 13. Moves of projective diagrams
equivalent to deleting a pair of the form \(\sigma_{i_{l}}\sigma_{i_{l}}^{-1}\) from the word representing \(\beta\). Performing an \(\Omega_{3}\) move on \(\beta\) is equivalent to replacing a sub-word of the form \(\sigma_{i}\sigma_{i+1}\sigma_{i}\) with the word \(\sigma_{i+1}\sigma_{i}\sigma_{i+1}\) for an arbitrary \(i\). Now we know this is already a relation in the braid group \(B_{k}\). Clearly the inverses of both these processes can also be described in a similar manner. That is, the moves \(\Omega_{2}\) and \(\Omega_{3}\) are already incorporated in the group structure of the braid group.
Consider the following moves on braids. Suppose \(k=2n\) is even. Let \(\beta=\sigma_{i_{1}}\sigma_{i_{2}}...\sigma_{i_{m}}\) be a braid in \(B_{k}\). Then there will be exactly \(n\) odd classes, i.e., \((2l+1)+k\mathbb{Z}\), in \(\mathbb{Z}/k\mathbb{Z}\).
Let \(M_{0}^{i}\) be a move as follows,
\[M_{0}^{i}:\beta\sigma_{i-1}\sigma_{i}\longleftrightarrow\beta \sigma_{i-1}^{-1}\sigma_{i}^{-1},\text{if $i$ is odd}.\] \[\overline{M_{0}^{i}}:\beta\sigma_{i-1}^{-1}\sigma_{i}^{-1} \longleftrightarrow\beta\sigma_{i-1}\sigma_{i},\text{if $i$ is even}.\]
Refer to Figure 12. It is easy to see that the plat closures of two braids which are related by a move of this type are equivalent by two \(\Omega_{2}\)-moves.
For an odd \(i\), let \(M_{1}^{i}\) be the move,
\[M_{1}^{i}:\beta\longleftrightarrow\beta\sigma_{i},\] \[\overline{M_{1}^{i}}:\beta\longleftrightarrow\beta\sigma_{i}^{-1}.\]
Notice that the internal tangle for a plat has strings connecting every odd class \(i\) with \(i+1\). Then the diagrams of plat closures of two braids which are related by an \(M_{1}^{i}\) move are related by an \(\Omega_{1}\) move. Thus the braids have isotopic plat closures.
For any \(3\leq l\leq k+2\), define
\[\alpha_{l}:=\sigma_{2}\sigma_{3}\sigma_{4}\ldots\sigma_{l-2}\sigma_{l-1}\sigma_{l-2}^{-1}\sigma_{l-3}^{-1}\ldots\sigma_{2}^{-1},\qquad\overline{\alpha_{l}}:=\sigma_{2}^{-1}\sigma_{3}^{-1}\sigma_{4}^{-1}\ldots\sigma_{l-2}^{-1}\sigma_{l-1}^{-1}\sigma_{l-2}\sigma_{l-3}\ldots\sigma_{2}.\]
Figure 16. \(M_{2}\)-moves
Figure 17. A typical case of applying the map \(e:B_{8}\hookrightarrow B_{12}\)
certain family of braids in \(B_{k+4}\).
Then we define,
\[M_{3}^{l}:\beta \longleftrightarrow\alpha_{l}e(\beta),\] \[\overline{M_{3}^{l}}:\beta \longleftrightarrow\overline{\alpha_{l}}e(\beta).\]
This is a move relating braids of \(B_{k}\) and \(B_{k+4}\). Refer to Figure 16. Each time the above move is performed to obtain a braid \(\beta^{\prime}\) from a braid \(\beta\in B_{k}\), the diagram of the plat closure of \(\beta\) changes by an \(\Omega_{4}\) move after an \(\Omega_{1}\) move. Thus, clearly, the plat closures of braids which are related by these moves are isotopic.
There is another move which also changes the index of the braid. Suppose one string in a braid is isotopically moved to form a pair of maxima and minima of the projection \(f:S^{2}\times I\to I\). Notice that \(f\) is then no longer monotonic on this string. We may bring the extremum points into the ball region by isotopically moving them. Refer to Figures 19 and 20, where this is demonstrated for the cases when half of the braid index is odd or even. We refer to these as the \(M_{4}\) and \(\overline{M_{4}}\) moves. When \(n\) is odd, the move is just an application of the map \(e:B_{k}\to B_{k+4}\) defined above. For an even \(n\), \(\overline{M_{4}}\) clearly defines a map from \(B_{k}\) to \(B_{k+4}\). But if \(\beta\) is a \(k\)-braid, the form of the braid \(M_{4}(\beta)\) depends on the form of the braid \(\beta\), and writing a closed expression may not be possible, just like for the map \(e\) mentioned above.
Notice that all the moves described above with a name of the type \(M_{i}\) always come with a paired move \(\overline{M_{i}}\). In each case, it is obvious that each one is just another version of the other. In what follows we will drop the overlines and refer to the moves as just \(M_{i}\), since all that we are saying is applicable to both with some obvious modifications. We refer to the set of all these operations as simply the \(M\)-moves. If a braid \(\beta\) can be turned into another braid \(\beta^{\prime}\) by a finite sequence of \(M\)-moves, we will say \(\beta\) and \(\beta^{\prime}\) are \(M\)**-equivalent**.
Figure 18. \(M_{3}^{10}\)-move performed on a braid in \(B_{8}\) resulting in \(B_{12}\)
**Theorem 5.1**.: _The projective plat closures of two braids are isotopic if and only if they are \(M\)-equivalent._
**Proof:** As mentioned earlier, we know that [3], isotopic links have diagrams which are equivalent under \(\Omega\)-moves. Notice that after performing each of the \(M\)-moves on a braid, the closure links before and after are isotopic. This is because the diagrams are equivalent under \(\Omega\)-moves. Thus it is obvious to see that braids which are \(M\)-equivalent have isotopic plat closures. Hence it is enough to prove the "only if" part. That is, it is enough to show that, two braids have plat closures with diagrams which are equivalent under \(\Omega\) moves if and only if they are \(M\)-equivalent. Let \(k=2n\) such that \(\alpha,\beta\in B_{k}\). Let \(L\) and \(L^{\prime}\) be the closure links of \(\alpha\) and \(\beta\) respectively. And let \(D\) and \(D^{\prime}\) be the diagrams of \(L\) and \(L^{\prime}\) on a projective plane, \(P\) in \(\mathbb{R}P^{3}\). Without loss of generality, we may assume that \(P\) is divided into regions, \(B\), \(A\) and \(A^{\prime}\) which are homeomorphic to a 2-ball, annulus and a Mobius band respectively. This splitting is such that the part of the diagram in \(B,A\) and \(A^{\prime}\) are projections of internal tangles, braids and residual tangles of the corresponding links. We have a finite sequence, \(R_{1},R_{2},...,R_{m}\) of \(\Omega\) moves, i.e, each \(R_{i}\) is some \(\Omega_{j}\), so that we have,
\[D^{\prime}\approx R_{1}R_{2}...R_{m}(D).\]
Figure 19. When \(n\) is even
Figure 20. When \(n\) is odd
We may consider the effect of each \(\Omega\)-move on the corresponding braid. Notice that the crossings in any knot diagram can be assumed to be contained only in \(A\). Clearly the braids corresponding to \(D\) and \(\Omega_{2}(D)\) represent the same element in the braid group, since the move just corresponds to removing or inserting a word \(\sigma_{i}\sigma_{i}^{-1}\) in the word for the braid of \(D\). Similarly, the \(\Omega_{3}\) move corresponds to the replacement of a word of the type \(\sigma_{i+1}\sigma_{i}\sigma_{i+1}\) with the word \(\sigma_{i}\sigma_{i+1}\sigma_{i}\), or vice versa. Thus in the braid group these words represent the same element. Thus performing \(\Omega_{2}\) and \(\Omega_{3}\) moves on a plat closure will fix the braid itself and hence its \(M\)-equivalence class.
Let \(\beta\) be a \(k\)-braid and \(\beta^{\prime}=M_{3}^{l}(\beta)\) for some \(l\). Then it is clear that the diagrams of the plat closures of \(\beta\) and \(\beta^{\prime}\) are \(\Omega\)-equivalent, since we may turn the latter diagram into the former by an \(\Omega_{4}\) move, then a sequence of \(\Omega_{2}\) moves depending on \(l\), and then an \(\Omega_{1}\) move. Similarly, if \(\beta^{\prime}=\overline{M_{3}}^{l}(\beta)\), one has the same procedure with all the crossings reversed. Now suppose one performs an \(\Omega_{1}\) move on a diagram \(D\), and the new diagram, after arranging it to look like a plat closure as in the procedure given in the proof of Theorem 3.2, is say \(D^{\prime}\). Then it is easy to see that the braids corresponding to \(D\) and \(D^{\prime}\) are equivalent under one of the following sequences of \(M\)-moves.
1. a single \(M_{1}^{i}\) move for some \(i\)
2. a single \(M_{3}^{l}\) move for some \(l\)
3. an \(M_{3}^{l}\) move and a sequence of \(M_{2}\) moves
Thus, for each \(\Omega_{1}\) move performed on the diagram of the plat closure of a braid \(\beta\), we obtain a new plat closure whose corresponding braid is obtained from \(\beta\) by one of the above-mentioned combinations of \(M\)-moves. Thus the \(\Omega_{1}\) move fixes the \(M\)-equivalence class of the braid when performed on its plat closure.
Notice that an \(\Omega_{4}\) move cannot be performed on a plat closure without disturbing the plat structure. Hence this move always has to be accompanied by other moves in order to maintain the plat structure. If one performs an \(\Omega_{5}\) move on the diagram of a plat closure, then it is equivalent to performing an \(M_{2}\) move on the corresponding braid.
Thus, every instance of \(\Omega\)-moves performed on plat closure diagrams producing plat closure diagrams, can be equivalently described by performing \(M\)-moves on the corresponding braids. Thus if two braids have isotopic plat closures, then they have to be \(M\)-equivalent. Hence we are done. Refer to Figure 21.
Now there are certain natural questions one would like to ask about the plat representations of links in projective space. For example, by looking at the braid, can we predict the number of components of the closure link? We try to answer this question here.
The indexing on the boundary points of a braid, \(\beta\in B_{k}\), gives a bijection,
\[f_{\beta}:\mathbb{Z}/k\mathbb{Z}\rightarrow\mathbb{Z}/k\mathbb{Z},\]
which we choose to be the one sending the indices of points on \(C\times\{0\}\) to the indices of points on \(C\times\{1\}\). This is also the projective analogue of the permutation
Figure 21. An example of applying the moves
associated to a classical braid. We also consider another permutation,
\[g:\mathbb{Z}/k\mathbb{Z}\rightarrow\mathbb{Z}/k\mathbb{Z}\]
defined as follows,
\[g(i)=\begin{cases}i+1,&\text{if }i\text{ is an odd class,}\\ i-1,&\text{if }i\text{ is an even class.}\end{cases}\]
Since \(k=2n\) is even, this is a well-defined permutation of \(\mathbb{Z}/k\mathbb{Z}\). Let \(G\) denote the group \(\mathbb{Z}/k\mathbb{Z}\). Notice that \(n\) is an order 2 element in \(G\), and let \(H\) denote the subgroup generated by \(n\). Then we have,
\[G/H\approx\mathbb{Z}/n\mathbb{Z}.\]
For brevity of notation, we will denote the point \(p_{i}\) on both \(C\times\{0\}\) and \(C\times\{1\}\) by simply \(i\). We may assume that the points on both \(C\times\{0\}\) and \(C\times\{1\}\) are arranged symmetrically, like the numbers on a clock. Then the points \(i\) and \(i+n\) are diametrically opposite. Thus they belong to the same coset in \(G/H\). We will denote by \([i]\) the coset to which the element \(i\) belongs. Consider the permutation \(f_{\beta}^{-1}gf_{\beta}\). Notice that this induces a permutation,
\[h_{\beta}:\frac{G}{H}\rightarrow\frac{G}{H}.\]
We call this the **residual permutation** of \(\beta\).
**Theorem 5.2**.: _The number of components in the plat closure link of a braid \(\beta\) is the same as the number of disjoint cycles in its residual permutation._
**Proof:** Notice that the point \(f_{\beta}^{-1}gf_{\beta}(i)\) is connected to the point \(i\) by the arc formed by the string of \(\beta\) connecting \(i\) to \(f_{\beta}(i)\), followed by the string in the internal tangle connecting \(f_{\beta}(i)\) to \(g(f_{\beta}(i))\), and then by the string of \(\beta\) connecting \(g(f_{\beta}(i))\) to \(f_{\beta}^{-1}(g(f_{\beta}(i)))\). Also notice that each coset in \(G/H\) has two points on \(C\times\{0\}\), which are connected by one string in the residual tangle.
Now suppose (\([i_{1}]\)\([i_{2}]\)... \([i_{l}]\)) is a cycle in the disjoint cycle decomposition of \(h_{\beta}\). Choose any element \(j_{1}\) in the coset \([i_{1}]\). Then \(j_{1}\) is connected to \(j_{2}:=f_{\beta}^{-1}(g(f_{\beta}(j_{1})))\) by an arc as described above, and \(j_{2}\) is an element of \([i_{2}]\) by definition. We can follow this arc from \(j_{2}\) through the string of the residual tangle to the point \(j_{2}^{\prime}:=j_{2}+n\in[i_{2}]\). If \([i_{2}]=[i_{1}]\), in which case \(h_{\beta}\) fixes \([i_{1}]\), then this \(j_{2}^{\prime}=j_{1}\) and we have a closed loop. Otherwise, we may start again from this point and travel by the above method to the point \(j_{3}:=f_{\beta}^{-1}(g(f_{\beta}(j_{2}^{\prime})))\). Again, by following the string of the residual tangle, we can reach the point \(j_{3}^{\prime}:=j_{3}+n\). Then, if \(j_{3}^{\prime}=j_{1}\), we have a closed loop and the cycle was a transposition (\([i_{1}]\)\([i_{2}]\)). Similarly, by following this procedure we can see that when we start from the point \(j_{1}\) and reach the end of the cycle at the point \(j_{l}^{\prime}\), we obtain a knot in the closure link. It is easy to see that if we had chosen to begin at the other element \(j_{1}+n\in[i_{1}]\), then we would be moving through the same knot, but in the opposite direction. Also, if we had represented the cycle with a different ordering of points, then we would again be on the same knot, but starting and ending at a different point.
Thus every cycle in the disjoint cycle decomposition of \(h_{\beta}\) corresponds to a knot in the plat closure of \(\beta\). That is, disjoint cycles in the residual permutation of \(\beta\) and the knots in the plat closure of \(\beta\) are in one to one correspondence. Hence we are done.
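As a computational aside, the correspondence in Theorem 5.2 is easy to automate. The following short script is only an illustrative sketch (with \(0\)-indexed classes, so the internal tangle joins the pairs \(\{0,1\},\{2,3\},\ldots\)); it takes the permutation \(f_{\beta}\) of a \(2n\)-braid and counts the components of its plat closure by tracing the arcs exactly as in the proof above.

```python
# Illustrative sketch: count the components of the plat closure of a spherical
# braid from its permutation f_beta on Z/kZ (k = 2n), tracing the arcs as in
# the proof of Theorem 5.2.  Classes are 0-indexed here.

def plat_components(f_beta):
    """f_beta: list of length k = 2n with f_beta[i] = image of class i."""
    k = len(f_beta)
    n = k // 2
    f_inv = [0] * k
    for i, fi in enumerate(f_beta):
        f_inv[fi] = i
    # The internal tangle joins the classes in the pairs (0,1), (2,3), ...
    g = lambda i: i + 1 if i % 2 == 0 else i - 1
    # Arc through the braid, the internal tangle, and back through the braid.
    step = lambda i: f_inv[g(f_beta[i])]
    visited, components = set(), 0
    for start in range(k):
        if start in visited:
            continue
        components += 1
        j = start
        while j not in visited:
            visited.add(j)
            visited.add(step(j))
            j = (step(j) + n) % k   # residual-tangle string to the antipodal point
    return components

# Example: the identity braid on k = 6 strands closes up into a single component.
print(plat_components(list(range(6))))
```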
Another natural question one would like to ask is about the nature of the link formed by closing a braid: for example, its homological properties, affineness, and so on. The following theorem studies the conditions for the link to be affine.
**Theorem 5.3**.: _Let \(L\) be the closure of a \(k=2n\) braid \(\beta=\sigma_{i_{1}}\sigma_{i_{2}}...\sigma_{i_{l}}\). Then \(L\) is affine if and only if there exists an even class \(j\in\mathbb{Z}/k\mathbb{Z}\) such that ..._
appropriate ring, for example \(\mathbb{C}[x,x^{-1}]\)), invariant under \(M\)-moves. This trace will be a link invariant.
|
2301.13214 | The extended "stellar halo" of the Ursa Minor dwarf galaxy | Stellar candidates in the Ursa Minor (UMi) dwarf galaxy have been found using
a new Bayesian algorithm applied to \textit{Gaia} EDR3 data. Five of these
targets are located in the extreme outskirts of UMi, from $\sim5$ to 12
elliptical half-light radii (r$_h$), where r$_h$(UMi) $= 17.32 \pm 0.11$
arcmin, and have been observed with the GRACES high resolution spectrograph at
the Gemini-Northern telescope. Precise radial velocities ($\sigma_{\rm{RV}} <
2$ km s$^{-1}$) and metallicities ($\sigma_{\rm{[Fe/H]}} < 0.2$ dex) confirm
their memberships of UMi. Detailed analysis of the brightest and outermost star
(Target~1, at $\sim12$ r$_h$), yields precision chemical abundances for the
$\alpha$- (Mg, Ca, Ti), odd-Z (Na, K, Sc), Fe-peak (Fe, Ni, Cr), and
neutron-capture (Ba) elements. With data from the literature and APOGEE DR17,
we find the chemical patterns in UMi are consistent with an outside-in star
formation history that includes yields from core collapse supernovae,
asymptotic giant branch stars, and supernovae Ia. Evidence for a knee in the
[$\alpha$/Fe] ratios near [Fe/H] $\sim-2.1$ indicates a low star formation
efficiency similar to that in other dwarf galaxies. Detailed analysis of the
surface number density profile shows evidence that UMi's outskirts have been
populated by tidal effects, likely as a result of completing multiple orbits
around the Galaxy. | Federico Sestito, Daria Zaremba, Kim A. Venn, Lina D'Aoust, Christian Hayes, Jaclyn Jensen, Julio F. Navarro, Pascale Jablonka, Emma Fernández-Alvar, Jennifer Glover, Alan W. McConnachie, André-Nicolas Chené | 2023-01-30T19:00:02Z | http://arxiv.org/abs/2301.13214v2 | # The extended "stellar halo" of the Ursa Minor dwarf galaxy
###### Abstract
Five stars in the extreme outskirts (from \(\sim 5\) to \(\sim 12\) elliptical half-light radii, r\({}_{h}\)) of the Ursa Minor (UMi) dwarf galaxy have been identified as potential new members using a Bayesian algorithm applied to _Gaia_ EDR3 data. These targets were observed with the GRACES spectrograph, resulting in precise radial velocities and metallicities that confirm their association with UMi. For the brightest and outermost star (Target 1, at \(\sim 12\) r\({}_{h}\)), the chemical abundances of \(\alpha\)- (Mg, Ca, Ti), odd-Z (Na, K, Sc), Fe-peak (Fe, Ni, Cr), and neutron-capture process (Ba) elements have also been determined. We also discuss data from the literature and from APOGEE DR17. We find the chemical patterns in UMi are consistent with a star formation history that includes yields from core collapse supernovae, asymptotic giant branch stars, and supernovae Ia. Evidence for a knee in the [\(\alpha\)/Fe] ratios near [Fe/H] \(\sim-2.1\) indicates a low star formation efficiency similar to that in other dwarf galaxies. Given the distance of Target 1 from the centre of UMi (R\(\sim\)4.5 kpc), we show that UMi has a more extended structure than previously thought. This "stellar halo" around UMi could be a secondary feature resulting from tidal stripping after multiple orbits around the Galaxy, or maybe a primary UMi feature due to early hierarchical accretion activity or to strong gravitational fluctuations prompted by feedback in the early star formation phase. Also consistent with observations is a late-time merger-free scenario where outside-in star formation is accompanied by gradual supernovae Ia enrichment.
keywords: stars: abundances - stars: Population II - galaxies : formation - galaxies: dwarf - galaxies: individual: Ursa Minor - galaxies: evolution
## 1 Introduction
Dwarf satellites of the Milky Way (MW) are amongst the oldest and most metal-poor galaxies known (e.g., Tolstoy et al., 2009). They are at the low-mass end of the hierarchical formation process, just massive enough to form very metal-poor stars (VMP, [Fe/H] \(\leq-2.0\), Simon, 2019). The mass of faint dwarf galaxies is dominated by dark matter (e.g., Simon, 2019). In fact, their dynamical mass-to-light ratios (M/L) can exceed 1000. They remain one of the best targets for studies seeking to understand the properties of dark matter and early events in the formation of our Galaxy (e.g., Bullock and Boylan-Kolchin, 2017).
Hierarchical formation in \(\Lambda-\)Cold Dark Matter (\(\Lambda-\)CDM) cosmology (e.g., White and Rees, 1978; Frenk et al., 1988; Navarro et al., 1997) predicts that haloes grow from the accretion of smaller systems. Therefore, galaxies should possess an extended stellar halo built from disrupted systems. A stellar halo is clearly observed in large galaxies such as the Milky Way, but it remains elusive and poorly studied in dwarf galaxies (e.g., Deason et al., 2022, and references therein). One reason is that the fraction of mass assembled
through mergers is reduced at dwarf galaxy mass scales, while 'smooth' accretion dominates in this regime (e.g., Genel et al., 2010). Second, while the stellar mass-to-halo mass ratio is well modelled at the Milky Way scale, this is not the case at the dwarf scale (e.g., Moster et al., 2013, and references therein).
Given their shallow gravitational potential, faint dwarf galaxies are extremely susceptible to internal processes, such as star formation and the subsequent stellar feedback (e.g., El-Badry et al., 2018), and to external ones, such as mergers (e.g., Deason et al., 2014), ram pressure stripping (e.g., Grebel et al., 2003) and stirring (e.g., Kazantzidis et al., 2011), tidal interaction (e.g., Fattahi et al., 2018), and reionization (e.g., Wheeler et al., 2019). All of these processes may act to influence their individual morphologies (e.g., Higgs et al., 2021, and references therein). Signatures of these gravitational interactions will be most evident in the outskirts of the dwarf galaxy, where accreted remnants can show up as an excess of stars over and above expectations from a simple single-component model (akin to a stellar halo in a more massive galaxy).
Stars in the extreme outskirts of dwarf galaxies that are not clearly associated with prominent tidal tails have been discovered only relatively recently. Chiti et al. (2021) spectroscopically identified member stars up to \(\sim\)9 half-light radii (\(r_{h}\)), or physical distances up to 1 kpc, away from the centre of the faint dwarf galaxy Tucana II. Dynamical analysis, as well as chemical abundances, were used to distinguish between a tidal origin, where stars were removed from the main body due to tidal effects with the MW, and an accreted, i.e., dwarf-dwarf merger, origin. The stars identified by Chiti et al. (2021) were found to be extremely metal deficient compared to the main body, suggesting that the outskirts had a different origin from the bulk of stars in Tucana II, perhaps due to an early merger with a low-mass, metal-poor companion. Recently, Longeard et al. (2022) analysed the chemo-dynamical properties of Bootes I, suggesting that the system could have been more massive than it is today and that tidal stripping is largely affecting the satellite.
Inspired by Chiti et al. (2021), we have examined other MW satellites to help constrain the frequency of such stellar halos around dwarfs. McConnachie and Venn (2020b, a) developed a Bayesian method, later updated by Jensen et al. (prep), to estimate the probability that a star in the vicinity of a dwarf galaxy is a member of the dwarf, using the full astrometric and photometric data from _Gaia_ EDR3 (Gaia Collaboration et al., 2021). Jensen et al. (prep) report that only a few dwarf galaxies out of nearly 60 examined show evidence for an extended stellar halo. The systems already in the literature include Tucana II, as examined by Chiti et al. (2021), Sculptor (Sestito et al. prep), and also Coma Berenices, Ursa Major I, and Bootes I (see also Longeard et al., 2022), recently analysed by Waller et al. (2023).
Waller et al. (2023) showed that stars in Coma Berenices have been polluted by supernovae type Ia, in contrast to previous views of this system. Waller et al. (2023) discussed that the chemistry of the outermost stars in these systems is consistent with their formation in the central regions, followed by displacement to their current locations through tidal stripping and/or supernova feedback, although in the case of Bootes I the lower metallicities and lack of strong carbon enrichment of its outermost stars could also be evidence of a late dwarf-dwarf merger. Even with detailed and precise chemical abundance analyses, however, a firm conclusion on the origin of the outermost stars is hard to pinpoint.
In this work, we use this Bayesian algorithm to search for member stars in the outermost regions of the dwarf galaxy Ursa Minor (UMi). We make use of recent updates to the algorithm by Jensen et al. (prep), which allow for the presence of a secondary, extended component (i.e., an outer stellar halo). Previously, Piatek et al. (2005) had suggested the presence of tidal effects on the substructure of UMi.
Ursa Minor is historically a well-studied system. Some controversies remain regarding its star formation history (SFH) and its efficiency. For example, Carrera et al. (2002) suggested that up to \(\sim\)95 per cent of UMi stars are older than 10 Gyr, invoking an episodic SFH at early times. This is based on studies of its colour-magnitude diagram (e.g., Mighell and Burke, 1999; Bellazzini et al., 2002). Other models interpreted the chemical properties of UMi as due to an extended SFH, lasting from 3.9 to 6.5 Gyr (Ikuta and Arimoto, 2002; Ural et al., 2015). Kirby et al. (2011, 2013) matched the wide metallicity distribution function (MDF) of UMi with a chemical evolution model that includes infall of gas. On the other hand, Ural et al. (2015) developed three chemical evolution models, showing that winds from supernovae are needed to describe UMi's MDF, especially to reproduce stars at higher metallicities. The authors underline that winds help to explain the absence of gas at the present time. In agreement with Ikuta and Arimoto (2002), their models use an extended low-efficiency SFH duration (5 Gyr, Ural et al., 2015).
Ural et al. (2015) argued that it is not easy to discern whether the [\(\alpha\)/Fe] ratio displays a plateau up to [Fe/H]\(\sim-2.0\) or shows a gradual decrease. However, they conclude that a slow decrease is present above this metallicity, pointing to the contribution of supernovae type Ia (SNe Ia). On the other hand, Cohen and Huang (2010) noted that a very short duration of star formation (\(\sim 2\) Gyr) implies that SNe Ia did not have enough time to contribute to the chemical evolution of UMi. More recent studies discovered that SNe Ia can occur in the very first 2 Gyr of the Universe (e.g., Maoz et al., 2012, 2014; de los Reyes et al., 2020; Kobayashi et al., 2020). The \(\Lambda-\)CDM cosmological zoom-in simulations developed by Revaz and Jablonka (2018), which incorporate gas cooling, show that the star formation and chemical evolution of UMi can be explained. In particular, when SNe Ia and II events are taken into account with thermal blastwave-like feedback (Revaz and Jablonka, 2018, and references therein), they can reproduce the observed distribution in metallicity, [Mg/Fe], and the radial velocity dispersion with a short star formation of only 2.4 Gyr.
In this paper, we present a chemo-dynamical investigation of stars in the extreme outskirts of UMi observed with the high-resolution GRACES spectrograph at Gemini North/CFHT. Our results, combined with spectroscopic results for additional stars in the literature, are used to discuss the extended chemical and dynamical evolution of UMi. The target selection, the observations, and the spectral reduction are reported in Section 2. Stellar parameters are inferred in Section 3. The model atmosphere and chemical abundance analysis for Target 1 are reported in Sections 4 and 5, respectively. Section 6 describes the measurement of [Fe/H] using Ca Triplet lines for Targets 2-5. The inference of the orbital parameters of UMi is described in Section 7. The chemo-dynamical properties of Ursa Minor are discussed in Section 8.
## 2 Data
### Target selection
Using the Bayesian algorithm from McConnachie & Venn (2020), with updates from Jensen et al. (prep), we have searched for stars that inhabit the extended stellar halo of Ursa Minor. Briefly, this algorithm provides the probability for any star in _Gaia_ to be a member of a given MW satellite or to belong to the MW halo. The total likelihood is a function of the position of the star on the sky, on the colour-magnitude diagram, and in proper motion space (thus, no radial velocity or metallicity information is used). This algorithm has proved useful for identifying new members in the extreme outskirts of some ultra-faint dwarf galaxies (Waller et al., 2023; Sestito et al. prep) and performs excellently in removing Milky Way foreground contamination (Jensen et al. prep).
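A purely schematic sketch of how such a membership probability can be assembled is given below; the actual likelihood terms, priors, and normalisations of McConnachie & Venn (2020) and Jensen et al. (prep) are not reproduced here, and all function names and numbers are illustrative placeholders.

```python
# Schematic sketch (placeholder likelihoods, NOT the published implementation):
# a satellite likelihood built from spatial, colour-magnitude and proper-motion
# terms is weighted against a Milky Way foreground model.
import numpy as np

def membership_probability(star, like_spatial, like_cmd, like_pm,
                           like_mw, f_sat=0.01):
    """Posterior probability that `star` belongs to the satellite.

    like_* are user-supplied likelihood densities; f_sat is an assumed prior
    fraction of satellite stars in the field.
    """
    l_sat = like_spatial(star) * like_cmd(star) * like_pm(star)
    l_mw = like_mw(star)
    return f_sat * l_sat / (f_sat * l_sat + (1.0 - f_sat) * l_mw)

# Toy usage with Gaussian/exponential stand-ins for the three satellite terms.
star = {"r_ell": 6.0, "cmd_offset": 0.02, "pm_offset": 0.05}
p_sat = membership_probability(
    star,
    like_spatial=lambda s: np.exp(-s["r_ell"] / 3.0),
    like_cmd=lambda s: np.exp(-0.5 * (s["cmd_offset"] / 0.05) ** 2),
    like_pm=lambda s: np.exp(-0.5 * (s["pm_offset"] / 0.1) ** 2),
    like_mw=lambda s: 1e-3,
)
print(f"P_sat = {p_sat:.2f}")
```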
In this work, we further validate this identification method by examining the extreme outskirts of UMi. All stars with a high probability (\(>80\%\)) of being associated to UMi, and at a distance greater than 5 half-light radii (\(\gtrsim 85\) arcmin or \(\gtrsim 900\) pc) from the centre of the dwarf, were selected. This included five red giants with magnitudes in the range \(17.4\leq G\leq 18.3\) mag in the _Gaia_ EDR3 G band. The brightest target is also the farthest in projection, reaching an extreme distance of 11.7 half-light radii from the centre of UMi. Our other four targets, at a distance of \(5.2-6.3\)\(r_{h}\), are also listed as highly likely UMi candidates by Qi et al. (2022, with a probability \(>90\) percent). The main properties of UMi and our five targets are reported in Tables 1 and 2, respectively.
The positions of our five candidates, together with other known UMi members, are shown in Figure 1 in projected sky coordinates, on the colour-magnitude diagram, and in proper motion space. This figure shows that, even though the candidates are located far from the centre of UMi, the algorithm is very efficient in selecting new candidate members in the very outskirts of the system. We gather UMi members from Spencer et al. (2018), Pace et al. (2020), and APOGEE data release 17 (DR17, Abdurro'uf et al., 2022) and then cross-match with Gaia EDR3 to retrieve coordinates, proper motions, and photometry. When examining the APOGEE DR17 targets, we applied our selection algorithm to select the stars with high membership probability (\(>70\) %) and with high signal-to-noise in their spectra (SNR \(>70\)). Surprisingly, two stars from APOGEE DR17 have an elliptical distance of \(\sim 7\)\(r_{h}\). We note that the [Fe/H] values for these two stars are at the edge of the metallicity grid of APOGEE; thus, while their radial velocity measurements are precise, their true [Fe/H] could be lower, in turn affecting their [X/Fe] ratios.
### GRACES observations
Targets were observed with the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES, Chene et al., 2014; Pazder et al., 2014) using the 2-fibre (object+sky) mode with a resolution of R\(\sim 40000\). GRACES consists of a 270-m optical fibre that connects the Gemini North telescope to the Canada-France-Hawaii Telescope ESPaDOnS cross-dispersed high resolution echelle spectrograph (Donati et al., 2006). The targets were observed within the GN-2022A-Q-128 program (P.I. F. Sestito).
For the brightest target (Target 1, G\(=17.4\) mag), which is also the farthest one from the centre (\(\sim 11.7\)\(r_{h}\)), we achieved a spectrum with an SNR per resolution element of \(\sim 30\) in the Ba ii 6141 Å region. This spectrum has sufficient SNR to measure the abundances for additional elements, specifically the \(\alpha-\) (Mg, Ca, Ti), odd\(-\)Z (Na, K, Sc), Fe\(-\)peak (Fe, Cr, Ni), and neutron\(-\)capture process (Ba) elements across the entire GRACES spectral coverage. We refer to this observational set-up as the "high-SNR mode". For the remaining four targets, which have distances of 5\(-\)7 \(r_{h}\), an SNR per resolution element of \(\sim 20\) in the Ca ii T region (\(\sim\)8550 Å) was obtained for precise radial velocities and metallicities. In this "low-SNR mode", the metallicities are derived from the equivalent width (EW) of the NIR Ca ii T, as described in Section 6. Observing information is summarized in Table 3, including the signal-to-noise ratio measured at the Mg i b, Ba ii 614nm, and Ca ii T regions.
### Spectral reductions
The GRACES spectra were first reduced using the Open source Pipeline for ESPaDOnS Reduction and Analysis
\begin{table}
\begin{tabular}{l r r} \hline Property & Value & Reference \\ \hline \(\alpha\) & 227.2854 deg & (b) \\ \(\delta\) & 67.2225 deg & (b) \\ \hline [Fe/H] & \(-2.13\pm 0.01\) & (b) \\ RV & \(246.9\pm 0.1\) km s\({}^{-1}\) & (b) \\ \(\sigma\)V & \(9.5\pm 1.2\) km s\({}^{-1}\) & (b) \\ D\({}_{\odot}\) & 76 \(\pm\) 10 kpc & (a) \\ ellipticity & \(0.55\pm 0.01\) & (b) \\ \(\phi\) & \(50\pm 1\) deg & (b) \\ r\({}_{\rm h}\) & \(17.32\pm 0.11\) arcmin & (b) \\ r\({}_{\rm h,plummer}\) & \(382\pm 53\) pc & (b) \\ r\({}_{\rm h,plummer}\) & 407 pc & (d) \\ \(\mu_{\alpha\rm cos}\delta\) & \(-0.124\pm 0.004\) mas yr\({}^{-1}\) & (c) \\ \(\mu_{\delta}\) & \(0.078\pm 0.004\) mas yr\({}^{-1}\) & (c) \\ M\({}_{\rm dyn}\)(\(\leq r_{\rm half}\)) & \(9.5\times 10^{6}\) M\({}_{\odot}\) & (a) \\ Mass density & \(0.35\) M\({}_{\odot}\) pc\({}^{-3}\) & (e) \\ L & \(0.29\times 10^{6}\) L\({}_{\odot}\) & (e) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Galactic parameters of Ursa Minor. The coordinates \(\alpha,\delta\), the mean metallicity, the mean radial velocity, the velocity dispersion, the heliocentric distance D\({}_{\odot}\), the ellipticity, the position angle \(\phi\), and the half-light radius r\({}_{\rm h}\) in arcmin and pc, the mean proper motion from Gaia EDR3, the dynamical mass, the mass density, and the luminosity are reported with the respective references. (a) refers to McConnachie (2012), (b) to McConnachie & Venn (2020), (c) to McConnachie & Venn (2020), (d) to Qi et al. (2022), and (e) to Mateo (1998).
(OPERA, Martioli et al., 2012) tool, which also corrects for heliocentric motion. Then the reduced spectra were post-processed following an updated procedure of the pipeline described in Kielty et al. (2021). The latter pipeline allows us to measure the radial velocity of the observed star, to co-add multiple observations, to check for possible radial velocity variations, to correct for the motion of the star, and to eventually re-normalise the flux. This procedure also improves the signal-to-noise ratio in the overlapping spectral order regions without downgrading the spectral resolution. Radial velocities are reported in Table 4.
This procedure failed for one of the spectral orders of Target 1 covering the Mg i b region for reasons that we could not overcome within the scope of this project. We therefore extracted the data for Target 1 ourselves using DRAGraces1 IDL code (Chen et al., 2021).
Footnote 1: [https://github.com/AndreNicolasChene/DRAGRACES/releases/tag/v1.4](https://github.com/AndreNicolasChene/DRAGRACES/releases/tag/v1.4)
The final spectra for all five targets near the Na i Doublet (left) and the NIR Ca ii Triplet (right) regions are shown in Figure 2. The quality of the spectra indicates that the adopted exposure times were sufficient for the requested science, i.e., chemical abundances for Target 1, and [Fe/H] and RV only for Targets 2\(-\)5.
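The velocity-correction and co-addition steps described above can be sketched as follows; this is a deliberately simplified stand-in (assuming only numpy), not the pipeline of Kielty et al. (2021), and it omits per-order handling, cosmic-ray cleaning, and renormalisation.

```python
# Simplified sketch of Doppler-correcting and co-adding multiple exposures
# (illustrative only; not the actual GRACES post-processing pipeline).
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def shift_to_rest(wave, rv_kms):
    """Shift observed wavelengths to the stellar rest frame (first order)."""
    return wave / (1.0 + rv_kms / C_KMS)

def coadd(wave_grid, exposures):
    """Inverse-variance weighted co-addition of exposures interpolated onto a
    common rest-frame wavelength grid.  exposures: list of (wave, flux, err)."""
    num = np.zeros_like(wave_grid)
    den = np.zeros_like(wave_grid)
    for wave, flux, err in exposures:
        f = np.interp(wave_grid, wave, flux)
        e = np.interp(wave_grid, wave, err)
        w = 1.0 / np.maximum(e, 1e-10) ** 2
        num += w * f
        den += w
    return num / den

# Toy usage: synthetic exposures of an absorption line near Ba ii 6141.7 A,
# observed at roughly the systemic velocity of UMi (~246.9 km/s).
grid = np.linspace(6138.0, 6146.0, 400)
rng = np.random.default_rng(1)

def fake_exposure(rv, line=6141.7, depth=0.4, noise=0.05):
    wave = np.linspace(6130.0, 6155.0, 600)
    flux = 1.0 - depth * np.exp(-0.5 * ((wave - line * (1.0 + rv / C_KMS)) / 0.15) ** 2)
    return shift_to_rest(wave, rv), flux + rng.normal(0.0, noise, wave.size), np.full(wave.size, noise)

coadded = coadd(grid, [fake_exposure(246.9) for _ in range(6)])
print(round(float(coadded.min()), 2))   # line core recovered near 1 - depth
```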
## 3 Stellar parameters
Given the low SNR of our spectra, we use the InfraRed flux method (IRFM) from Gonzalez Hernandez & Bonifacio (2009) with photometry from _Gaia_ EDR3 to find the effective
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline Target & source id & \(\alpha\) & \(\delta\) & \(\xi\) & \(\eta\) & \(r_{\rm ell}\) & \(P_{\rm sat}\) & G & BP\(-\)RP & A\({}_{\rm V}\) \\ & & (deg) & (deg) & (deg) & (deg) & (r\({}_{h}\)) & & (mag) & (mag) & (mag) \\ \hline Target 1 & 1647329728514964480 & 234.45303 & 69.29204 & 2.53226 & 2.21888 & 11.67 & 0.80 & 17.39 & 1.29 & 0.08 \\ Target 2 & 1693464785444020224 & 224.67731 & 67.35983 & \(-\)1.00378 & 0.15842 & 6.34 & 0.97 & 17.83 & 1.19 & 0.06 \\ Target 3 & 1693573430936780032 & 226.08983 & 67.77965 & \(-\)0.45214 & 0.56153 & 5.55 & 0.96 & 17.91 & 1.19 & 0.05 \\ Target 4 & 1669324938936435200 & 224.50756 & 66.21361 & \(-\)1.12033 & \(-\)0.98413 & 5.17 & 0.94 & 18.25 & 1.17 & 0.06 \\ Target 5 & 1645948119139534336 & 230.43949 & 68.29581 & 1.16629 & 1.10328 & 5.60 & 0.92 & 18.29 & 1.17 & 0.06 \\ \hline \end{tabular}
\end{table}
Table 2: The _Gaia_ EDR3 source ID, the coordinates \((\alpha,\delta)\), the projected coordinates \((\xi,\eta)\), the elliptical radius distance \(r_{\rm ell}\) in units of \(r_{h}\), the probability to be a member from Jensen et al. (in prep.), the _Gaia_ EDR3 photometry G and BP\(-\)RP, and the extinction A\({}_{\rm V}\) from Schlafly & Finkbeiner (2011) are reported for each target.
Figure 1: Ursa Minor seen through _Gaia_ EDR3. All panels: Target 1 is marked with a red diamond, while black diamonds are Targets 2–5. Magenta circles are UMi literature stars from Spencer et al. (2018) and Pace et al. (2020). Blue squares are UMi stars selected from APOGEE DR17. MW foreground stars are marked with small grey dots. These are selected from _Gaia_ EDR3 in the direction of UMi and within the field of view of the \(\eta-\xi\) panel. Left panel: Projected sky coordinates and projected distance from the UMi centre. The orange ellipses denote the elliptical distances from the UMi centre of 3, 5, 7, and 11 \(r_{h}\). The arrow points in the direction of UMi's proper motion. Central panel: Colour-magnitude diagram. The dark green dashed line is a Padova isochrone at [Fe/H] \(=-2.0\) and an age of 12 Gyr (Bressan et al., 2012). Right panel: Proper motion space.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Target & \(t_{\rm exp}\) & N\({}_{\rm exp}\) & SNR & SNR & SNR & Obs. date \\ & (s) & & Mg i b & Ba ii & Ca ii T & YY/MM/DD \\ \hline Target 1 & 14400 & 6 & 9 & 27 & 37 & 22/06/18 \\ Target 2 & 1800 & 1 & 5 & 12 & 17 & 22/03/14 \\ Target 3 & 1800 & 1 & 1 & 6 & 8 & 22/03/14 \\ Target 4 & 2400 & 1 & 2 & 6 & 11 & 22/06/17 \\ Target 5 & 2400 & 1 & 1 & 5 & 10 & 22/06/17 \\ \hline \end{tabular}
\end{table}
Table 3: Total exposure time, number of exposures, signal-to-noise ratio (SNR) measured in the Mg i 518 nm, Ba ii 614 nm, and Ca ii 850 nm regions, and the observation dates are reported for each target. The SNR is defined as the ratio between the median flux and its standard deviation in a given spectral region.
effective temperatures, adopting the Mucciarelli et al. (2021) colour-temperature relationship for giants. The input parameters are the _Gaia_ EDR3 (BP \(-\) RP) de-reddened colour and a metallicity estimate. The 2D Schlafly & Finkbeiner (2011) map2 has been used to correct the photometry for extinction3. As input metallicities, we adopt the value \(\rm[Fe/H]=-2.0\pm 0.5\), compatible with the metallicity distribution in UMi.
Footnote 2: [https://irsa.ipac.caltech.edu/applications/DUST/](https://irsa.ipac.caltech.edu/applications/DUST/)
Footnote 3: To convert from the E(B\(-\)V) map to _Gaia_ extinction coefficients, the \(\rm A_{V}/E(B-V)=3.1\) (Schultz & Wiemer, 1975) and the \(\rm A_{G}/A_{V}=0.85926\), \(\rm A_{BP}/A_{V}=1.06794\), \(\rm A_{RP}/A_{V}=0.65199\) relations (Marigo et al., 2008; Evans et al., 2018) are used.
Surface gravities were found using the Stefan-Boltzmann equation4. This step required the effective temperature, the distance of the object, the _Gaia_ EDR3 G de-reddened photometry, and the bolometric corrections on the flux (Andrae et al., 2018) as input. A Monte Carlo algorithm has been applied to the input parameters with their uncertainties to estimate the total uncertainties on the stellar parameters. The input quantities were randomised within 1\(\sigma\) using a Gaussian distribution, except for the stellar mass. The latter is treated with a flat prior from 0.5 to 0.8 \(\rm\,M_{\odot}\), which is consistent with the mass of long-lived very metal-poor stars. The mean uncertainty on the effective temperature is \(\sim 94\) K, while on the surface gravity it is \(\sim 0.08\) dex. This method has been shown to provide reliable stellar parameters suitable for spectroscopic studies of very metal-poor stars (e.g., Kielty et al., 2021; Sestito et al., 2023; Waller et al., 2023). The stellar parameters are reported in Table 4.
Footnote 4: \(L_{*}=4\pi R_{*}^{2}\sigma T_{*}^{4}\); the radius of the star can be calculated from this equation, then the surface gravity is inferred assuming the mass.
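As a rough illustration of this step (footnote 4), the sketch below propagates the quoted inputs through the Stefan–Boltzmann relation with a Monte Carlo; the bolometric correction and its uncertainty are placeholder values rather than those of Andrae et al. (2018), and the de-reddened G magnitude is only indicative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Inputs randomised within 1 sigma (Target-1-like values; BC_G is a placeholder)
teff = rng.normal(4604.0, 94.0, n)            # K, from the IRFM
dist = rng.normal(76.0e3, 10.0e3, n)          # pc, UMi heliocentric distance
g0   = rng.normal(17.32, 0.02, n)             # de-reddened Gaia G (G = 17.39, A_G ~ 0.07)
bc_g = rng.normal(-0.35, 0.05, n)             # bolometric correction on G (placeholder)
mass = rng.uniform(0.5, 0.8, n)               # flat prior for an old metal-poor giant

MBOL_SUN, TEFF_SUN, LOGG_SUN = 4.74, 5772.0, 4.438

mbol = g0 - 5.0 * np.log10(dist) + 5.0 + bc_g          # absolute bolometric magnitude
log_lum = -0.4 * (mbol - MBOL_SUN)                      # log10(L / Lsun)
logg = LOGG_SUN + np.log10(mass) + 4.0 * np.log10(teff / TEFF_SUN) - log_lum

print(f"log g = {np.median(logg):.2f} +/- {np.std(logg):.2f}")
```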
## 4 Model atmosphere analysis
In this Section, we describe the model atmospheres, the method, and the atomic data for our spectral line list adopted to determine detailed chemical abundances for Target 1.
\begin{table}
\begin{tabular}{l c c c c} \hline Target & RV & \(\rm T_{eff}\) & log g & [Fe/H] \\ & (km s\({}^{-1}\)) & (K) & & \\ \hline Target 1 & \(-256.91\pm 0.05\) & \(4604\pm 94\) & \(1.15\pm 0.08\) & \(-2.09\pm 0.09\) \\ Target 2 & \(-265.26\pm 1.89\) & \(4771\pm 93\) & \(1.43\pm 0.07\) & \(-2.79\pm 0.15\) \\ Target 3 & \(-218.78\pm 1.82\) & \(4760\pm 100\) & \(1.45\pm 0.08\) & \(-2.67\pm 0.08\) \\ Target 4 & \(-245.63\pm 1.78\) & \(4795\pm 85\) & \(1.60\pm 0.07\) & \(-2.85\pm 0.10\) \\ Target 5 & \(-247.29\pm 1.63\) & \(4814\pm 100\) & \(1.61\pm 0.08\) & \(-2.30\pm 0.20\) \\ \hline \end{tabular}
\end{table}
Table 4: Stellar parameters of the five targets. [Fe/H] for Target 1 is from Fe i and Fe ii lines, while for the other stars is from Ca ii Triplet lines.
Figure 2: GRACES spectra for the five new UMi member stars. Left panel: Na i Doublet region. Chemical abundance ratios are measurable only for Target 1 given the low SNR of Targets 2–5. Right panel: The second component of the Ca ii Triplet. This spectral line is used to infer [Fe/H] (see Section 6).
### Model atmospheres
Model atmospheres are generated from the MARCS5 models (Gustafsson et al., 2008; Plez, 2012); in particular, we selected the OSMARCS spherical models as Target 1 is a giant with log(g)\(<3.5\). An initial model atmosphere was generated using the derived stellar parameters, a metallicity [Fe/H] \(=-2.0\), and microturbulence velocity scaled by the surface gravity from the calibration by Mashonkina et al. (2017) for giants.
Footnote 5: [https://marcs.astro.uu.se](https://marcs.astro.uu.se)
### The lines list and the atomic data
Spectral lines were selected from our previous analyses of very metal-poor stars in the Galactic halo and other nearby dwarf galaxies observed with GRACES (Kielty et al., 2021; Sestito et al., 2023; Waller et al., 2023). Atomic data is taken from linemake6(Placco et al., 2021), with the exception of K i lines taken from the National Institute of Standards and Technology (NIST, Kramida et al., 2021)7.
Footnote 6: [https://github.com/vmplacco/linemake](https://github.com/vmplacco/linemake)
Footnote 7: NIST database at [https://physics.nist.gov/asd](https://physics.nist.gov/asd)
### Spectral line measurements
Spectral line measurements are made using spectrum synthesis, broadened with a Gaussian smoothing kernel (FWHM \(=0.15\), which matches the resolution of the GRACES 2-fibre mode spectra), in a four-step process: (1) the synthesis of the [Fe/H] lines in our initial line list (see above) is carried out using an initial model atmosphere and invoking the MOOG spectrum synthesis program (Sneden, 1973; Sobeck et al., 2011); (2) a new [Fe/H] is determined by removing noisy lines; (3) the model atmosphere is updated with the new [Fe/H] as metallicity; (4) the chemical abundances are derived using the updated model atmosphere and our full line list. The final chemical abundance is given by the average measurement in the case of multiple spectral lines.
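A schematic of this loop is sketched below; `synthesize_and_fit` and `update_model` are hypothetical wrappers standing in for the MOOG syntheses and the regeneration of the MARCS model atmosphere, and the rejection threshold is illustrative.

```python
from statistics import mean

def four_step_abundances(initial_model, fe_lines, full_linelist,
                         synthesize_and_fit, update_model, min_significance=3.0):
    """Schematic of the four-step abundance procedure described in the text.

    synthesize_and_fit(line, model) -> (abundance, significance) and
    update_model(model, feh) -> model are caller-supplied (hypothetical) wrappers.
    """
    # (1) synthesize the Fe lines with the initial model atmosphere
    fe_fits = [synthesize_and_fit(line, initial_model) for line in fe_lines]
    # (2) drop noisy lines and recompute [Fe/H]
    feh = mean(ab for ab, signif in fe_fits if signif >= min_significance)
    # (3) regenerate the model atmosphere with the updated metallicity
    model = update_model(initial_model, feh)
    # (4) re-measure every species with the updated model, averaging multiple lines
    abundances = {species: mean(synthesize_and_fit(line, model)[0] for line in lines)
                  for species, lines in full_linelist.items()}
    return feh, abundances
```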
### Checking the stellar parameters
Excitation equilibrium in the line abundances of Fe i is a check on the quality of the effective temperature. For Target 1, the slope of A(Fe i) versus excitation potential (EP) from a linear fit is \(-0.027\) dex eV\({}^{-1}\). This value is smaller than the dispersion in the measured chemical abundances (\(\sim 0.2\) dex) over the range in EP (\(\sim\)4 eV). Thus, we conclude that our effective temperature estimates from the IRFM are sufficient.
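A minimal sketch of this excitation-equilibrium check, using mock Fe i line measurements with the line-to-line scatter quoted above:

```python
import numpy as np

def excitation_slope(excitation_potential_eV, line_abundances):
    """Slope of A(Fe i) versus excitation potential; a value near zero supports Teff."""
    slope, _intercept = np.polyfit(excitation_potential_eV, line_abundances, 1)
    return slope

# Mock Fe i lines with no underlying trend and ~0.2 dex line-to-line scatter
rng = np.random.default_rng(1)
ep = rng.uniform(0.0, 4.0, 29)                  # eV, spanning the quoted EP range
a_fe = 5.41 + rng.normal(0.0, 0.2, 29)          # A(Fe i) for [Fe/H] ~ -2.09
print(f"slope = {excitation_slope(ep, a_fe):+.3f} dex/eV")
```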
Ionization balance between Fe i \(-\) Fe ii is widely used as a sanity check on the surface gravity estimates (e.g., Mashonkina et al., 2017). However, Karovicova et al. (2020) have strongly advised against using this method for very metal-poor giants. They used interferometric observations of metal-poor stars to find radii, and subsequently precise stellar parameters for a set of metal-poor benchmark stars. With their stellar parameters, they have found that deviations in Fe i \(-\) Fe ii can reach up to \(\sim 0.8\) dex. This effect is the strongest in very metal-poor cool giants (e.g., [Fe/H]\(<-2.0\), log(g)\(<3\), and T\({}_{\rm eff}\lesssim 5500\) K), such as UMi Target 1 (see Table 4). If we examine A(Fe i) and A(Fe ii) in UMi Target 1, we find they differ by only \(1.43\sigma\) or \(0.16\pm 0.11\) dex. This value is consistent with ionization equilibrium, and also within the range in the discrepancies found by Karovicova et al. (2020) for cool giants. For these reasons, we refrain from tuning the surface gravity based on the Fe lines.
## 5 Chemical abundance analysis
This section describes the chemical abundances that we determine from the spectrum of Target 1. This includes an application of non-local thermodynamic equilibrium corrections, and a comparison with other UMi members and MW halo stars in the literature.
### \(\alpha-\)elements
\(\alpha\)-elements are primarily formed in the cores of massive stars and during the explosive phases of core-collapse supernovae (e.g., Timmes et al., 1995; Kobayashi et al., 2020). There are only three \(\alpha\)-elements which produce measurable lines in our GRACES spectrum of Target 1: Mg, Ca, and Ti. The A(Mg i) is from two lines of the Mg i Triplet (\(\lambda\lambda 5172.684,5183.60\)A) and the weaker 5528.405A line. The A(Ca i) is inferred from 13 spectral lines, from 5588A to 6500 A. Up to 12 and 9 lines of Ti i and Ti ii are useful to infer A(Ti i) and A(Ti ii), respectively (Lawler et al., 2013; Wood et al., 2013). The first row of panels in Figure 3 displays the [Mg, Ca, Ti/Fe] ratios as a function of [Fe/H]. Both the LTE and NLTE analyses are reported (see Section 5.5). Since both Ti i and Ti ii lines are present in the spectrum, [Ti/Fe] is the average weighted by the number of lines of each species.
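A small sketch of this line-weighted average, with illustrative numbers only:

```python
def line_weighted_mean(species_measurements):
    """Average abundance weighted by the number of lines of each species,
    e.g. combining A(Ti i) and A(Ti ii) into a single [Ti/Fe]."""
    total_lines = sum(n_lines for _, n_lines in species_measurements)
    return sum(abund * n_lines for abund, n_lines in species_measurements) / total_lines

# (mean abundance ratio, number of lines) for Ti i and Ti ii -- illustrative values
ti_fe = line_weighted_mean([(0.20, 12), (0.22, 9)])
```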
To highlight the strong Mg lines in UMi Target 1, we compare it to the metal-poor benchmark giant HD 122563 ([Fe/H]=\(-2.7\), Lind et al. (2022); Kielty et al. (2021)) in Figure 4.
### Odd-Z elements
Odd-Z elements are excellent tracers of metal-poor core-collapse supernovae due to the odd-even effect in the predicted yields (Heger & Woosley, 2010; Nomoto et al., 2013; Kobayashi et al., 2020; Ebinger et al., 2020). Three odd-Z elements are observable in our spectrum of Target 1: Na, K, and Sc. A(Na i) is measurable from the spectral lines of the Na i Doublet (\(\lambda\lambda 5889.951,5895.924\) A). K i is observable with two lines at \(\lambda\lambda 7664.899,7698.965\)A (Falke et al., 2006; Trubko et al., 2017). These lines are very close to water vapour lines of the Earth's atmosphere; however, the radial velocity for Target 1 places these lines in clear windows. Sc is measured from only one Sc ii line at \(\lambda 5526.785\) A (Lawler et al., 2019). The abundances of K and Sc have been measured with the synth configuration in MOOG, taking into account hyperfine splitting effects for Sc. The second row of panels of Figure 3 shows [Na, K, Sc/Fe] (LTE for all and also NLTE for Na).
### Fe-peak elements
Fe-peak elements are important tracers of stellar evolution. At early times, they were produced primarily in core collapse supernovae (e.g., Heger & Woosley, 2010), and then later in supernova Ia events (e.g., Nomoto et al., 2013). The Fe-peak elements observable in our GRACES spectra include Fe, Cr and Ni. The A(Fe i) is from 29 lines, while A(Fe ii) is from only 3 lines. Our final [Fe/H] values are the average measurements weighted by the number of lines per star. A(Cr i) is measured from 3 spectral lines (\(\lambda\lambda 5296.691,5345.796,5409.783\)A, Sobeck et al., 2007), while Ni i is found from four lines (\(\lambda\lambda 5476.904,5754.656,6586.31,6643.63\) A, Wood et al., 2014). The left and centre panels of the third row of Figure 3 show [Cr/Fe] (LTE and NLTE) and [Ni/Fe] (LTE) as a function of [Fe/H].
### Neutron-capture process elements
Neutron-capture elements are primarily synthesised through two main channels, the rapid and the slow neutron-capture processes. If the neutron-capture timescale is shorter than the \(\beta^{-}\) decay time, then rapid-process elements are formed. Conditions where this is most likely to happen are found in core-collapse supernovae and neutron-star mergers. Otherwise, as in the stellar atmospheres of AGB stars, where neutron fluxes are lower and less energetic, the \(\beta^{-}\) decay timescale is shorter, leading to the production
Figure 3: Chemical abundances for stars in UMi. Target 1 is marked with a red diamond (LTE) and with an orange diamond (NLTE). UMi stars from the high-resolution observations from literature are denoted with magenta diamonds. The literature compilation is from Shetrone et al. (2001), Sadakane et al. (2004), Cohen & Huang (2010), Kirby & Cohen (2012), and Ural et al. (2015) and it is in LTE. Grey open circles mark MW halo stars compiled from Aoki et al. (2013), Yong et al. (2013), Kielty et al. (2021), and Buder et al. (2021). The black cross at the corner of each panel represents the typical uncertainty on the UMi literature chemical abundances.
via the slow neutron-capture process. The only neutron-capture element present in our GRACES spectra is Ba, with two Ba ii lines (\(\lambda\lambda 6141.73,6496.91\) A). To infer the A(Ba ii), MOOG has been run with the synth configuration to account for the hyperfine structure corrections. The bottom right panel of Figure 3 displays [Ba/Fe] (LTE and NLTE) as a function of [Fe/H].
### NLTE corrections
The elemental abundances in the atmospheres of very metal-poor stars are affected by departures from Local Thermodynamic Equilibrium (LTE). Thus, the statistical equilibrium solutions need to be corrected for radiative effects (non-LTE effects, or "NLTE"), which can be large for some species. To correct for NLTE effects in Fe (Bergemann et al., 2012) and Na i (Lind et al., 2012), we adopted the results compiled in the INSPECT9 database. The NLTE corrections for Mg i (Bergemann et al., 2017), Ca i (Mashonkina et al., 2017), Ti i and Ti ii (Bergemann, 2011), and Cr i (Bergemann & Cescutti, 2010) are from the MPIA webtool database10. For Ba ii lines, we adopted the NLTE corrections from Mashonkina & Belyaev (2019), also available online11.
Footnote 9: [http://inspect-stars.com](http://inspect-stars.com)
Footnote 10: [http://nlte.mpia.de](http://nlte.mpia.de)
Footnote 11: [http://www.inasan.ru/~lima/pristine/ba2/](http://www.inasan.ru/~lima/pristine/ba2/)
### Uncertainty on the chemical abundances
The uncertainty on element X is given by \(\sigma_{\rm A(X)}=\delta_{\rm A(X)}/\sqrt{\rm N_{X}}\) if the number of the measured spectral lines is N\({}_{\rm X}>5\), or \(\sigma_{\rm A(X)}=\delta_{\rm A(Fe\ \textsc{i})}/\sqrt{\rm N_{X}}\) otherwise. Given the SNR across the observed combined spectrum of Target 1, the uncertainty on the chemical abundance ratios is in the range \(0.10\leq\sigma_{\rm[X/Fe]}\leq 0.24\). This range for the uncertainty is compatible with the ones measured by Kielty et al. (2021) and Waller et al. (2023), in which they use a similar observational setup with GRACES to study chemical abundances of very metal-poor giant stars.
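A minimal sketch of this rule:

```python
import math

def abundance_uncertainty(species_scatter, n_lines, fe1_scatter):
    """sigma_A(X) = delta_A(X)/sqrt(N_X) if N_X > 5, else delta_A(Fe i)/sqrt(N_X)."""
    scatter = species_scatter if n_lines > 5 else fe1_scatter
    return scatter / math.sqrt(n_lines)

# e.g. a species with only 3 measured lines falls back on the Fe i line-to-line scatter
sigma_cr = abundance_uncertainty(species_scatter=0.30, n_lines=3, fe1_scatter=0.20)
```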
### Elemental abundance compilation from the literature
UMi is an interesting and nearby dwarf galaxy that has had extensive observations of stars in its inner regions. We have gathered the elemental abundance results from optical high-resolution observations of stars in Ursa Minor from the literature. This compilation is composed of 21 stars in total, including Shetrone et al. (2001, 4 stars), Sadakane et al. (2004, 3 stars), Cohen & Huang (2010, 10 stars), Kirby & Cohen (2012, 1 star), and Ural et al. (2015, 3 stars). All of these studies provide 1D LTE chemical abundances.
We also compare the chemistry of the stars in UMi with those in the MW halo from a compilation including Aoki et al. (2013); Yong et al. (2013); Kielty et al. (2021); Buder et al. (2021). The stars from Buder et al. (2021) are from the third data release of GALactic Archaeology with HERMES (GALAH, De Silva et al., 2015; Buder et al., 2021) collaboration. We select GALAH stars to be in the halo, with reliable metallicities (flag_fe = 0), chemical abundances (flag_X_fe = 0), and stellar parameters (flag_sp = 0).
Figure 4: Mg i 5528Å region. The Mg-rich spectrum of Target 1 (black solid line) is compared with the standard VMP star HD122563 (black dashed line, [Fe/H] \(\sim-2.7\), [Mg/Fe] \(\sim+0.3\); Lind et al. 2022; Kielty et al. 2021) and three synthetic spectra with [Mg/Fe] \(=+0.5,+0.8,+1.0\) (light blue, yellow, and pink shaded areas, respectively). Synthetic spectra have been generated using the _synth_ mode in MOOG (Sneden, 1973) with the line list from linemake (Placco et al., 2021). Model atmospheres are from MARCS (Gustafsson et al., 2008; Plez, 2012). The synthetic spectra are created at the same resolution as GRACES and with the stellar parameters and metallicity of Target 1.
\begin{table}
\begin{tabular}{l r r r r} \hline Ratio & LTE & \(\sigma\) & N\({}_{\rm lines}\) & NLTE \\ & (dex) & (dex) & & (dex) \\ \hline
[Fe/H] & \(-2.09\) & \(0.09\) & \(29\)\(+3\) & \(-1.98\) \\
[Mg/Fe] & \(0.86\) & \(0.20\) & \(3\) & \(0.75\) \\
[Ca/Fe] & \(0.12\) & \(0.11\) & \(13\) & \(0.07\) \\
[Ti/Fe] & \(0.21\) & \(0.12\) & \(12\)\(+9\) & \(0.27\) \\
[Na/Fe] & \(-0.44\) & \(0.24\) & \(2\) & \(-0.82\) \\
[K/Fe] & \(0.40\) & \(0.10\) & \(2\) & \(--\) \\
[Sc/Fe] & \(0.15\) & \(0.10\) & \(1\) & \(--\) \\
[Cr/Fe] & \(-0.06\) & \(0.24\) & \(3\) & \(0.14\) \\
[Ni/Fe] & \(-0.04\) & \(0.18\) & \(4\) & \(--\) \\
[Ba/Fe] & \(-1.00\) & \(0.15\) & \(2\) & \(-1.13\) \\ \hline \end{tabular}
\end{table}
Table 5: Chemical abundances of Target 1. The LTE and NLTE ratios are reported together with the \(\sigma\) and the number of lines for each measured species. For Fe and Ti we report the number of lines for both the neutral and the singly-ionised species.
## 6 Metallicities from the NIR Ca ii T lines
For our UMi Targets 2-5 observed in low-SNR mode, metallicities are derived from the NIR Ca ii T lines. We follow the method described in Starkenburg et al. (2010) with some minor modifications. Starting with their Equation A.1:
\[\mathrm{[Fe/H]}=a+b\cdot\mathrm{Mv}+c\cdot\mathrm{EW}_{2+3}+d\cdot\mathrm{EW}_{2+3}^{-1.5}+e\cdot\mathrm{EW}_{2+3}\cdot\mathrm{Mv}, \tag{1}\]
where \(\mathrm{Mv}\) is the absolute V magnitude of the star, \(\mathrm{EW}_{2+3}\) is the sum of the equivalent widths of the Ca ii \(\lambda\lambda 8542.09,8662.14\) A lines, and \(a,b,c,d,e\) are the coefficients listed in Table A.1 of Starkenburg et al. (2010). \(\mathrm{Mv}\) is derived by converting the _Gaia_ EDR3 magnitudes to the Johnson-Cousins filter following the relation from Riello et al. (2021, see their Table C.2 for the coefficients) and adopting a heliocentric distance of \(76\pm 10\) kpc (e.g., McConnachie, 2012). Our minor modification is due to the fact that the third component of our Ca ii T spectra is contaminated by sky lines. Therefore, \(\mathrm{EW}_{2+3}\) is derived assuming that the EW ratio between the second and the third Ca ii T lines is \(\mathrm{EW}_{8542}/\mathrm{EW}_{8662}=1.21\pm 0.03\), in agreement with Starkenburg et al. (2010, see their Figure B.1). The EW of the Ca ii 8542 A line is measured using the splot routine in IRAF (Tody, 1986, 1993), fitting the line with multiple profiles. The median and the standard deviation have been adopted as final values for the EW and its uncertainty. We perform a Monte Carlo test with \(10^{6}\) randomisations on the heliocentric distance, the \(\mathrm{EW}_{8542}/\mathrm{EW}_{8662}\) ratio, and the de-reddened magnitudes, assuming a Gaussian distribution. The final [Fe/H] and its uncertainty are the median and the standard deviation from the randomisations, respectively.
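A sketch of this calibration and of the Monte Carlo propagation is given below; the coefficients \(a\)–\(e\) must be taken from Table A.1 of Starkenburg et al. (2010), and the numerical values used here (coefficients, photometry, and EW) are placeholders for illustration only.

```python
import numpy as np

# Placeholder coefficients -- the real values are in Table A.1 of Starkenburg et al. (2010)
A, B, C, D, E = -2.9, 0.2, 0.45, -0.9, 0.015

def feh_from_caT(ew_8542, m_v, ew_ratio=1.21):
    """[Fe/H] from Equation (1), using only the 8542 A line.

    EW_{2+3} = EW_8542 * (1 + 1/ew_ratio), replacing the sky-contaminated
    third Ca ii T component with the adopted EW_8542/EW_8662 ratio.
    """
    ew23 = ew_8542 * (1.0 + 1.0 / ew_ratio)
    return A + B * m_v + C * ew23 + D * ew23**-1.5 + E * ew23 * m_v

# Monte Carlo over distance, EW ratio, EW, and photometry (illustrative inputs)
rng = np.random.default_rng(3)
n = 10**6
dist_pc = rng.normal(76.0, 10.0, n) * 1.0e3
v0      = rng.normal(18.6, 0.03, n)                 # de-reddened Johnson V (placeholder)
m_v     = v0 - 5.0 * np.log10(dist_pc) + 5.0
ratio   = rng.normal(1.21, 0.03, n)
ew_8542 = rng.normal(2.4, 0.15, n)                  # Angstroms (placeholder)
feh = feh_from_caT(ew_8542, m_v, ratio)
print(f"[Fe/H] = {np.median(feh):.2f} +/- {np.std(feh):.2f}")
```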
Although Starkenburg et al. (2010) proved that this metallicity calibration is reliable and compatible with high-resolution studies, we use Target 1 to check for a possible offset in [Fe/H]. Given the different SNR between Target 1 (\(\sim 35\) at Ca ii T) and the other targets (\(\sim 8-15\) at Ca ii T), the spectrum of Target 1 has been degraded to match the SNR of the other targets. Its metallicity from Ca ii T is \(\mathrm{[Fe/H]}_{\mathrm{CoT}}=-2.34\pm 0.26\), compatible within \(0.9\sigma\) with the metallicity inferred from Fe lines (\(\mathrm{[Fe/H]}=-2.09\pm 0.09\)). The SNR of the Ca ii T region in the observed spectra is sufficient to obtain an uncertainty on the metallicity in the range \(0.08\leq\sigma_{\mathrm{[Fe/H]}}\leq 0.20\).
Table 4 reports the inferred metallicities together with the stellar parameters and radial velocities. Figure 5 displays the metallicities and radial velocities of our targets and known UMi members (Spencer et al., 2018; Pace et al., 2020, APOGEE DR17) as a function of their elliptical distances (left panels); the [Fe/H] vs. RV space and their histograms (central and right panels). The five targets have metallicities and radial velocities compatible with the UMi distributions, therefore we identify them as new members of UMi.
## 7 Orbital parameters
In this section, we test the gravitational potential used so far for kinematical studies in the disk and the halo of the Milky Way (e.g., Sestito et al., 2019, 2020; Lucchesi et al., 2022). We make use of Galpy12(Bovy, 2015) to infer the pericentric, apocentric, and Galactocentric distances of Ursa Minor. The choice of the isolated gravitational potential and of all the other assumptions (e.g., the distance and motion of the Sun), the orbital integration time, and the derivation of the uncertainties mirror the method fully described in Sestito et al. (2019). The code is run on the sample of stars from Spencer et al. (2018), Pace et al. (2020), and our five new targets. The system's orbital parameters are obtained from the median of the sample. The uncertainties on the system parameters are derived by dividing the dispersion by the square root of the number of stars in the sample. The inferred quantities are compared with the values from the literature (Li et al., 2021; Battaglia et al., 2022; Pace et al., 2022), in which a variety of MW gravitational potentials were adopted. In particular, Li et al. (2021) make use of four isolated MW gravitational potentials, one with an NFW dark matter halo (PNFW) and three with Einasto profiles (PEHM, PEIM, and PELM). Battaglia et al. (2022) adopted two isolated MW profiles (LMW and HMW) and one perturbed by the passage of the Large Magellanic Cloud (PMW). Pace et al. (2022) used two gravitational potentials, one in which the MW is isolated (MW), and the other perturbed by the LMC (MW+LMC). Both Battaglia et al. (2022) and Pace et al. (2022) make use of NFW dark matter profiles.
Footnote 12: [http://github.com/jobovy/galpy](http://github.com/jobovy/galpy)
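For orientation, a minimal orbit-integration sketch with galpy is shown below, using the systemic values of Table 1; the bundled MWPotential2014 potential is only a stand-in for the potential of Sestito et al. (2019) adopted in the text, so the resulting pericentre and apocentre will differ from the values quoted here.

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Systemic values for Ursa Minor (Table 1): ra, dec (deg), distance (kpc),
# mu_alpha*cos(delta), mu_delta (mas/yr), heliocentric radial velocity (km/s)
umi = Orbit([227.2854, 67.2225, 76.0, -0.124, 0.078, -246.9],
            radec=True, ro=8.0, vo=220.0)

ts = np.linspace(0.0, 8.0, 4001) * u.Gyr
umi.integrate(ts, MWPotential2014)   # stand-in potential, not the one used in the text

print(f"R_peri = {umi.rperi():.1f} kpc, R_apo = {umi.rap():.1f} kpc")
```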
Figure 6 displays our results for the apocentric (red shaded area), pericentric (green shaded area), and Galactocentric (blue shaded area) distances in comparison with values from the aforementioned gravitational potentials from the literature. For the literature, we report the pericentric (green diamonds) and apocentric (red diamonds) distances. The Galactocentric position of UMi is closer to the apocentre, while the blue arrow indicates the system is moving towards its pericentre.
The inferred orbital parameters are in broad agreement with the results from the variety of gravitational potentials adopted in the literature so far. In particular, the apocentre (\(R_{\mathrm{apo}}=92.67^{+2.17}_{-0.41}\) kpc) is similar to the ones inferred assuming a more massive dark matter halo, such as the PEHM from Li et al. (2021), the HMW from Battaglia et al. (2022), or the MW+LMC and MW from Pace et al. (2022), while the pericentric distance (\(R_{\mathrm{peri}}=57.23^{+0.48}_{-0.83}\) kpc) is very different from the ones inferred with the PMW from Battaglia et al. (2022) and the PEIM and PELM from Li et al. (2021). The pericentre variation is narrower among the different potentials, although our inference is in poorer agreement with the HMW and PMW values from Battaglia et al. (2022).
## 8 Discussion
In this section, we discuss the membership of the five targets observed with GRACES and the chemo-dynamical properties of the dwarf galaxy, Ursa Minor.
### The stellar halo of a dwarf: five new far outlying members of UMi
Radial velocities and metallicities for the five new targets were measured above, where [Fe/H] is from Fe i and Fe ii lines in the case of Target 1 (see Section 5), while [Fe/H] is inferred through the NIR Ca ii Triplet lines for Targets 2-5 (see Section 6). Figure 5 clearly shows that these five stars lie within the distributions in metallicity and radial velocity of Ursa Minor. The chemical properties of Target 1 shown in Figure 3 support its membership in UMi. The slightly higher [Mg/Fe] and the low [Ba/Fe] are interesting; however, their values are in agreement with at least one other member of UMi.
To further exclude the possibility that these 5 targets are halo interlopers, we run the Besançon simulation (Robin et al., 2003, 2017) of the MW halo. We select all the stellar particles in the direction of UMi. Out of the 300 stellar particles produced, only 21 inhabit the same proper motion region as in Figure 1. Within this sample, only 4 stellar particles lie in the highest RV range of UMi, \(-250<\) RV \(<-210\) km s\({}^{-1}\). All of them have [Fe/H] \(>-1.5\), and 2 of them have [Fe/H] \(>-1.0\). The latter 2 stellar particles are outside the metallicity range of UMi, while the former 2 have photometry that differs by 2 magnitudes in the G band from UMi stars at the same BP \(-\) RP colour. Therefore, none of our targets, nor more generally the known UMi members, are reproduced by the Besançon MW halo simulation. This is another indication that Targets 1-5 are not foreground stars, but rather new UMi members.
Previously, it was shown that UMi is more elongated (\(\epsilon_{\rm UMi}=0.55\)) than other classical satellites (\(\epsilon<0.45\), Munoz et al., 2018). The most distant member had been located near \(\sim 5.5r_{h}\). With our results, Ursa Minor extends out to a projected elliptical distance of \(\sim 12r_{h}\), or \(\sim 4.5\) kpc (projected) from its centre. This distance is close to the tidal radius inferred by Pace et al. (2020), \(5-6\) kpc.
Errani et al. (2022) analysed the dynamical properties of many satellites of the MW in terms of their dark matter content and distribution. The authors show that the dynamical properties of UMi are compatible with the \(\Lambda\)-CDM model if tidal stripping effects are taken into account. The finding of a member at \(\sim 12r_{h}\) and the multiple apocentric and pericentric passages reinforce the idea that UMi is strongly dominated by tidal stripping. In fact, as shown in the left panel of Figure 1, the proper motion of UMi is almost parallel to the semi-major axis of the system. In addition, supernova feedback can play a role in pushing members to the extreme outskirts of their host galaxy. These scenarios have also been proposed to explain the extended structure of the Tucana II ultra-faint dwarf galaxy (Chiti et al., 2021). The authors discuss a third possible scenario which involves mergers of UFDs. We discuss and rule out the merger hypothesis for UMi in Section 8.6.
Figure 5: Distribution of UMi stars. Left panels: Radial velocities (top) and metallicities (bottom) as a function of the elliptical distance. Central panel: distribution of UMi stars in the [Fe/H] vs. RV space. Corner plots: histograms of the RV (top) and metallicity (right) distributions of UMi stars. Target 1 is marked with a red diamond, while Targets 2–5 are displayed with black diamonds. Magenta dots are the compilation of stars from Spencer et al. (2018) and Pace et al. (2020). Blue squares are UMi members selected from APOGEE DR17.
### Contributions from Supernovae Type Ia
The contribution of SNe Ia in UMi is still under debate (e.g., Ural et al., 2015, and references therein). The flat distributions in the \(\alpha-\) and Fe\(-\)peak elements shown in Figure 3 are consistent with no contribution from SNe Ia, with the exception of the most metal-rich star, COS171 (Cohen and Huang, 2010). While this lone star might draw the eye to the conclusion of a possible \(\alpha-\)knee, i.e., the rapid change in the slope of the \(\alpha-\)elements from a plateau to a steep decrease, it is really the [Na, Ni/Fe] (and likely [Ti, Sc/Fe]) ratios that favour the steep decrease and suggest contributions from SNe Ia. In support, McWilliam et al. (2018) re-analysed COS171, showing that its [Mn, Ni/Fe] ratios do indicate SN Ia contributions, but from sub-Chandrasekhar-mass degenerate stars, i.e., \(\sim 0.95\,\rm M_{\odot}\).
Alternatively, one of the more metal-rich stars, COS347 (\(\rm[Fe/H]=-1.63\), Sadakane et al., 2004), is slightly enriched in Mg, Ca, Ti, and Na compared to the stars at the same metallicity. This may suggest that at higher metallicities there is a large scatter in chemical abundance ratios, rather than a decrease with metallicity as expected from enrichment by SNe Ia.
To investigate more thoroughly the contribution of SNe Ia above \(\rm[Fe/H]\gtrsim-2\), we explore APOGEE DR17 (Abdurro'uf et al., 2022). The selection of UMi members from this dataset is described in Section 2.1. We choose Mg and O as amongst the most reliable species13. Spectral lines of O are well-measured in the infrared (APOGEE) spectra, while in the optical they are hard to measure (e.g., weak lines, [O i]\(\lambda\lambda 6300,6363\) A) or strong lines also suffer from large NLTE effects (e.g., the O i T \(\lambda\lambda 7772,7774,7775\) A). The optical and APOGEE chemical abundance results are shown in Figure 7, and compared with MW halo stars from APOGEE and GALAH (optical, Buder et al., 2021). With the addition of reliable [O/Fe] from APOGEE, the presence of a plateau to \(\rm[Fe/H]\lesssim-2.1\) and then a steeper decrease, i.e., a knee, is more clearly seen. This decrease, now observed in several \(\alpha\)-elements, indicates contributions from SNe Ia. A deeper analysis of the APOGEE spectra in terms of the chemo-dynamical analyses of dwarf galaxies is currently under investigation, Shetrone et al. (2023, in prep.). This study will also quantify any offsets between optical and infrared measurements, as seen in Figure 7 for [Mg/Fe].
Footnote 13: [https://www.sdss4.org/dr17/irspec/abundances](https://www.sdss4.org/dr17/irspec/abundances)
The metallicity at which the knee occurs (\(\rm[Fe/H]_{knee}\)), is correlated with the time when SNe Ia begin to contribute to the chemical evolution of a galaxy. This time is also dependent on the star formation efficiency, which is expected to be lower in dwarf galaxies (e.g., Matteucci, 2003; Tolstoy et al., 2009). Recently, Theler et al. (2020) discussed that the slope of the knee-decrease is governed by the balance between the amount of metals ejected by SNe Ia vs. SNe II. Therefore, a smaller slope indicates an extended star formation rather than a sharply quenching galaxy (Theler et al., 2020). On the theoretical side, Revaz and Jablonka (2018) developed cosmological zoom-in simulations that are able to reproduce most of the observable quantities of dwarf galaxies, e.g., velocity dispersion profiles, star formation histories, stellar metallicity distributions, and [Mg/Fe] abundance ratios. Similarly, the FIRE simulations (e.g., Hopkins et al., 2014) have been used to (a) reproduce the star formation histories of the MW satellites (Escala et al., 2018), and (b) reproduce the properties and numbers of ultra-faint dwarf galaxies (Wheeler et al., 2015). These models suggest that a higher \(\rm[Fe/H]_{knee}\) is attained when the star formation is more efficient and the system can retain the metals. Given the value of \(\rm[Fe/H]_{knee}\sim-2.1\), then the low star formation efficiency of UMi appears to be similar to measurements in other dwarf galaxies (e.g., Reichert et al., 2020; Tolstoy et al., 2009; Simon, 2019), and much less efficient than in the MW, where \(\rm[Fe/H]_{knee}\sim-0.5\), (e.g., Venn et al., 2004; Haywood et al., 2013; Buder et al., 2021; Recio-Blanco et al., 2022).
### Presence of rapid- and slow-neutron capture processes
To examine the contributions from SNe II in UMi, we examine the distribution in [Ba/Mg] vs. [Mg/H] in Figure 8. At very low-metallicities, if Ba is produced by the r-processes (see the review by Cowan et al., 2021, and references therein), then a tight and flat distribution will be visible, i.e., a Ba-floor, also shown in Mashonkina et al. (2022). This seems to be the case for UMi stars with [Mg/H]\(<-2.0\), including Target 1. A spread in [Ba/Mg] that is significantly larger than a \(3\sigma\) error, and subsequent rise from a presumed Ba-floor,
Figure 6: Orbital parameters for Ursa Minor. The green, red, and blue vertical bands are the pericentric (\(R_{\rm peri}=57.23^{+0.48}_{-0.83}\) kpc), apocentric (\(R_{\rm apo}=92.67^{+2.17}_{-0.41}\) kpc), and Galactocentric distances (\(R_{\rm GC}=77.55^{+0.02}_{-0.03}\) kpc) inferred in this work. To infer the orbital parameters, we use the Spencer et al. (2018) and Pace et al. (2020) compilation. Vertical lines are their median values, while shaded areas are the intervals between the 0.16 and 0.84 quantiles. The blue horizontal arrow departing from the vertical line of the Galactocentric distance represents the direction of the Galactocentric radial velocity. Pericentric and apocentric distances from the literature are represented by green and red points, respectively. Tick labels on the y axis indicate the studies from which the parameters have been taken: the L21 potentials are from Li et al. (2021), the B22 are from Battaglia et al. (2022), and the P22 are from Pace et al. (2022).
is interpreted as Ba contributions from metal-poor asymptotic giant branch stars (AGB), via slow neutron-captures (s-process, e.g., Pignatari et al., 2008; Cescutti & Chiappini, 2014). This chemical behaviour is also visible in the bottom panel of Figure 8, in which we report the [Ba/Fe] vs. [Fe/H] (as in Figure 3) as a check that our interpretation is not biased by measurements of Mg.
Based on an overabundance of [Y/Ba] observed in UMi stars at very low metallicities, [Fe/H]\(<-2.5\), Ural et al. (2015) have suggested that there are also contributions from spinstars (e.g., Cescutti et al., 2013) at the earliest epochs. Spinstars are fast rotating massive stars (25-40 \(\,{\rm M}_{\odot}\)) that produce s-process elements from neutron rich isotopes in their atmospheres (e.g., Cescutti & Chiappini, 2014). Unfortunately, our GRACES spectra are insufficient (SNR too low for the weak Y ii lines) to determine an abundance for [Y/Ba], including our spectrum of Target 1.
### No trace of pair-instability supernovae
Pair-instability supernovae (PISNe) are produced by very metal-poor, very massive stars (\(>120\,{\rm M}_{\odot}\)), predicted to be amongst the first stars. PISNe produce a strong odd-even effect in the yields, with no neutron-capture process elements above the mass cut (Takahashi et al., 2018). The odd-even effect leads to a high [Ca/Mg] and low [Na/Mg] (green shaded area in Figure 9). Yields of PISNe, coupled with other SNe II predicted from a normal initial mass function, have been estimated by Salvadori et al. (2019), and are shown by a slightly higher [Na/Mg] ratio (red shaded area in Figure 9). There is no trace of PISN yields, nor of combined PISNe + SNe II yields, in Ursa Minor.
### The Chemistry of Target 1
The detailed chemistry of Target 1 may provide a glimpse into the early star formation events in UMi. It stands out in [Ba/Mg] with unusually low Ba for a star in UMi or the MW (see Figure 8). It also appears to be lower in [Na/Mg] and [Ca/Mg] than the other stars in UMi and the MW; see Figure 9. This is partially due to the higher [Mg/Fe] compared to other UMi stars. These low abundances relative to Mg, in combination with the small amount of Ba even at relatively high metallicities ([Fe/H]\(\sim-2\)), have been found in some stars of Coma Berenices (Frebel & Bromm, 2012), Segue 1 (Frebel et al., 2014), Hercules (Koch et al., 2008, 2013; Francois et al., 2016), and the Milky Way (e.g., Sitnova et al., 2019; Kielty et al., 2021; Sestito et al., 2022). This particular chemical pattern has been interpreted as the contribution from only one or a few low-mass core-collapse SNe II (CCSNe), known as the "one-shot" model (Frebel & Bromm, 2012). We explore a variety of core-collapse supernova yields to compare to our chemical abundances in Target 1 to test this "one-shot" model hypothesis.
Various sets of SNe II yields are available in the literature. We choose to compare the chemistry of Target 1 against the widely used faint SNe II yields from Nomoto et al. (2013) and the recent ones from Ebinger et al. (2020). We include this additional comparison as the yields from Nomoto et al. (2013) are predicted only up to proton number 32, whereas the yields from Ebinger et al. (2020) reach heavier elements up
Figure 7: UMi chemical abundances from APOGEE DR17 (Abdurro’uf et al., 2022). Blue squares are stars from APOGEE with high SNR (\(>70\)) and very likely to be UMi members (Psat\(>70\) percent) according to our algorithm. UMi stars from the literature are marked with magenta squares, while magenta triangles denote their upper limits. Target 1 is marked with a red (LTE) and orange (NLTE) diamond. Cyan open circles are MW stars from APOGEE with high SNR (\(>70\)) and good Gaia EDR3 parallax measurements (\(\varpi/\delta_{\varpi}>15\)). Grey open circles are MW stars from GALAH (Buder et al., 2021) selected as in Figure 3. Typical uncertainties are denoted with blue and magenta crosses for APOGEE (infrared NLTE) and literature stars (high-resolution optical LTE), respectively. An offset in [Mg/Fe] between the optical LTE and infrared NLTE measurements is under investigation by the APOGEE team (Shetrone et al., 2023, in prep.).
to proton number 60. Another difference is how the energy of the supernova explosion is parametrized. While Nomoto et al. (2013) fixed the energy to the value of \(10^{51}\) erg, this is treated as a free parameter by Ebinger et al. (2020), in which it spans from 0.2 to 2.0 \(\times 10^{51}\) erg and varies with the progenitor mass. Both of them use non-rotating models. The spatial symmetry of the explosion is also modelled differently. Nomoto et al. (2013) employed the so-called mixing and fallback model, which implies the presence of polar jets and fallback material around the equatorial plane. On the other hand, Ebinger et al. (2020) adopted spherical symmetry. When comparing the yields from Nomoto et al. (2013) with Target 1, the chemistry of this star is well described by pollution from a low-mass faint CCSN (\(\sim 30\,\mathrm{M}_{\odot}\)). In contrast, we are not able to reproduce the chemistry of Target 1 when comparing to the yields from Ebinger et al. (2020). Their predictions at all masses are higher than our observations for the majority of elements, and we cannot reproduce their strong odd-even effect, with the exception of [Ba/Mg]. This is the only ratio we can reproduce adopting a progenitor mass \(25\leq\mathrm{M}_{\mathrm{prog}}\leq 30\,\mathrm{M}_{\odot}\).
As Target 1 is very far from the UMi central body, we suggest it may have formed just after the contributions from
Figure 8: Top panel: [Ba/Mg] vs. [Mg/H] space. Bottom panel: [Ba/Fe] vs. [Fe/H] as in Figure 3. Target 1 is denoted with a red (LTE) and an orange (NLTE) diamond. Literature UMi stars (magenta diamonds) are from Shetrone et al. (2001), Sadakane et al. (2004), Cohen and Huang (2010), Kirby and Cohen (2012), and Ural et al. (2015). Literature MW halo compilation (grey open circles) from Aoki et al. (2013), Yong et al. (2013), Kielty et al. (2021), and Buder et al. (2021). The black cross at the upper left corner represents the typical uncertainty on the UMi literature chemical abundances.
Figure 9: PISNe yields space. Target 1 is marked with a red and an orange diamond for LTE and NLTE, respectively. The green band is the region of stars polluted by PISNe alone (Takahashi et al., 2018). The red zone is the locus in which the stars would have been polluted by PISNe and SN II as in Salvadori et al. (2019). For the latter case, we show the yields relative to a PISNe to SN II ratio between 0.5 and 0.9 (see Figure 6 from Salvadori et al. 2019). Literature UMi stars (magenta diamonds) from Shetrone et al. (2001), Sadakane et al. (2004), Cohen and Huang (2010), Kirby and Cohen (2012), and Ural et al. (2015). Literature MW halo compilation (grey open circles) from Aoki et al. (2013), Yong et al. (2013), Kielty et al. (2021), and Buder et al. (2021). The black cross at the corner represents the typical uncertainty on the UMi literature chemical abundances.
low-mass SN II and was exiled by supernova feedback and/or tidal forces by pericentric passage(s) with the Galaxy. A deeper analysis of chemistry (heavy elements) of the newly discovered members in the APOGEE survey, i.e., those located between the central body and Target 1 and, more generally the kinematical characterisation of the UMi halo, could help to clarify this picture.
### Outside-in star formation vs. late-time merger
Pace et al. (2020) measured radial velocities and metallicities of likely UMi members selected from _Gaia_ DR2 within 2 half-light radii. They interpreted the spatial distribution of the stars as composed of two populations with different chemo-dynamical properties: a more metal-rich (\(\rm[Fe/H]=-2.05\pm 0.03\)), kinematically colder (\(\rm\sigma_{RV}=4.9\pm 0.8\,km\ s^{-1}\)), and centrally concentrated (\(r_{h}=221\pm 17\) pc) population, and a metal-poor, kinematically hotter, and more extended (\(\rm[Fe/H]=-2.29\pm 0.05\), \(\rm\sigma_{RV}=11.5\pm 0.9\,km\ s^{-1}\), \(r_{h}=374\pm 49\) pc) population. Pace et al. (2020) discussed that the two metallicity distributions in UMi are much closer than in other dwarf spheroidal galaxies (dSphs) found so far.
Benitez-Llambay et al. (2016) and Genina et al. (2019) proposed that dwarf-dwarf mergers are the cause of the multiple populations in dSphs. Therefore, Pace et al. (2020) concluded that UMi underwent a late-time merger event between two dwarfs with very similar chemical and physical properties. However, Genina et al. (2019) also pointed out that kinematic and spatial information alone are insufficient to disentangle the formation mechanisms of multi-populations. Additional evidence from precise chemical abundances and star formation histories is needed; such data were not included in the study by Pace et al. (2020).
In this paper, we propose an alternative scenario to explain the chemo-dynamical properties of the two populations in Ursa Minor. An outside-in star formation history can also be used to describe the properties of low mass systems, such as dwarf galaxies (Zhang et al., 2012). Briefly, the extended metal-poor population (\(\rm[Fe/H]\lesssim-2.0\)) formed everywhere in the dwarf, such that the relatively younger stars populate the centre of the galaxy at times when SNe Ia begin to contribute (e.g., Hidalgo et al., 2013; Benitez-Llambay et al., 2016). This enhances the metallicity only in the central region, giving the galaxy a non-linear metallicity gradient.
In support of our simpler interpretation, the distributions in the chemical elements over a wide range in metallicity suggest a common path amongst the stars in UMi. UMi stars are polluted by low-mass CCSNe (e.g., their low [Ba/Fe, Mg] and [Na, Ca/Mg]), they show an SNe Ia knee at \(\rm[Fe/H]\sim-2.1\) with a contribution from AGB stars also visible in the more metal-rich stars, and they display a low dispersion in [Ca/Mg] from star to star over 2 dex in metallicity.
Furthermore, Revaz & Jablonka (2018) used a cosmological zoom-in simulation to show that the kinematics in UMi are consistent with secular heating in the central region of the satellite without invoking late-time mergers. Thus, a more simple scenario of outside-in star formation is consistent with the chemical, structural, and kinematic properties of UMi, and we suggest these do not necessarily require a late-time merger event.
Figure 10: Chemistry of Target 1 in the CCSNe yields space. Top panel: EMP (\(\rm[Fe/H]=-3.0\)) CCSNe yields from Nomoto et al. (2013). Central panel: UMP (\(\rm[Fe/H]=-4.0\)) CCSNe yields from Ebinger et al. (2020) in the same proton number range as the top panel. Bottom panel: same as the central panel but for all the species predicted by Ebinger et al. (2020). The legend indicates the model’s name, in which the number is the progenitor’s mass in \(\rm M_{\odot}\) at the ZAMS. The darker the line, the heavier the mass. Progenitor masses for the models from Ebinger et al. (2020) are predicted up to \(\rm 30\,M_{\odot}\), while Nomoto et al. (2013) modelled the yields up to \(\rm 100\,M_{\odot}\).
## 9 Conclusions
A new Bayesian algorithm was used to find new members in the very extreme outskirts of the dwarf galaxy Ursa Minor. Five targets were selected for high-resolution spectroscopy with GRACES at Gemini North. For all five stars, we determine precise radial velocities and metallicities; for the brightest and farthest target in projection (Target 1), the higher SNR of our GRACES spectrum also permitted a detailed chemical abundance analysis. With the use of data from the literature and APOGEE DR17, we find that:
1. The Bayesian algorithm is very efficient in finding new members, even at very large elliptical distances. All five candidates are new members of UMi, according to their radial velocities and metallicities (see Figure 5).
2. Ursa Minor extends at least out to a projected elliptical distance of \(\sim 12r_{h}\), which corresponds to \(\sim 4.5\) kpc for an adopted distance of 76 kpc.
3. The orbital properties of UMi indicate that the system has recently passed apocentre and it is moving towards pericentre (see Figure 6). Tidal stripping is one scenario that can explain UMi's elongated shape.
4. The chemical properties of Target 1 (see Figure 3), the most distant member discovered so far, are compatible with the overall distribution of the known UMi members from high-resolution spectral analysis.
5. The low [Ca, Na/Mg] and the low [Ba/Fe] of Target 1 suggest that the star formed in an environment polluted by low-mass supernovae type II (M\({}_{\rm prog}\sim 30\,\rm M_{\odot}\), see Figures 9 and 10). The star was likely exiled by supernova feedback or tidal forces.
6. Looking at all the UMi stars with high-resolution chemical analyses, including those from APOGEE DR17, we conclude there is evidence of pollution by supernovae type Ia. There is a knee at \(\rm[Fe/H]_{\rm knee}\sim-2.1\) in the [Mg, O, Na, Ni/Fe] distributions (see Figures 3 and 7).
7. Ursa Minor is also clearly polluted by supernovae type II and AGB stars given the distribution of [Ba/Mg, Fe] as a function of [Mg, Fe/H] (see Figure 8).
8. There is no trace of yields from pair-instability supernovae, either alone or combined with type II (see Figure 9).
9. The chemo-dynamical properties of UMi can be explained by an outside-in star formation and the following SNe Ia enrichment. We propose this as a simpler scenario than a late-time merger event between two very similar systems.
10. We have found two new UMi members at a distance of \(\sim 7r_{h}\) in APOGEE DR17 (Section 2.1 and Figure 1). As their metallicities are at the edge of the APOGEE grid (\(\sim-2.4\)), their true [Fe/H] may be lower and their chemical ratios might be affected.
In the very near future, the Gemini High resolution Optical SpecTrograph (GHOST, Pazder et al. 2016) will be operational at Gemini South. It will cover a wider spectral region than GRACES, especially towards the blue, where many spectral lines of heavy elements are found. In synergy with the _Gaia_ satellite and the powerful Bayesian algorithm for target selection, it should be possible to discover a plethora of new members in the centres and extreme outskirts of this and many other ultra-faint and classical dwarf galaxies to study their star formation histories. This will be a giant leap forward for detailed studies of low-mass systems, and for both observational and theoretical near-field cosmological investigations.
## Acknowledgements
We acknowledge and respect the lək̓ʷəŋən peoples on whose traditional territory the University of Victoria stands, and the Songhees, Esquimalt and W̱SÁNEĆ peoples whose historical relationships with the land continue to this day.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the Native Hawaiian community. We are very fortunate to have had the opportunity to conduct observations from this mountain.
We want to thank the supporting astronomers, Joel Roediger and Hyewon Suh, for their help during Phase II and the observational runs.
FS thanks the Dr. Margaret "Marmie" Perkins Hess postdoctoral fellowship for funding his work at the University of Victoria. KAV, LDA, and JG thank the Natural Sciences and Engineering Research Council of Canada for funding through the Discovery Grants and CREATE programs. DZ thanks the Mitacs Globalink program for summer funding. The authors thank the International Space Science Institute (ISSI) in Bern, Switzerland, for funding "The Early Milky Way" Team led by Else Starkenburg.
Based on observations obtained through the Gemini Remote Access to CFHT ESPaDOnS Spectrograph (GRACES), as part of the Gemini Program GN-2022A-Q-128. ESPaDOnS is located at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawai'i. ESPaDOnS is a collaborative project funded by France (CNRS, MENESR, OMP, LATT), Canada (NSERC), CFHT and ESA. ESPaDOnS was remotely controlled from the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia a Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al., 2000). This work made extensive use of TOPCAT (Taylor, 2005).
## Data Availability
GRACES spectra will be available at the Gemini Archive web page [https://archive.gemini.edu/searchform](https://archive.gemini.edu/searchform) after the proprietary time. The data underlying this article are available in the article and in its online supplementary material.
|
2310.02458 | Cuspidality criterion for symmetric powers of automorphic
representations of GL(2) over function fields | Given a cuspidal automorphic representation of GL(2) over a global function
field, we establish a comprehensive cuspidality criterion for symmetric powers.
The proof is via passage to the Galois side, possible over function fields
thanks to the Langlands correspondence of L. Lafforgue and additional results
of G. Henniart and B. Lemaire. Our work is guided by the number fields results
of Kim-Shahidi and Ramakrishnan. | Luis Lomeli, Javier Navarro | 2023-10-03T22:03:28Z | http://arxiv.org/abs/2310.02458v2 | # Cuspidality criterion for symmetric powers
###### Abstract.
Given a cuspidal automorphic representation of \(\operatorname{GL}(2)\) over a global function field, we establish a comprehensive cuspidality criterion for symmetric powers. The proof is via passage to the Galois side, possible over function fields thanks to the Langlands correspondence of L. Lafforgue and additional results of G. Henniart and B. Lemaire. Our work is guided by the number fields results of Kim-Shahidi and Ramakrishnan.
Key words and phrases: Langlands Correspondence; Function Fields; Symmetric Powers
## Introduction
Ever since the early beginnings of the Langlands Program, symmetric powers of a cuspidal representation \(\pi\) of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\) over a global field \(F\) with ring of adeles \(\mathbb{A}_{F}\) have earned a special place in the theory of automorphic forms and representations; Langlands himself noted that their automorphy implies the Ramanujan Conjecture for general linear groups, to say the least. The literature on the subject is very broad and spread over a diverse set of articles. Our particular interest here is to establish a complete cuspidality criterion for \(\operatorname{Sym}^{n}\!\pi\) over a global function field.
Gelbart and Jacquet established the adjoint lift of \(\pi\) to \(\operatorname{GL}_{3}(\mathbb{A}_{F})\) in the generality of a global field in [2], namely \(\operatorname{Ad}\pi=\operatorname{Sym}^{2}\!\pi\otimes\omega_{\pi}^{-1}\), where \(\omega_{\pi}\) is the central character of \(\pi\). Then \(\operatorname{Sym}^{2}\!\pi\) is automorphic and can be written as an isobaric sum. The cuspidality criterion for the symmetric square is already present in their work, in particular, \(\operatorname{Sym}^{2}\!\pi\) is cuspidal if and only if \(\pi\) is dihedral.
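For orientation, recall the standard linear-algebra identity behind this lift (stated here for convenience, not as a quotation from [2]): for a 2-dimensional representation \(\sigma\), the tensor square splits into its symmetric and exterior parts, the latter being the determinant, so that

\[\sigma\otimes\sigma\;=\;\operatorname{Sym}^{2}\sigma\,\oplus\,\Lambda^{2}\sigma\;=\;\operatorname{Sym}^{2}\sigma\,\oplus\,\det\sigma,\qquad\operatorname{Ad}\sigma\;=\;\operatorname{Sym}^{2}\sigma\otimes(\det\sigma)^{-1}.\]

On the automorphic side, \(\det\sigma\) corresponds to the central character \(\omega_{\pi}\), so the Rankin-Selberg square \(\pi\boxtimes\pi\) decomposes as \(\operatorname{Sym}^{2}\!\pi\boxplus\omega_{\pi}\), and the adjoint lift is 3-dimensional.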
In the number field case, the automorphy of \(\operatorname{Sym}^{3}\!\pi\) was established by Kim and Shahidi in [10], where it is also first noted that it is cuspidal unless \(\pi\) is dihedral or tetrahedral. They obtain further breakthroughs in [7, 9]; Kim proving the automorphy of \(\operatorname{Sym}^{4}\!\pi\), then in Kim and Shahidi one can find a thorough cuspidality criterion including the observation that \(\operatorname{Sym}^{4}\!\pi\) is non-cuspidal if \(\pi\) is dihedral, tetrahedral or octahedral. Assuming all \(\operatorname{Sym}^{n}\!\pi\) modular, and still in characteristic zero, Ramakrishnan provides the criterion for \(\operatorname{Sym}^{5}\!\pi\) and \(\operatorname{Sym}^{6}\!\pi\) as well as noting that this is enough for higher symmetric powers [15].
In contrast to the literature, we work with \(\ell\)-adic instead of complex representations on the Galois side, and tackle the function field case. In fact, from this point onwards, \(F\) denotes a global function field of characteristic \(p\) and \(\ell\) is a prime different from \(p\). In this scenario, the global Langlands correspondence is a landmark result of V. Drinfeld for \(\operatorname{GL}_{2}\)[1] and L. Lafforgue for \(\operatorname{GL}_{n}\)[11]. In particular, as a consequence of their work, we know that \(\operatorname{Sym}^{n}\pi\) is automorphic in positive characteristic. Thanks to the available machinery over function fields, our proofs are unconditional and we provide a comprehensive criterion including icosahedral representations.
Our main results on the automorphic side of the Langlands correspondence are summarized in Theorem 5.1, which provides a cuspidality criterion for the symmetric powers. In order to phrase the criterion in a succinct way, we let \(M\) be the maximal power such that \(\operatorname{Sym}^{M}\pi\) is cuspidal, writing \(M=\infty\) in case every symmetric power of \(\pi\) is cuspidal.
**Automorphic Criterion.**_Let \(\pi\) be a cuspidal representation of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\). Then \(\operatorname{Sym}^{6}\pi\) is cuspidal if and only if \(M=\infty\). If \(\operatorname{Sym}^{6}\pi\) is non-cuspidal, then \(\pi\) admits the following classification._
_M=1:_ \(\pi\) _is dihedral._
_M=2:_ \(\pi\) _is tetrahedral._
_M=3:_ \(\pi\) _is octahedral._
_M=5:_ \(\pi\) _is icosahedral._
_Additionally, \(\operatorname{Sym}^{4}\pi\) is cuspidal if and only if \(\operatorname{Sym}^{5}\pi\) is as well._
However, the proof of our automorphic criterion is via passage to the Galois side. This is possible thanks to the work of L. Lafforgue [11], as expanded to \(\ell\)-adic representations by Henniart and Lemaire in [3]; we summarize these results in § 5. Given a cuspidal \(\pi\) of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\) corresponding to an irreducible 2-dimensional \(\ell\)-adic Galois representation \(\sigma\) via the global Langlands correspondence
\[\pi\longleftrightarrow\sigma,\]
the automorphic representation \(\operatorname{Sym}^{n}\pi\) is cuspidal if and only if \(\operatorname{Sym}^{n}\sigma\) is irreducible. With this in mind, our main result on the Galois side is a reducibility criterion, where \(M\) in this setting denotes the maximal irreducible symmetric power. It is Theorem 4.9, whose contents are as follows.
**Galois criterion.**_Let \(\sigma\) be an irreducible 2-dimensional \(\ell\)-adic Galois representation, then the following are equivalent:_
1. \(\sigma\) _has open kernel._
2. _There exists an integer_ \(n\geq 2\) _such that_ \(\operatorname{Sym}^{n}\sigma\) _is reducible._
3. \(\operatorname{Sym}^{6}\sigma\) _is reducible._
_Specifically, we have the following classification when these properties are met._
_M=1:_ \(\sigma\) _is dihedral._
_M=2:_ \(\sigma\) _is tetrahedral._
_M=3:_ \(\sigma\) _is octahedral._
_M=5:_ \(\sigma\) _is icosahedral._
_Additionally, \(\operatorname{Sym}^{5}\sigma\) is reducible if and only if \(\operatorname{Sym}^{4}\sigma\) is as well._
Let us make a few comments on the statement of the criterion. First, from general representation theoretical results paired with properties of \(L\)-functions, we deduce that the heart of the irreducibility criteria is contained in the cases of \(n=2,3,4\) and \(6\). We next prove that if \(\operatorname{Sym}^{n}\sigma\) is reducible then there exists an open subgroup \(H\) of \(G_{F}\) such that the restriction of \(\sigma\) to \(H\) is reducible, say \(\sigma_{H}=\mu_{1}\oplus\mu_{2}\) for \(\ell\)-adic characters \(\mu_{1}\) and \(\mu_{2}\). Now, from observations made by Henniart-Lemaire on \(\ell\)-adic representations [3], \(\mu_{1}\) and \(\mu_{2}\) have open kernels. We then conclude that \(\sigma\) has open kernel. In particular, the image \(J\) of \(\sigma\) in \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\), considering \(\sigma\) as a homomorphism into \(\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\), is a finite group. In this case, \(\sigma\) is defined to be dihedral, tetrahedral, octahedral, or icosahedral, according to \(J\) being one of these finite groups, and we observe that \(J\) cannot be abelian due to the irreducibility of \(\sigma\). We note that this classification depends only on the isomorphism class of \(\sigma\).
Let us now present the contents of the article in a more detailed fashion. We begin with a section on the preliminaries. In particular, we recall the representation theoretic Clebsch-Gordan formulas in § 1.2. Let us interject at this point and mention that we additionally include an Appendix, where we gather results on Artin \(L\)-functions and reducibility, and formulate a useful property on subrepresentations. With these results in mind, we proceed in § 2 to prove two general useful lemmas concerning the reducibility of symmetric powers. First, if \(\operatorname{Sym}^{N}(\sigma)\) is reducible for some positive integer \(N\), then \(\operatorname{Sym}^{n}(\sigma)\) is reducible for all \(n\geq N\). This tells us that the maximal irreducible symmetric power \(M\), when it exists as a positive integer, is indeed determined by the fact that \(\operatorname{Sym}^{M}(\sigma)\) is irreducible, while \(\operatorname{Sym}^{M+1}(\sigma)\) is reducible. When \(M\) is finite, the second lemma shows that in fact \(M<6\). However, there are representations for which \(M=\infty\), i.e., every \(\operatorname{Sym}^{n}(\sigma)\) is irreducible, cf. Igusa [5].
For \(n<6\), the irreducibility criteria for \(\operatorname{Sym}^{n}(\sigma)\) are examined in § 3. As we shall see by passing to the automorphic side in § 5, our cuspidality criteria align with the well established results of Gelbart-Jacquet over global fields and Kim-Shahidi over number fields. For general \(n\), we are very much influenced by the number fields approach of Ramakrishnan.
Specifically, for the case of \(n=2\), we show that \(\operatorname{Sym}^{2}(\sigma)\) is reducible if and only if \(\sigma\cong\sigma\otimes\chi\) for some non-trivial quadratic character \(\chi\). In this case, we find that \(H=\ker\chi\) leads to \(\sigma_{H}\) being reducible. Additionally, the image of \(H\) in \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) is an abelian index-two subgroup of \(J\), implying that \(J\) must be dihedral.
Moving on to the case of \(n=3\), we establish that if \(\sigma\) is non-dihedral, then \(\operatorname{Sym}^{3}(\sigma)\) is reducible if and only if \(\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{2}(\sigma)\otimes\mu\) for some non-trivial cubic character \(\mu\). In this situation, if \(H^{\prime}=\ker\mu\), then \(\sigma_{H^{\prime}}\) is dihedral, leading to the existence of an index-two subgroup \(H\) of \(H^{\prime}\) such that \(\sigma_{H}\) is reducible. We observe that the image of \(H^{\prime}\) in \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) is an index-three dihedral subgroup of \(J\), so that \(J\) is tetrahedral.
Finally, for \(n=4\), we prove that if \(\sigma\) is neither dihedral nor tetrahedral, then \(\operatorname{Sym}^{4}(\sigma)\) is reducible if and only if there exists a non-trivial quadratic character \(\chi\) such that \(\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\sigma)\otimes\chi\). In this case, the kernel \(H^{\prime}=\ker\chi\) results in \(\sigma_{H^{\prime}}\) being tetrahedral. Consequently, there exists an index-six subgroup \(H\) of \(H^{\prime}\) such that \(\sigma_{H}\) is reducible. In this case, the group \(J\) is necessarily octahedral. It is noteworthy that in each of these cases, the image of \(H\) in \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) is an abelian group appearing in a subnormal series with abelian factors of \(J\), which reflects the solvability of the dihedral, tetrahedral and octahedral groups.
The non-solvable icosahedral case requires a separate treatment. We explore this case in § 4 and take the opportunity to gather the remaining reducibility Galois criteria that involve \(\operatorname{Sym}^{6}(\sigma)\). There we assume \(\sigma\) is not solvable polyhedral, i.e., neither dihedral, tetrahedral nor octahedral. We begin by proving that \(\operatorname{Sym}^{6}(\sigma)\) is reducible if and only if \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{\prime}\), for another irreducible 2-dimensional \(\ell\)-adic representation \(\sigma^{\prime}\). The existence of the representation \(\sigma^{\prime}\) was expected in characteristic \(p\) since it appears in the characteristic zero case in the literature, cf. [8], [15] and [18]. We then prove, and this is crucial, that \(\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\widetilde{\sigma})\), where \(\widetilde{\sigma}=\sigma^{\prime}\otimes\xi\), for some character \(\xi\). Furthermore, for suitable bases, there exists an isomorphism \(\operatorname{Sym}^{3}(\sigma)\xrightarrow{\sim}\operatorname{Sym}^{3}( \widetilde{\sigma})\) of the form \(\operatorname{Sym}^{3}(g)\), for some \(g\in\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\). Then, using that the homomorphism \(\operatorname{Sym}^{3}\colon\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{ \ell})\to\operatorname{GL}_{4}(\overline{\mathbb{Q}}_{\ell})\) has finite kernel, we deduce that there exists an open subgroup \(H^{\prime}\) (which shall be denoted by \(H\) in § 4) of \(G_{F}\) such that \(\sigma_{H^{\prime}}\cong\widetilde{\sigma}_{H^{\prime}}\). From the relation \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{\prime}\), we conclude that \(\operatorname{Sym}^{5}(\sigma_{H^{\prime}})\) is reducible, obtaining in this way an open subgroup \(H\) of \(G_{F}\) such that \(\sigma_{H}\) is reducible.
We conclude with a passage from irreducible Galois \(\sigma\) to cuspidal automorphic \(\pi\) via the global Langlands correspondence of L. Lafforgue [11], incorporating the results of Henniart-Lemaire [3]. We do this in § 5, after setting up the preliminaries. This enables us to obtain the Automorphic Criterion mentioned above, as well as more delicate cuspidality criteria for \(\operatorname{Sym}^{n}(\pi)\) corresponding in turn to the Galois treatment for the cases when \(n<6\) and the separate discussion involving the case of \(n=6\) and icosahedral representations.
### Acknowledgments
We thank Guy Henniart for mathematical communications. The first author is grateful to the Institut des Hautes Etudes Scientifiques for the hospitality provided during a summer visit in 2023, when this article was finalized.
He was supported in part by FONDECYT Grant 1212013. The second author was partially supported by USM PIIC Initiation to Scientific Research Program.
## 1. Preliminaries
We let \(F\) be a global function field of characteristic \(p\) and fix a separable closure \(\overline{F}\). We denote by \(\mathbb{A}_{F}\) the ring of adeles of \(F\). We also fix a prime number \(\ell\neq p\) and an algebraic closure \(\overline{\mathbb{Q}}_{\ell}\) of the \(\ell\)-adic numbers \(\mathbb{Q}_{\ell}\). We write \(G_{F}\) and \(\mathcal{W}_{F}\) for the absolute Galois group \(\operatorname{Gal}(\overline{F}/F)\) and the Weil group of \(F\) corresponding to \(\overline{F}\), respectively. We assume all separable field extensions \(E/F\) lie inside \(\overline{F}\).
Let \(\tau\) be either a homomorphism \(G_{F}\to\operatorname{GL}(V)\) or \(\mathcal{W}_{F}\to\operatorname{GL}(V)\), where \(V\) is a \(\overline{\mathbb{Q}}_{\ell}\)-vector space endowed with the topology induced from \(\overline{\mathbb{Q}}_{\ell}\). We say that \(\tau\) is an \(\ell\)-adic representation of \(G_{F}\), resp. of \(\mathcal{W}_{F}\), if it is continuous and unramified at almost every place of \(F\). If \(H\) is a subgroup of \(G_{F}\), we use \(\tau_{H}\) to denote the restriction \(\tau|_{H}\). If \(E/F\) is a separable field extension, we write \(\tau_{E}\) for the restriction \(\tau_{G_{E}}\).
We can view a finite-dimensional \(\ell\)-adic representation \(\tau\colon G_{F}\to\operatorname{GL}(V)\) as a continuous homomorphism \(\tau\colon G_{F}\to\operatorname{GL}_{m}(\overline{\mathbb{Q}}_{\ell})\), where \(m\) is the dimension of \(V\), after choosing a basis. For every integer \(n\geq 1\), we have the \(n\)-th symmetric power map
\[\operatorname{Sym}^{n}\colon\operatorname{GL}_{m}(\overline{\mathbb{Q}}_{ \ell})\to\operatorname{GL}_{N}(\overline{\mathbb{Q}}_{\ell}),\text{ where }N=\left(\begin{array}{c}m+n-1\\ n\end{array}\right).\]
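For instance, in the two-dimensional case that is the focus of this article, \(m=2\) and the formula gives

\[N=\left(\begin{array}{c}n+1\\ n\end{array}\right)=n+1,\]

a dimension count that is used repeatedly in the arguments below.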
It is well known, since Chevalley, that if \(\tau\) is semisimple then \(\operatorname{Sym}^{n}(\tau)\) is semisimple.
Additionally, we introduce the following notation:
\[A^{n}(\tau)=\operatorname{Sym}^{n}(\tau)\otimes\omega_{\tau}^{-1},\text{ where }\omega_{\tau}=\det(\tau).\]
In the particular case when \(m=2\), we shall write \(\operatorname{Ad}(\tau)\) instead of \(A^{2}(\tau)\),
since it is the adjoint map from \(\operatorname{GL}_{2}\) to \(\operatorname{GL}_{3}\) of Gelbart-Jacquet [2]. We further observe that
\[A^{1}(\tau)=\tau\otimes\omega_{\tau}^{-1}\cong\tau^{\vee}, \tag{1.1}\]
where \(\tau^{\vee}\) denotes the contragredient representation of \(\tau\).
Let us recall the Clebsch-Gordan formulas for symmetric powers of \(\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\). Start by taking \(\operatorname{Sym}^{1}=\operatorname{Id}\). Next, if \(g\in\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\) and \(i\), \(m\) are non-negative integers such that \(i\leq m/2\), then
\[\operatorname{Sym}^{i}(g)\otimes\operatorname{Sym}^{m-i}(g)\cong\bigoplus_{j=0 }^{i}\operatorname{Sym}^{m-2j}(g)\otimes\det(g)^{j}.\]
Here the isomorphism is obtained via matrix conjugation.
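For instance, taking \(i=1\) and \(m=n+1\) in the formula above, and evaluating at \(g=\sigma(x)\) for \(x\in G_{F}\), one obtains the identity

\[\sigma\otimes\operatorname{Sym}^{n}(\sigma)\cong\operatorname{Sym}^{n+1}(\sigma)\oplus\operatorname{Sym}^{n-1}(\sigma)\otimes\omega_{\sigma},\]

which is the form of Clebsch-Gordan invoked most often in §§ 2-4.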
We work with a given \(2\)-dimensional irreducible \(\ell\)-adic Galois representation \(\sigma\). When viewed as a homomorphism \(\sigma\colon G_{F}\to\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\), we let proj be the canonical projection from \(\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\) to \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\).
Let \(J\) be the image \(\operatorname{proj}(\sigma(G_{F}))\). In contrast to the complex finite-dimensional case, where a continuous homomorphism \(G_{F}\to\operatorname{GL}_{m}(\mathbb{C})\) always has open kernel, this may not be the case for \(\ell\)-adic representations. In particular, \(J\) may or may not be finite.
In the case of finite image, we define \(\sigma\) to be dihedral, tetrahedral, octahedral, or icosahedral, according to \(J\) being one of these finite groups. We note that this classification depends only on the isomorphism class of \(\sigma\). One of our main results shows that \(J\) being finite is equivalent to \(\sigma\) having open kernel, and in turn equivalent to \(\operatorname{Sym}^{n}(\sigma)\) being reducible for some \(n\).
We adopt a similar terminology for \(\ell\)-adic representations of the Weil group. Namely, if \(\sigma\colon\mathcal{W}_{F}\to\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\) is an irreducible \(\ell\)-adic representation such that \(\operatorname{proj}(\sigma(\mathcal{W}_{F}))\) is finite, then we define \(\sigma\) to be dihedral, tetrahedral, octahedral, or icosahedral, depending on \(\operatorname{proj}(\sigma(\mathcal{W}_{F}))\). These definitions are in accordance with the complex case found in [12]. We point out that if \(\sigma\colon\mathcal{W}_{F}\to\operatorname{GL}_{2}(\mathbb{C})\) is a continuous irreducible representation then, as noted by Langlands in [_loc. cit._], the image \(\operatorname{proj}(\sigma(\mathcal{W}_{F}))\) is a finite subgroup of \(\operatorname{PGL}_{2}(\mathbb{C})\).
## 2. Reducibility Criteria for Symmetric Powers of Galois Representations: General Notions
In this section we address two general lemmas on the irreducibility of the symmetric powers of an irreducible two-dimensional \(\ell\)-adic representation
\[\sigma\colon G_{F}\to\operatorname{GL}(V).\]
Their proofs involve the general representation theoretic Clebsch-Gordan formulas recalled in § 1.2, coupled with reducibility results obtained via properties of Artin \(L\)-functions summarized in Appendix A.
The first of the two lemmas allows us to make precise the notion of maximal irreducible symmetric power of \(\sigma\). We write \(M=\infty\) in case every symmetric power of \(\sigma\) is irreducible. Otherwise, it is the positive integer \(M\) for which \(\operatorname{Sym}^{M}(\sigma)\) is irreducible, while \(\operatorname{Sym}^{M+1}(\sigma)\) is reducible.
**Lemma 2.1**.: _Assume that \(\operatorname{Sym}^{N}(\sigma)\) is reducible for a given integer \(N>1\). Then \(\operatorname{Sym}^{n}(\sigma)\) is reducible for all \(n\geq N\)._
Proof.: It suffices to show that if \(\operatorname{Sym}^{n}(\sigma)\) is reducible for some \(n>1\), then so is \(\operatorname{Sym}^{n+1}(\sigma)\). Assume then that \(\operatorname{Sym}^{n}(\sigma)\) is reducible; since it is semisimple, we can write \(\operatorname{Sym}^{n}(\sigma)=\sigma_{1}\oplus\sigma_{2}\), for subrepresentations \(\sigma_{1}\) and \(\sigma_{2}\). By Clebsch-Gordan, we have that
\[\sigma\otimes\operatorname{Sym}^{n+1}(\sigma)=\operatorname{Sym}^{n+2}( \sigma)\oplus\operatorname{Sym}^{n}(\sigma)\otimes\omega_{\sigma}.\]
Then \(\sigma_{i}\otimes\omega_{\sigma}\) is a subrepresentation of \(\sigma\otimes\operatorname{Sym}^{n+1}(\sigma)\) for \(i=1,2\).
If \(\operatorname{Sym}^{n+1}(\sigma)\) were irreducible, then \(\operatorname{Sym}^{n+1}(\sigma^{\vee})\) would be a subrepresentation of \(\sigma\otimes\sigma_{i}^{\vee}\otimes\omega_{\sigma}^{-1}\cong(\sigma \otimes\sigma_{i})^{\vee}\) for each \(i=1,2\), by Property (A.2). In particular,
\[\dim\sigma\otimes\sigma_{i}=2\dim\sigma_{i}\geq\dim\operatorname{Sym}^{n+1}( \sigma)=n+2,\]
for \(i=1,2\), but
\[\dim\sigma_{1}+\dim\sigma_{2}=\dim\operatorname{Sym}^{n}(\sigma)=n+1.\]
Therefore, \(\operatorname{Sym}^{n+1}(\sigma)\) must also be reducible.
An interesting fact is the existence of representations with \(M=\infty\) and whose corresponding image \(J\) in \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) is infinite. For instance, for \(\ell\)-adic representations associated with certain elliptic curves defined over \(F\), Igusa proved in [5] an analogous statement to the well known Serre's open image theorem [16].
The second lemma significantly reduces the number of cases required in the reducibility criterion.
**Lemma 2.2**.: _If \(M\) is finite, then \(M=1,2,3\) or \(5\)._
Proof.: If \(\Pi\) is an irreducible subrepresentation of \(\operatorname{Sym}^{M+1}(\sigma)\) then, by Clebsch-Gordan, it is a subrepresentation of \(\sigma\otimes\operatorname{Sym}^{M}(\sigma)\). Since \(\operatorname{Sym}^{M}(\sigma)\) is irreducible, then \(\operatorname{Sym}^{M}(\sigma^{\vee})\) is a subrepresentation of \(\sigma\otimes\Pi^{\vee}\), by (A.2). In particular, we have that
\[\dim(\sigma\otimes\Pi^{\vee})=2\dim\Pi\geq M+1.\]
This last inequality, together with \(\dim\operatorname{Sym}^{M+1}(\sigma)=M+2\), forces a decomposition into irreducible representations
\[\operatorname{Sym}^{M+1}(\sigma)=\Pi_{1}\oplus\Pi_{2}\text{ for }M>1,\]
where
\[\dim\Pi_{2}-\dim\Pi_{1}=\left\{\begin{array}{ll}1&\text{if $M$ is odd,}\\ 0&\text{if $M$ is even.}\end{array}\right.\]
If \(M\) is odd, then \(\dim(\sigma\otimes\Pi_{2}^{\vee})=M+3\). By the semisimplicity of \(\sigma\otimes\Pi_{2}^{\vee}\), we have that
\[\sigma\otimes\Pi_{2}^{\vee}=\operatorname{Sym}^{M}(\sigma^{\vee})\oplus\tau\]
for some \(2\)-dimensional representation \(\tau\). Now, since \(\Pi_{2}\) is irreducible and \(\tau\) is a subrepresentation of \(\sigma\otimes\Pi_{2}^{\vee}\), then \(\Pi_{2}\) is a subrepresentation of \(\sigma\otimes\tau^{\vee}\), again by (A.2). This implies in particular that
\[\dim\Pi_{2}=\frac{M+3}{2}\leq 4\implies M\leq 5.\]
If \(M\) is even, then \(\dim(\sigma\otimes\Pi_{1}^{\vee})=\dim(\sigma\otimes\Pi_{2}^{\vee})=M+2\). In this case, there exists a character \(\mu\) such that
\[\sigma\otimes\Pi_{1}^{\vee}=\operatorname{Sym}^{M}(\sigma^{\vee})\oplus\mu.\]
Applying once again (A.2), we deduce that \(\sigma^{\vee}\) is a subrepresentation of \(\Pi_{1}^{\vee}\otimes\mu^{-1}=(\Pi_{1}\otimes\mu)^{\vee}\). Since \(\Pi_{1}\otimes\mu\) is irreducible, then \(\sigma\cong\Pi_{1}\otimes\mu\); in particular, \(\dim\Pi_{1}=2\) and then we necessarily have \(M=2\) in this case.
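For orientation, the dimension patterns forced in this proof are exactly the ones realized by the explicit decompositions obtained later: for \(M=2\), \(\operatorname{Sym}^{3}(\sigma)\) splits into two factors of dimension \(2\) (Theorem 3.3); for \(M=3\), \(\operatorname{Sym}^{4}(\sigma)\) splits into factors of dimensions \(2\) and \(3\) (Theorem 3.5); and for \(M=5\), \(\operatorname{Sym}^{6}(\sigma)\) splits into factors of dimensions \(4\) and \(3\) (Proposition 4.2).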
Notice that in the proof of Lemma 2.2 we also allow the case of \(M=1\), which is an actual possibility as we will see in Theorem 3.1. In fact, at the end of our study we will be able to conclude the same for \(M=2\), \(3\) and \(5\).
## 3. Reducibility Criteria for Symmetric Powers of Galois Representations: \(\operatorname{Sym}^{n}(\sigma)\), \(n<6\)
The cases treated in this section are well established over number fields. We are particularly influenced by [2, 14, 15] for the symmetric square, [10, 9, 7] for the symmetric cube and fourth, and [15] for the symmetric fifth. In contrast to the literature, we work with \(\ell\)-adic instead of complex representations, and tackle the case of a function field \(F\) of characteristic \(p\), with \(\ell\neq p\). Specifically, all of the representations that we will encounter in this section and the next are over the field \(\overline{\mathbb{Q}}_{\ell}\).
### Symmetric Square
In this case, the following theorem and its corollary present the reducibility criterion.
**Theorem 3.1**.: _The following are equivalent_
1. \(\operatorname{Sym}^{2}(\sigma)\) _is reducible._
2. \(\sigma\) _is dihedral._
3. \(\sigma\cong\sigma\otimes\chi\) _for some non-trivial character_ \(\chi\)_._
_In this case, \(\sigma\) has open kernel._
Proof.: Let us suppose first that \(\sigma\) is dihedral, meaning that \(\operatorname{proj}(\sigma(G_{F}))\) is finite and dihedral. Since the dihedral groups have an index two subgroup which is cyclic, we have that \(\sigma(H)\) is cyclic modulo the center of \(\sigma(G_{F})\), for some index two subgroup \(H\) of \(G_{F}\). In particular, \(\sigma(H)\) is abelian. By Schur's lemma, \(\sigma_{H}\) is reducible. Note that \(\sigma\) being irreducible, the representation \(\sigma_{H}\) is semisimple. Actually, if \(c\in G_{F}\setminus H\), then \(\sigma_{H}\cong\mu\oplus\mu^{c}\), for some character \(\mu\) of \(H\), where
\[\mu^{c}(h):=\mu(chc^{-1}),\quad h\in H.\]
Let \(E/F\) be the quadratic extension associated to \(H\), so that \(H=G_{E}\). We observe that \(\sigma\) being unramified almost everywhere, the character \(\mu\) is unramified at almost every place of \(E\). Thus, \(\mu\) is an \(\ell\)-adic character of \(G_{E}\). By Lemme IV.2.7 of [3], \(\mu\) has open kernel. Since \(\sigma_{E}\cong\mu\oplus\mu^{c}\) and \(E/F\) is finite, we have that \(\sigma\) has open kernel. Therefore, \(\sigma\) is actually a complex representation.
Now, by Frobenius reciprocity
\[\operatorname{Hom}_{G_{E}}(\sigma_{E},\mu)=\operatorname{Hom}_{G_{F}}(\sigma, \operatorname{Ind}_{E}^{F}\mu).\]
Since \(\sigma\) is irreducible, the above being not zero implies that \(\sigma\cong\operatorname{Ind}_{E}^{F}\mu\). If \(\chi\) is the only non-trivial character of \(G_{F}\) which is trivial on \(H\), then
\[\sigma\otimes\chi\cong\operatorname{Ind}_{E}^{F}\mu\otimes\chi\cong \operatorname{Ind}_{E}^{F}(\mu\otimes\chi_{E})=\operatorname{Ind}_{E}^{F}(\mu) \cong\sigma,\]
and (iii) is satisfied. Now suppose that (iii) holds. By comparing determinants, we have that \(\chi^{2}=1\). Then the kernel of \(\chi\) is \(G_{E}\), for some quadratic extension \(E/F\). If \(\varphi\colon\sigma\to\sigma\otimes\chi\) is an isomorphism, then \(\varphi\) is not a scalar, since \(\chi\) is not trivial. Thus, by taking restriction to \(G_{E}\) we get a non-scalar isomorphism
\[\varphi_{E}\colon\sigma_{E}\longrightarrow\sigma_{E}.\]
By Schur's lemma, we have that \(\sigma_{E}\) is reducible. As above, we have that \(\sigma_{E}\cong\mu\oplus\mu^{c}\) for some character \(\mu\) of \(G_{E}\), where \(c\) is the non-trivial element of \(G_{F}/G_{E}\cong\operatorname{Gal}(E/F)\). Again this implies that \(\sigma\) is complex and, in particular, \(\operatorname{proj}(\sigma(G_{F}))\) is finite. That \(\sigma\) is dihedral follows then from the fact that the dihedral groups are the only finite non-abelian subgroups of \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) with index two abelian subgroups.
Now let us prove that (the equivalent) (ii) and (iii) imply (i). Let \(E/F\) be a quadratic extension and \(\mu\) a character of \(G_{E}\) such that \(\sigma=\operatorname{Ind}_{E}^{F}\mu\). In this case, for a suitable basis of \(V\), \(\sigma\) is as follows
\[\sigma\colon g\longmapsto\left\{\begin{array}{ll}\left(\begin{array}{cc}\mu (g)&0\\ 0&\mu(cgc^{-1})\end{array}\right)&\text{if $g\in G_{E}$},\\ &\\ \left(\begin{array}{cc}0&\mu(cg)\\ \mu(gc^{-1})&0\end{array}\right)&\text{if $g\not\in G_{E}$}.\end{array}\right. \tag{3.1}\]
From which we have that
\[\mathrm{Sym}^{2}(\sigma)\colon g\longmapsto\left\{\begin{array}{cc}\left( \begin{array}{cc}\mu^{2}(g)&0&0\\ 0&\mu(g)\mu(cgc^{-1})&0\\ 0&0&\mu^{2}(cgc^{-1})\end{array}\right)&\mbox{if $g\in G_{E}$,}\\ \\ \left(\begin{array}{cc}0&0&\mu^{2}(cg)\\ 0&\mu(cg)\mu(gc^{-1})&0\\ \mu^{2}(gc^{-1})&0&0\end{array}\right)&\mbox{if $g\not\in G_{E}$.} \end{array}\right.\]
From the above we readily see that \(\mathrm{Sym}^{2}(\sigma)\) is reducible. Actually, it has the one-dimensional subrepresentation given by
\[\eta\colon g\longmapsto\left\{\begin{array}{cc}\mu(g)\mu(cgc^{-1})=\det( \sigma(g))&\mbox{if $g\in G_{E}$,}\\ \\ \mu(cg)\mu(gc^{-1})=-\det(\sigma(g))&\mbox{if $g\not\in G_{E}$.}\end{array}\right.\]
We observe that \(\eta=\omega_{\sigma}\chi\), where \(\omega_{\sigma}=\det\circ\,\sigma\) and \(\chi\) is the quadratic character associated to \(E\). Here we obtain
\[\mathrm{Sym}^{2}(\sigma)\cong\omega_{\sigma}\chi\oplus\mathrm{Ind}_{E}^{F}\mu ^{2}. \tag{3.2}\]
Finally, let us suppose that (i) holds. Since \(\mathrm{Sym}^{2}(\sigma)\) is semisimple and \(3\)-dimensional, it must have a one-dimensional summand, say \(\eta\). By Clebsch-Gordan
\[\sigma\otimes\sigma=\mathrm{Sym}^{2}(\sigma)\oplus\omega_{\sigma}, \tag{3.3}\]
we have that \(\eta\) is a subrepresentation of \(\sigma\otimes\sigma\). By (A.2), this implies that \(\sigma^{\vee}\) is a subrepresentation of \(\sigma\otimes\eta^{-1}\). Then, by irreducibility, we get
\[\sigma\otimes\eta^{-1}\cong\sigma^{\vee}\cong\sigma\otimes\omega_{\sigma}^{-1},\]
from which we infer that \(\sigma\cong\sigma\otimes\eta\omega_{\sigma}^{-1}\). We observe that \(\eta\neq\omega_{\sigma}\), since otherwise, from (3.3) the \(L\)-function \(L(s,\sigma\otimes\sigma\otimes\eta^{-1})=L(s,\sigma\otimes\sigma^{\vee})\) would have a non-simple pole at \(s=1\). The latter is not possible since \(\sigma\) is irreducible, see (A.1). Thus, by letting \(\chi=\eta\omega_{\sigma}^{-1}\), we have that (iii) holds.
The following result is a general observation: it holds at \(n=2\) by Theorem 3.1 and remains valid for all larger \(n\) by Lemma 2.1. In terms of the maximal irreducible symmetric power, this happens exactly when \(M=1\).
**Corollary 3.2**.: _The representation \(\sigma\) is dihedral if and only if \(\mathrm{Sym}^{n}(\sigma)\) is reducible for \(n\geq 2\)._
### Symmetric Cube
Given that \(\operatorname{Sym}^{3}(\sigma)\) is reducible when \(\sigma\) is dihedral by Corollary 3.2, we complete the reducibility criterion in this case by restricting ourselves to non-dihedral representations in the following theorem.
**Theorem 3.3**.: _Suppose that \(\sigma\) is non-dihedral. Then \(\operatorname{Sym}^{3}(\sigma)\) is reducible if and only if there exists some non-trivial character \(\mu\) such that_
\[\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{2}(\sigma)\otimes\mu;\]
_the condition being equivalent to \(\sigma\) being tetrahedral. When this is the case \(\sigma\) has open kernel and we have the decomposition_
\[A^{3}(\sigma)\cong\sigma\otimes\mu\oplus\sigma\otimes\mu^{2}.\]
Proof.: Let us suppose first that \(\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{2}(\sigma)\otimes\mu\) for some non-trivial character \(\mu\). By Clebsch-Gordan
\[\sigma\otimes\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{3}( \sigma)\oplus\sigma\otimes\omega_{\sigma}. \tag{3.4}\]
Now, \(\sigma\otimes\operatorname{Sym}^{2}(\sigma)\cong\sigma\otimes\operatorname{ Sym}^{2}(\sigma)\otimes\mu\) by hypothesis, so the right-hand side of (3.4) is equivalent to its twist by \(\mu\), then
\[\operatorname{Sym}^{3}(\sigma)\oplus\sigma\otimes\omega_{\sigma}\cong( \operatorname{Sym}^{3}(\sigma)\oplus\sigma\otimes\omega_{\sigma})\otimes\mu \cong\operatorname{Sym}^{3}(\sigma)\otimes\mu\oplus\sigma\otimes\omega_{ \sigma}\mu.\]
Since \(\sigma\) is non-dihedral and \(\mu\) is non-trivial, by Theorem 3.1 we have that \(\sigma\otimes\omega_{\sigma}\ncong\sigma\otimes\omega_{\sigma}\mu\). Thus, \(\sigma\otimes\omega_{\sigma}\mu\) must be a factor of \(\operatorname{Sym}^{3}(\sigma)\). Hence, \(\operatorname{Sym}^{3}(\sigma)\) is reducible.
Now let us assume that \(\operatorname{Sym}^{3}(\sigma)\) is reducible. Let \(\tau\) be a subrepresentation of \(\operatorname{Sym}^{3}(\sigma)\). By (3.4), \(\tau\) is a subrepresentation of \(\sigma\otimes\operatorname{Sym}^{2}(\sigma)\). Since \(\operatorname{Sym}^{2}(\sigma)\) is irreducible, then it is a subrepresentation of \(\sigma^{\vee}\otimes\tau\). In particular, \(\tau\) is not one-dimensional. Thus, we have that
\[\operatorname{Sym}^{3}(\sigma)=\tau_{1}\oplus\tau_{2},\]
for some irreducible two-dimensional representations \(\tau_{1}\) and \(\tau_{2}\). Since \(\sigma^{\vee}\otimes\tau_{1}\) is semisimple of dimension \(4\), and \(\operatorname{Sym}^{2}(\sigma)\) is irreducible of dimension \(3\), there exists a character \(\eta_{1}\) such that
\[\sigma^{\vee}\otimes\tau_{1}=\operatorname{Sym}^{2}(\sigma)\oplus\eta_{1}. \tag{3.5}\]
This implies that \(\sigma\) is a subrepresentation of \(\tau_{1}\otimes\eta_{1}^{-1}\) and then \(\sigma\cong\tau_{1}\otimes\eta_{1}^{-1}\) by irreducibility, i.e., \(\tau_{1}\cong\sigma\otimes\eta_{1}\).
Similarly, we have that \(\tau_{2}\cong\sigma\otimes\eta_{2}\) for some character \(\eta_{2}\). We obtain then
\[\operatorname{Sym}^{3}(\sigma)\cong\sigma\otimes\eta_{1}\oplus\sigma\otimes \eta_{2}. \tag{3.6}\]
Now, by the identity \(\tau_{1}\cong\sigma\otimes\eta_{1}\) we get
\[\sigma^{\vee}\otimes\tau_{1}\cong\sigma^{\vee}\otimes\sigma\otimes\eta_{1} \cong\operatorname{Sym}^{2}(\sigma)\otimes\eta_{1}\omega_{\sigma}^{-1} \oplus\eta_{1}.\]
Thus, by (3.5) we obtain
\[\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{2}(\sigma)\otimes\eta_{1} \omega_{\sigma}^{-1}.\]
We observe that neither \(\eta_{1}\) nor \(\eta_{2}\) can be equal to \(\omega_{\sigma}\), since the pole at \(s=1\) of \(L(s,\sigma\otimes(\sigma\otimes\omega_{\sigma})^{\vee}\otimes\operatorname{Sym }^{2}(\sigma))\) is simple, by dimension reasons. Also, \(\eta_{1}\neq\eta_{2}\), since the pole at \(s=1\) of \(L(s,\sigma\otimes\tau_{i}^{\vee}\otimes\operatorname{Sym}^{2}(\sigma))\) is simple, for \(i=1,2\). Thus, if we let \(\mu_{i}=\eta_{i}\omega_{\sigma}^{-1}\) for \(i=1,2\), then \(\mu_{1}\neq\mu_{2}\) are non-trivial characters such that
\[\operatorname{Sym}^{2}(\sigma)\cong\operatorname{Sym}^{2}(\sigma)\otimes\mu_ {1}\cong\operatorname{Sym}^{2}(\sigma)\otimes\mu_{2} \tag{3.7}\]
and
\[A^{3}(\sigma)\cong\sigma\otimes\mu_{1}\oplus\sigma\otimes\mu_{2}. \tag{3.8}\]
The first part of the theorem is proved, since \(\mu_{1}\) and \(\mu_{2}\) are not trivial.
We observe that (3.7) implies in particular that \(\mu_{1}\) has order three. If \(E/F\) is the cubic extension which corresponds to \(\mu_{1}\), then, as in § 3.1 above, (3.7) implies that \(\operatorname{Sym}^{2}(\sigma_{E})\) is reducible. Note that since \(E/F\) is cyclic of order three and \(\sigma\) is two-dimensional irreducible, then \(\sigma_{E}\) is irreducible. By Theorem 3.1, \(\sigma_{E}\) is dihedral. In particular, \(\operatorname{proj}(\sigma(G_{F}))\) is finite. It must be tetrahedral, since \(A_{4}\) is the unique non-abelian finite subgroup of \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) with a dihedral index three subgroup.
Now, by taking the contragredient of both sides of (3.8), we have that
\[A^{3}(\sigma)\otimes\omega_{\sigma}^{-1}\cong\sigma\otimes\omega_{\sigma}^{-1 }\mu_{1}^{-1}\oplus\sigma\otimes\omega_{\sigma}^{-1}\mu_{2}^{-1},\]
i.e.,
\[A^{3}(\sigma)\cong\sigma\otimes\mu_{1}\oplus\sigma\otimes\mu_{2}\cong\sigma \otimes\mu_{1}^{-1}\oplus\sigma\otimes\mu_{2}^{-1}.\]
If \(\sigma\otimes\mu_{1}\cong\sigma\otimes\mu_{1}^{-1}\), then \(\sigma\otimes\mu_{1}^{2}\cong\sigma\), which is not possible since \(\mu_{1}^{2}\) is non-trivial and \(\sigma\) is non-dihedral. Then, necessarily \(\sigma\otimes\mu_{1}\cong\sigma\otimes\mu_{2}^{-1}\). The latter implies that \(\mu_{1}=\mu_{2}^{-1}\), since \(\sigma\) is non-dihedral. If we take, for instance, \(\mu=\mu_{2}\), then \(A^{2}(\sigma)\cong A^{2}(\sigma)\otimes\mu\) and, since \(\mu_{1}=\mu_{2}^{-1}=\mu_{2}^{2}\), (3.8) becomes
\[A^{3}(\sigma)\cong\sigma\otimes\mu\oplus\sigma\otimes\mu^{2}.\]
Theorem 3.3 is now completed.
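We remark that, unwinding the notation \(A^{3}(\sigma)=\operatorname{Sym}^{3}(\sigma)\otimes\omega_{\sigma}^{-1}\), the decomposition of Theorem 3.3 can equivalently be written as

\[\operatorname{Sym}^{3}(\sigma)\cong\sigma\otimes\mu\omega_{\sigma}\oplus\sigma\otimes\mu^{2}\omega_{\sigma}.\]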
Similar to Corollary 3.2, the following result is a consequence of Theorem 3.3 and Lemma 2.1, together with the definition of \(M\).
**Corollary 3.4**.: _The representation \(\sigma\) is tetrahedral if and only if \(M=2\)._
### Fourth and Fifth Symmetric Powers
If \(\operatorname{Sym}^{3}(\sigma)\) is reducible then, by Lemma 2.1, so is \(\operatorname{Sym}^{4}(\sigma)\). To complete the criterion it suffices to inspect the case when the symmetric cube is irreducible in the next theorem.
**Theorem 3.5**.: _Assume that \(\operatorname{Sym}^{3}(\sigma)\) is irreducible. Then \(\operatorname{Sym}^{4}(\sigma)\) is reducible if and only if there exists a non-trivial quadratic character \(\chi\) such that_
\[\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\sigma)\otimes\chi;\]
_the condition being equivalent to \(\sigma\) being octahedral. When this is the case \(\sigma\) has open kernel and_
\[A^{4}(\sigma)\cong\operatorname{Ind}^{F}_{E}(\omega_{\sigma_{E}}\mu)\oplus \operatorname{Sym}^{2}(\sigma)\otimes\chi,\]
_where \(E/F\) is the quadratic extension corresponding to \(\chi\) by class field theory and \(\mu\) is some cubic character of \(G_{E}\)._
Proof.: First, if \(\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\sigma)\otimes\chi\), then clearly \(A^{3}(\sigma)\cong A^{3}(\sigma)\otimes\chi\). By Clebsch-Gordan
\[\sigma\otimes A^{3}(\sigma)\cong A^{4}(\sigma)\oplus\operatorname{Sym}^{2}( \sigma). \tag{3.9}\]
Then we have
\[A^{4}(\sigma)\oplus\operatorname{Sym}^{2}(\sigma)\cong A^{4}(\sigma)\otimes \chi\oplus\operatorname{Sym}^{2}(\sigma)\otimes\chi.\]
Since \(\operatorname{Sym}^{2}(\sigma)\otimes\chi\not\cong\operatorname{Sym}^{2}(\sigma)\) (by Theorem 3.3, as \(\operatorname{Sym}^{3}(\sigma)\) is irreducible and \(\chi\) is non-trivial), we have that \(\operatorname{Sym}^{2}(\sigma)\otimes\chi\) is a factor of \(A^{4}(\sigma)\). In particular, \(\operatorname{Sym}^{4}(\sigma)\) is reducible.
Let us suppose now that \(\operatorname{Sym}^{4}(\sigma)\), and therefore \(A^{4}(\sigma)\), is reducible. Since \(A^{3}(\sigma)\) is assumed to be irreducible, then for every subrepresentation \(\tau\) of \(\sigma\otimes A^{3}(\sigma)\), \(A^{3}(\sigma)\) is a subrepresentation of \(\sigma^{\vee}\otimes\tau\). In particular, \(\sigma\otimes A^{3}(\sigma)\) cannot have one-dimensional subrepresentations. Thus, from (3.9) we get that
\[A^{4}(\sigma)\cong\tau\oplus\Pi, \tag{3.10}\]
for some irreducible representations \(\tau\) and \(\Pi\) of dimensions \(2\) and \(3\), respectively. By comparing dimensions, we deduce that there exists a two-dimensional (necessarily irreducible) representation \(\tau_{1}\) such that
\[\sigma^{\vee}\otimes\Pi\cong A^{3}(\sigma)\oplus\tau_{1}.\]
By (A.2), we have that \(\Pi\) is a subrepresentation of \(\sigma\otimes\tau_{1}\). By comparing dimensions once again, we see that there exists a character \(\eta\) such that
\[\sigma\otimes\tau_{1}\cong\Pi\oplus\eta. \tag{3.11}\]
Therefore, we must have that \(\tau_{1}\otimes\eta^{-1}\cong\sigma^{\vee}\), i.e., \(\tau_{1}\cong\sigma\otimes\chi\), where \(\chi=\eta\omega_{\sigma}^{-1}\).
Now, we have
\[\sigma\otimes\tau_{1}\cong\sigma\otimes(\sigma\otimes\chi)\cong \operatorname{Sym}^{2}(\sigma)\otimes\chi\oplus\omega_{\sigma}\chi.\]
Thus, by (3.11) we conclude that
\[\Pi\cong\operatorname{Sym}^{2}(\sigma)\otimes\chi. \tag{3.12}\]
Then, by (3.9), (3.10) and (3.12) above, we obtain
\[\sigma\otimes A^{3}(\sigma)\cong\tau\oplus\operatorname{Sym}^{2}(\sigma) \otimes\chi\oplus\operatorname{Sym}^{2}(\sigma). \tag{3.13}\]
We see from (3.13) that \(\chi\) is not trivial, since we are assuming that \(A^{3}(\sigma)\) is irreducible and then the pole at \(s=1\) of
\[L(s,\sigma\otimes A^{3}(\sigma)\otimes\operatorname{Sym}^{2}(\sigma)^{\vee})= L(s,(\sigma\otimes\operatorname{Sym}^{2}(\sigma)^{\vee})\otimes A^{3}(\sigma))\]
must be simple, by dimension reasons. By taking the contragredient of both sides of (3.13), we have that
\[\sigma\otimes A^{3}(\sigma)\otimes\omega_{\sigma}^{-2}\cong\tau\otimes\omega_ {\tau}^{-1}\oplus\operatorname{Sym}^{2}(\sigma)\otimes\omega_{\sigma}^{-2} \chi^{-1}\oplus\operatorname{Sym}^{2}(\sigma)\otimes\omega_{\sigma}^{-2}.\]
Hence, we obtain
\[\tau\oplus\operatorname{Sym}^{2}(\sigma)\otimes\chi\oplus\operatorname{Sym}^ {2}(\sigma)\cong\tau\otimes\omega_{\tau}^{-1}\omega_{\sigma}^{2}\oplus \operatorname{Sym}^{2}(\sigma)\otimes\chi^{-1}\oplus\operatorname{Sym}^{2}( \sigma).\]
In particular, we have that \(\operatorname{Sym}^{2}(\sigma)\otimes\chi\cong\operatorname{Sym}^{2}(\sigma) \otimes\chi^{-1}\). This implies that \(\chi\) is a non-trivial quadratic character by Theorem 3.3, since we are assuming that \(A^{3}(\sigma)\) is irreducible.
Let \(E/F\) be the quadratic extension obtained from \(\chi\) via class field theory. By restricting to \(G_{E}\), we have from (3.13) that
\[\sigma_{E}\otimes A^{3}(\sigma_{E})\cong\tau_{E}\oplus\operatorname{Sym}^{2}(\sigma_{E})\oplus\operatorname{Sym}^{2}(\sigma_{E}). \tag{3.14}\]
This implies that \(A^{3}(\sigma_{E})\) is reducible. In fact, as we have observed, otherwise the function \(L(s,\sigma_{E}\otimes A^{3}(\sigma_{E})\otimes\operatorname{Sym}^{2}(\sigma_ {E})^{\vee})\) would have at most a simple pole at \(s=1\).
The representation \(\sigma\) being non-dihedral, \(\sigma_{E}\) is irreducible. Thus, by §§ 3.1-3.2, we have that \(\sigma_{E}\) has open kernel. Since \(E/F\) is finite, \(\sigma\) has open kernel. Now, the representation \(\sigma_{E}\) cannot be dihedral, since none of the groups \(A_{4}\), \(S_{4}\) and \(A_{5}\) has an index-two subgroup which is dihedral. Actually, we necessarily have that \(\sigma\) is octahedral and \(\sigma_{E}\) is tetrahedral. We then can apply Theorem 3.3. In particular, we have that
\[A^{3}(\sigma_{E})\cong\sigma_{E}\otimes\mu\oplus\sigma_{E}\otimes\mu^{2}, \tag{3.15}\]
for some character \(\mu\) of \(G_{E}\). Let \(s\) be the non-trivial element of \(G_{F}/G_{E}\cong\operatorname{Gal}(E/F)\). Clearly, \(s\) acts trivially on \(A^{3}(\sigma_{E})\), and cannot act trivially on its factors \(\sigma_{E}\otimes\mu\) and \(\sigma_{E}\otimes\mu^{2}\), since in this case \(A^{3}(\sigma)\) would be reducible. Then, we have that \((\sigma_{E}\otimes\mu)^{s}=\sigma_{E}\otimes\mu^{2}\), i.e., \(\mu^{s}=\mu^{2}\). This implies that \(A^{3}(\sigma)\cong\operatorname{Ind}_{E}^{F}(\sigma_{E}\otimes\mu)\). Since \(E/F\) is the extension defined by \(\chi\), we obtain
\[A^{3}(\sigma)\cong A^{3}(\sigma)\otimes\chi.\]
Now, from (3.15) and the fact that \(\operatorname{Sym}^{2}(\sigma_{E})\cong\operatorname{Sym}^{2}(\sigma_{E})\otimes\mu\) by Theorem 3.3 we have that
\[\sigma_{E}\otimes A^{3}(\sigma_{E}) \cong(\sigma_{E}\otimes\sigma_{E}\otimes\mu)\oplus\left(\sigma_{E }\otimes\sigma_{E}\otimes\mu^{2}\right)\] \[\cong\left(\operatorname{Sym}^{2}(\sigma_{E})\oplus\omega_{\sigma _{E}}\mu\right)\oplus\left(\operatorname{Sym}^{2}(\sigma_{E})\oplus\omega_{ \sigma_{E}}\mu^{2}\right).\]
By replacing the above in (3.14), we have
\[\tau_{E}\cong\omega_{\sigma_{E}}\mu\oplus\omega_{\sigma_{E}}\mu^{2}.\]
Therefore, \(\tau\cong\operatorname{Ind}_{E}^{F}(\omega_{\sigma_{E}}\mu)\) and we obtain the decomposition
\[A^{4}(\sigma)\cong\operatorname{Ind}_{E}^{F}(\omega_{\sigma_{E}}\mu)\oplus \operatorname{Sym}^{2}(\sigma)\otimes\chi.\]
We are done.
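We remark that, twisting back by \(\omega_{\sigma}\) and using the projection formula \(\operatorname{Ind}_{E}^{F}(\nu)\otimes\omega_{\sigma}\cong\operatorname{Ind}_{E}^{F}(\nu\,\omega_{\sigma_{E}})\), the decomposition of Theorem 3.5 can equivalently be written as

\[\operatorname{Sym}^{4}(\sigma)\cong\operatorname{Ind}_{E}^{F}(\omega_{\sigma_{E}}^{2}\mu)\oplus\operatorname{Sym}^{2}(\sigma)\otimes\chi\omega_{\sigma};\]

note the dimension count \(5=2+3\), in accordance with the case \(M=3\) of Lemma 2.2.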
Analogous to the dihedral and tetrahedral cases, Corollaries 3.2 and 3.4 respectively, for octahedral representations we have the following result.
**Corollary 3.6**.: _The representation \(\sigma\) is octahedral if and only if \(M=3\)._
Given our results thus far, the following criterion for the symmetric fifth power is quickly obtained.
**Theorem 3.7**.: _The following are equivalent:_
_(i) \(\operatorname{Sym}^{5}(\sigma)\) is reducible._
_(ii) \(\operatorname{Sym}^{4}(\sigma)\) is reducible._
_(iii) \(\sigma\) is either dihedral, tetrahedral or octahedral._
Proof.: The equivalence of (i) and (ii) follows from Lemmas 2.1 and 2.2. From our prior results on \(\operatorname{Sym}^{n}(\sigma)\) for \(n=2,3,4\) (Theorems 3.1, 3.3 and 3.5, respectively), we can infer that \(\operatorname{Sym}^{4}(\sigma)\) is reducible if and only if \(\sigma\) is either dihedral, tetrahedral or octahedral.
## 4. Reducibility Criteria for Symmetric Powers of Galois Representations: The Icosahedral Case
The main aim of this section is to establish the remaining reducibility criteria involving the symmetric sixth, proving that if \(\operatorname{Sym}^{6}(\sigma)\) is reducible then \(\sigma\) has open kernel. Of particular interest are icosahedral representations. We keep the notation of §§ 2 and 3, where in particular all of our representations are \(\ell\)-adic.
In analogy to the previous cases, arguing with Corollaries 3.2, 3.4, and now including Corollary 3.6, it is enough to study the reducibility of \(\operatorname{Sym}^{6}(\sigma)\) when \(\operatorname{Sym}^{4}(\sigma)\) is irreducible.
**Theorem 4.1**.: _If \(\operatorname{Sym}^{4}(\sigma)\) is irreducible, then the following are equivalent._
1. \(\operatorname{Sym}^{6}(\sigma)\) _is reducible._
2. _There exists an irreducible two-dimensional representation_ \(\sigma^{\prime}\) _of_ \(G_{F}\) _such that_ \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{ \prime}\)_._
_The representation \(\sigma^{\prime}\) is uniquely determined up to isomorphism by (ii)._
Proof.: Let us suppose first that \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{ \prime}\) for some \(\sigma^{\prime}\). Then
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\sigma\otimes\operatorname{ Ad}(\sigma)\otimes\sigma^{\prime}\cong\left(A^{3}(\sigma)\otimes\sigma^{\prime} \right)\oplus\left(\sigma\otimes\sigma^{\prime}\right). \tag{4.1}\]
On the other hand, we have the identity
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Sym}^{6}( \sigma)\oplus\operatorname{Sym}^{4}(\sigma)\otimes\omega_{\sigma}. \tag{4.2}\]
By comparing the decompositions (4.1) and (4.2), and noting that \(\operatorname{Sym}^{4}(\sigma)\otimes\omega_{\sigma}\) is irreducible, we see that \(\operatorname{Sym}^{6}(\sigma)\) is reducible.
Let us suppose now that \(\operatorname{Sym}^{6}(\sigma)\) is reducible. Let \(\tau\) be an irreducible component of \(\operatorname{Sym}^{6}(\sigma)\) of dimension \(r\leq 3\), which exists since \(\operatorname{Sym}^{6}(\sigma)\) is semisimple, reducible and \(7\)-dimensional. By (4.2), \(\tau\) is a subrepresentation of \(\sigma\otimes\operatorname{Sym}^{5}(\sigma)\) and then \(\operatorname{Sym}^{5}(\sigma)\) is a subrepresentation of \(\sigma^{\vee}\otimes\tau\), by (A.2). Since \(\operatorname{Sym}^{5}(\sigma)\) is \(6\)-dimensional, we necessarily have that \(r=3\) and
\[\operatorname{Sym}^{5}(\sigma)\cong\sigma^{\vee}\otimes\tau. \tag{4.3}\]
We observe that \(\operatorname{Sym}^{6}(\sigma)\cong\Pi\oplus\tau\) for some irreducible \(4\)-dimensional representation \(\Pi\), since \(\operatorname{Sym}^{6}(\sigma)\) cannot have irreducible summands of dimension less than \(3\) by the above. Now, \(\operatorname{Sym}^{5}(\sigma)\) is a subrepresentation of \(\sigma^{\vee}\otimes\Pi\), and by comparing dimensions we deduce that there exists an irreducible \(2\)-dimensional representation \(\sigma^{\prime}\) such that
\[\sigma^{\vee}\otimes\Pi\cong\operatorname{Sym}^{5}(\sigma)\oplus\sigma^{ \prime}. \tag{4.4}\]
Using (A.2) again and comparing dimensions, we obtain that
\[\Pi\cong\sigma\otimes\sigma^{\prime}.\]
Then
\[\sigma^{\vee}\otimes\Pi\cong\sigma^{\vee}\otimes(\sigma\otimes\sigma^{\prime })\cong(\operatorname{Ad}(\sigma)\otimes\sigma^{\prime})\oplus\sigma^{ \prime}.\]
By comparing with (4.4) we have that \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{ \prime}\), i.e., (ii) holds.
Finally, let us see that the representation \(\sigma^{\prime}\) is uniquely determined by (ii). Let \(\sigma^{\prime\prime}\) be another (necessarily irreducible) representation such that \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{ \prime\prime}\). Then
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\left(A^{3}(\sigma)\otimes \sigma^{\prime}\right)\oplus\left(\sigma\otimes\sigma^{\prime}\right)\cong \left(A^{3}(\sigma)\otimes\sigma^{\prime\prime}\right)\oplus\left(\sigma \otimes\sigma^{\prime\prime}\right).\]
The above being in turn isomorphic to
\[\operatorname{Sym}^{6}(\sigma)\oplus\operatorname{Sym}^{4}(\sigma)\otimes\omega_{ \sigma}\cong\Pi\oplus\tau\oplus\operatorname{Sym}^{4}(\sigma)\otimes\omega_{ \sigma},\]
by comparing dimensions, we obtain that \(\Pi\cong\sigma\otimes\sigma^{\prime\prime}\). Thus, \(\sigma^{\prime\prime}\) is a subrepresentation of \(\sigma^{\vee}\otimes\Pi\). By (4.4), we see that \(\sigma^{\prime\prime}\cong\sigma^{\prime}\), as wanted.
**Proposition 4.2**.: _Assume that \(\operatorname{Sym}^{4}(\sigma)\) is irreducible and \(\operatorname{Sym}^{6}(\sigma)\) is reducible. Let \(\sigma^{\prime}\) be the representation of Theorem 4.1. Then there exists a character \(\chi\) such that_
\[\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\chi.\]
_In this case, we have the decomposition_
\[\operatorname{Sym}^{6}(\sigma)\cong\sigma\otimes\sigma^{\prime}\oplus \operatorname{Ad}(\sigma^{\prime})\otimes\chi\omega_{\sigma}.\]
Proof.: From (4.3) we obtain
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\sigma\otimes\sigma^{\vee} \otimes\tau\cong\operatorname{Ad}(\sigma)\otimes\tau\oplus\tau.\]
On the other hand, we have that
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Sym}^{6}( \sigma)\oplus\operatorname{Sym}^{4}(\sigma)\otimes\omega_{\sigma}\cong\sigma \otimes\sigma^{\prime}\oplus\tau\oplus\operatorname{Sym}^{4}(\sigma)\otimes \omega_{\sigma}.\]
Thus,
\[\operatorname{Ad}(\sigma)\otimes\tau\cong\sigma\otimes\sigma^{\prime}\oplus \operatorname{Sym}^{4}(\sigma)\otimes\omega_{\sigma}.\]
Therefore, \(\sigma\otimes\sigma^{\prime}\) is a subrepresentation of \(\operatorname{Ad}(\sigma)\otimes\tau\). By (A.2), noting that \(\operatorname{Ad}(\sigma)\) is selfdual, we have that \(\tau\) is a subrepresentation of
\[\operatorname{Ad}(\sigma)\otimes\sigma\otimes\sigma^{\prime}\cong\left(A^{3} (\sigma)\otimes\sigma^{\prime}\right)\oplus\left(\sigma\otimes\sigma^{\prime} \right).\]
We note that \(\tau\) is not a subrepresentation of \(\sigma\otimes\sigma^{\prime}\), since \(\sigma\otimes\sigma^{\prime}\) is irreducible. Thus, we have that \(\tau\) is a subrepresentation of \(A^{3}(\sigma)\otimes\sigma^{\prime}\). Then \(A^{3}(\sigma)\) is a subrepresentation of \(\sigma^{\prime\vee}\otimes\tau\), and there exists a (necessarily irreducible) two-dimensional representation \(\tau_{0}\) such that
\[\sigma^{\prime\vee}\otimes\tau\cong A^{3}(\sigma)\oplus\tau_{0}.\]
Then \(\tau\) is a subrepresentation of \(\sigma^{\prime}\otimes\tau_{0}\) and, by comparing dimensions, we see that \(\sigma^{\prime}\otimes\tau_{0}\) contains a character. Then, \(\tau_{0}\cong\sigma^{\prime}\otimes\mu\), for some character \(\mu\), and
\[\sigma^{\prime}\otimes\tau_{0}\cong\operatorname{Sym}^{2}(\sigma^{\prime}) \otimes\mu\oplus\omega_{\sigma^{\prime}}\mu.\]
The above implies that \(\tau\cong\operatorname{Sym}^{2}(\sigma^{\prime})\otimes\mu\). We then have that
\[\operatorname{Sym}^{5}(\sigma)\cong\sigma^{\vee}\otimes\tau\cong \operatorname{Ad}(\sigma^{\prime})\otimes\sigma\otimes\chi,\]
where \(\chi=\omega_{\sigma^{\prime}}\omega_{\sigma}^{-1}\mu\). As for the decomposition, we note that
\[\operatorname{Sym}^{6}(\sigma)\cong\Pi\oplus\tau\cong\sigma\otimes\sigma^{ \prime}\oplus\operatorname{Sym}^{2}(\sigma^{\prime})\otimes\mu\cong\sigma \otimes\sigma^{\prime}\oplus\operatorname{Ad}(\sigma^{\prime})\otimes\chi \omega_{\sigma}.\]
### Remarks
Over number fields, Ramakrishnan refers to the representations satisfying the equivalent conditions of Theorem 4.1 as quasi-icosahedral, see [15]. Over function fields, we shall make a refinement to the irreducibility criterion presented in Theorem 4.1. We prove that these representations are icosahedral in Theorem 4.6, by proving they have open kernel.
Furthermore, observe that the representation \(\sigma^{\prime}\) of Theorem 4.1 is very peculiar. For instance, \(\sigma^{\prime}\not\cong\sigma\otimes\mu\) for every character \(\mu\). Also, since \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\chi\) is irreducible, then \(\operatorname{Ad}(\sigma^{\prime})\not\cong\operatorname{Ad}(\sigma)\otimes\mu\) for every character \(\mu\). However, we shall prove in Proposition 4.3 below that \(\operatorname{Sym}^{3}(\sigma^{\prime})\cong\operatorname{Sym}^{3}(\sigma) \otimes\mu\) for some character \(\mu\). This identity brings to light the fact that the properties of the character \(\chi\) of Proposition 4.2 determine it uniquely, see Corollary 4.4.
We now address a couple of delicate aspects of the criteria, giving a precise description of the representation \(\sigma^{\prime}\) and character \(\chi\) appearing in § 4.1.
**Proposition 4.3**.: _Let us assume that \(\operatorname{Sym}^{4}(\sigma)\) is irreducible and that \(\operatorname{Sym}^{6}(\sigma)\) is reducible. Let \(\sigma^{\prime}\) be the representation in Theorem 4.1. Then there exists a unique character \(\mu\) such that_
\[\operatorname{Sym}^{3}(\sigma^{\prime})\cong\operatorname{Sym}^{3}(\sigma) \otimes\mu.\]
_More precisely, \(\mu=\omega_{\sigma^{\prime}}\omega_{\sigma}^{-1}\chi\), where \(\chi\) is as in Proposition 4.2, and satisfies_
\[\mu=(\eta\omega_{\sigma}^{2})^{3},\]
_for a quadratic character \(\eta\)._
Proof.: The unicity of \(\mu\) follows from the irreducibility of \(\operatorname{Sym}^{4}(\sigma)\) and Theorem 3.5. Let \(\chi\) be a character as in Proposition 4.2. We recall that we have
\[\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\chi\cong\operatorname{Ad}(\sigma)\otimes\sigma^{\prime}.\]
Then
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\sigma\otimes\operatorname{ Ad}(\sigma^{\prime})\otimes\sigma\otimes\chi\cong\left(\operatorname{Sym}^{2}( \sigma)\otimes\operatorname{Ad}(\sigma^{\prime})\otimes\chi\right)\oplus \left(\operatorname{Ad}(\sigma^{\prime})\otimes\chi\omega_{\sigma}\right),\]
and
\[\sigma\otimes\operatorname{Sym}^{5}(\sigma)\cong\sigma\otimes\operatorname{ Ad}(\sigma)\otimes\sigma^{\prime}\cong\left(A^{3}(\sigma)\otimes\sigma^{\prime} \right)\oplus\left(\sigma\otimes\sigma^{\prime}\right).\]
Since \(\sigma\otimes\sigma^{\prime}\) is irreducible, the identities above imply that it is a subrepresentation of \(\operatorname{Sym}^{2}(\sigma)\otimes\operatorname{Ad}(\sigma^{\prime}) \otimes\chi\). Then \(\chi\) is a character contained in \(\operatorname{Sym}^{2}(\sigma^{\vee})\otimes\operatorname{Ad}(\sigma^{\prime} )\otimes\sigma\otimes\sigma^{\prime}\). Using Clebsch-Gordan, we see that \(\chi\) is contained in
\[\left(A^{3}(\sigma)\otimes A^{3}(\sigma^{\prime})\otimes\omega_{\sigma}^{-1} \right)\oplus\left(A^{3}(\sigma)\otimes\sigma^{\prime}\otimes\omega_{\sigma}^ {-1}\right)\oplus\left(\sigma^{\vee}\otimes A^{3}(\sigma^{\prime})\right) \oplus\left(\sigma^{\vee}\otimes\sigma^{\prime}\right).\]
The representation \(\sigma\otimes\sigma^{\prime}\) being irreducible, the character \(\chi\) is not contained in \(\sigma^{\vee}\otimes\sigma^{\prime}\). Also, since \(A^{3}(\sigma)\) and \(\sigma^{\prime}\) are irreducible and of different dimension, \(\chi\) cannot be contained in \(A^{3}(\sigma)\otimes\sigma^{\prime}\otimes\omega_{\sigma}^{-1}\). Let us assume that \(\sigma^{\vee}\otimes A^{3}(\sigma^{\prime})\) contains \(\chi\). Then \(A^{3}(\sigma^{\prime})\) is reducible and \(\sigma\otimes\chi\) is one of its irreducible factors. Now, since \(\operatorname{Ad}(\sigma^{\prime})\)
is irreducible (otherwise, \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\chi\) would be reducible), then by Theorem 3.3 we have that
\[A^{3}(\sigma^{\prime})\cong\sigma^{\prime}\otimes\mu_{0}\oplus\sigma^{\prime} \otimes\mu_{0}^{2},\]
for some character \(\mu_{0}\). Then \(\sigma\otimes\chi\) is (isomorphic to) either \(\sigma^{\prime}\otimes\mu_{0}\) or \(\sigma^{\prime}\otimes\mu_{0}^{2}\). This is a contradiction, for we know that \(\sigma^{\prime}\not\cong\sigma\otimes\mu\) for every character \(\mu\).
Therefore, \(A^{3}(\sigma)\otimes A^{3}(\sigma^{\prime})\otimes\omega_{\sigma}^{-1}\) contains \(\chi\). This implies that \(A^{3}(\sigma^{\prime})\) is irreducible and \(A^{3}(\sigma^{\prime})\cong A^{3}(\sigma)^{\vee}\otimes\omega_{\sigma}\chi\). Then if we let \(\mu=\omega_{\sigma^{\prime}}\omega_{\sigma}^{-1}\chi\), we get
\[\operatorname{Sym}^{3}(\sigma^{\prime})\cong\operatorname{Sym}^{3}(\sigma) \otimes\mu,\]
as desired.
Now, by comparing determinants from the relation \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{\prime}\), we have
\[\omega_{\sigma}^{15}=\omega_{\sigma^{\prime}}^{3}. \tag{4.5}\]
On the other hand, by taking the contragredient of both sides of \(\operatorname{Sym}^{3}(\sigma^{\prime})\cong\operatorname{Sym}^{3}(\sigma) \otimes\mu\), we obtain \(\operatorname{Sym}^{3}(\sigma^{\prime})\otimes\omega_{\sigma^{\prime}}^{-3} \cong\operatorname{Sym}^{3}(\sigma)\otimes\omega_{\sigma}^{-3}\mu^{-1}\). Thus,
\[\operatorname{Sym}^{3}(\sigma)\otimes\mu\cong\operatorname{Sym}^{3}(\sigma^ {\prime})\cong\operatorname{Sym}^{3}(\sigma)\otimes(\omega_{\sigma^{\prime}} \omega_{\sigma}^{-1})^{3}\mu^{-1},\]
i.e.
\[\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\sigma)\otimes( \omega_{\sigma^{\prime}}\omega_{\sigma}^{-1})^{3}\mu^{-2}.\]
Since \(\sigma\) is not of octahedral type, the above implies that \(\mu^{2}=(\omega_{\sigma^{\prime}}\omega_{\sigma}^{-1})^{3}\). But from (4.5) we have \((\omega_{\sigma^{\prime}}\omega_{\sigma}^{-1})^{3}=\omega_{\sigma}^{12}\). Thus
\[\mu=\eta\omega_{\sigma}^{6}=(\eta\omega_{\sigma}^{2})^{3}\]
for some quadratic character \(\eta\).
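As a consistency check, comparing determinants in \(\operatorname{Sym}^{3}(\sigma^{\prime})\cong\operatorname{Sym}^{3}(\sigma)\otimes\mu\) gives \(\omega_{\sigma^{\prime}}^{6}=\omega_{\sigma}^{6}\mu^{4}\); with \(\mu=(\eta\omega_{\sigma}^{2})^{3}\) and \(\eta^{2}=1\), this reads \(\omega_{\sigma^{\prime}}^{6}=\omega_{\sigma}^{30}\), in agreement with (4.5).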
We now register a consequence that we remarked in § 4.2.
**Corollary 4.4**.: _Let \(\sigma^{\prime}\) be the irreducible two-dimensional Galois representation of Theorem 4.1 satisfying_
\[\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma)\otimes\sigma^{ \prime}.\]
_Then the character \(\chi\) of Proposition 4.2 satisfying_
\[\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\chi\]
_is unique with respect to this property._
Proof.: If \(\xi\) were another character satisfying \(\operatorname{Sym}^{5}(\sigma)\cong\operatorname{Ad}(\sigma^{\prime})\otimes \sigma\otimes\xi\), then Proposition 4.2 tells us that
\[\operatorname{Sym}^{6}(\sigma)\cong\sigma\otimes\sigma^{\prime}\oplus \operatorname{Ad}(\sigma^{\prime})\otimes\xi\omega_{\sigma}.\]
Hence, we would have
\[\operatorname{Ad}(\sigma^{\prime})\cong\operatorname{Ad}(\sigma^{\prime}) \otimes\xi\chi^{-1}.\]
If \(\xi\chi^{-1}\) were not trivial, then by Theorem 3.3, \(\operatorname{Sym}^{3}(\sigma^{\prime})\) would be reducible, a contradiction.
Let us denote \(\xi:=\eta^{-1}\omega_{\sigma}^{-2}\) and \(\widetilde{\sigma}:=\sigma^{\prime}\otimes\xi\). Thus, \(\operatorname{Sym}^{3}(\sigma)\cong\operatorname{Sym}^{3}(\widetilde{\sigma})\); indeed, \(\operatorname{Sym}^{3}(\widetilde{\sigma})\cong\operatorname{Sym}^{3}(\sigma^{\prime})\otimes\xi^{3}\cong\operatorname{Sym}^{3}(\sigma)\otimes\mu\xi^{3}\), and \(\mu\xi^{3}=(\eta\omega_{\sigma}^{2})^{3}(\eta^{-1}\omega_{\sigma}^{-2})^{3}=1\). We observe that this implies in particular that \(\operatorname{Sym}^{3}(\widetilde{\sigma})\) is irreducible. Also, by Theorem 3.5, we have that \(\operatorname{Sym}^{4}(\widetilde{\sigma})\) is irreducible. Furthermore, by decomposing via the Clebsch-Gordan formulas both sides of
\[\operatorname{Sym}^{3}(\sigma)\otimes\operatorname{Sym}^{3}(\sigma)\cong \operatorname{Sym}^{3}(\widetilde{\sigma})\otimes\operatorname{Sym}^{3}( \widetilde{\sigma})\]
and using the fact that \(\operatorname{Sym}^{6}(\sigma)\) is reducible, we deduce that \(\operatorname{Sym}^{6}(\widetilde{\sigma})\) is reducible. As a conclusion of this, \(\widetilde{\sigma}\) satisfies the conditions in Theorem 4.1.
We write \(\Gamma=\sigma(G_{F})\) and \(\widetilde{\Gamma}=\widetilde{\sigma}(G_{F})\). We shall need the following useful fact.
**Lemma 4.5**.: _The groups \(\Gamma\) and \(\widetilde{\Gamma}\) consist of semisimple automorphisms._
Proof.: Let \(g\in\Gamma\), and let us assume that \(g\) is not semisimple. Then, the Jordan decomposition tells us that there is a suitable basis for \(V\) where we may write
\[g=\left(\begin{array}{cc}\lambda&1\\ 0&\lambda\end{array}\right)\]
for some \(\lambda\in\overline{\mathbb{Q}}_{\ell}^{\times}\). But then the Jordan normal form of \(\operatorname{Sym}^{6}(g)\) consists of a single block, i.e., no proper non-zero invariant subspace admits an invariant complement. This is not possible, since \(\operatorname{Sym}^{6}(\sigma)\) is semisimple and reducible. Since \(\operatorname{Sym}^{6}(\widetilde{\sigma})\) is also reducible, the same argument applies to \(\widetilde{\Gamma}\).
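For concreteness, with \(g\) as displayed above and \(\lambda\neq 0\), a direct computation in the basis of monomials \(e_{1}^{6},e_{1}^{5}e_{2},\ldots,e_{2}^{6}\) of the symmetric power gives

\[\operatorname{Sym}^{6}(g)\,(e_{1}^{6-k}e_{2}^{k})=\lambda^{6}\,e_{1}^{6-k}e_{2}^{k}+k\,\lambda^{5}\,e_{1}^{7-k}e_{2}^{k-1}+\cdots,\]

so that \(\operatorname{Sym}^{6}(g)\) is upper triangular with all diagonal entries equal to \(\lambda^{6}\) and with non-zero entries along the superdiagonal, hence conjugate to a single Jordan block.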
In our study of the image \(\Gamma=\sigma(G_{F})\), we come across the following _no-small finite subgroups_ property:
A compact subgroup \(K\) of \(\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\) has itself a compact open subgroup \(\mathcal{U}\) without non-trivial finite subgroups.
Actually, if \(K\) is any compact subgroup of \(\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{\ell})\), then there exists a finite extension of \(\mathbb{Q}_{\ell}\) with ring of integers \(\mathcal{O}\) such that \(K\subseteq\operatorname{GL}_{2}(\mathcal{O})\). It is well known that \(\operatorname{GL}_{2}(\mathcal{O})\) contains a compact open subgroup, say \(\mathcal{U}_{0}\), without non-trivial finite subgroups. Then it suffices to take \(\mathcal{U}=\mathcal{U}_{0}\cap K\).
We observe that \(\mathcal{U}\) could very well be trivial, and clearly this is the case when \(K\) is finite. Note that \(\Gamma=\sigma(G_{F})\) is compact, hence it satisfies the no-small finite subgroups property.
**Theorem 4.6**.: _If \(\operatorname{Sym}^{4}(\sigma)\) is irreducible and \(\operatorname{Sym}^{6}(\sigma)\) is reducible, then \(\sigma\) has open kernel and \(J=\operatorname{proj}(\sigma(G_{F}))\) is icosahedral._
Our proof relies on the following technical lemma. That \(\operatorname{proj}(\sigma(G_{F}))\) is icosahedral follows once we settle that \(\sigma\) has open kernel, for we are assuming that \(\sigma\) is neither dihedral, tetrahedral, nor octahedral.
**Lemma 4.7**.: _Assume that \(\,\mathcal{U}_{\Gamma}\) is a non-trivial compact open subgroup of \(\,\Gamma=\sigma(G_{F})\) without non-trivial finite subgroups. Let_
\[\varphi\colon\operatorname{Sym}^{3}(\sigma)\stackrel{{\sim}}{{ \longrightarrow}}\operatorname{Sym}^{3}(\widetilde{\sigma})\]
_be a given \(G_{F}\)-isomorphism. Then there exist bases for \(V\) and \(\widetilde{V}\) and \(g\in\operatorname{M}_{2}(\overline{\mathbb{Q}}_{\ell})\) such that the corresponding matrix for \(\varphi\) is \(\operatorname{Sym}^{3}(g)\)._
Proof.: (of Lemma 4.7). Let us first assume that \(\omega_{\sigma}\) is of finite order. Let \(h\in\mathcal{U}_{\Gamma}\) be different from the identity. We choose a basis for \(V\) such that \(h\) is representable by a diagonal matrix, say \(h=\operatorname{diag}(a,b)\). Let us choose a basis for \(\widetilde{V}\) such that
\[\varphi\cdot\operatorname{Sym}^{3}(h)\cdot\varphi^{-1}=\varphi\cdot \operatorname{diag}(a^{3},a^{2}b,ab^{2},b^{3})\cdot\varphi^{-1}\in\operatorname {Aut}_{\overline{\mathbb{Q}}_{\ell}}(\widetilde{V})\]
is representable by a diagonal matrix. Since \(\varphi\colon\operatorname{Sym}^{3}(\sigma)\to\operatorname{Sym}^{3}( \widetilde{\sigma})\) is a \(G_{F}\)-isomorphism, we may write
\[\varphi\cdot\operatorname{diag}(a^{3},a^{2}b,ab^{2},b^{3})\cdot\varphi^{-1}= \operatorname{diag}(\alpha^{3},\alpha^{2}\beta,\alpha\beta^{2},\beta^{3}) \tag{4.6}\]
for some \(\alpha,\beta\in\overline{\mathbb{Q}}_{\ell}\). We note that all the non-zero entries of \(\operatorname{Sym}^{3}(h)\) are different from each other. Otherwise we would have that \(a^{k}=b^{k}\) for some integer \(k>0\), and since \(h\) has a finite-order determinant, then \(a^{n}=b^{n}=1\) for some other integer \(n>0\). But then \(h\) would generate a finite non-trivial group, contradicting the no-small finite subgroups property of \(\Gamma\). Thus, we may write \(\varphi\) as \(TM\), where \(T\) is diagonal and \(M\) a permutation matrix. Let us see actually that \(M\) is either \(I=(\delta_{i,j})\) or \(I^{-}=(\delta_{i,5-j})\). From (4.6), we have
\[\alpha^{3} =a^{n_{0}}b^{3-n_{0}},\] \[\alpha^{2}\beta =a^{n_{1}}b^{3-n_{1}},\] \[\alpha\beta^{2} =a^{n_{2}}b^{3-n_{2}},\] \[\beta^{3} =a^{n_{3}}b^{3-n_{3}},\]
for some permutation \((n_{0},n_{1},n_{2},n_{3})\) of \((0,1,2,3)\). Again, since \(a^{k}\neq b^{k}\) for all integers \(k\neq 0\), the equations above imply
\[n_{0}+n_{3} =n_{1}+n_{2},\] \[n_{0}+n_{2} =2n_{1}.\]
It is readily seen that the unique solutions of the latter are \((0,1,2,3)\) and \((3,2,1,0)\), as claimed.
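For completeness, here is one way to see this. Since \((n_{0},n_{1},n_{2},n_{3})\) is a permutation of \((0,1,2,3)\), we have \(n_{0}+n_{1}+n_{2}+n_{3}=6\), which together with the first equation gives
\[n_{0}+n_{3}=n_{1}+n_{2}=3,\]
so \(\{n_{0},n_{3}\}\) and \(\{n_{1},n_{2}\}\) are \(\{0,3\}\) and \(\{1,2\}\) in some order. The choice \(\{n_{1},n_{2}\}=\{0,3\}\) is incompatible with \(n_{0}+n_{2}=2n_{1}\) (it would force \(n_{0}+3=0\) or \(n_{0}=6\)). Hence \(\{n_{0},n_{3}\}=\{0,3\}\) and \(\{n_{1},n_{2}\}=\{1,2\}\), and the relation \(n_{0}+n_{2}=2n_{1}\) then leaves exactly \((0,1,2,3)\) and \((3,2,1,0)\).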
Since both \(I\) and \(I^{-}\) are of the form \(\operatorname{Sym}^{3}(g_{0})\) for some \(g_{0}\), it suffices to prove that \(T=\operatorname{Sym}^{3}(g)\) for some \(g\). Let us suppose that
\[T=\operatorname{diag}(t_{1},t_{2},t_{3},t_{4}).\]
Necessary and sufficient conditions for \(T\) to be of the form \(\operatorname{Sym}^{3}(g)\) for some \(g\) are \(t_{1}t_{4}=t_{2}t_{3}\) and \(t_{1}t_{3}=t_{2}^{2}\). Let us verify these. Since \(\sigma\) is not dihedral, there exists \(\left(\begin{array}{cc}x&y\\ w&z\end{array}\right)\in J\) such that \(xy\neq 0\) or \(wz\neq 0\). We note that \(T\cdot\operatorname{Sym}^{3}\left(\left(\begin{array}{cc}x&y\\ w&z\end{array}\right)\right)\cdot T^{-1}\) is equal to
\[\left(\begin{array}{cccc}x^{3}&x^{2}y\cdot t_{1}t_{2}^{-1}&xy^{2}\cdot t_{1} t_{3}^{-1}&y^{3}\cdot t_{1}t_{4}^{-1}\\ *&*&*&*\\ *&*&*&*\\ w^{3}\cdot t_{4}t_{1}^{-1}&w^{2}z\cdot t_{4}t_{2}^{-1}&wz^{2}\cdot t_{4}t_{3}^ {-1}&z^{3}\end{array}\right), \tag{4.7}\]
where we will only use the entries in the first and last row of the previous matrix.
Let us suppose that \(xy\neq 0\). Since the matrix in (4.7) is \(\operatorname{Sym}^{3}(\widetilde{h})\) for some \(\widetilde{h}\), then we certainly have
\[x^{3}y^{3}\cdot t_{1}t_{4}^{-1} =(x^{2}y\cdot t_{1}t_{2}^{-1})(xy^{2}\cdot t_{1}t_{3}^{-1}),\] \[x^{3}(xy^{2}\cdot t_{1}t_{3}^{-1}) =(x^{2}y\cdot t_{1}t_{2}^{-1})^{2}.\]
Since \(xy\neq 0\), we obtain \(t_{1}t_{4}^{-1}=t_{1}^{2}t_{2}^{-1}t_{3}^{-1}\) and \(t_{1}t_{3}^{-1}=t_{1}^{2}t_{2}^{-2}\), which rearrange immediately to the desired properties \(t_{1}t_{4}=t_{2}t_{3}\) and \(t_{1}t_{3}=t_{2}^{2}\). If \(wz\neq 0\), then we must pay attention to the fourth row of (4.7), and the above works similarly.
In general, we note that, by [3] § IV.2.9, there exists an \(\ell\)-adic character \(\chi\) such that \(\det(\sigma\otimes\chi)\) is of finite order. Since every \(G_{F}\)-isomorphism
\[\varphi\colon\operatorname{Sym}^{3}(\sigma\otimes\chi)\xrightarrow{\sim} \operatorname{Sym}^{3}(\widetilde{\sigma}\otimes\chi)\]
is in turn a \(G_{F}\)-isomorphism
\[\varphi\colon\operatorname{Sym}^{3}(\sigma)\xrightarrow{\sim} \operatorname{Sym}^{3}(\widetilde{\sigma}),\]
the lemma follows.
Proof of Theorem 4.6.: Let \(\mathcal{U}_{\Gamma}\) be a compact open subgroup of \(\Gamma\) without non-trivial finite subgroups. If \(\mathcal{U}_{\Gamma}\) is trivial, then \(\Gamma\) is finite and we are done. Hence, we may assume that \(\mathcal{U}_{\Gamma}\) is non-trivial.
We fix bases for \(V\) and \(\widetilde{V}\) and choose \(g\in\operatorname{Hom}_{\overline{\mathbb{Q}}_{\ell}}(V,\widetilde{V})\) such that \(\operatorname{Sym}^{3}(g)\) is a \(G_{F}\)-isomorphism \(\operatorname{Sym}^{3}(\sigma)\xrightarrow{\sim}\operatorname{Sym}^{3}( \widetilde{\sigma})\), just as in Lemma 4.7. Since the group homomorphism \(\operatorname{Sym}^{3}\colon\operatorname{GL}_{2}(\overline{\mathbb{Q}}_{ \ell})\to\operatorname{GL}_{4}(\overline{\mathbb{Q}}_{\ell})\) has finite kernel, then there exists an open subgroup \(U\) of \(\Gamma\) such that \(\operatorname{Sym}^{3}|_{U}\) is injective.
If we set \(H=\sigma^{-1}(U)\), then \(H\) is an open subgroup of \(G_{F}\) and \(g\) becomes an \(H\)-isomorphism \(V\xrightarrow{\sim}\widetilde{V}\). We observe that since \(\sigma_{H}\cong\widetilde{\sigma}_{H}\cong\sigma_{H}^{\prime}\otimes\xi_{H}\), and
\[\operatorname{Sym}^{5}(\sigma_{H})\cong\operatorname{Ad}(\sigma_{H})\otimes \sigma_{H}^{\prime},\]
then \(\operatorname{Sym}^{5}(\sigma_{H})\) is reducible. This implies that \(\sigma_{H}\) has open kernel. Since \(H\) is open in \(G_{F}\), we are done.
### Higher Symmetric Powers
We here register Ramakrishnan's irreducibility criterion for higher symmetric powers, Theorem A' (c) of [15].
**Theorem 4.8**.: _Let \(n>6\) be an integer. Then \(\operatorname{Sym}^{n}(\sigma)\) is irreducible if and only if \(\operatorname{Sym}^{6}(\sigma)\) is irreducible._
The proof of [_loc. cit._] for complex representations over number fields is also valid in the case of \(\ell\)-adic representations over function fields. It is a formal consequence of Lemmas 2.1 and 2.2.
The following theorem summarizes what we have done so far.
**Theorem 4.9**.: _Let \(\sigma\colon G_{F}\to\operatorname{GL}(V)\) be an irreducible 2-dimensional \(\ell\)-adic Galois representation. Then the following are equivalent:_
1. \(\sigma\) _has open kernel._
2. _There exists an integer_ \(n\geq 2\) _such that_ \(\operatorname{Sym}^{n}(\sigma)\) _is reducible._
3. \(\operatorname{Sym}^{6}(\sigma)\) _is reducible._
_To be more precise, if the above equivalent conditions are met we then have the following classification._
* _M=1:_ \(\sigma\) _is dihedral._
* _M=2:_ \(\sigma\) _is tetrahedral._
* _M=3:_ \(\sigma\) _is octahedral._
* _M=5:_ \(\sigma\) _is icosahedral._
_And, \(\operatorname{Sym}^{5}(\sigma)\) is irreducible if and only if \(\operatorname{Sym}^{4}(\sigma)\) is irreducible._
Proof.: At this point, only the equivalence of (i), (ii) and (iii) requires comment.
Assume that \(\sigma\) has open kernel; then \(\Gamma=\sigma(G_{F})\) is finite. Now, for each \(n\), \(\operatorname{Sym}^{n}(\sigma)\) can be seen as a representation of \(\Gamma\) over the algebraically closed field \(\overline{\mathbb{Q}}_{\ell}\) of characteristic zero. Since \(\Gamma\) has only finitely many irreducible representations up to isomorphism, their dimensions are bounded, while \(\dim\operatorname{Sym}^{n}(\sigma)=n+1\) grows without bound. Necessarily, \(\operatorname{Sym}^{n}(\sigma)\) is reducible for some integer \(n\geq 2\) and (ii) holds.
Now, assume that \(\operatorname{Sym}^{n}(\sigma)\) is reducible for some \(n\geq 2\). If \(n\leq 6\), then \(\operatorname{Sym}^{6}(\sigma)\) is reducible by Lemma 2.1. And, if \(n>6\) Theorem 4.8 implies that \(\operatorname{Sym}^{6}(\sigma)\) is reducible.
Finally, assume that \(\operatorname{Sym}^{6}(\sigma)\) is reducible. Then the maximal irreducible symmetric power satisfies \(M\leq 5\). The case of \(M=4\) cannot happen by Lemma 2.2, each case among \(M=1,2,3\) is covered by one of the irreducibility criteria of § 3, and Theorem 4.6 applies to \(M=5\). It follows that \(\sigma\) has open kernel.
## 5. Passage to Automorphic Representations
The landmark results of L. Lafforgue over a global function field \(F\) [11] establish a correspondence between cuspidal automorphic representations of \(\operatorname{GL}_{n}(\mathbb{A}_{F})\) with finite-order central character and irreducible \(n\)-dimensional \(\ell\)-adic Galois representations that are unramified almost everywhere and have finite-order determinant.
In [3], the global Langlands correspondence is slightly expanded. It is phrased in the context of the more general \(\ell\)-adic Weil representations, which we have taken to be unramified almost everywhere by definition in § 1. The crucial point in this generalization is that for an arbitrary \(\ell\)-adic Weil representation \(\sigma\colon\mathcal{W}_{F}\to\operatorname{GL}(V)\), there exists an \(\ell\)-adic character \(\chi\) such that \(\sigma\otimes\chi\) can be extended to an \(\ell\)-adic Galois representation with finite-order determinant. That is to say, \(\sigma\otimes\chi\) lies in the setting of L. Lafforgue [11]. More precisely, in [3] the authors establish a bijection between the set \(\mathcal{G}_{\ell}^{n}(F)\) of irreducible \(n\)-dimensional \(\ell\)-adic representations of \(\mathcal{W}_{F}\) and the set \(\mathcal{A}^{n}(F)\) of cuspidal automorphic representations of \(\operatorname{GL}_{n}(\mathbb{A}_{F})\), such that if \(\sigma\in\mathcal{G}_{\ell}^{n}(F)\) corresponds to \(\pi\in\mathcal{A}^{n}(F)\), then it agrees at every place with the local Langlands correspondence established by Laumon, Rapoport and Stuhler [13]. We write \(\sigma\longleftrightarrow\pi\) to denote corresponding representations.
An \(\ell\)-adic Galois character \(\chi\) corresponds to an automorphic character, which we again denote in this case by \(\chi\); this is possible via class field theory after fixing a field isomorphism \(\iota\colon\overline{\mathbb{Q}}_{\ell}\to\mathbb{C}\), cf. § IV.2 of [3]. Furthermore, if \(\pi\in\mathcal{A}^{n}(F)\) corresponds to \(\sigma\in\mathcal{G}_{\ell}^{n}(F)\), then the central character \(\omega_{\pi}\) of \(\pi\) corresponds to \(\omega_{\sigma}=\det(\sigma)\).
The correspondence is further extended so that a semisimple \(n\)-dimensional \(\ell\)-adic representation \(\sigma\) of \(\mathcal{W}_{F}\), written as a direct sum
\[\sigma=\sigma_{1}\oplus\cdots\oplus\sigma_{d},\quad\sigma_{i}\in\mathcal{G}_{ \ell}^{n_{i}}(F),\]
corresponds to an automorphic representation \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{A}_{F})\), given by an isobaric sum
\[\pi=\pi_{1}\boxplus\cdots\boxplus\pi_{d},\quad\pi_{i}\in\mathcal{A}^{n_{i}}(F).\]
The correspondence is such that \(\sigma_{i}\longleftrightarrow\pi_{i}\), for each \(i=1,\ldots,d\); and, we have equality of Artin and automorphic \(L\)-functions
\[L(s,\sigma_{i})=L(s,\pi_{i}).\]
This latter form of the Langlands correspondence between semisimple representations on the Galois side and automorphic representations of the general linear group is the one we consider; we again write \(\sigma\longleftrightarrow\pi\) in this setting for corresponding representations.
We are fortunate in the case of function fields that we can extend Theorem 4.9 to irreducible \(\ell\)-adic representations of \(\mathcal{W}_{F}\) and pass to the automorphic side of the global Langlands correspondence to obtain Theorem 5.1 below. The general
classification result on the automorphic side can be succinctly stated if we let \(M\) be the maximal cuspidal symmetric power of \(\pi\). When \(M\) exists as a finite positive integer, it is determined by \(\operatorname{Sym}^{M}(\pi)\) being cuspidal, while \(\operatorname{Sym}^{M+1}(\pi)\) is non-cuspidal. We write \(M=\infty\) in case every symmetric power of \(\pi\) is cuspidal.
**Theorem 5.1**.: _Let \(\pi\) be a cuspidal automorphic representation of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\). Then \(\operatorname{Sym}^{6}(\pi)\) is cuspidal if and only if \(M=\infty\). If \(\operatorname{Sym}^{6}(\pi)\) is non-cuspidal, then \(\pi\) admits the following classification._
_M=1:_ \(\pi\) _is dihedral._
_M=2:_ \(\pi\) _is tetrahedral._
_M=3:_ \(\pi\) _is octahedral._
_M=5:_ \(\pi\) _is icosahedral._
_Additionally, \(\operatorname{Sym}^{4}(\pi)\) is cuspidal if and only if \(\operatorname{Sym}^{5}(\pi)\) is cuspidal._
Proof.: Let \(\sigma\colon\mathcal{W}_{F}\to\operatorname{GL}(V)\) be the irreducible 2-dimensional \(\ell\)-adic representation attached to \(\pi\). Let \(\chi\) be a character such that \(\sigma\otimes\chi\) can be extended to \(G_{F}\).
We continue to denote the extension of \(\sigma\otimes\chi\) to \(G_{F}\) by \(\sigma\otimes\chi\). For every integer \(n\geq 2\), we consider the representations
\[\operatorname{Sym}^{n}(\sigma\otimes\chi)=\operatorname{Sym}^{n}(\sigma) \otimes\chi^{n}\colon G_{F}\to\operatorname{GL}(\operatorname{Sym}^{n}(V)),\]
and
\[\operatorname{Sym}^{n}(\sigma)\colon\mathcal{W}_{F}\to\operatorname{GL}( \operatorname{Sym}^{n}(V)).\]
Observe that \(\operatorname{Sym}^{n}(\sigma\otimes\chi)\) is reducible if and only if \(\operatorname{Sym}^{n}(\sigma)\) is reducible.
If \(\operatorname{Sym}^{n}(\sigma)\) is reducible for some \(n\) then \(\sigma\otimes\chi\) has open kernel, by Theorem 4.9; in particular, \(\operatorname{proj}((\sigma\otimes\chi)(G_{F}))\) is finite. Since there is an inclusion map \(\mathcal{W}_{F}\hookrightarrow G_{F}\) with dense image and the involved representations to \(\operatorname{PGL}_{2}(\overline{\mathbb{Q}}_{\ell})\) are continuous, \(\operatorname{proj}((\sigma\otimes\chi)(\mathcal{W}_{F}))=\operatorname{ proj}(\sigma(\mathcal{W}_{F}))\) is forced to equal \(\operatorname{proj}((\sigma\otimes\chi)(G_{F}))\).
With these observations, the theorem follows from Theorem 4.9, via the global Langlands correspondence.
Over number fields, the cuspidality criterion for symmetric powers is conjectured in [9], see the Conjecture in § 3 therein; one can there find proofs of the characteristic-zero statements corresponding to the cases \(M=2,3\) of Theorem 5.1.
Let us next proceed to the automorphic statements corresponding to the detailed criteria obtained along the course of proving our results on the Galois side. In the refined statements, we make use of quadratic base change and automorphic induction, available to us in greater generality thanks to the work of Henniart-Lemaire [3].
### On cuspidality criteria for the symmetric square, cube and fourth
We work with a cuspidal representation \(\pi\) of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\), and make use of the following notation:
\[A^{n}(\pi)=\operatorname{Sym}^{n}(\pi)\otimes\omega_{\pi}^{-1},\]
where \(\omega_{\pi}\) denotes the central character of \(\pi\). The case of \(n=2\) is the adjoint lift of Gelbart-Jacquet [2], which we denote by \(\operatorname{Ad}(\pi)\).
To begin with, in the case of symmetric square, Galois Theorem 3.1 allows us to infer that the following statements are equivalent:
1. \(\operatorname{Sym}^{2}(\pi)\) is non-cuspidal.
2. \(\pi\cong\pi\otimes\chi\) for some non-trivial character \(\chi\).
Note that \(\pi\) is dihedral precisely when these conditions are met, by Theorem 5.1.
From Galois Theorem 3.3, translated to the automorphic side: if \(\pi\) is non-dihedral, then \(\operatorname{Sym}^{3}(\pi)\) is non-cuspidal if and only if there exists some non-trivial character \(\mu\) such that
\[\operatorname{Sym}^{2}(\pi)\cong\operatorname{Sym}^{2}(\pi)\otimes\mu;\]
the condition being equivalent to \(\pi\) being tetrahedral. When this is the case we have the decomposition
\[A^{3}(\pi)\cong\pi\otimes\mu\boxplus\pi\otimes\mu^{2}.\]
Now, assume that \(\operatorname{Sym}^{3}(\pi)\) is cuspidal. Via Galois Theorem 3.5, we obtain that \(\operatorname{Sym}^{4}(\pi)\) is non-cuspidal if and only if there exists a non-trivial quadratic character \(\chi\) such that
\[\operatorname{Sym}^{3}(\pi)\cong\operatorname{Sym}^{3}(\pi)\otimes\chi;\]
the condition being equivalent to \(\pi\) being octahedral. When this is the case, we get
\[A^{4}(\pi)\cong\Pi^{F}_{E}(\omega_{\pi_{E}}\mu)\boxplus\operatorname{Sym}^{2}( \pi)\otimes\chi,\]
where \(E/F\) is the quadratic extension corresponding to \(\chi\) via class field theory and \(\mu\) is some cubic character of \(\mathbb{A}^{\times}_{E}\). Let us explain the notation, where we are assuming \(\pi\longleftrightarrow\sigma\), i.e., \(\pi\in\mathcal{A}^{2}(F)\) corresponds to \(\sigma\in\mathcal{G}^{2}_{\ell}(F)\) under global Langlands; with corresponding base change \(\pi_{E}\longleftrightarrow\sigma_{E}\). Then we have induced representations on the Galois side and monomial representations on the automorphic side, in particular,
\[\operatorname{Ind}^{F}_{E}(\omega_{\sigma_{E}}\mu)\longleftrightarrow\Pi^{F}_{E}(\omega_{\pi_{E}}\mu),\]
where \(\mu\) is some cubic character of \(G_{E}\) and \(\Pi^{F}_{E}\) denotes automorphic induction, cf. § IV.5 of [3].
### On cuspidality criteria involving symmetric sixth
If \(\operatorname{Sym}^{4}(\pi)\) is cuspidal, then Galois Theorem 4.1 leads us to conclude that the following statements are equivalent:
1. \(\operatorname{Sym}^{6}(\pi)\) is non-cuspidal.
2. There exists a cuspidal two-dimensional representation \(\pi^{\prime}\) of \(\operatorname{GL}_{2}(\mathbb{A}_{F})\) such that \(\operatorname{Sym}^{5}(\pi)\cong\operatorname{Ad}(\pi)\boxtimes\pi^{\prime}\).
The representation \(\pi^{\prime}\) is uniquely determined up to isomorphism by (ii).
Next, we look into the results corresponding to Galois Proposition 4.2. Assume that \(\operatorname{Sym}^{4}(\pi)\) is cuspidal and \(\operatorname{Sym}^{6}(\pi)\) is non-cuspidal. Then there exists a unique character \(\chi\) such that
\[\operatorname{Sym}^{5}(\pi)\cong\operatorname{Ad}(\pi^{\prime})\boxtimes\pi \otimes\chi,\]
and we have the decomposition
\[\operatorname{Sym}^{6}(\pi)\cong\pi\boxtimes\pi^{\prime}\boxplus\operatorname{ Ad}(\pi^{\prime})\otimes\chi\omega_{\pi}.\]
Notice that the uniqueness of \(\chi\) is obtained via Galois Corollary 4.4. Furthermore, from Galois Proposition 4.3, we obtain that
\[\operatorname{Sym}^{3}(\pi^{\prime})\cong\operatorname{Sym}^{3}(\pi)\otimes\mu,\]
for \(\mu=\omega_{\pi^{\prime}}\omega_{\pi}^{-1}\chi\); and, one can also write \(\mu=(\eta\,\omega_{\pi}^{2})^{3}\) for a quadratic character \(\eta\).
## Appendix A L-functions and Subrepresentations
In this appendix, we gather results about Artin \(L\)-functions and reducibility over a global function field \(F\), and formulate a useful property on subrepresentations. In view of the global Langlands correspondence of L. Lafforgue [11] summarized in § 5 following Henniart-Lemaire [3], and noting that an \(\ell\)-adic representation of \(G_{F}\) restricted to \(\mathcal{W}_{F}\) is an \(\ell\)-adic representation, it is enough for us to work with semisimple \(\ell\)-adic representations of \(G_{F}\). Finite-dimensional Galois representations correspond to automorphic representations of general linear groups. Now, on the automorphic side of the correspondence we work with Rankin-Selberg products and their \(L\)-functions, where further references can be found in [4].
Let \(\sigma\colon G_{F}\to\operatorname{GL}(V)\) be a given semisimple \(\ell\)-adic \(n\)-dimensional representation. We can write \(\sigma\) as the direct sum
\[\sigma=\sigma_{1}\oplus\cdots\oplus\sigma_{d},\]
where each \(\sigma_{i}\) is an irreducible representation of dimension \(n_{i}\), for \(i=1,\ldots,d\). We note that each \(\sigma_{i}\) is an \(\ell\)-adic representation, since they are all continuous and unramified at each place where \(\sigma\) is unramified. For each \(i=1,\ldots,d\) there exists a cuspidal automorphic representation \(\pi_{i}\) of \(\operatorname{GL}_{n_{i}}(\mathbb{A}_{F})\) such that the automorphic representation of \(\operatorname{GL}_{n}(\mathbb{A}_{F})\) given by
\[\pi=\pi_{1}\boxplus\cdots\boxplus\pi_{d},\]
is such that \(\pi\longleftrightarrow\sigma\), i.e., \(\pi\) corresponds to \(\sigma\) under global Langlands.
Given semisimple \(\ell\)-adic Galois representations \(\sigma\) and \(\sigma^{\prime}\) of finite dimensions \(n\) and \(n^{\prime}\), respectively, Artin \(L\)-functions satisfy the following additive property
\[L(s,\sigma\oplus\sigma^{\prime})=L(s,\sigma)L(s,\sigma^{\prime}).\]
For automorphic representations \(\pi\) of \(\operatorname{GL}_{n}(\mathbb{A}_{F})\) and \(\pi^{\prime}\) of \(\operatorname{GL}_{n^{\prime}}(\mathbb{A}_{F})\), written as isobaric sums
\[\pi=\pi_{1}\boxplus\cdots\boxplus\pi_{d},\quad\pi^{\prime}=\pi^{\prime}_{1} \boxplus\cdots\boxplus\pi^{\prime}_{e},\]
we have the following multiplicativity property of Rankin-Selberg \(L\)-functions
\[L(s,\pi\times\pi^{\prime})=\prod_{i=1}^{d}\prod_{j=1}^{e}L(s,\pi_{i}\times\pi^ {\prime}_{j}).\]
If the Galois and automorphic representations are such that they correspond to each other under global Langlands
\[\sigma\longleftrightarrow\pi,\quad\sigma^{\prime}\longleftrightarrow\pi^ {\prime},\]
then Artin \(L\)-functions and Rankin-Selberg products are related by
\[L(s,\sigma\otimes\sigma^{\prime})=L(s,\pi\times\pi^{\prime}).\]
Now, fix an automorphic representation \(\pi\) and a semisimple Galois representation \(\sigma\) that correspond to each other under global Langlands
\[\pi=\pi_{1}\boxplus\cdots\boxplus\pi_{d}\ \longleftrightarrow\ \sigma=\sigma_{1} \oplus\cdots\oplus\sigma_{d}.\]
Considering the contragredient representation
\[\pi^{\vee}=\pi_{1}^{\vee}\boxplus\cdots\boxplus\pi_{d}^{\vee},\]
we find that
\[L(s,\pi\times\pi^{\vee})=\prod_{i=1}^{d}\prod_{j=1}^{d}L(s,\pi_{i}\times\pi^{ \vee}_{j}).\]
Since \(L(s,\pi_{i}\times\pi^{\vee}_{j})=L(s,\sigma_{i}\otimes\sigma^{\vee}_{j})\), \(1\leq i,j\leq d\), the additive property of Artin \(L\)-functions leads us to
\[L(s,\pi\times\pi^{\vee})=\prod_{i=1}^{d}\prod_{j=1}^{d}L(s,\sigma_{i}\otimes \sigma^{\vee}_{j})=L(s,\sigma\otimes\sigma^{\vee}).\]
Next, we employ a couple of facts about automorphic \(L\)-functions, namely a non-vanishing result of Shahidi [17] and properties of Rankin-Selberg products found in §§ 3 and 4 of [6], part II: \(L(s,\pi_{i}\times\pi^{\vee}_{j})\) does not vanish at \(s=1\); and it has a simple pole at \(s=1\) if \(\pi_{j}\cong\pi_{i}\), otherwise it is invertible at \(s=1\). In this manner, we derive that the representation \(\sigma\) is irreducible if and only if \(L(s,\sigma\otimes\sigma^{\vee})\) has a simple pole at \(s=1\). Additionally, if \(\sigma\) is irreducible and \(\tau\) is another irreducible finite-dimensional \(\ell\)-adic representation, then \(L(s,\sigma\otimes\tau)\) is invertible at \(s=1\), unless \(\tau\cong\sigma^{\vee}\), in which case \(L(s,\sigma\otimes\tau)\) has a simple pole at \(s=1\). We summarize with the following general property:
* (A.1) Given \(\sigma\) semisimple and \(\tau\) irreducible finite-dimensional \(\ell\)-adic representations of \(G_{F}\), the \(L\)-function \(L(s,\sigma\otimes\tau^{\vee})\) has a pole at \(s=1\) if and only if \(\tau\) appears as a subrepresentation of \(\sigma\). The multiplicity of \(\tau\) in \(\sigma\) is \(-\text{ord}_{s=1}L(s,\sigma\otimes\tau^{\vee})\).
In fact, one immediately sees that if \(\sigma\) and \(\tau\) are in general semisimple \(\ell\)-adic Galois representations of finite dimension such that \(\tau\) is a subrepresentation of \(\sigma\), then \(L(s,\sigma\otimes\tau^{\vee})\) has a pole at \(s=1\).
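As a simple illustration of Property (A.1), with hypothetical constituents chosen only to fix ideas: if \(\sigma\cong\sigma_{1}\oplus\sigma_{1}\oplus\sigma_{2}\) with \(\sigma_{1}\not\cong\sigma_{2}\) irreducible, then
\[L(s,\sigma\otimes\sigma_{1}^{\vee})=L(s,\sigma_{1}\otimes\sigma_{1}^{\vee})^{2}\,L(s,\sigma_{2}\otimes\sigma_{1}^{\vee})\]
has a double pole at \(s=1\), recovering the multiplicity \(2\) of \(\sigma_{1}\) in \(\sigma\).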
Now, suppose that \(\rho,\sigma\) and \(\tau\) are semisimple finite-dimensional \(\ell\)-adic Galois representations, with \(\sigma\) irreducible. If \(\tau\) is a subrepresentation of \(\rho\otimes\sigma\), then \(L(s,\rho\otimes\sigma\otimes\tau^{\vee})\) has a pole at \(s=1\), by Property (A.1). But
\[L(s,\rho\otimes\sigma\otimes\tau^{\vee})=L(s,\rho\otimes\tau^{\vee}\otimes( \sigma^{\vee})^{\vee}),\]
then \(\sigma^{\vee}\) is a subrepresentation of \(\rho\otimes\tau^{\vee}\), again by Property (A.1). For reference purposes, we register the following representation-theoretic property:
* (A.2) Let \(\rho,\sigma\) and \(\tau\) be semisimple finite-dimensional \(\ell\)-adic representations of \(G_{F}\), with \(\sigma\) irreducible. If \(\tau\) is a subrepresentation of \(\rho\otimes\sigma\), then \(\sigma^{\vee}\) is a subrepresentation of \(\rho\otimes\tau^{\vee}\); equivalently, \(\sigma\) is a subrepresentation of \(\rho^{\vee}\otimes\tau\).
|
2308.00005 | Detection and Classification of Novel Attacks and Anomaly in IoT Network
using Rule based Deep Learning Model | Attackers are now using sophisticated techniques, like polymorphism, to
change the attack pattern for each new attack. Thus, the detection of novel
attacks has become the biggest challenge for cyber experts and researchers.
Recently, anomaly and hybrid approaches are used for the detection of network
attacks. Detecting novel attacks, on the other hand, is a key enabler for a
wide range of IoT applications. Novel attacks can easily evade existing
signature-based detection methods and are extremely difficult to detect, even
going undetected for years. Existing machine learning models have also failed
to detect the attack and have a high rate of false positives. In this paper, a
rule-based deep neural network technique has been proposed as a framework for
addressing the problem of detecting novel attacks. The designed framework
significantly improves respective benchmark results, including the CICIDS 2017
dataset. The experimental results show that the proposed model keeps a good
balance between attack detection, untruthful positive rates, and untruthful
negative rates. For novel attacks, the model has an accuracy of more than 99%.
During the automatic interaction between network-devices (IoT), security and
privacy are the primary obstacles. Our proposed method can handle these
obstacles efficiently and finally identify, and classify the different levels
of threats. | Sanjay Chakraborty, Saroj Kumar Pandey, Saikat Maity, Lopamudra Dey | 2023-07-29T05:01:53Z | http://arxiv.org/abs/2308.00005v1 | Detection and Classification of Novel Attacks and Anomaly in IoT Network using Rule based Deep Learning Model
###### Abstract
Attackers now use sophisticated techniques, such as polymorphism, to change the attack pattern for each new attack. Thus, the detection of novel attacks has become the biggest challenge for cyber experts and researchers. Recently, anomaly-based and hybrid approaches have been used for the detection of network attacks. Detecting novel attacks, on the other hand, is a key enabler for a wide range of IoT applications. Novel attacks can easily evade existing signature-based detection methods and are extremely difficult to detect, even going undetected for years. Existing machine learning models have also failed to detect such attacks and have a high rate of false positives. In this paper, a rule-based deep neural network technique is proposed as a framework for addressing the problem of detecting novel attacks. The designed framework significantly improves on the respective benchmark results on the CICIDS 2017 dataset. The experimental results show that the proposed model keeps a good balance between the attack detection rate, the false positive rate, and the false negative rate. For novel attacks, the model has an accuracy of more than 99%. During the automatic interaction between networked devices (IoT), security and privacy are the primary obstacles. Our proposed method can handle these obstacles efficiently and, finally, identify and classify different levels of threats.
Anomaly detection, Network-attack, Classification, Machine Learning, Deep Learning.
## 1 Introduction
Attackers today make use of complex methods, such as polymorphism, to modify their attack pattern with each new assault they launch. As a result, the identification of previously unknown attacks has emerged as the primary obstacle for cybersecurity professionals and researchers. Recent studies show that signature-based, statistics-based, anomaly-based and hybrid approaches can be used for the detection of network attacks. Signature-based detection systems rely on pre-established attack signatures, which is why they cannot detect novel attacks or new variants of known attacks [1]. Machine learning is used in statistics-based detection to gather information from previously identified exploits and establish a baseline for secure system behaviour [31]. This procedure is prone to false positives and negatives, which restricts its usefulness; overall, statistics-based strategies for detecting novel attacks are not very effective. Although anomaly detection systems can be useful against novel threats, one of their biggest difficulties is a high false positive rate [2, 3]. Today's hybrid detection methods avoid the shortcomings of the three strategies outlined above while utilizing their various benefits. Typically, hybrid detection systems combine two or three techniques to provide findings that are more reliable [4].
In this article, we have proposed and evaluated a hybrid approach using deep learning and a rule-based method to detect and classify known attacks as well as novel attacks with high levels of accuracy and low false positive and false negative rates. The model classifies attacks into 3 major classes based on the pattern of network traffic. The classes are "Normal", "Known attack" and "Novel attack". We employ data with three new attack types that are not present in the training portion of the CICIDS2017 dataset to test the proposed model's ability to recognize novel attacks (i.e., attacks that have not been observed previously). Real-life network traffic contains huge volumes of data. Handling such large amounts of data and classifying the different threats and anomalies within them are challenging tasks for typical machine learning algorithms. Therefore, deep learning plays a vital role in terms of efficiency, accuracy and time consumption in this situation. Training time for such huge data can be lower compared to classical ML algorithms. With the expansion of 5G technology, many inexpensive IoT devices can produce considerable amounts of network traffic, which can be exploited for a variety of attacks. This can degrade the performance of IoT devices and make them vulnerable. Our proposed method can be used to handle such challenges in IoT traffic.
In summary, the contributions of the present study include:
* A novel end-to-end framework for the detection of all types of attacks, both known and novel. To the best of our knowledge, this is the first such attempt using machine learning.
* The suggested approach increases the precision of attack detection and reduces the percentage of false positives and false negatives.
* Our proposed technique has lower time complexity than traditional ML algorithms when handling huge amounts of network traffic.
The attack-handling levels of the proposed model are shown and compared with an existing IDS in Figure 1. Figures 2 and 3 represent the network architecture of our proposed model and show the strategy used to protect the network from various novel attacks.
The main objective of this work is to introduce and develop a deep-learning-inspired, rule-based network security model that is capable of detecting and classifying different categories of network attacks in various types of networks, including IoT networks. It opens the door to handling several security issues and attacks on different kinds of networks.
Figure 1: IDS vs Proposed model
Figure 2: IDS protect network
This paper is divided into 5 sections. Section 2 deals with the background and the overall architecture and ruleset of our proposed network model; Section 3 defines the problem statement and the types of attacks, along with a description of the experimental dataset; Section 4 deals with the experimentation and result analysis. Finally, the conclusion is given in Section 5. At the end of the paper, readers can find the references.
## 2 Background and Proposed Model
In previous studies, several machine learning approaches have been proposed for network attack detection [23]. Some research works use seven different machine learning methods (Naive Bayes, QDA, Random Forest, ID3, AdaBoost, MLP, and K Nearest Neighbours) to detect network anomalies on some popular datasets [5; 6; 28; 29]. Boukhamla uses two multi-class MLP classifier models with feature sets of different sizes on the CICIDS2017 dataset, and the model with the larger feature set gives better performance [7]. Marir used a distributed Deep Belief Network (DBN) as a feature reduction strategy on different feature sets; the obtained features are used for a multilayer group SVM, with a 60%/40% split between the training and test parts of the CICIDS2017 dataset [8]. Researchers are also using neural networks for network attack detection. Pektas suggested merging CNN and LSTM in a deep learning architecture to enhance the performance of attack detection [35; 36]; the model uses the CICIDS2017 dataset for testing [9]. Watson uses a Convolutional Neural Network (CNN) classifier and a Multi-Layer Perceptron (MLP) classifier with specified packet header features of the CICIDS2017 packet capture file [10]. To accomplish attack detection and analysis utilizing deep learning nets and association rule mining, Thilina introduces a unique framework [11]. Zhu uses a CNN model for attack detection and identification; in comparison to conventional machine learning algorithms, this model performs better [12]. To identify port-scan attacks, Aksu suggested a deep learning model and compared the outcomes with an SVM. The deep learning model has a total of 30 epochs, a ReLU activation function, and 7 hidden layers. The CICIDS2017 dataset is utilized to train the model. The deep learning model's accuracy rate is 97%, compared to the SVM model's accuracy rate of 67% [13; 14]. Nowadays, several technologies have been developed to handle harmful malware attacks in the cloud [22]. In the paper [24], logistic regression and neural network classifiers are mainly used for detecting and preventing threats and anomalies in smart IoT devices. In the paper [25], besides naive Bayes (NB) and support vector machine (SVM) classifiers, a deep learning algorithm called long short-term memory (LSTM) is considered for anomaly or redundancy detection and for modification attacks in an IoT framework [32]. In the paper [26], another stacked deep polynomial learning based intrusion detection framework is designed and introduced to detect threats in an IoT environment. This approach [26] is inspired by the spider-monkey optimization (SMO) technique to choose the optimal features in the dataset and improve the accuracy of anomaly detection. Similarly, long short-term memory (LSTM), BiLSTM, and Gated Recurrent Unit (GRU) techniques are used to introduce an anomaly detection system in IoT networks [27]. Initially, a convolutional neural network (CNN) is used for analyzing the input features, and then the above three deep learning techniques are applied for binary classification of anomalies [27]. A very recent transfer learning auto-encoder model is introduced to detect noisy DDoS malware attacks such as Mirai and Bashlite in IoT devices [30]. However,
Figure 3: Protection of network by proposed model
both deep learning and machine learning methods have some significant impact on anomaly detection and classification in different networks including IoT [33; 34; 35; 36].
Motivated by the above works, we have developed a rule-based deep learning classifier model for similar kinds of threat and anomaly detection in networks, including IoT environments. The proposed approach classifies traffic into normal, novel-attack and known-attack classes based on historical network traffic data, with high accuracy rates.
According to Figure 4, the proposed rule-based framework has two stages. In the first stage, a deep learning model is applied to learn attack patterns from the training dataset and to find the probability distribution over the attack classes of the training dataset. In the second stage, a rule-based model is used to classify the input sample into 3 major classes, "Normal", "Known attack" and "Novel attack", based on the probability distribution of the new input sample calculated in the first stage.
**Ruleset:**
1. Network traffic = "Normal traffic". 1. Rule: P(Normal) \(\geq 0.80\)
2. Network traffic = "Known Attack". 1. Rule: P(AC 1) \(\geq 0.80\) or P(AC 2) \(\geq 0.80\) or \(\ldots\) or P(AC N) \(\geq 0.80\) 2. Here N is the total number of attack categories in the training dataset, which is 5 for this dataset. 3. AC means attack category and P is the probability of the attack category calculated by the deep learning method.
3. Network traffic = "Novel Attack". 1. Rule: (P(Normal) \(<0.80\)) && (P(AC 1) \(<0.80\)) && \(\ldots\) && (P(AC N) \(<0.80\)) 2. In case of a novel attack, the network traffic will be sent to a cyber forensics team for further analysis of the traffic, and once the traffic is classified, the model will be trained with the new attack class.
**Procedure:**
**Input:** CICIDS2017 dataset of traffic data.
**Output:** Classification of traffic into three classes (normal, known attack and novel attack).
Begin
1. Pre-process the input dataset. Then, split the dataset into training and testing parts (70:30).
2. Apply the proposed rule-based deep learning model with the specified architecture to the training dataset.
3. Measure the performance of the trained model using different metrics.
4. Now, apply the trained deep learning model to the unknown testing data and find the classes of the different categories of attacks, reporting the accuracy, recall and FPR measures.
End
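A minimal sketch of the second-stage ruleset, applied on top of the first-stage softmax output, is given below. This is only an illustration of the rules above; the class ordering and the threshold constant are assumptions made explicit in the code, not part of a released implementation.

```python
import numpy as np

# Assumed class ordering of the softmax output (illustrative only).
CLASSES = ["Normal", "DoS", "PortScan", "Patator", "Web-based"]
THRESHOLD = 0.80  # probability threshold used by the ruleset

def rule_based_decision(probabilities):
    """Apply the ruleset to one softmax probability vector.

    Returns "Normal traffic", "Known attack: <category>", or "Novel attack".
    """
    p = np.asarray(probabilities, dtype=float)
    best = int(np.argmax(p))
    if p[best] >= THRESHOLD:
        if CLASSES[best] == "Normal":
            return "Normal traffic"
        return f"Known attack: {CLASSES[best]}"
    # No class reaches the threshold: flag for the cyber-forensics team.
    return "Novel attack"

# Example: a sample whose highest class probability stays below 0.80.
print(rule_based_decision([0.35, 0.30, 0.15, 0.10, 0.10]))  # -> "Novel attack"
```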
## 3 Dataset Description
Due to a paucity of trustworthy datasets, machine learning systems struggle to produce accurate and consistent performance ratings. Some available attack datasets are the KDD CUP 1999 dataset, DARPA 1998/1999, BSM98,
Figure 4: Proposed model architecture
BSM99, KDD99, NSL-KDD, ISCX2012 and UNSW-NB15. The majority of the readily accessible datasets are stale and unreliable. Some of them struggle with low traffic volume and diversity, and some of them also lack feature sets and metadata [15]. The CICIDS2017 dataset was released by the Canadian Institute for Cybersecurity (CIC) in 2017; it contains benign/normal traffic data and up-to-date common attacks which resemble true real-world data. As shown in Table 1, the CICIDS2017 dataset has the most common attack categories (DoS, DDoS, Patator/Brute force, Web-based, Heartbleed, Infiltration, Bot and PortScan), with a total of 14 attack types; DoS Hulk, DoS SlowHTTPTest, DoS GoldenEye, DoS Slowloris, DDoS LOIT, Botnet, FTP-Brute Force, SSH-Brute Force, Brute Force-Web, Web Attack - XSS, Web Attack - SQL Injection, Infiltration and Heartbleed attacks are covered in the dataset. All the data are fully labelled with 78 features (like destination port, packet length, flow duration, etc.) extracted from the network traffic [16,17].
The CICIDS2017 dataset is segmented into 3 subsets: a training and testing dataset for the deep learning model, a dataset for probability distribution calculation, and a sample dataset of novel attacks. The training and testing dataset for the deep learning model has data of four attack categories (DoS, Patator, Web-based and PortScan attacks) and Normal traffic [18,19], as shown in Figure 5.
The dataset for probability distribution calculation has data of the same four attack categories (DoS, Patator, Web-based and PortScan attacks) and Normal traffic. The dataset contains unique records [20], as shown in Figures 6 and 7.
The sample dataset of novel attacks has data of four attack categories (DDoS, Bot, Heartbleed and Infiltration attacks). Figure 8 shows the different categories of attacks [21].
## 4 Result Analysis
The proposed model is written in the Python programming language, and the Keras library with a TensorFlow backend is used for the deep learning model. All our evaluations are performed on a Windows machine with a quad-core 1.60 GHz processor and 8 GB of RAM. The CICIDS2017 dataset is used for both training and testing of the deep learning model. A brief description of the CICIDS2017 dataset is summarized in Table 1. Some records in the collection have NaN and infinite values. The dataset is cleaned up by removing any entries with NaN or infinite values. Some attributes derived from the network traffic have unusually wide ranges between their minimum and maximum values. To reduce the impact of these outliers, the feature values are linearly normalized to the range between 0 and 1 using a min-max scaling approach.
As shown in Table 2, the output variable of the dataset is a categorical variable, containing 4 different attack labels and 1 normal label. We applied the One-Hot Encoding method to the output variable to transform categorical values into vectors where only one element is non-zero, or hot.
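A minimal preprocessing sketch along these lines is shown below; the CSV file name and the "Label" column name are placeholders rather than the exact CICIDS2017 field names.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Illustrative only: the file name and "Label" column are placeholders.
df = pd.read_csv("cicids2017.csv")

# Remove records with NaN or infinite values.
df = df.replace([np.inf, -np.inf], np.nan).dropna()

X = df.drop(columns=["Label"]).astype("float32")   # the 78 flow features
y = pd.get_dummies(df["Label"]).astype("float32")  # one-hot output (5 classes)

# Min-max scale every feature into [0, 1] to limit the effect of outliers.
X = MinMaxScaler().fit_transform(X)

# 70:30 train/test split, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=df["Label"], random_state=42)
```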
Therefore, the total number of input dimensions is 78 and the output dimension is 5 (4 attack categories and 1 normal). The training dataset of the deep learning model is split into train and test sets in the ratio of 70:30. The training set is further divided into training and cross-validation sets in an 80:20 ratio. In addition, Figure 8 displays the distribution of attack data in the training dataset. An input layer, several hidden layers, and an output layer make up the deep neural network. The input layer has 78 neurons, matching the number of input features. The architecture contains seven hidden layers, each followed by an activation function. In the proposed architecture, we have used the ReLU activation function in the hidden layers. In comparison to sigmoid and tanh
Figure 8: Different categories of attacks classified through ruleset
Figure 7: Proposed Attack Model for Attack identification
functions, the convergence is quicker. This is so because one linear component of the ReLU function's derivative (slope) is fixed, while the other linear component's derivative is zero. As a result, the ReLU function speeds up the learning process significantly. The output layer, with a softmax activation, has 5 neurons, matching the number of output classes in the training dataset. In multi-class problems, softmax assigns a probability to each output class. It is helpful because it transforms the scores into a normalised probability distribution, spreading the probability mass across the output nodes so that the total is 1. The softmax output can be interpreted directly or used as input by other systems, which is why it is customary to add a softmax function as the neural network's final layer. Cross-entropy is employed as the cost function in the deep learning model, while Adam is used as the optimizer with a learning rate of 0.0001. Table 8 gives a detailed description of the deep learning architecture in tabular form.
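A minimal Keras sketch consistent with this description is given below; the number of units per hidden layer is not specified in the paper, so the value used here (64) is only an assumption for illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES = 78   # input flow features
NUM_CLASSES = 5     # 4 attack categories + normal
HIDDEN_UNITS = 64   # assumption: the per-layer width is not stated in the paper

# Input layer followed by seven ReLU hidden layers and a softmax output layer.
model = keras.Sequential()
model.add(keras.Input(shape=(NUM_FEATURES,)))
for _ in range(7):
    model.add(layers.Dense(HIDDEN_UNITS, activation="relu"))
model.add(layers.Dense(NUM_CLASSES, activation="softmax"))

# Categorical cross-entropy loss with the Adam optimizer (learning rate 0.0001).
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# 30 training epochs with an 80:20 train/validation split of the training data.
# model.fit(X_train, y_train, epochs=30, validation_split=0.20, batch_size=256)
```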
Figure 11: Classification Report
Figure 10: Loss change rate
Figure 9: Accuracy vs. Epochs
As shown in Table 4, the required maximum and minimum probability values for classifying network traffic as a "PortScan attack" are 1.00 and 0.892. Recall, false negative rate, and false positive rate (FPR) are three crucial metrics to evaluate an attack detection system's effectiveness, shown in Tables 5 and 6. For each novel attack, the suggested model's accuracy is measured in terms of recall, false-negative rate, and false-positive rate, shown in Figure 11. Recall (R) measures how well the model can identify attacks. It is shown in equation (1).
\[\text{R}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{1}\]
When an activity is labelled as an attack by the IDS but is actually just permissible behaviour, this is known as a false positive state (shown in equation (2)). A false alarm is a false positive.
\[\text{FPR}=\frac{\text{FP}}{\text{TP}+\text{FP}} \tag{2}\]
The most serious and hazardous state is a false negative. When an activity is actually an assault, the IDS may mistakenly classify it as permissible behaviour. In other words, a false negative occurs when the IDS misses an attack.
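For reference, a small helper that evaluates these two metrics exactly as defined in equations (1) and (2) might look as follows; the function name and label convention are illustrative (binary attack-vs-normal view).

```python
import numpy as np

def recall_and_fpr(y_true, y_pred, attack_label=1):
    """Compute recall and FPR per equations (1) and (2) above,
    treating `attack_label` as the positive (attack) class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == attack_label) & (y_true == attack_label))
    fn = np.sum((y_pred != attack_label) & (y_true == attack_label))
    fp = np.sum((y_pred == attack_label) & (y_true != attack_label))
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (tp + fp) if (tp + fp) else 0.0  # as defined in equation (2)
    return recall, fpr
```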
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Max & 0 & 0 & 0.99 & 0 & 0.01 \\ \hline Min & 0.01 & 0 & 0.96 & 0 & 0.03 \\ \hline \end{tabular}
\end{table}
Table 6: Probability values for Patator attack
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Max & 1 & 0 & 0 & 0 & 0 \\ \hline Min & 0.93 & 0 & 0.06 & 0 & 0.01 \\ \hline \end{tabular}
\end{table}
Table 4: Probability values for DoS attack
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Max & 0 & 1 & 0 & 0 & 0 \\ \hline Min & 0.07 & 0.86 & 0.06 & 0.00 & 0.01 \\ \hline \end{tabular}
\end{table}
Table 5: Probability values for PortScan attack
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Max & 0 & 0.08 & 0.01 & 0.91 & 0 \\ \hline Min & 0.01 & 0 & 0.96 & 0 & 0.03 \\ \hline \end{tabular}
\end{table}
Table 7: Recall & FPR
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Max & 0 & 1 & 0 & 0 & 0 \\ \hline Min & 0.07 & 0.86 & 0.06 & 0.00 & 0.01 \\ \hline \end{tabular}
\end{table}
Table 2: Probability value range for different attack class
The above results show a low false positive (FP) rate of less than 0.025 and a high true positive (TP) rate of more than 98.5%. The suggested rule-based architecture maintains a good trade-off between attack detection and false positive rates, as illustrated by the performance evaluation results in Table 7. The results show that deep learning models can be used to detect existing as well as new network attacks. In comparison to existing traditional machine learning techniques, our model improves network attack detection accuracy while lowering the false positive and false negative rates. The system's overall accuracy is 99.5%, while the recall rate for the four categories is 99.9%. The rule-based model performs well in classifying the Infiltration and Heartbleed novel attacks. A higher attack detection rate can be achieved by training the model with a dataset that has a larger number of diversified attack classes and more data.
In Table 9, we show a comparison with some existing approaches on our experimental dataset (CICIDS2017). In this comparison, it is clearly seen that our proposed approach provides a stronger security mechanism due to the inclusion of the rule-based approach and, due to the impact of deep learning algorithms, it provides much better accuracy and speedup compared to the existing ML approaches.
The proposed approach has several real-life applications. Applications of this kind of anomaly detection include finding possible risks or medical issues in health data, detecting faults in manufacturing, detecting intrusions into computer networks, monitoring sensor readings in aeroplanes, and predictive maintenance. Monitoring any data source, including user logs, devices, networks, and servers, is possible thanks to anomaly
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Layers** & **No. of neurons/layers** & **Activation function** & **Learning rate** & **Optimizer** & **No. of epochs** \\ \hline Input Layer & 1 input layer containing 78 neurons & - & - & - & - \\ \hline Hidden Layers & 7 & ReLU & 0.0001 & Cross-entropy and Adam & 30 \\ \hline Output Layer & 1 (five dimensions) & Softmax & - & - & - \\ \hline \end{tabular}
\end{table}
Table 8: Proposed Deep Learning Model Architecture
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Approaches** & **Accuracy** & **Parameters** \\ \hline Transfer Auto-encoder Neural Network [30] & 99.98\% & Transfer learning takes 47.31\% and 58.27\% less time compared to traditional deep learning \\ \hline ML based Techniques [1, 2, 13, 24, 28, 29] & \(\sim\) 90\% (avg) & More time required to handle huge network traffic \\ \hline SMO inspired DL [29] & 99.02\% & Time complexity is high due to SMO \\ \hline Proposed Method & **99.5\%** & Overall time is less compared to ML-based techniques; the rule-based approach makes the security clauses stronger \\ \hline \end{tabular}
\end{table}
Table 9: Comparison in terms of accuracy and other properties on CICIDS2017 dataset
detection. It can quickly recognise unknown security threats as well as zero-day attacks and discover anomalous activities across data sources that are missed by conventional security techniques.
## 5 Conclusions
The detection and classification of various network attacks play a vital role in efficient and effective network traffic communication. This work focuses on the development of a deep learning and rule-based network security model that finds network traffic patterns to detect and categorize different kinds of network attacks. The proposed model is evaluated and cross-validated for accuracy in terms of the percentage of false positives and false negatives. Our proposed model achieves a very attractive attack detection rate on the benchmark experimental dataset. The proposed system has a 99.5% average accuracy rate and a 99.9% recall rate for the 4 categories of attacks. A low FP rate (\(<0.025\)) and a high TP rate (\(>98.5\%\)) are also displayed in the results. The CICIDS2017 dataset alone does not offer enough reference points for the suggested model to develop a comprehensive picture in general. While detecting anomalies, this model may exhibit varied behaviour on real-time traffic data.
In the future, we plan to test the proposed approach on a set of different classes of network traffic and different types of novel attacks and to evaluate in detail the attack prediction capability of the model. For anomaly detection on real-time traffic, we plan to convert the raw traffic files into feature vectors through the IPFIX framework, as it can configure unique properties with ease. This would make an intriguing foundation for creating a more comprehensive feature list in the future. Besides that, RNN/LSTM models can be applied to handle anomalies in time-series IoT traffic data.
**Conflicts of Interest**: The authors declare no conflicts of interest.
**Funding Statement**: All authors declare that there is no funding available for this article.
|
2305.07796 | aedFaCT: Scientific Fact-Checking Made Easier via Semi-Automatic
Discovery of Relevant Expert Opinions | In this highly digitised world, fake news is a challenging problem that can
cause serious harm to society. Considering how fast fake news can spread,
automated methods, tools and services for assisting users to do fact-checking
(i.e., fake news detection) become necessary and helpful, for both
professionals, such as journalists and researchers, and the general public such
as news readers. Experts, especially researchers, play an essential role in
informing people about truth and facts, which makes them a good proxy for
non-experts to detect fake news by checking relevant expert opinions and
comments. Therefore, in this paper, we present aedFaCT, a web browser extension
that can help professionals and news readers perform fact-checking via the
automatic discovery of expert opinions relevant to the news of concern via
shared keywords. Our initial evaluation with three independent testers (who did
not participate in the development of the extension) indicated that aedFaCT can
provide a faster experience to its users compared with traditional
fact-checking practices based on manual online searches, without degrading the
quality of retrieved evidence for fact-checking. The source code of aedFaCT is
publicly available at https://github.com/altuncu/aedFaCT. | Enes Altuncu, Jason R. C. Nurse, Meryem Bagriacik, Sophie Kaleba, Haiyue Yuan, Lisa Bonheme, Shujun Li | 2023-05-12T23:09:26Z | http://arxiv.org/abs/2305.07796v1 | # **aedFaCT: Scientific Fact-Checking Made Easier**
###### Abstract
In this highly digitised world, fake news is a challenging problem that can cause serious harm to society. Considering how fast fake news can spread, automated methods, tools and services for assisting users to do fact-checking (i.e., fake news detection) become necessary and helpful, for both professionals, such as journalists and researchers, and the general public such as news readers. Experts, especially researchers, play an essential role in informing people about truth and facts, which makes them a good proxy for non-experts to detect fake news by checking relevant expert opinions and comments. Therefore, in this paper, we present _aedFaCT_, a web browser extension that can help professionals and news readers perform fact-checking via the automatic discovery of expert opinions relevant to the news of concern via shared keywords. Our initial evaluation with three independent testers (who did not participate in the development of the extension) indicated that aedFaCT can provide a faster experience to its users compared with traditional fact-checking practices based on manual online searches, without degrading the quality of retrieved evidence for fact-checking. The source code of aedFaCT is publicly available at [https://github.com/altuncu/aedFaCT](https://github.com/altuncu/aedFaCT).
\({}^{1}\)Institute of Cyber Security for Society (ICSS) & School of Computing, University of Kent
\({}^{2}\)School of Computing, University of Kent
## 1 Introduction
The digital age has evolved into an infodemic age, with the rapid propagation of false and misleading information in this highly digitised world, mixed with true and reliable information. As part of this, fake news prevents society from obtaining accurate information based on real evidence. The COVID-19 pandemic has demonstrated how fake news can seriously cause harm to people (BBC News 2020).
Considering the amount of information available online and how fast information can be widely disseminated with the help of digital communication technologies, detecting fake news at scale is an important task. Therefore, many researchers have studied automated fact-checking methods (Zhou and Zafarani 2020). However, automated fact-checking solutions are yet to be sufficient to adapt to various contexts, languages, and modalities. In addition, they insufficiently consider human factors, such as trust and usability, which is crucial for practical use (Das et al. 2023). These paved the way for semi-automated solutions that attempt to combine human and machine intelligence. With this respect, the literature involves many fact-checking systems leveraging human-machine teaming in different ways (Guo et al. 2020; Das et al. 2023). Besides, there exists a wide range of tools and services that can assist professionals and common readers with fake news detection (Nakov et al. 2021).
Experts play a crucial role in the fight against fake scientific news by enlightening society with truths and facts through various communication channels, especially the news media. In the complicated landscape of the infodemic age, people are likely to seek help from experts they trust, such as scientists and professionals, since they are often considered among the most highly trusted groups in society (Ipsos MORI 2022). This makes them a proxy for non-experts to fact-check suspicious scientific claims. Other than giving expert comments and being interviewed, experts support journalists in reporting online false information to bridge gaps in their contextual understanding and methodological expertise (McClure Haughey, Povolo, and Starbird 2022). Besides, experts are a crucial element of the fact-checking process conducted by human fact-checkers (Graves 2017). With this respect, there has been much effort to engage with experts for scientific fact-checking. To exemplify, Science Media Centre1 aims to build bridges between experts and journalists so that scientific information covered in the media becomes accurate and evidence-based. Another example is Meedan's Digital Health Lab2, which is composed of scientists, content moderation experts, and journalists to support evidence-based responses to health misinformation.
Footnote 1: [https://www.sciencemediacentre.org/](https://www.sciencemediacentre.org/)
Footnote 2: [https://meedan.com/programs/digital-health-lab](https://meedan.com/programs/digital-health-lab)
However, this expert-journalist collaboration could be insufficient to combat fake scientific news due to several reasons, including challenges in scientific communication (Bucchi 2017), the existence of outlier experts who do not share the majority opinions, and the selection of experts with incompatible expertise (Palmer 2020). These problems indicate the need and potential usefulness of tools that can leverage multiple experts' opinions as evidence for fact-checking purposes. Therefore, in this work, we present _aedFaCT_, a web browser extension that can help professionals
and common readers to discover the opinions of multiple experts on relevant topics of a particular scientific news article in a semi-automated manner. aedFaCT extracts expert opinions from several credible news sources based on a number of candidate keywords automatically extracted from the target news article, and it also automatically retrieves relevant peer-reviewed scientific publications based on such keywords. Based on the results, users can make a decision on the veracity of suspicious claims on their own by considering the retrieved evidence. Moreover, aedFaCT enables users to see a list of researchers with relevant expertise based on their publications in order to inform them about who to follow and to approach on a specific topic. In a nutshell, aedFaCT is a "smart search assistant" for fact-checkers to help minimise the manual work they have to do using online search engines and other known information sources.
The rest of the paper is organised as follows. Section 2 briefly reviews related work. Then, Section 3 presents a focus group study to understand the mental process of users during scientific fact-checking. The architecture of the proposed system is introduced in Section 4, and the details of its evaluation are provided in Section 5. Finally, the paper concludes with a brief discussion in Section 6, research ethics considerations in Section 7, and the concluding remarks in Section 8.
## 2 Related Work
### Human-Machine Teaming Approaches in Fact-Checking
Automated fact-checking at scale is a challenging task. Hence, recent research includes hybrid solutions based on human-machine teaming to assist fact-checkers and the general public with a level of automation in the process of fact-checking. For example, Nguyen et al. (2018) designed a mixed-initiative approach to fact-checking where the system predicts the veracity of a claim based on relevant articles with their stance towards the veracity of the claim and the reputation of each source. The users' role in this design is to change the source reputation and stance of each article for more accurate prediction. More recently, Gupta et al. (2021) introduced an evidence retrieval approach to search for semantically-similar news articles to assist users when validating news articles. This system leaves the fact-checking decision to the user. Moreover, La Barbera, Roitero, and Mizzaro (2022) proposed a hybrid human-in-the-loop framework for the veracity assessment of claims, relying on three major components: AI, crowdsourcing, and experts. The veracity of the claim is considered correctly classified if any component produces a prediction with a high confidence score. Otherwise, the claim is forwarded to the next component. Another human-in-the-loop AI system is HAMLET, a conceptual framework leveraging AI-expert teaming in multiple fact-checking tasks, such as the collection of expert data annotations and expert feedback, AI system performance monitoring, and life cycle management (Bandhakavi, Hoffmann, and Lear, 2022). Finally, Arroyo Guardeno et al. (2021) introduced a toolbox, namely Ms.W, combining several publicly available services and tools that help users with fact-checking and source credibility assessment.
As another way of human-machine teaming, several fact-checking systems utilise crowd intelligence in different stages. For instance, Vo and Lee (2018) leveraged guardians, who are social media users correcting false information by referring to fact-checking URLs, and presented a fact-checking URL recommendation model to motivate them to engage more in fact-checking activities. Furthermore, social media companies enable users to flag posts containing false information and send them to fact-checkers for further investigation if there are sufficient flags. Recently, Twitter launched Community Notes3 (previously known as Birdwatch), where users can add context to tweets to help keep false information off the platform.
Footnote 3: [https://help.twitter.com/en/using-twitter/community-notes](https://help.twitter.com/en/using-twitter/community-notes)
Footnote 4: [https://www.thefactual.com/](https://www.thefactual.com/)
### Web Browser Extensions for Fact-Checking
Web browser extensions are quite useful for fact-checking, especially for web-based documents and articles. For example, _BRENDA_ allows users to automatically fact-check a news article or a snippet from it (Botnevik, Sakariassen, and Setty, 2020). It identifies the checkworthy claims, classifies them with a deep neural network, and then shows the results to the user along with the evidence found from top-10 Google Search results. Another automated solution is _FADE_, which discovers multiple sources containing the same news story and performs automated fact-checking according to the trustworthiness of the news sources and the cited sources in the article (Jabiyev et al., 2021). Other than the solutions developed in academia, The Factual5 automatically rates news articles based on several characteristics, including their source quality and bias, author expertise, and tone.
There also exist Web browser extensions helping users with content analysis and evidence retrieval for fact-checking. One such tool is _InVID_, which helps users verify videos and images with a number of tools it contains (Teyssou, 2019). As another example, _News2PubMed_ retrieves relevant health research papers given a news article (Wang and Yu, 2021). Another tool is called _News Scan_, which shows several characteristics of the source and content of news articles, such as source popularity, sentiment, objectivity, and bias, to assist users to make a judgement on the source and content credibility (Kevin et al., 2018). Finally, NewsGuard6 shows manually assigned source credibility ratings next to links on search engines and social media platforms.
Footnote 5: [https://www.newguardtech.com/](https://www.newguardtech.com/)
## 3 Mental Process of Users During Fact-Checking
In this study, our aim is to develop a semi-automated fact-checking system for both professionals and common readers, which automates, at least, part of the users' claim investigation process. To this end, we need to understand how users manually perform fact-checking and what strategies
they normally use to investigate a claim. From a general perspective, content is the most important factor for users during fact-checking [20]. Users mainly rely on their own knowledge and sense of judgement to make a decision, and they perform external acts of authentication (e.g., searching for more information via Google, family and friends, and experts) only if the first phase fails [19, 18]. When users seek external information, they commonly prefer information that they consider credible, such as peer-reviewed scientific papers, fact-checking reports, mainstream news articles, and Wikipedia entries [17].
Since the current literature lacks a systematic discussion of the different processes that fact-checkers and common readers follow to verify scientific information, we conducted a focus group discussion between the first author and three other co-authors (the third, fourth and sixth) of this paper, who were all PhD students in Computer Science focusing on a relevant research topic (AI, NLP, and/or cyber security), to understand how users verify the veracity of news content. At the time of the discussion, only the first author knew about the details of the study as the initialiser of the work. During the discussion, an example news article containing a false claim about COVID-19 was provided to the participants, and the investigation of the claim was performed by discussing each step of the fact-checking process. The discussion was conducted with three fact-checking scenarios, separately: (1) the participants (as researchers) performed fact-checking themselves; (2) the participants simulated how common readers with less domain knowledge would perform fact-checking without using expert opinions as a proxy; and (3) the participants simulated how common readers would perform fact-checking by using expert opinions as a proxy. For all the scenarios, the discussion was held with the same participants instead of separate groups of researcher and common reader participants, for the sake of simplicity and to allow cross-scenario alignment. Using researchers as common readers is not necessarily a problematic setup, since researchers are effectively like common readers for research areas beyond their own expertise (e.g., health and medicine for all the authors of this paper).
In the first scenario, the participants suggested identifying some keywords about the investigated claim and using them to search for relevant research papers on Google Scholar. Then, they suggested reading the abstracts of the first few publications to make a decision, provided that they trust the publisher. In the second scenario, however, they preferred to use Google Search to search for relevant material with the same set of keywords, assuming that common readers would have been unfamiliar with scientific papers and research databases. Then, they wanted to check out the search results that are trustable for them, e.g., a news article from a news outlet they trusted, or a post from a university's official website advertising their research. Finally, in the third scenario, the participants suggested identifying multiple relevant domain experts through the websites of the corresponding institution or departments of well-known universities. Moreover, they found relevant news articles useful to identify some domain experts by checking who has been interviewed in the article.
The focus group discussion provided three major conclusions on users' scientific fact-checking process, supporting the findings of existing literature on the general fact-checking practices of fact-checkers and laypeople [14, 15, 16]: (1) domain experts were generally at the core of the fact-checking process, either explicitly, or implicitly through their publications; (2) only the sources they trusted were considered; and (3) multiple sources were taken into account for cross-checking what has been obtained.
## 4 System Design
### Overview
The overall architecture of aedFaCT is shown in Figure 1. The system involves three main parts: (i) keyword extraction and selection; (ii) expert opinion discovery; (iii) scientific evidence retrieval.
### Keyword Extraction and Selection
As the first step, the system needs to learn the context of the given news article by extracting a number of descriptive keywords. We designed this process as a human-in-the-loop mechanism to avoid topic drift while using the obtained keywords in searching. The system first fetches and parses the news content using the Newspaper3k6 library. Then, it performs automatic keyword extraction (AKE) with a state-of-the-art AKE algorithm, SIFRank+ [23], to obtain the initial set of keywords. Based on the findings of our previous study [15], we used our own version of SIFRank+, enhanced with post-processing. More precisely, the enhancement involves PoS-tagging-based filtering, and prioritising keywords contained in the corresponding domain thesaurus or Wikipedia as an entry. This ensures that only noun phrases are considered keywords, and contextual keywords are given priority. As AKE methods are incapable of providing sufficient accuracy [2], we ask users to select the keywords relevant to the article out of ten identified keywords through the pop-up window shown in the Web browser, as depicted in Figure 2. Users are also allowed to add and select their own keywords through the user interface.
Footnote 6: [https://newspaper.readthedocs.io/en/latest/](https://newspaper.readthedocs.io/en/latest/)
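For illustration, a minimal Python sketch of this step is given below. The Newspaper3k parsing calls are standard, whereas `candidate_keywords` is only a placeholder for SIFRank+ (whose interface is not reproduced here), and the noun-phrase check merely approximates the PoS-tagging-based post-processing described above.

```python
# Sketch of the keyword extraction step (placeholder ranker instead of SIFRank+).
import spacy
from newspaper import Article

nlp = spacy.load("en_core_web_sm")

def candidate_keywords(text, top_k=10):
    """Placeholder for SIFRank+: naively rank noun chunks by length."""
    doc = nlp(text)
    chunks = {chunk.text.lower().strip() for chunk in doc.noun_chunks}
    return sorted(chunks, key=len, reverse=True)[:top_k]

def extract_keywords(url, top_k=10):
    # Fetch and parse the news article with Newspaper3k.
    article = Article(url)
    article.download()
    article.parse()
    # Keep only candidates made of nouns/proper nouns/adjectives, a rough
    # stand-in for the PoS-tagging-based filtering used by the tool.
    keywords = []
    for kw in candidate_keywords(article.text, top_k * 2):
        if all(tok.pos_ in {"NOUN", "PROPN", "ADJ"} for tok in nlp(kw)):
            keywords.append(kw)
    return keywords[:top_k]
```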
### Expert Opinion Discovery
This step aims to explore the scientific views or comments of domain experts in the news media on the identified topic. The system combines the keywords selected by the user in the previous step with the _AND_ operator to generate a search query to search for relevant news items. Although a more useful query can be formed with a combination of different logical operators, we simply used the _AND_ operator for the sake of simplicity. The searches are done via Google's search APIs by considering the following types of news sources:
1. _Mainstream News Outlets:_ This includes credible news outlets with high traffic and wide news coverage. We set up a Google site-restricted search engine, which allows 10 websites for inclusion, and covered 10 English-language news outlets with high credibility and no paywall. For the source credibility measure, we considered the Media Bias/Fact Check (MBFC) credibility ratings7 since it has been utilised by several recent studies [13, 14, 15]. The included news outlets are shown in Table 1. Footnote 7: [https://mediabiasfactcheck.com/](https://mediabiasfactcheck.com/)
2. _Scientific News Outlets:_ This type involves credible proscience news websites, featuring scientific views and recent research findings. For this part, we set up another Google site-restricted search engine with 10 selected news websites. The selection was made according to the MBFC credibility ratings with the help of bias, credibility, and traffic filters, and the websites with pro-science bias, wide news coverage, higher traffic, and no paywall were preferred. Table 1 indicates the list of selected websites of this type.
3. _Other Credible News Sources:_ In addition to the previous types, there are other types of news sources that might include expert opinions, such as news released by institutions and domain-specific news websites (e.g., Medscape, News Medical). To cover these, we set up a Google custom search engine without any site restriction to augment the search results containing the other two types of news sources. Since Google can also show results from non-news websites, we limited the search results with the _NewsArticle8_ Schema.org type to include only news articles. Furthermore, we utilised the Iffy Index of Unreliable Sources9, which is based on MBFC, to exclude untrustworthy news sources from the search results. Footnote 8: [https://schema.org/NewsArticle](https://schema.org/NewsArticle)
Footnote 9: [https://fify.news/index/](https://fify.news/index/)
The search results obtained from the three search engines are aggregated with the given order. Although it is possible to merge the three search engines into a single custom search engine, we preferred to use site-restricted engines for the first two types of news sources since we observed that site-restricted search engines provide more reliable results, and custom search engines configured to search the entire Web are limited to a subset of the Google Web Search corpus10. Hence, we benefited from a custom search engine as a secondary source to populate the obtained results from the site-restricted search engines.
Footnote 10: [https://support.google.com/programmable-search/answer/70392](https://support.google.com/programmable-search/answer/70392)
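A simplified sketch of the query construction and result aggregation is shown below; the API key and the three search-engine identifiers are placeholders rather than the values configured for aedFaCT.

```python
# Sketch of the expert-opinion search: keywords joined with AND and sent to the
# Google Custom Search JSON API for each of the three engines described above.
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"           # placeholder
ENGINE_IDS = {
    "mainstream": "CX_MAINSTREAM",         # site-restricted engine (Table 1, left column)
    "science": "CX_SCIENCE",               # site-restricted engine (Table 1, right column)
    "other": "CX_OTHER_CREDIBLE",          # unrestricted custom engine
}

def search_news(keywords, source_type, num=10):
    query = " AND ".join(keywords)
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_IDS[source_type], "q": query, "num": num},
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

def aggregate_results(keywords):
    # Aggregate in the order used by the tool: mainstream, science, then other.
    results = []
    for source_type in ("mainstream", "science", "other"):
        results.extend(search_news(keywords, source_type))
    return results
```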
Once the aggregated set of search results is obtained, the system tries to capture expert opinions from each article, which are mostly in the form of reported speeches, since they contain the most indicative elements (e.g., reported speeches, named entities, and quotes) of page usefulness for fact-checking [1]. In this manner, it first downloads the news article with the Newspaper3k library. Then, the article is tokenised with two consecutive newline characters to obtain its paragraphs. Finally, for each paragraph, named entities are extracted with the spaCy library's NER feature. Only the paragraphs which contain at least one person name, one academic organisation name (containing an indicative word or phrase, such as _university_, _institute_, _academy_, and _research centre_) and a pair of single or double quotation marks (indicating a reported speech) are selected. As an exception, the summary extracted by the Newspaper3k library is shown to users for the _The Conversation_ news articles instead of retrieved expert opinions as they are already written by researchers and academics. As shown in Figure 3, the selected paragraphs are combined and shown to users in an individual box that also contains the source type (icon on the top-left), source name, and publish date. If the shown expert opinions are insufficient for a judgement and require further reading, users can click on the box to see the full article. Furthermore, a green clickable
Figure 1: The architecture of aedFaCT
tick directing to the corresponding MBFC credibility rating webpage is added next to the names of the mainstream and science news sources for better explainability.
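The paragraph-level filter can be approximated as in the following sketch, which relies on spaCy NER and a simple quotation-mark check; the exact heuristics used in aedFaCT may differ.

```python
# Sketch of the expert-opinion filter: keep paragraphs containing a person name,
# an academic organisation name, and a reported speech.
import spacy

nlp = spacy.load("en_core_web_sm")
ACADEMIC_HINTS = ("university", "institute", "academy", "research centre", "research center")

def is_expert_opinion(paragraph):
    doc = nlp(paragraph)
    has_person = any(ent.label_ == "PERSON" for ent in doc.ents)
    has_academic_org = any(
        ent.label_ == "ORG" and any(hint in ent.text.lower() for hint in ACADEMIC_HINTS)
        for ent in doc.ents
    )
    has_quote = paragraph.count('"') >= 2 or ("\u201c" in paragraph and "\u201d" in paragraph)
    return has_person and has_academic_org and has_quote

def extract_expert_opinions(article_text):
    # Paragraphs are obtained by splitting on two consecutive newline characters.
    paragraphs = [p.strip() for p in article_text.split("\n\n") if p.strip()]
    return [p for p in paragraphs if is_expert_opinion(p)]
```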
### Scientific Evidence Retrieval
Scientific publications can also be considered a source of expert opinions as they are written by domain experts. Therefore, this step aims to retrieve research papers relevant to the topic of the input article.
Similar to the previous step, we try to include only the records with high credibility. With this respect, we utilised Scopus API (with the help of the Pybliometrics library [14]) to search for relevant peer-reviewed publications. The searches are made by combining the selected keywords with an _AND_ operator, similar to the previous step. In addition, each keyword is surrounded by double quotation marks since it enables the inclusion of loose matches by allowing for wildcards and lemmatisation [1]. As shown in the upper side of Figure 4, the obtained search results are shown to the user inside individual boxes containing the title, source, publication year, and abstract, with an order of relevance and publication year.
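A sketch of this query construction is given below; a Scopus API key configured for pybliometrics is assumed, and `getattr` is used because the exact fields available in the returned result tuples depend on the API view.

```python
# Sketch of the scientific-evidence retrieval: quoted keywords joined with AND,
# then searched on Scopus through pybliometrics.
from pybliometrics.scopus import ScopusSearch

def scopus_query(keywords):
    return " AND ".join(f'"{kw}"' for kw in keywords)

def retrieve_publications(keywords, limit=10):
    search = ScopusSearch(scopus_query(keywords))
    results = search.results or []
    return [
        {
            "title": getattr(r, "title", None),
            "source": getattr(r, "publicationName", None),
            "date": getattr(r, "coverDate", None),
            "doi": getattr(r, "doi", None),
        }
        for r in results[:limit]
    ]
```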
In addition to the scientific evidence provided by the tool, users, especially fact-checkers and journalists, might want to know the experts on the topic themselves to follow their research and/or make contact with them. To enable this, our proposed tool profiles the co-authors of the publications retrieved in the previous step, by obtaining relevant information, e.g., profile links, from their Scopus and ORCID profiles. The obtained researcher profiles are ordered by their number of publications in the search result. In the case that this number is equal, they are ranked based on the amount
\begin{table}
\begin{tabular}{r l} \hline \hline
**Mainstream News Outlets** & **Scientific News Outlets** \\ \hline NPR (www.npr.org) & Science (www.science.org) \\ NBC News (www.nbcnews.com) & EurekAlert (www.eurekalert.org) \\ Sky News (news.sky.com) & The Scientist (www.the-scientist.com) \\ ABC News (www.abcnews.go.com) & Science News (www.sciencenews.org) \\ Euronews (www.euronews.com) & MIT Technology Review (www.technologyreview.com) \\ Reuters (www.reuters.com) & Popular Science (www.popsci.com) \\ BBC News (www.bbc.com) & Science Daily (www.sicencedaily.com) \\ PBS NewsHour (www.pbs.com/newshour) & Science Alert (www.sciencealert.com) \\ Associated Press (www.apnews.com) & Live Science (www.livescience.com) \\ CBS News (www.cbsnews.com) & The Conversation (www.theconversation.com) \\ \hline \hline \end{tabular}
\end{table}
Table 1: News outlets covered by the site-restricted search engines
Figure 2: The user interface of the keyword extraction step in aedFaCT
of information their profile contains to prioritise more contactable researchers. The bottom side of Figure 4 shows an example output from the user interface showing a list of researchers.
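The ranking logic can be sketched as follows; the publication/profile data structure is assumed purely for illustration and does not mirror aedFaCT's internal representation.

```python
# Sketch of the researcher ranking: order authors by how many of the retrieved
# publications they co-authored, breaking ties by profile completeness.
from collections import defaultdict

def rank_researchers(publications):
    # `publications` is assumed to be a list of dicts such as
    # {"authors": [...], "profiles": {author_name: {"orcid": ..., "affiliation": ...}}}.
    counts = defaultdict(int)
    profiles = {}
    for pub in publications:
        for author in pub["authors"]:
            counts[author] += 1
            profiles.setdefault(author, pub.get("profiles", {}).get(author, {}))

    def completeness(author):
        # Number of non-empty profile fields (proxy for contactability).
        return sum(1 for value in profiles.get(author, {}).values() if value)

    return sorted(counts, key=lambda a: (counts[a], completeness(a)), reverse=True)
```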
## 5 Evaluation
To check the functionality and validity of the proposed Web browser extension, we conducted an initial evaluation as a pilot study with three co-authors (the third, fourth, and fifth) of this paper (one male and two female researchers), who were not included in the design and implementation phases of the tool. They were provided with 20 health news articles released by multiple sources with different credibility levels. The health domain was selected to better simulate common readers since it was outside the participants' areas of expertise. Then, the participants were asked to investigate the veracity of each news article and provide ratings for the shown output, in two rounds: 1) manually by following the investigation practices in their daily lives, such as using a Web search engine, and/or using a research database; 2) by using our proposed tool, aedFaCT.
For collecting the ratings from the participants, we set up a survey on Google Forms, containing a rating scale for each processed news article in both rounds together with a figure explaining each option in the scale. In addition, the survey included two questions to assess the perceived success of aedFaCT in terms of which approach had been faster and more helpful (with the options _manual investigation_, _investigation with aedFaCT_, and _no difference_). Finally, it concluded with an open-ended question for comments and feedback.
In terms of the evaluation criteria in the rating scale, we followed Google's search quality guidelines [14], which was proposed for evaluating Google search engine results with human raters. Although there are criticisms regarding the inadequacy of such retrieval effectiveness tests [10], similar approaches are still being used in the literature [12]. The guidelines involve mainly two tasks: determining to what extent the page achieves its purpose ("_Page Quality_") and determining if search results are useful ("_Needs Met_"). Because our tool only benefits from credible news outlets and peer-reviewed publications, the former task is redundant in our case. Therefore, we only covered the latter task in our evaluation.
The "Needs Met" task involves two steps, which are about determining the user intent and the rating. Since all users of our tool will have the same intent, i.e., veracity assessment, the first step is redundant. Therefore, we only asked our evaluators to determine the rating of the search results by following the scale shown in Table 2.
As a result of the evaluation, the average rating of the three raters when they manually investigated the given news articles was 4.35. This average rose to 4.57 when they utilised aedFaCT in their investigations. In addition, the raters were in moderate agreement that aedFaCT provided better or similar results with respect to what they were able to obtain with their manual investigations, with a Fleiss' Kappa of 53.33%. However, the raters all agreed that fact-checking with aedFaCT was faster than their own practices. These results indicate that aedFaCT can help users perform fact-checking faster without degrading the quality of retrieved evidence for fact-checking. However, more extensive experiments are needed to evaluate its performance.
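For reference, inter-rater agreement of this kind can be computed with Fleiss' kappa as in the sketch below; the ratings matrix in the example is purely illustrative and is not the study's data.

```python
# Sketch of Fleiss' kappa: rows are rated items, columns are categories,
# entries are how many raters assigned that category to that item.
import numpy as np

def fleiss_kappa(counts):
    counts = np.asarray(counts, dtype=float)
    n_items = counts.shape[0]
    n_raters = counts[0].sum()                        # raters per item (3 in this study)
    p_j = counts.sum(axis=0) / (n_items * n_raters)   # overall category proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Illustrative example: 5 articles, 3 raters, categories
# {tool better, no difference, tool worse}.
example = [[3, 0, 0], [2, 1, 0], [3, 0, 0], [1, 2, 0], [2, 0, 1]]
print(round(fleiss_kappa(example), 4))
```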
## 6 Further Discussions
### Comparing aedFaCT with Existing Tools
aedFaCT differs from existing fact-checking systems in several ways. To begin with, to the best of our knowledge, it is the first fact-checking system completely based on expert opinion discovery although there exist studies leveraging experts in fact-checking [11, 12, 13]. Secondly, it is an evidence retrieval tool, and the fi
Figure 3: An example output from aedFaCT showing some of the retrieved news articles.
nal decision on the veracity is given by the user. Thus, it can establish trust among users more easily than many fact-checking tools with a black-box design and a fully automated decision-making mechanism, given the scepticism towards automation [14]. Another strength of aedFaCT is that it targets both common readers and professionals by retrieving both news articles and scientific publications. This enables users to consider information sources
\begin{table}
\begin{tabular}{r l} \hline \hline
**Rating** & **Description** \\ \hline Fully Meets (_FullyM_) & All or almost all users would be immediately and fully satisfied by the result and would not need to view other results to satisfy their need. \\ Highly Meets (_HM_) & Very helpful for many or most users. Some users may wish to see additional results. \\ Moderately Meets (_MM_) & Helpful for many users OR very helpful for some users. Some or many users may wish to see additional results. \\ Slightly Meets (_SM_) & Helpful for fewer users. There is a connection between the query and the result, but not a strong or satisfying connection. Many or most users would wish to see additional results. \\ Fails to Meet (_FailsM_) & Completely fails to meet the needs of the users. All or almost all users would wish to see additional results. \\ Not Applicable (_N/A_) & The evaluator was unable to evaluate the result. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rating scale for the Needs Met task [11]
Figure 4: An example output from aedFaCT showing some of the retrieved scientific publications and their co-authors, respectively
that they are more familiar with, depending on their level of expertise. In addition, it provides users with evidence from multiple sources and experts, which is a beneficial approach to breaking users out of their echo chambers. Finally, its overall workflow aligns with the common practices of human fact-checkers, in which engaging with experts is a key element, meaning that fact-checkers can use the tool to accelerate their claim investigation processes [16, 15]. To be more precise, an overview of different characteristics of aedFaCT and other existing tools is provided in Table 3. As a result, we believe that aedFaCT can become a useful new tool for fighting against false information and has the potential to be a part of standard fact-checking processes performed by both human fact-checkers and common readers.
### Limitations and Future Work
The existing version of the proposed tool has a number of limitations. Firstly, it depends on external APIs (i.e., Google and Scopus) having quotas for the number of requests. Google Custom Search API11 allows 10,000 requests per day while Scopus Search APIs12 have a weekly quota between 5,000 and 20,000 requests, depending on the used API service. This makes it quite difficult to deploy aedFaCT for a wider community. Another limitation of the tool is its relatively low speed during keyword extraction. Since AKE methods already suffer from poor accuracy [1], we preferred accuracy over speed when selecting the AKE method and used the one (i.e., SIFRank+) providing the best accuracy although there exist various lightweight AKE algorithms. Besides, the information retrieval process was based on the selected keywords combined simply with the _AND_ operator. This results in fewer records being returned by the searches, especially when too many keywords are chosen by the user. Therefore, a smarter approach utilising a combination of logical operators (e.g., using the _OR_ operator for similar keywords) is needed for obtaining better search results. Lastly, the evaluation of aedFaCT has been conducted as a pilot study with a small number of participants having similar backgrounds. Hence, more extensive experiments with a more diverse and representative participant population, covering both professionals (e.g., fact-checkers and journalists) and common readers, are required.
Footnote 11: [https://developers.google.com/custom-search/v1/overview#](https://developers.google.com/custom-search/v1/overview#) pricing
Footnote 12: [https://dev.elsevier.com/api_key_settings.html](https://dev.elsevier.com/api_key_settings.html)
Apart from resolving the limitations, for future work, we aim to improve the capabilities of aedFaCT. The existing version does not specifically consider retrieving results from official websites, e.g., governmental organisations, NGOs, and academic institutions. These can be retrieved by checking the URL extensions of the general Web results. Moreover, we plan to incorporate research on claim detection into aedFaCT so that extracted keywords can be more focused on specific claims in the input article.
### Broader Impact
This work has some potential outcomes from a broader perspective. Since it accelerates the fact-checking process for users, it might encourage them to make fact-checking a daily activity and increase awareness in society for tackling false information online. However, users should be conscious when assessing the veracity of news items with the expert opinions shown by aedFaCT. Although aedFaCT retrieves evidence only from trustworthy sources, the displayed expert opinions might contradict each other due to disagreements between different experts. Therefore, aedFaCT does not eliminate the need for critical thinking on the part of its users.
## 7 Research Ethics Considerations
The work reported in this paper involved a focus group discussion and a validation experiment in which only some co-authors of the paper participated. According to the research ethics guidelines of the University of Kent's Central Research Ethics Advisory Group and general advice given by the School of Computing's Research Ethics Officer, such user studies, involving only researchers who are part of the research, were exempted from going through a research ethics review process. Neither user study involved any explicit collection of personal data or other sensitive data, and all participants explicitly consented to participate. Participating in the studies did not cause any noticeable harm to participants, but brought some benefits to them - they could all achieve a better understanding of how to conduct fact-checking as a common reader and researcher.
## 8 Conclusion
Fake news is a challenging problem in society and causes serious harm. Its speed of propagation with the help of digital technologies suggests the need for automated solutions that can help people combat fake news at scale. Although there has been much effort to detect fake news, existing tools and services have overlooked engaging with experts, who are commonly consulted during standard fact-checking processes. Therefore, this paper proposed _aedFaCT_, a Web browser extension that retrieves expert opinions related to a news article to help fact-checkers and the general public perform fact-checking. Our initial evaluation suggested that it can accelerate the fact-checking process without negatively affecting the search quality.
**CRediT authorship contribution statement Enes Altuncu**: Conceptualization, Methodology, Software, Writing - original draft, Writing - review & editing. **Jason Nurse**: Methodology, Writing - review & editing, Supervision. **Meryem Bagriacik**: Investigation, Validation, Writing - review & editing. **Sophie Kaleba**: Investigation, Validation, Writing - review & editing. **Haiyue Yuan**: Validation, Writing - review & editing. **Lisa Bonheme**: Investigation, Writing - review & editing. **Shujun Li**: Conceptualization, Methodology, Supervision, Writing - review & editing.
**Acknowledgements** We would like to thank all the reviewers for their valuable feedback. The first and third co-authors, E. Altuncu and M. Bagriacik, were supported by funding from the Ministry of National Education, Republic of Turkey, through the MoNE-YLSY scholarship program.
|
2310.12893 | Blind quantum machine learning with quantum bipartite correlator | Distributed quantum computing is a promising computational paradigm for
performing computations that are beyond the reach of individual quantum
devices. Privacy in distributed quantum computing is critical for maintaining
confidentiality and protecting the data in the presence of untrusted computing
nodes. In this work, we introduce novel blind quantum machine learning
protocols based on the quantum bipartite correlator algorithm. Our protocols
have reduced communication overhead while preserving the privacy of data from
untrusted parties. We introduce robust algorithm-specific privacy-preserving
mechanisms with low computational overhead that do not require complex
cryptographic techniques. We then validate the effectiveness of the proposed
protocols through complexity and privacy analysis. Our findings pave the way
for advancements in distributed quantum computing, opening up new possibilities
for privacy-aware machine learning applications in the era of quantum
technologies. | Changhao Li, Boning Li, Omar Amer, Ruslan Shaydulin, Shouvanik Chakrabarti, Guoqing Wang, Haowei Xu, Hao Tang, Isidor Schoch, Niraj Kumar, Charles Lim, Ju Li, Paola Cappellaro, Marco Pistoia | 2023-10-19T16:42:32Z | http://arxiv.org/abs/2310.12893v1 | # Blind Quantum Machine Learning with Quantum Bipartite Correlator
###### Abstract
Distributed quantum computing is a promising computational paradigm for performing computations that are beyond the reach of individual quantum devices. Privacy in distributed quantum computing is critical for maintaining confidentiality and protecting the data in the presence of untrusted computing nodes. In this work, we introduce novel blind quantum machine learning protocols based on the quantum bipartite correlator algorithm. Our protocols have reduced communication overhead while preserving the privacy of data from untrusted parties. We introduce robust algorithm-specific privacy-preserving mechanisms with low computational overhead that do not require complex cryptographic techniques. We then validate the effectiveness of the proposed protocols through complexity and privacy analysis. Our findings pave the way for advancements in distributed quantum computing, opening up new possibilities for privacy-aware machine learning applications in the era of quantum technologies.
## I Introduction
Quantum computation that leverages the principles of quantum mechanics has the potential to tackle problems that are beyond the reach of classical computers, revolutionizing fields ranging from cryptography [1] to finance [2] and drug discovery [3]. Distributed quantum computing has attracted a lot of attention in recent years [4; 5; 6; 7; 8; 9; 10] due to the rapid progress in quantum communication technologies. In distributed quantum computing, multiple quantum processors are connected over a network, enabling collaborative computation and resource sharing. This approach is crucial for scaling up quantum computing power and overcoming the limitations of individual quantum systems. Exploiting distributed quantum resources enables tackling larger and more computationally complex problems in domains such as optimization, simulation and quantum machine learning (QML). QML is especially suitable for distributed computation due to the need to process large datasets.
Privacy in distributed computing plays a vital role in ensuring the confidentiality and security of sensitive information processed by multiple parties. Distributed quantum computation involves sharing and transmitting of quantum states across multiple nodes, making it paramount to protect the privacy of data and prevent unauthorized access. Furthermore, in practice, addressing privacy concerns in distributed quantum computing is essential for facilitating applications in fields such as finance and healthcare, where preserving the privacy of sensitive data is of utmost importance.
A number of protocols have been proposed in recent years that aim to implement private distributed quantum computing. For example, blind quantum computing [11; 12; 13] enables the client to execute a quantum computation using one or more remote quantum servers while keeping the structure of the computation hidden. Meanwhile, reducing the overhead in communication over blind quantum computation protocols has been an active research area since the first proposal of universal blind quantum computation (UBQC) [11]. However, for distributed quantum computing problems such as QML, ensuring the privacy of data from a certain party while reducing the overhead in both quantum communication and computation remains a challenge.
In this work, we introduce novel protocols for blind distributed quantum machine learning based on quantum bipartite correlator algorithm that can perform inner product estimation tasks. Our protocols are communication-efficient compared with state-of-the-art classical and quantum blind distributed machine learning algorithms. Particularly, for the task of distributed inner product estimation, a core subroutine in machine learning applications, the protocols involve a communication complexity \(O(\log N/\epsilon)\) with \(N\) and \(\epsilon\) being the size of the vectors and standard estimation error, respectively. We demonstrate how our protocols allow the client to conceal its data from the server, and vice versa. We provide a detailed resource analysis for both communication and computation costs of our methods. Our work paves the way for performing quantum machine learning with an untrusted device, while maintaining the privacy and
keeping the resource overhead low.
## II Formalism
We start by presenting the problem statement in distributed quantum computation. The basic setting includes two parties, Alice and Bob. We assume that Alice has more quantum computational resources than Bob, such as a larger number of qubits. In many distributed quantum computation applications such as a delegated computation setting, Alice can be considered as a quantum server with Bob being a client. Furthermore, there is a quantum channel where qubits can be transmitted between the two parties. For the distributed QML tasks studied in this work, we assume that Alice holds the data \(\mathbf{X}\) and Bob holds \(\mathbf{y}\). For example, in supervised learning, \(\mathbf{X}\) and \(\mathbf{y}\) could be feature data and labels, respectively [14], while in unsupervised learning, both \(\mathbf{X}\) and \(\mathbf{y}\) can be feature data with the objective to cluster them based on distance estimation [15].
We consider the task of blind quantum machine learning, such as linear regression or classification [16; 17; 18; 19]. In machine learning, evaluating the inner product between two vectors is an important algorithmic building block. The server holds the data vector \(\mathbf{X}\) of size \(N\) and the number of features for each data point is \(M\), and the client holds a one-dimensional bitstring \(\mathbf{y}\) with the same size \(N\). Note that transmitting the data classically to the server would introduce \(O(N)\) complexity in communication. Meanwhile, as we consider distributed quantum computation, the data \(\mathbf{X}\) and \(\mathbf{y}\) are only held locally by the server and client, respectively.
In classical settings, the goal of achieving distributed machine learning with privacy can be approached using various techniques, such as homomorphic encryption [20; 21], which allows computation over encrypted data. Specifically, for distributed bipartite correlation estimation, many methods could be employed, including linearly homomorphic encryption [22; 23], non-interactive inner product protocols [24] and oblivious-transfer-based secure computation [25]. However, it is important to note that these classical methods often introduce considerable overhead in terms of computation and communication complexity. Particularly, a communication cost of \(\tilde{O}(N)\) would be a minimum requisite [24]. As a result, their practical applications become limited, especially when dealing with large data sizes.
## III Quantum bipartite correlator algorithm and its privacy
In this section, we briefly introduce the quantum bipartite correlator (QBC) algorithm that can estimate the correlation between two bitstrings held by remote parties [8]. The algorithm can be easily generalized to perform other computation tasks, such as the Hamming distance estimation. We remark that estimating bipartite correlation or Hamming distance serves as the building block of a general class of machine learning problems, including least-square fitting and classification of discrete labels [26; 27].
Without loss of generality, we consider binary floating point numbers. We take the feature dimension \(M\) to be one for simplicity hereafter unless specified. For two vectors \(\mathbf{X},\mathbf{y}\equiv[x_{1},\cdots x_{N}]^{T},[y_{1},\cdots y_{N}]^{T}\in\{0,1 \}^{N}\), we are interested in evaluating \(\overline{xy}=\frac{1}{N}\sum_{i=1}^{N}x_{i}y_{i}\) within a standard deviation error \(\epsilon\). To begin with, we assume that the two parties Alice and Bob hold a local oracle that can encode their own data using a unitary transformation. That is, for Alice, one has \(\hat{U}_{\vec{x}}\colon|i\rangle_{n}|0\rangle\mapsto|i\rangle_{n}|x_{i}\rangle\) that encodes the data \(x_{i}\), where \(|i\rangle_{n}\) is an \(n\equiv\lceil\log_{2}(N)\rceil\)-qubit (called index qubit hereafter) state \(|i_{1}i_{2}\cdots i_{n}\rangle\), representing the index of the queried component with \(i_{k}\in\{0,1\}\), \(k\in[N]\), and \(|x_{i}\rangle\) is a single-qubit state. Similarly, Bob has an oracle \(\hat{U}_{\vec{y}}\) of the same type that encodes his local data \(y_{i}\). These oracle operators, as well as the ones introduced later, could be implemented with various techniques such as quantum random access memory [28].
QBC is based on the quantum counting algorithm, where Alice and Bob send qubits via quantum channels and communicate with each other to realize the phase oracle [8; 29], as shown in the top of Fig. 1. The quantum counting algorithm consists of a Grover operator \(\hat{G}_{\vec{x},\vec{y}}\equiv\hat{H}^{\otimes n}(2|0\rangle_{n}\langle 0|_{n}-\hat{I})\hat{H}^{\otimes n}\hat{U}_{xy}\), where \(\hat{U}_{xy}\) is a unitary operator that encodes information of both parties as we will introduce below, and an inverse Quantum Fourier transform (QFT\({}^{\dagger}\)) on register qubits \(\left|\cdot\right\rangle_{t}\). When measuring the \(t\)-register, one can project it into a state \(|j\rangle_{t}\) with phase \(2\pi j\cdot 2^{-t}\) which encodes either \(\hat{\theta}\) or \(2\pi-\hat{\theta}\), where \(\theta=2\arcsin\sqrt{\overline{xy}}\), with equivalent standard deviation: \(\Delta\hat{\theta}=2^{-t+1}\)[8].
During the phase oracle \(\hat{G}_{\vec{x},\vec{y}}\), the following unitary circuit is applied to achieve encoding of \(x_{i}\) and \(y_{i}\)
\[\hat{U}_{xy}\left|i\right\rangle_{n}\left|00\right\rangle_{o_{1}o_{2}}=(-1)^{x _{i}y_{i}}\left|i\right\rangle_{n}\left|00\right\rangle_{o_{1}o_{2}}, \tag{1}\]
where \(o_{1}\), \(o_{2}\) are two qubits locally held by Alice and Bob, respectively. The above unitary operator can be implemented with the local oracles that Alice and Bob hold, i.e., \(\hat{U}_{\vec{x}}\) and \(\hat{U}_{\vec{y}}\).
Specifically, Alice encodes her local information \(\mathbf{X}\) into qubit \(o_{1}\) via \(\hat{U}_{\vec{x}}\) operator and sends the \((n+1)\)-qubit state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\left|i\right\rangle_{n}\left|x_{i}\right\rangle _{o_{1}}\) to Bob via a quantum channel. After Bob applies his oracle and generates the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\left|i\right\rangle_{n}\left|x_{i}\right\rangle _{o_{1}}\left|y_{i}\right\rangle_{o_{2}}\), a controlled-Z (CZ) gate between qubit \(o_{1}\) and \(o_{2}\) is applied to encode the correlation information into the phase of the quantum state. That is, the bipartite quantum state is described by \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\left|i\right\rangle_{n}\left| x_{i}\right\rangle_{o_{1}}\left|y_{i}\right\rangle_{o_{2}}\). The following local oracles would then yield the desired state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\left|i\right\rangle_{n}\) on which Alice will apply the quantum counting algorithm to estimate \(\overline{xy}=\frac{1}{N}\sum_{i=1}^{N}x_{i}y_{i}\) with bounded error \(\epsilon\). We note that the CZ gate might
be replaced with a different set of gates to estimate other types of correlations between \(\mathbf{X}\) and \(\mathbf{y}\). For example, to calculate their Hamming distance, one can implement the XOR gate \(x_{i}\oplus y_{i}\) by replacing the CZ gate with a Z gate on \(o_{2}\) sandwiched by two CNOT gates between \(o_{1}\) and \(o_{2}\)[8].
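As an illustrative, purely classical check of this construction (not part of the protocol itself), the following sketch builds the Grover operator as a matrix on the index space and verifies that a pair of its eigenphases equals \(\pm\theta\) with \(\theta=2\arcsin\sqrt{\overline{xy}}\), which is the quantity estimated by the quantum counting step.

```python
# Numerical check: eigenphases of G = (2|s><s| - I) U_xy encode theta = 2 arcsin(sqrt(mean(x*y))).
import numpy as np

x = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # example bitstrings (N = 8, n = 3)
y = np.array([1, 1, 0, 1, 0, 0, 1, 0])
N = len(x)

U_xy = np.diag((-1.0) ** (x * y))         # phase oracle encoding (-1)^{x_i y_i}
s = np.ones(N) / np.sqrt(N)               # uniform superposition of the index register
D = 2 * np.outer(s, s) - np.eye(N)        # diffusion operator H^{(x)n}(2|0><0| - I)H^{(x)n}
G = D @ U_xy

phases = np.abs(np.angle(np.linalg.eigvals(G)))
theta = 2 * np.arcsin(np.sqrt(np.mean(x * y)))
print(theta)                              # ~1.0472 (= pi/3 for mean(x*y) = 0.25)
print(sorted(np.round(phases, 6)))        # contains a pair equal to theta; the rest are 0 or pi
```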
In the QBC algorithm, the communication complexity, i.e., the qubits transmitted during the overall process, is given by the Grover operation's \(2(n+1)\) qubits communication repeated for \(2^{t}-1\) iterations:
\[\mathcal{C}_{\mathrm{comm}}=2(n+1)(2^{t}-1)=O\left(\frac{\log_{2}(N)}{\epsilon }\right), \tag{2}\]
where the number of register qubits \(t\) is chosen to satisfy the desired error bound. We remark that the above communication complexity is advantageous compared with the SWAP-test-based algorithm that has a scaling of \(O\left(\log_{2}(N)/\epsilon^{2}\right)\)[30] or LOCC-based algorithms with a scaling of \(O\left(\log_{2}(N)\max\{1/\epsilon^{2},\sqrt{N}/\epsilon\}\right)\)[31]. This advantage is achieved by utilizing the distributed Grover operations.
The computational complexity, on the other hand, is the total number of oracle calls by Alice and Bob:
\[\mathcal{C}_{\mathrm{comp}}=4(2^{t}-1)=O\left(\frac{1}{\epsilon}\right). \tag{3}\]
We next consider the privacy of data in the QBC algorithm discussed above. From now on, we consider Alice as a server and Bob as a client. We first focus on the privacy of the client's information \(\mathbf{y}\) to a semi-honest adversary. In this type of adversary, the honest-but-curious server follows the protocol and does not do any malicious behavior, but it tries to violate the privacy of the client's input by scrutinizing the messages transmitted in the protocol. That is, the server tries to infer \(\mathbf{y}\) from the estimated \(\frac{1}{N}\sum_{i}^{N}x_{i}y_{i}\).
In the trivial case when \(x_{i}=0,\forall i\leq N\), we have \(\overline{xy}=0\) no matter what \(\mathbf{y}\) is and the protocol has the best privacy. While in the worst case where the \(x_{i}=1,\forall i\leq N\) and \(\overline{xy}=1\), the server could infer that \(y_{i}=1,\forall i\leq N\). In general, for \(\mathbf{X}\) with Hamming weight \(d_{x}\), the probability that the server gets the exact \(\mathbf{y}\) (that is, the Hamming distance between extracted and exact bitstring is \(d_{0}=0\)) is given by
\[\Pr(d_{x})=\frac{1}{2^{N-d_{x}}}\frac{\prod_{i=1}^{d_{x}}i}{\prod_{i=1}^{N \overline{xy}}i\prod_{i=1}^{d_{x}-N\overline{xy}}i}, \tag{4}\]
where the factor \(\frac{1}{2^{N-d_{x}}}\) comes from the server making a random guess on the indices \(j\) that satisfy \(x_{j}=0\). For an honest server in the original QBC protocol, however, the \(\mathbf{y}\) information is always hidden from the server and is private.
In addition to the semi-honest adversary scenario discussed above, we note that in the original QBC algorithm, the preservation of privacy is not assured when we consider a malicious server Alice. The server has the capability to acquire, to a certain extent, Bob's strings \(\mathbf{y}\) by deviating from the expected quantum operations. We next discuss the designed blind QBC protocol with such an untrusted server.
## IV Blind QBC with untrusted server
A malicious server can get the client's information by deviating from the established QBC protocol. One example is that the server could perform quantum gate operations and measurements to extract the phase information instead of following the expected Grover steps after receiving \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}|i\rangle_{n}|x_{i}\rangle_{o_{1}}\) from the client Bob. Alternatively, a malicious server could potentially manipulate the state of qubit \(o_{1}\) sent to the client, rather than genuinely encoding the information of \(\mathbf{X}\). In principle, for each communication round, the server can acquire one bit of information of client's data \(\mathbf{y}\). Then with the \(2^{t}-1=O(\frac{1}{\epsilon})\) Grover iterations, the server could get \(O(\frac{1}{\epsilon})\) bits of information in \(\mathbf{y}\). Such an attack strategy might be implemented by preparing the \(o_{1}\) qubit in \(|+\rangle\) state and sending \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\left|i\right\rangle_{n}|+\rangle_{o_{1}}\) to the client (Appendix A). Subsequent to the reception of the quantum
Figure 1: Diagram for blind QBC with untrusted server. The upper diagram shows the quantum counting algorithm consisting Grover phase oracles \(\hat{G}_{x,\vec{y}}\) and inverse QFT, while the lower box panel shows the realization details of each phase oracle. Compared to the original QBC algorithm, we introduce an ancillary qubit \(o_{3}\) on client’s side to add a phase \(g_{i}\) during the computation process. The phase can be introduced via applying a phase gate on qubit \(o_{3}\), which encodes a bitstring that is random and unknown to the server. The detailed phase encoding rule is explained in the text. The quantum state at the star point is shown in the inset of the figure. After the server finishes the quantum circuit, it sends the extracted modified bipartite correlation \(\frac{1}{N}\sum_{i}^{N}(x_{i}y_{i}+g_{i})\) to the client via a classical communication channel. We omit the \(1/\sqrt{N}\) normalization factor for index qubit states \(\sum_{i}^{N}\left|i\right\rangle\) in the figures hereafter for simplicity.
state from the client, the server undertakes an \(X\) basis measurement on qubit \(o_{1}\). The server could perform the sampling procedure encompassing the bitstrings of the index qubits during the \(O(\frac{1}{\epsilon})\) communication rounds.
We note that the server could not manipulate the index qubit states \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}\) to amplify the amplitude of a specific bitstring of interest, as the client is capable of verifying the received quantum state of index qubits by performing X basis measurements to check whether they have the same amplitude. On the other hand, it is possible to employ a redundant encoding strategy to further decrease the probability that the server attains a specific \(y_{i}\) corresponding to an intended index. However, this comes at the expense of increased communication complexity, as detailed in Appendix B.
To counteract the aforementioned attack strategy, we need to devise a protocol enabling the server to execute machine learning tasks while remaining unaware of the exact label information \(\mathbf{y}\), even when the malicious server does not follow the designed protocol. In this case, we consider an honest client, who is not interested in learning \(\mathbf{X}\). This assumption might be removed if we consider further encoding privacy in \(\mathbf{X}\) when sending information to the client. To implement remote blind bipartite correlation estimation, a desired protocol should have 1) less overhead in quantum communication, 2) fewer requirements on the computational power of the client, 3) a certified estimation result with error \(\epsilon\).
We thus consider the revised QBC algorithm below (Fig. 1). Inspired by quantum one-time pad [11], the protocol utilizes phase padding to preserve privacy. The client Bob now has one or more qubits at hand, where he can encode a bit string \(\ket{g_{i}}\) that is blind to the server. That is, the client has an oracle \(\hat{U}_{\vec{g}}\) for the extra qubit (denoted as \(o_{3}\) hereafter), and the modified phase oracle of Eq. 1 reads as
\[\hat{U}_{xyg}\ket{i}_{n}\ket{000}_{o_{1}o_{2}o_{3}}=(-1)^{x_{i}y_{i}+g_{i}} \ket{i}_{n}\ket{000}_{o_{1}o_{2}o_{3}}. \tag{5}\]
To implement the above unitary \(\hat{U}_{xyg}\), similar to \(\hat{U}_{xy}\), the client performs the \(\hat{U}_{\vec{y}}\) and \(\hat{U}_{\vec{g}}\) oracles after receiving the state from the server to create the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{x_{i}}_{o_{1}}\ket{y_{i}}_{o_{2}}\ket{g_{i}}_{o_{3}}\), followed by a controlled-Z gate between \(o_{1}\) and \(o_{2}\). Then a local Z gate can be applied on qubit \(o_{3}\) to add the phase \((-1)^{g_{i}}\) that is random to the server.
Since the phase term \((-1)^{x_{i}y_{i}+g_{i}}\) is binary here with modular addition between \(x_{i}y_{i}\) and \(g_{i}\), we design the following rule for the application of random phase \(g_{i}\). For a given index \(i\), when \(y_{i}=0\), the client chooses a random number from \(\{0,1\}\); while when \(y_{i}=1\), the client sets \(g_{i}=0\). Under this setting, the server cannot get \(y_{i}\) in general from direct measurement of the parity at each Grover step, even if the server knows exactly the circuit that the client performs.
The above phase encoding rule on \(g_{i}\) guarantees that \(x_{i}y_{i}+g_{i}\in\{0,1\}\). The quantum counting algorithm can then estimate \(\frac{1}{N}\sum_{i}^{N}(x_{i}y_{i}+g_{i})=\frac{1}{N}\sum_{i}^{N}(x_{i}y_{i}+g _{i}\mod 2)\) with error bound \(\epsilon\). Finally, after the measurement, the server sends the estimated result back to the client via a classical channel, from which the client can extract \(\frac{1}{N}\sum_{i}^{N}x_{i}y_{i}\) using his local information of \(\frac{1}{N}\sum_{i}^{N}g_{i}\). Alternatively, depending on the specific use cases, the client could directly share \(\frac{1}{N}\sum_{i}^{N}g_{i}\) with the server and let it extract the bipartite correlation between \(\mathbf{X}\) and \(\mathbf{y}\).
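The correction on the client side can be checked with the following purely classical, illustrative sketch, which applies the padding rule above and recovers \(\overline{xy}\) from the padded quantity.

```python
# Classical sanity check of the phase-padding rule: the server-side estimate of
# mean(x*y + g) minus the client's local mean(g) returns mean(x*y).
import numpy as np

rng = np.random.default_rng(1)
N = 1000
x = rng.integers(0, 2, N)
y = rng.integers(0, 2, N)

# Padding rule: random g_i when y_i = 0, g_i = 0 when y_i = 1,
# so that x_i*y_i + g_i always stays in {0, 1}.
g = np.where(y == 0, rng.integers(0, 2, N), 0)
assert np.all(x * y + g <= 1)

padded = np.mean((x * y + g) % 2)          # what the counting circuit would estimate
recovered = padded - np.mean(g)            # client-side correction
print(np.isclose(recovered, np.mean(x * y)))   # True
```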
We emphasize that in principle, the aforementioned protocol could still inadvertently leak a portion of the information in \(\mathbf{y}\) to the server. As can be seen from the scheme, in the case where \(x_{j}=1\) and the final phase term is \(x_{j}y_{j}+g_{j}=0\), if the server knows the above application rule of \(g_{i}\) and extracts the phase corresponding to the index qubit \(\ket{i}_{i=j}\), it could infer that \(y_{j}=0\). We consider the worst scenario where the malicious server picks \(x_{i}=1,\forall i\leq N\) and has client's local phase encoding rule. The server's attack strategy is to measure the phase of a randomly picked index \(\ket{i}\) to extract \(x_{i}y_{i}+g_{i}\) at each Grover iteration. Then, for \(\mathbf{y}\) with Hamming weight \(d_{y}\), the probability that the server extracts a bitstring \(\mathbf{y^{\prime}}\) that is \(d_{0}\)-close (\(d_{0}\leq d_{y}\)) to \(\mathbf{y}\) using the information of the measured phases and without doing random guess is simply given by
\[\begin{split}&\Pr(d(\mathbf{y},\mathbf{y^{\prime}})=d_{0})=\\ &\frac{C(d_{y},d_{0})C(N-d_{y},\min(2^{t}-1,d_{y})-d_{0})}{C(N, \min(2^{t}-1,d_{y}))}\end{split} \tag{6}\]
where \(C(\cdot,\cdot)\) denotes the binomial coefficient. As can be seen from the analysis above, even in the worst case, the probability that the server can successfully extract part of \(\mathbf{y}\) information becomes considerably low when the data size becomes large, particularly when \(N\geq 2^{t}-1\), while in the original QBC a malicious server could get \(2^{t}-1\) bits of information from the client during the communication round. Note that the iteration number \(2^{t}-1\) yields the standard deviation of the estimated correlation, that is, \(2^{t}-1=O(\frac{1}{\epsilon})\). A less tight error bound \(\epsilon\) will reduce the number of communication rounds between server and client thus increasing the privacy of client's data.
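The worst-case leakage bound of Eq. 6 is a hypergeometric probability and can be evaluated directly. A short Python sketch (the parameter choices below are illustrative) shows, for example, how quickly the chance of recovering all nonzero positions of \(\mathbf{y}\) decays with the data size \(N\):

```python
from math import comb

def leak_probability(N, d_y, t, d_0):
    """Eq. 6: probability that the server's measured phases match y on exactly
    d_0 of its d_y nonzero positions, given 2^t - 1 Grover iterations."""
    k = min(2**t - 1, d_y)              # number of positions the server can probe
    if d_0 > k or k - d_0 > N - d_y:
        return 0.0
    return comb(d_y, d_0) * comb(N - d_y, k - d_0) / comb(N, k)

# Chance of recovering *all* d_y = 5 nonzero positions drops quickly with N.
for N in (2**8, 2**12, 2**16):
    print(N, leak_probability(N, d_y=5, t=6, d_0=5))
```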
We remark that the quantum communication complexity of the aforementioned algorithm for blind server is \(\mathcal{C}^{b_{s}}_{\mathrm{comm}}=O(\frac{\log_{2}(N)}{\epsilon})\), which is the same as the original QBC as depicted in Eq. 2. Moreover, akin to the QBC algorithm, a classical communication channel is needed at the end of QBC to deliver estimation results to the client. In terms of computational overhead experienced by the client, introducing the ancilla qubit \(o_{3}\) only adds \(O(\frac{1}{\epsilon})\) number of two-qubit phase gates and as a result, does not alter the inherent computational complexity. To this end, the blind QBC protocol proposed here could enable communication-efficient blind distributed machine learning tasks between a server and a client without pre-supposing substantial quantum resources on the client.
## V Blind QBC with Untrusted Client
We now discuss the scenario where the server would like to estimate \(\frac{1}{N}\sum_{i}^{N}x_{i}y_{i}\) while keeping \(\mathbf{X}\) hidden from the client at all times during the process. In practical applications such as model-as-a-service platforms [32, 33], the server's information, including the model's parameters or training data, should remain hidden from the clients. By hiding the server-side information, such platforms can prevent the client from reverse-engineering or extracting valuable information about the underlying model architecture or training data. Under this setting, the protocol should be secure not only against an honest-but-curious client, but also against a malicious client who tries to get \(\mathbf{X}\) by deviating from the original quantum algorithm.
Here we assume an honest server that follows the protocol exactly without trying to get the label information \(\mathbf{y}\). The goal is then to encode \(\mathbf{X}\) when the server sends qubits to the client while running the QBC algorithm. That is, we are interested in designing a privacy-preserving operator \(\hat{O}_{f}\) such that
\[\hat{O}_{f}\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{00}_{o_{1}o_{2}}= \frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\ket{00}_{o_{1}o_{2}}. \tag{7}\]
Inspired by quantum key distribution protocols [34] such as BB84 [35], we consider a modified local oracle operator \(\hat{U}_{X_{1}}\) held by the server, where the data information \(\mathbf{X}\) is encoded in different bases (Fig. 2). Specifically, at each iteration of the quantum counting algorithm, for a given index \(i\), the server chooses a random number \(R_{i}\) from \(\{0,1\}\). When \(R_{i}=0\), the server encodes \(x_{i}\) using the Z basis, i.e., \(\ket{i}_{n}\ket{0}_{o_{1}}\) or \(\ket{i}_{n}\ket{1}_{o_{1}}\), depending on whether \(x_{i}\) is \(0\) or \(1\); if \(R_{i}=1\), \(x_{i}\) is encoded in the X basis and the state reads \(\ket{i}_{n}\ket{+}_{o_{1}}\) or \(\ket{i}_{n}\ket{-}_{o_{1}}\). Here \(\ket{+(-)}=\frac{1}{\sqrt{2}}(\ket{0}\pm\ket{1})\) are the eigenstates of the Pauli X operator. This oracle \(\hat{U}_{X_{1}}\) can be implemented from the original oracle \(\hat{U}_{\mathbf{x}}\) with Hadamard gates on \(o_{1}\) conditioned on the index \(\ket{i}_{n}\).
Then, the state received by the client at each round reads \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{X_{i}}_{o_{1}}\), with \(\ket{X_{i}}\) being \(\ket{0}\), \(\ket{1}\), \(\ket{+}\), or \(\ket{-}\). As the client does not know which basis the server chooses for a given \(i\), at each Grover iteration a measurement of qubit \(o_{1}\) on index \(\ket{i}\) produces an outcome that is compatible with both \(x_{i}=0\) and \(x_{i}=1\); hence the client cannot infer the \(x_{i}\) information from a single copy of the received \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{X_{i}}_{o_{1}}\) state. Note that the server can pick different random numbers \(R_{i}\) at different communication rounds when executing the QBC algorithm.
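The single-copy indistinguishability argument can be illustrated with elementary single-qubit amplitudes. The sketch below (plain NumPy, with illustrative names) lists the Z-measurement statistics of the four possible encodings and shows why one outcome alone does not pin down \(x_{i}\) when the basis choice \(R_{i}\) is hidden:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

def encode(x, R):
    """Server-side encoding of bit x on qubit o_1: Z basis if R=0, X basis if R=1."""
    return (ket0, ket1)[x] if R == 0 else (ketp, ketm)[x]

def prob_outcome_one(state):
    """Probability that a Z-basis measurement of a single copy returns outcome 1."""
    return abs(state[1]) ** 2

for x in (0, 1):
    for R, basis in ((0, "Z"), (1, "X")):
        print(f"x={x}, basis={basis}: P(outcome 1) = {prob_outcome_one(encode(x, R)):.2f}")

# With the basis choice R hidden and refreshed every round, an outcome of 0 (or 1)
# is compatible with both x=0 and x=1, so a single copy cannot reveal x with certainty.
```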
As in the original QBC algorithm, the client performs a CZ gate between the received qubit \(o_{1}\) and its local qubit \(o_{2}\), sandwiched by \(\hat{U}_{\mathbf{y}}\) operators. Then, the state received by the server from the quantum channel is \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1})_{o_{1}}\), where \(a_{i}\) (\(b_{i}\)) is determined by \(x_{i}\) and the encoding basis \(R_{i}\) and is thus known to the server. We next discuss how the server can perform operations to reach the target state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}\) needed for running the follow-up QBC algorithm. We consider a second oracle operator \(\hat{U}_{X_{2}}\) held by the server:
\[\begin{split}&\hat{U}_{X_{2}}\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{ n}(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1})_{o_{1}}=\\ &\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}(a_{i} \ket{0}+b_{i}(-1)^{y_{i}}\ket{1})_{o_{1}}.\end{split} \tag{8}\]
This can be achieved with the help of an additional qubit \(o_{a}\) held by the server that encodes the \(\mathbf{X}\) information in the usual Z basis (see Appendix C for details of the circuit implementation).
Note that the server cannot decouple the \(o_{1}\) qubit while it is in an unknown state, as the honest server only knows \(a_{i}\) and \(b_{i}\) but does not know \(\mathbf{y}\). In order to reset the state of qubit \(o_{1}\), the server can return the state to the client and have the client remove the phase \((-1)^{y_{i}}\). Before doing so, the server first hides its information by adding a random phase pad through the operator \(\hat{U}_{X_{3}}\), which is defined as
\[\begin{split}&\hat{U}_{X_{3}}\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y _{i}}\ket{i}_{n}(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1})_{o_{1}}=\\ &\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}+h_{i}}\ket{i}_{n}( a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1})_{o_{1}}.\end{split} \tag{9}\]
Here \(h_{i}\in\{0,1\}\) is blind to the client and could change in different communication rounds, therefore the client would not be able to extract \(x_{i}\) information. The client performs a controlled-Z gate again between its local qubit \(o_{2}\) and the received qubit \(o_{1}\), after which the phase term \((-1)^{y_{i}}\) becomes \((-1)^{y_{i}+y_{i}}=1\). Then, the server receives the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}+h_{i}}\ket{i}_{n}(a_{i}\ket{0}+ b_{i}\ket{1})_{o_{1}}\) from client and performs oracle \(\hat{U}_{X_{4}}\):
\[\begin{split}&\hat{U}_{X_{4}}\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y _{i}+h_{i}}\ket{i}_{n}(a_{i}\ket{0}+b_{i}\ket{1})_{o_{1}}=\\ &\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\ket{0 }_{o_{1}}.\end{split} \tag{10}\]
It can be easily seen that to implement \(\hat{U}_{X_{4}}\), the server could simply perform \(\hat{U}_{X_{3}}\) again to remove the added random phase term \((-1)^{h_{i}}\) and then reset the qubit \(o_{1}\) to \(\ket{0}_{o_{1}}\), as the server knows all the coefficients \(a_{i}\) and \(b_{i}\).
We remark that the random numbers \(R_{i}\) and \(h_{i}\) can change in different Grover iterations. That is, the client will not get useful information by performing measurements on each iteration and using the joint results from a sequence of measurements to infer \(\mathbf{X}\). The privacy of \(\mathbf{X}\) is guaranteed by the fact that measuring a single copy in a given basis cannot reveal both the basis information \(R_{i}\) and the data information \(x_{i}\). The probability that
the client gets \(\mathbf{X^{\prime}}\) that is \(d_{0}\)-close to the true \(\mathbf{X}\) would simply be the same as a random guess.
To this end, we have described a phase encoding oracle \(\hat{O}_{f}\) that lets the server acquire the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\) for subsequent operations without leaking the information of data \(\mathbf{X}\) to an untrusted client. The scheme is based on a random encoding of \(\mathbf{X}\) and is information-theoretic secure against an untrusted client, with the proof of security following directly from the corresponding proof for the BB84 protocol [35, 36]. The total number of oracle calls by server and client only increases by a constant at each iteration, thus leading to the same computation complexity \(O(\frac{1}{\epsilon})\) as Eq. 3. The total communication cost of this blind client scheme is given by
\[\mathcal{C}_{\mathrm{comm}}^{b_{c}}=4(n+1)(2^{t}-1)=O\left(\frac{\log_{2}(N)}{ \epsilon}\right), \tag{11}\]
which has the same complexity scaling as the original QBC algorithm. We summarize the proposed algorithms here and above in Table 1.
## VI Generalization into multi-party settings
The algorithms discussed above can be generalized into multi-party settings and find applications in secure multi-party computation and machine learning [37, 38], where parties collaboratively perform computations on their combined data sets without revealing the data they possess to untrusted parties. For example, to perform model aggregation, an untrusted central server would like to perform linear regression or classification using its local data as well as labels that are distributed among multiple clients. Then, the protocol in Sec. IV can be applied in which the server can interact with each client to extract model parameters individually.
Here we provide an example of multi-party protocols. We consider a system consisting of a central server and \(m\) clients, where the server is untrusted by the clients. The task is to have the server evaluate \(f_{m}=\frac{1}{N}\sum_{i}^{N}(\sum_{j}^{m}x_{i}y_{i}^{(j)}\mod 2)\) without leaking the individual clients' information. Similar to the phase pad technique introduced in Sec. IV, one can protect each individual client's information by adding additional terms in the phase when running the QBC algorithm. Specifically, we consider a cascaded protocol where each client encodes its information into the phase of the index qubits and passes the state to the next client. In each communication round, the \(k\)-th client would receive the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}^{(1)}+x_{i}y_{i}^{(2)}+\dots+x_{i}y_{i}^{(k-1)}}\ket{i}_{n}\ket{x_{i}}\) from the \((k-1)\)-th client. Then, by applying a CZ gate between \(o_{1}\) and its local qubit, the \(k\)-th client sends the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}^{(1)}+x_{i}y_{i}^{(2)}+\dots+x_{i}y_{i}^{(k-1)}+x_{i}y_{i}^{(k)}}\ket{i}_{n}\ket{x_{i}}\) to the next client. The final, \(m\)-th client passes the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{\sum_{j}^{m}x_{i}y_{i}^{(j)}}\ket{i}_{n}\ket{x_{i}}\) to the server, which can then perform the remaining part of the original QBC algorithm to extract the desired \(f_{m}\).
We note that a malicious server could only get \(\sum_{j}^{m}y_{i}^{(j)}\), and the individual \(y_{i}^{(j)}\) information is not leaked, as the phase added by each client serves as a random pad for the other clients. For the same reason, the \(j\)-th (\(j\geq 3\)) client cannot get the previous clients' information, as it can only extract \(\sum_{k=1}^{j-1}y_{i}^{(k)}\). The first client (\(j=1\)) can further add a random pad \(g_{i}^{(1)}\) to protect its information against the second client (\(j=2\)). The protocol here is similar to incremental learning [39], where the model aggregation is performed while preserving privacy. We remark that the total communication cost scales as \(O\left(\frac{m\log_{2}(N)}{\epsilon}\right)\) and the privacy mechanism does not introduce additional communication cost. To this end, our work paves the way for communication-efficient private machine learning for multi-party systems, such as quantum federated learning [40; 41; 42].
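The phase bookkeeping of this cascaded protocol can be checked classically as well. The following NumPy sketch (sizes and seed are illustrative) accumulates the per-index phase exponent client by client and verifies that the server's ideal counting result equals \(f_{m}\):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 512, 4
x = rng.integers(0, 2, N)              # server's data bits
y = rng.integers(0, 2, (m, N))         # y[j] holds the label bits of client j

# Phase exponent carried by index |i> as the state passes through the cascade.
phase = np.zeros(N, dtype=int)
for j in range(m):
    phase += x * y[j]                  # client j imprints x_i * y_i^{(j)} on the phase
    # the next client can at most read the running parity (phase % 2),
    # not the individual contributions of the previous clients

# Ideal (noiseless) quantum-counting estimate obtained by the server at the end.
f_m = np.mean(phase % 2)
assert np.isclose(f_m, np.mean((x * y.sum(axis=0)) % 2))
print("f_m =", f_m)
```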
## VII Discussion and Conclusion
As mentioned above, the proposed blind distributed inner product estimation protocols can be applied in distributed machine learning, where a central task is to evaluate correlations between remote matrices or vectors. Here we give an example of such applications. In linear regression problems, one is interested in finding the coefficient vector \(\mathbf{\lambda}\) with standard error \(\epsilon\) that satisfies \(\mathbf{X}_{N\times M}\mathbf{\lambda}_{M\times 1}=\mathbf{y}_{N\times 1}\), where the \(N\)-by-\(M\) matrix \(\mathbf{X}\) and \(N\)-by-\(1\) vector \(\mathbf{y}\) are separately held by two remote parties, a server and a client, respectively. We consider the case where the server would like to estimate \(\mathbf{\lambda}\) without letting the client extract its local information \(\mathbf{X}_{N\times M}\). The \(l\)-th component of \(\mathbf{\lambda}\) reads \(\lambda_{l}=\sum_{i=1}^{N}X_{li}^{\dagger}y_{i}\), where \(l\) and \(i\) label the indices of the elements in the matrix and vector. The problem thus reduces to estimating products of the distributed numbers \(a_{li}=X_{li}^{\dagger}\) and \(b_{i}=y_{i}\). They can be expanded as binary floating point numbers using, for example, \(a_{li}=\sum_{k=0}^{\infty}2^{u-k}x_{li}^{(k)}\) and \(b_{i}=\sum_{k=0}^{\infty}2^{v-k}y_{i}^{(k)}\), for which \(u\) and \(v\) denote the highest digits of \(a\) and \(b\), respectively [43; 8]. Then, the target coefficient \(\lambda_{l}\) can be written as \(\lambda_{l}=\sum_{i=1}^{N}a_{li}b_{i}=\sum_{r=0}^{\infty}2^{u+v-r}\sum_{k=0}^{r}\sum_{i=1}^{N}x_{li}^{(k)}y_{i}^{(r-k)}\), where the blind QBC algorithm introduced in Sec. V can be directly applied. In this case, the untrusted client can neither directly extract the information of \(\mathbf{X}_{N\times M}\) during the blind QBC communication, nor indirectly estimate \(\mathbf{X}_{N\times M}\) from the knowledge of the coefficient \(\mathbf{\lambda}_{M\times 1}\). To this end, our proposed algorithms exhibit direct applicability within the domain of distributed blind machine learning tasks, particularly in scenarios involving matrix or vector multiplication operations.
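The digitwise decomposition used here is just the Cauchy product of the two binary expansions, and it can be verified numerically. The sketch below (finite \(K\)-bit non-negative numbers are assumed purely for illustration) rebuilds \(\sum_{i}a_{li}b_{i}\) from the digit correlations \(\sum_{i}x_{li}^{(k)}y_{i}^{(r-k)}\), each of which would be one blind-QBC call:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 64, 8                     # vector length, number of binary digits kept
u = v = K - 1                    # highest digit: the numbers are K-bit integers here

xbits = rng.integers(0, 2, (K, N))          # x^{(k)}_i held by the server
ybits = rng.integers(0, 2, (K, N))          # y^{(k)}_i held by the client
a = sum(2 ** (u - k) * xbits[k] for k in range(K))
b = sum(2 ** (v - k) * ybits[k] for k in range(K))

# Rebuild sum_i a_i b_i from digitwise correlations sum_i x^{(k)}_i y^{(r-k)}_i.
lam = 0
for r in range(2 * K - 1):
    for k in range(max(0, r - K + 1), min(r, K - 1) + 1):
        lam += 2 ** (u + v - r) * np.dot(xbits[k], ybits[r - k])

assert lam == np.dot(a, b)
print(lam, np.dot(a, b))
```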
We further remark that the proposed quantum algorithms offer many benefits for practical applications with large data sizes. Notably, the quantum communication cost in estimating the bipartite correlation scales as \(O(\frac{\log N}{\epsilon})\) and additionally, the discussed data privacy mechanism does not impose any additional overhead in terms of communication cost. Furthermore, the protocols eliminate the need for a trusted third party and necessitate only a minimal quantum resource allocation from the participating clients, encompassing the number of qubits and gate operations.
In summary, this study introduces novel blind quantum machine learning protocols that utilize a quantum bipartite correlator estimation algorithm for distributed parties. By addressing the potential threat of malicious parties attempting to extract information from others, we propose two distinct settings that ensure privacy preservation for each party in the QBC algorithm. Leveraging the advantageous properties of quantum phases and the flexibility of encoding data in various bases, our protocols can effectively safeguard information. The developed blind QML algorithm offers notable advantages, including low communication and computational complexity. This work contributes to the advancement of secure and efficient QML protocols, thus presenting an efficient pathway for distributed quantum computing.
###### Acknowledgements.
JL acknowledges support by DTRA (Award No. HDTRA1-20-2-0002) Interaction of Ionizing Radiation with Matter (IIRM) University Research Alliance (URA).
## Disclaimer
This paper was prepared for informational purposes with contributions from the Global Technology Applied Research center of JPMorgan Chase & Co. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates makes any explicit or implied representation or warranty and none of them accept any liability in connection with this position paper, including, without limitation, with respect to the completeness, accuracy, or reliability of the information contained herein and the potential legal, compliance, tax, or accounting effects thereof. This document is not intended as investment research or investment advice, or as a recommendation, offer, or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction.

\begin{table}
\begin{tabular}{l|l|l|l|l} Adversaries & Protocol & Privacy mechanism & Privacy & Communication complexity \\ \hline Honest-but-curious server & original QBC algorithm [8] & -- & worst scenario in Eq. 4 & \(O((\log_{2}N)/\epsilon)\) \\ \hline Malicious server & blind QBC for untrusted server (Sec. IV) & random phase padding & worst scenario in Eq. 6 & \(O((\log_{2}N)/\epsilon)\) \\ \hline Malicious client & blind QBC for untrusted client (Sec. V) & random basis encoding, random phase padding & information-theoretic secure & \(O((\log_{2}N)/\epsilon)\) \\ \end{tabular}
\end{table}
Table 1: Privacy and communication complexity of proposed distributed inner product estimation algorithms.
## Appendix A Extraction of \(y_{i}\) information in QBC by malicious server
We discuss a feasible attack protocol by which a malicious server can extract information about \(\mathbf{y}\) from the received state \(\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\ket{x_{i}}\) in the original QBC algorithm. In this protocol, the server simply prepares the \(o_{1}\) qubit in the \(\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\) state. The quantum state sent to the client would then be
\[\frac{1}{\sqrt{2}}(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{0}_{o_{1}}+ \frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\ket{1}_{o_{1}}) \tag{10}\]
The honest client then encodes \(y_{i}\) information in the phase with his own local oracle, leading to state
\[\begin{split}&\frac{1}{\sqrt{2N}}\Big(\sum_{i}^{N}\ket{i}_{n}\ket{0}_{o_{1}}+\sum_{i}^{N}(-1)^{y_{i}}\ket{i}_{n}\ket{1}_{o_{1}}\Big)\\ &=\frac{1}{\sqrt{2N}}\sum_{i}^{N}\ket{i}_{n}\big(\ket{0}_{o_{1}}+(-1)^{y_{i}}\ket{1}_{o_{1}}\big)\end{split} \tag{11}\]
that is sent back to server.
Then, it is clear that to extract the client's information, the server can measure qubit \(o_{1}\) in the X basis and extract the \(y_{j}\) information corresponding to the measured index qubit bitstring \(j\). In this case, by sampling the \(N\) index qubit states during the \(2^{t}-1=O(1/\epsilon)\) communication rounds, the malicious server could get \(O(1/\epsilon)\) bits of information about \(\mathbf{y}\). Indeed, given the state of Eq. 11 received by the server, the upper bound on the information that the server could get at each round by measuring the index qubits and qubit \(o_{1}\) is determined by Holevo's bound [44]:
\[H(C:S)\leq S(\rho)-\frac{1}{N}\sum_{i}^{N}S(\rho(i))=\log 2N, \tag{12}\]
where \(S(\rho)\) denotes the von Neumann entropy for density matrix \(\rho\) that corresponds to Eq. 11, and \(\rho_{i}=\ket{a_{i}}\ket{i}\bra{i}\bra{a_{i}}\) (\(a_{i}=+,-\)) forms the POVM set that server performs.
One might argue that the server could amplify the probability of sampling a particular index qubit bitstring \(j\) by reducing the amplitude of other index qubit bitstrings. That is, the quantum state sent to client could be
\[\frac{1}{\sqrt{2}}(\sum_{i}^{N}A_{i}\ket{i}_{n}\ket{0}_{o_{1}}+\sum_{i}^{N}A _{i}\ket{i}_{n}\ket{1}_{o_{1}}) \tag{13}\]
where \(|A_{i=j}|^{2}\gg|A_{i\neq j}|^{2}\) and \(\sum_{i}^{N}|A_{i}|^{2}=1\). However, the client can add an additional verification on the \(\lceil\log_{2}(N)\rceil\) index qubits upon receiving them by performing measurements in the X basis. This should yield \(+1\) for all index qubits, as the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}\) can be rewritten as \(\left(\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})\right)^{\otimes\lceil\log_{2}(N)\rceil}\). For the manipulated state outlined in Eq. 13, however, there exists a nonzero probability of producing a measurement outcome of \(-1\) for at least a portion of the measurements.
## Appendix B Redundant encoding against malicious server
We describe a redundant encoding approach aimed at reducing the probability that a malicious server acquires a specific bit \(y_{i}\) of interest using the attack strategy in Appendix A.
Given that the server is restricted to preparing the index qubits such that each index bitstring holds identical probability, after receiving the state back from the client, the probability that the server samples a specific index bitstring \(\ket{i}_{n}\) is simply \(\frac{1}{N}\). That is, in each communication iteration during the execution of the QBC algorithm, the server can obtain a specific \(y_{i}\) corresponding to the intended index only with probability \(\frac{1}{N}\); over the \(1/\epsilon\) iterations needed by the QBC algorithm, the total amount of extracted information is therefore \(\frac{1}{N\epsilon}\) bits. Following this, we can consider a protocol where the client and the server encode their single-bit local information \(y_{i}\) and \(x_{i}\) into bitstrings \(\left[y^{\prime}_{i,1},y^{\prime}_{i,2},\cdots,y^{\prime}_{i,M}\right]\) and \(\left[x^{\prime}_{i,1},x^{\prime}_{i,2},\cdots,x^{\prime}_{i,M}\right]\) of size \(M\), where \(M>1\). The total number of bits increases from \(N\) to \(MN\). The encoding rule is as follows:
\[\mathbf{x}^{\prime}_{i,j}=x_{i};\quad i=1,2,...,N;j=1,2,...,M; \tag{14}\]
which simply copies the bit \(x_{i}\) \(M\) times. As for \(\mathbf{y}^{\prime}\), the client can hide the information \(y_{i}\) randomly in one of the \(M\) digits and set the other \(M-1\) digits to be all zeros or all ones. That is, the client chooses either
\[\begin{split}\mathbf{y}^{\prime}_{i,j}=\delta_{j,J_{i}}\cdot y_{i};\\ i=1,2,...,N;& j=1,2,...,M,J_{i}\in\{1,2,...,M\}. \end{split} \tag{15}\]
or
\[\begin{split}\mathbf{y}^{\prime}_{i,j}=\delta_{j,J_{i}}\cdot y_{i}+(1-\delta_{j,J_{i}});\\ i=1,2,...,N;\quad j=1,2,...,M,\;J_{i}\in\{1,2,...,M\}.\end{split} \tag{16}\]
where \(J_{i}\) is a random number and \(\delta_{j,J_{i}}\) is the Kronecker delta. In these cases, the server would get \(\frac{1}{NM}\sum_{i}^{N}x_{i}y_{i}\) or \(\frac{1}{NM}\sum_{i}^{N}x_{i}y_{i}+\frac{M-1}{NM}\sum_{i}^{N}x_{i}\) by executing the QBC algorithm, depending on whether the client chooses the encoding in Eq. 15 or in Eq. 16. Afterwards, the client can send a one-bit message via the classical channel to let the server know which encoding was used.
We remark that at each communication round, the probability that the server samples a specific bit reduces from \(\frac{1}{N}\) to \(\frac{1}{NM}\). Even though \(M\) times more communication rounds are needed to achieve the same error bound \(\epsilon\) as in the original QBC case, the server does not know which digit encodes the correct \(y_{i}\) information, since the \(J_{i}\) are random numbers. Therefore, using the attack strategy detailed in Appendix A, the probability that the server successfully obtains a specific bit \(y_{i}\) is \(\frac{1}{NM}\times\frac{M}{\epsilon}\times\frac{1}{M}=\frac{1}{NM\epsilon}\), where the second term \(\frac{M}{\epsilon}\) is the total number of communication rounds and the third term \(\frac{1}{M}\) is due to the randomness in \(J_{i}\). It is clear that a larger value of \(M\) corresponds to a lower probability that the server successfully extracts valuable information from the client through this attack strategy. The flexibility that the client can independently choose the encoding method also protects the majority information of \(\mathbf{y}\); for instance, the client may choose the encoding in Eq. 15 if the majority of \(\mathbf{y}\) is 1, so as to decrease the probability that 1s are detected. Nevertheless, the trade-off for employing this redundant encoding approach is an increased quantum communication complexity, which reads \(O(\frac{\log(NM)}{\epsilon})\).
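The arithmetic behind the redundant encoding can likewise be checked classically. The sketch below (illustrative sizes; the Eq. 16 variant assumes, as described in the text, that the remaining \(M-1\) digits are set to one) verifies the two server-side results quoted above:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 4
x = rng.integers(0, 2, N)
y = rng.integers(0, 2, N)
J = rng.integers(0, M, N)                  # secret digit positions J_i

xr = np.repeat(x[:, None], M, axis=1)      # Eq. 14: copy x_i into all M digits

# Eq. 15: hide y_i in digit J_i, remaining digits set to 0.
y15 = np.zeros((N, M), dtype=int)
y15[np.arange(N), J] = y

# Eq. 16 (assumed variant: remaining M-1 digits set to 1, per the text).
y16 = np.ones((N, M), dtype=int)
y16[np.arange(N), J] = y

target = np.sum(x * y) / (N * M)
est15 = np.sum(xr * y15) / (N * M)         # ideal QBC output under encoding (15)
est16 = np.sum(xr * y16) / (N * M)         # ideal QBC output under encoding (16)

assert np.isclose(est15, target)
assert np.isclose(est16, target + (M - 1) * np.sum(x) / (N * M))
```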
## Appendix C Construction of oracle operator \(\hat{U}_{X_{2}}\) for blind QBC with untrusted client
In this section, we give the details of the implementation of the \(\hat{U}_{X_{2}}\) operator introduced in Sec. V. Recall that \(\hat{U}_{X_{2}}\) is applied to extract the phase term \((-1)^{x_{i}y_{i}}\), as shown in Eq. 8. The quantum state before applying \(\hat{U}_{X_{2}}\) is given by
\[\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\left(a_{i}\ket{0}+b_{i}(-1)^{y_{i}} \ket{1}\right)_{o_{1}}, \tag{30}\]
where \(a_{i}\) and \(b_{i}\) depend on \(x_{i}\) and the encoding basis \(R_{i}\). For data \(j\) encoded in the Z basis, i.e., \(R_{j}=0\), one has \(a_{j}b_{j}=0\) and the phase \((-1)^{x_{j}y_{j}}\) naturally shows up as in the original QBC algorithm. For data encoded in the X basis, i.e., \(R_{j}=1\), we aim to extract the \((-1)^{x_{j}y_{j}}\) term by transforming it back to the Z basis.
For this purpose, we consider the following protocol. Firstly, the \(\hat{U}_{\vec{x}}\) oracle is called to generate the state \(\frac{1}{\sqrt{N}}\sum_{i}^{N}\ket{i}_{n}\left(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1}\right)_{o_{1}}\ket{x_{i}}_{o_{a}}\), where the additional qubit \(o_{a}\) encodes \(x_{i}\) in the Z basis. Secondly, a Hadamard gate is applied on qubit \(o_{1}\) conditioned on the index qubit states \(\ket{i}_{n}=\ket{j}_{n}\) that satisfy \(R_{j}=1\) (i.e., encoding in the X basis). This transforms the X-basis encoding back to the Z basis. Then, a NOT gate on qubit \(o_{1}\) conditioned on those index qubit states, followed by a controlled-Z gate between \(o_{1}\) and \(o_{a}\), is applied. With the above steps, a phase \((-1)\) is generated only when \(x_{i}=y_{i}=1\), i.e., the desired phase \((-1)^{x_{i}y_{i}}\) is obtained. The state now reads:
\[\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\ket{m_{i}}_{o_{1}} \ket{x_{i}}_{o_{a}}. \tag{31}\]
Here \(m_{i}=x_{i}\) when \(R_{i}=0\) or when \(R_{i}=1\) and \(y_{i}=0\).
Now that the phase term \((-1)^{x_{i}y_{i}}\) has been extracted, we transform the \(o_{1}\) qubit state \(\ket{m_{i}}_{o_{1}}\) back to the initial \(\left(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1}\right)_{o_{1}}\) by applying the controlled Hadamard and NOT gates again, and then decouple the ancillary qubit by calling \(\hat{U}_{\vec{x}}\) once more. The resulting quantum state reads \(\frac{1}{\sqrt{N}}\sum_{i}^{N}(-1)^{x_{i}y_{i}}\ket{i}_{n}\left(a_{i}\ket{0}+b_{i}(-1)^{y_{i}}\ket{1}\right)_{o_{1}}\).
|
2309.01032 | Hessian-aware Quantized Node Embeddings for Recommendation | Graph Neural Networks (GNNs) have achieved state-of-the-art performance in
recommender systems. Nevertheless, the process of searching and ranking from a
large item corpus usually requires high latency, which limits the widespread
deployment of GNNs in industry-scale applications. To address this issue, many
methods compress user/item representations into the binary embedding space to
reduce space requirements and accelerate inference. Also, they use the
Straight-through Estimator (STE) to prevent vanishing gradients during
back-propagation. However, the STE often causes the gradient mismatch problem,
leading to sub-optimal results.
In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an
effective solution for discrete representations of users/items that enable fast
retrieval. HQ-GNN is composed of two components: a GNN encoder for learning
continuous node embeddings and a quantized module for compressing
full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from
both lower memory requirements and faster inference speeds compared to vanilla
GNNs. To address the gradient mismatch problem in STE, we further consider the
quantized errors and its second-order derivatives for better stability. The
experimental results on several large-scale datasets show that HQ-GNN achieves
a good balance between latency and performance. | Huiyuan Chen, Kaixiong Zhou, Kwei-Herng Lai, Chin-Chia Michael Yeh, Yan Zheng, Xia Hu, Hao Yang | 2023-09-02T22:34:26Z | http://arxiv.org/abs/2309.01032v1 | # Hessian-aware Quantized Node Embeddings for Recommendation
###### Abstract.
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, the process of searching and ranking from a large item corpus usually requires high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into the binary embedding space to reduce space requirements and accelerate inference. Also, they use the Straight-through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes the gradient mismatch problem, leading to sub-optimal results.
In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete representations of users/items that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantized module for compressing full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference speeds compared to vanilla GNNs. To address the gradient mismatch problem in STE, we further consider the quantized errors and its second-order derivatives for better stability. The experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
Collaborative Filtering, Graph Neural Networks, Low-bit Quantization, Generalized Straight-Through Estimator
Existing binarization methods often use the Straight-through Estimator (STE) (Beng et al., 2017) to avoid zero gradients during back-propagation. Specifically, the non-differentiable quantization function is replaced with a surrogate: the identity function (Kang et al., 2018) or the scaled tanh function (Beng et al., 2017; Li et al., 2018). However, the use of different forward and backward functions results in a gradient mismatch problem, i.e., the modified gradient is no longer the gradient of the loss function, which makes the network training unstable (Kang et al., 2018; Li et al., 2019).
In this work, we propose the Hessian-aware Quantized GNN (HQ-GNN) for effective discrete representations of users and items for fast retrieval. Specifically, HQ-GNN consists of two components: a GNN encoder for learning continuous user/item embeddings, and a quantized module for compressing the full-precision embeddings into low-bit ones. Instead of 1-bit, HQ-GNN allows arbitrary bit quantization for better trade-offs between latency and performance. To address the gradient mismatch problem, we tailor the STE by further considering the quantized errors and second-order derivatives (e.g. Hessian) for better stability and accuracy. As such, HQ-GNN can benefit from both lower memory footprint and faster inference speed comparing to vanilla GNN. Experimental results on several large-scale datasets show the superiority of our HQ-GNN.
## 2. Related Work
GNNs have received a lot of attention in graph domains. GNNs learn how to aggregate messages from local neighbors using neural networks, which have been successfully applied to user-item bipartite graphs (Hamilton et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019). Some representative models include PinSage (Li et al., 2019), NGCF (Li et al., 2018), LightGCN (Li et al., 2019), etc. Although GNNs have great ability of capturing high-order collaborative signals between users and items, their node embeddings are stored in continuous space (e.g., FP32), which is the major bottleneck for searching and ranking (e.g., high computational cost of similarity calculation between continuous embeddings). It is thus essential to improve the efficiency of generating top-\(k\) recommendations at scale (Li et al., 2019; Kang et al., 2018).
_Network Quantizations._ Quantization is a hardware-friendly approach that approximates real values with low-bit ones (Beng et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Meanwhile, network inference can be performed using cheaper fixed-point multiply-accumulate operations. As a result, quantization can reduce the storage overhead and inference latency of networks (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). In recommender systems, HashNet (Beng et al., 2017) proposes to binarize the embeddings by a continuation method for multimedia retrieval. Similarly, CIGAR (Li et al., 2019) learns binary codes to build a hash table for retrieving top-\(k\) item candidates. Recently, HashGNN (Kang et al., 2018) learns hash functions and graph representations in an end-to-end fashion. Our HQ-GNN builds on HashGNN. Specifically, we extend the 1-bit quantization of HashGNN to an arbitrary-bit one and address the gradient mismatch issue of STE, resulting in better performance.
## 3. Methodology
### Task Description
Generally, the input of recommender systems includes a set of users \(\mathcal{U}=\{u\}\), items \(\mathcal{I}=\{i\}\), and users' implicit feedback \(\mathcal{O}^{+}=\{(u,i)\mid u\in\mathcal{U},i\in\mathcal{I},y_{ui}=1\}\), where \(y_{ui}=1\) indicates that user \(u\) has adopted item \(i\) before, \(y_{ui}=0\) otherwise. One can construct a corresponding bipartite graph \(\mathcal{G}=(\mathcal{V}=\mathcal{U}\cup\mathcal{I},\mathcal{E}=\mathcal{O}^ {+})\). The goal is to estimate the user preference towards unobserved items.
We next introduce our HQ-GNN that consists of two parts: a GNN encoder and a quantized module.
### GNN-based Recommenders
Most GNNs fit under the message-passing schema (Li et al., 2019; Li et al., 2019), where the representation of each node is updated by collecting messages from its neighbors via an aggregation operation Agg\((\cdot)\) followed by an Update\((\cdot)\) operation as:
\[\begin{split}\mathbf{e}_{u}^{(l)}=&\text{Update} \left(\mathbf{e}_{u}^{(l-1)},\text{Agg}\left(\{\mathbf{e}_{i}^{(l-1)}\mid i \in\mathcal{N}_{u}\}\right)\right),\\ \mathbf{e}_{i}^{(l)}=&\text{Update}\left(\mathbf{e }_{i}^{(l-1)},\text{Agg}\left(\{\mathbf{e}_{u}^{(l-1)}\mid u\in\mathcal{N}_{ i}\}\right)\right),\end{split} \tag{1}\]
where \(\{\mathbf{e}_{u}^{(l)},\mathbf{e}_{i}^{(l)}\}\in\mathbb{R}^{d}\) denote the embeddings of user and item in the \(l\)-th layer; \(\mathcal{N}_{u}\) and \(\mathcal{N}_{i}\) denote neighbors of user \(u\) and item \(i\), respectively. By propagating \(L\) layer, a pooling operator is used to obtain the final representations:
\[\mathbf{e}_{u}=\text{Pool}(\mathbf{e}_{u}^{(0)},\ldots,\mathbf{e}_{u}^{(L)}), \quad\mathbf{e}_{i}=\text{Pool}(\mathbf{e}_{i}^{(0)},\ldots,\mathbf{e}_{i}^{(L)}), \tag{2}\]
where the final representations \(\mathbf{e}_{u}\in\mathbb{R}^{d}\) and \(\mathbf{e}_{i}\in\mathbb{R}^{d}\) can be used for downstream tasks. However, the full-precision embeddings, _e.g._, FP32, usually require high memory cost and power consumption to generate top-\(k\) recommendations for the billion-scale graphs.
### Low-bit Quantization
Quantization is a hardware-friendly technique to reduce memory footprint and energy consumption (Li et al., 2019; Li et al., 2019; Li et al., 2019). For a uniform \(b\)-bit quantization, one can clip and normalize a floating-point number \(x\) into a quantization interval, parameterized by an upper \(u\) and a lower \(l\) bounds, as:
\[x_{n}=\frac{\text{clip}(x,l,u)-l}{\Delta}, \tag{3}\]
where \(x_{n}\) is the normalized output, \(\text{clip}(x,l,u)=\min(\max(x,l),u)\), \(\Delta=\frac{u-l}{2^{d}-1}\) is the interval length, and \(b\) denotes the number of quantization levels, _e.g._, \(b=8\) for 8-bit quantization. During training, the clipping interval \((l,u)\) is often unknown beforehand, two strategies are commonly used to determine the upper/lower thresholds: exponential moving averages (Li et al., 2019) and treating the thresholds as learnable parameters (Li et al., 2019). The normalized output \(x_{n}\) can be then converted to a discrete value \(x_{b}\) using a round function with post-scaling as (Li et al., 2019; Li et al., 2019; Li et al., 2019):
\[x_{b}=x_{q}\cdot\Delta,\quad x_{q}=\text{round}(x_{n}), \tag{4}\]
where \(\text{round}(\cdot)\) maps a full-precision value to its nearest integer. The quantized tensor \(x_{b}\) can then be used for efficient computation by emergent accelerators (_e.g._, NVIDIA TensorRT) that are able to handle \(\Delta\) efficiently.
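A minimal NumPy sketch of the uniform quantizer in Eqs. (3)-(4) is given below; the fixed clipping bounds stand in for the EMA-tracked or learned thresholds \(l\) and \(u\), and the function name is illustrative:

```python
import numpy as np

def quantize(x, lo, hi, b):
    """Uniform b-bit quantizer: clip/normalize (Eq. 3), then round and
    post-scale (Eq. 4). Returns integer codes x_q, values x_b = x_q * delta, and delta."""
    delta = (hi - lo) / (2 ** b - 1)
    x_n = (np.clip(x, lo, hi) - lo) / delta     # normalized value in [0, 2^b - 1]
    x_q = np.round(x_n)                         # integer code
    return x_q.astype(np.int32), x_q * delta, delta

e = np.random.default_rng(5).normal(size=8).astype(np.float32)
x_q, x_b, delta = quantize(e, lo=-1.0, hi=1.0, b=8)

# Rounding error of the (shifted) clipped value is at most half an interval.
assert np.all(np.abs((np.clip(e, -1.0, 1.0) + 1.0) - x_b) <= delta / 2 + 1e-6)
print(x_q)      # integer codes in {0, ..., 255}, usable for fast retrieval
```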
By combining Eq. (3) and Eq. (4), we can define a quantization function \(Q_{b}(\cdot)\) as \(x_{b}=Q_{b}(x)\). If the input is a vector/matrix, \(Q_{b}(\cdot)\) is applied to each element of the vector/matrix. To this end, we can quantize the GNN embeddings \(\mathbf{e}_{u}\) and \(\mathbf{e}_{i}\) in Eq. (2) into:
\[\mathbf{q}_{u}=Q_{b}(\mathbf{e}_{u}),\quad\mathbf{q}_{i}=Q_{b}(\mathbf{e}_{i}), \tag{5}\]
where \(\{\mathbf{q}_{u},\mathbf{q}_{i}\}\in\mathbb{R}^{d}\) are the \(b\)-bit representations of user \(u\) and item \(i\), respectively. Our model follows the mixed-precision quantization policy (Zhu et al., 2017), where we only compress the _activations_ of GNNs for faster inference, and leave the _weights_ of GNNs at full precision. Since GNNs often contain less than three layers and have limited weights, the mixed-precision scheme could achieve good trade-offs between performance and memory size (Krizhevsky et al., 2017). The mixed-precision quantization has also become more and more common in deep learning frameworks2.
Footnote 2: [https://www.tensorflow.org/guide/mixed_precision](https://www.tensorflow.org/guide/mixed_precision)
However, the non-differentiable quantized processes are undesirable for the standard back-propagation, i.e., the quantization function is intrinsically a discontinuous step function and nearly has zero gradients, which significantly affects the training of HQ-GNN. We next present a Generalized Straight-Through Estimator to address this problem.
### Generalized Straight-Through Estimator
The main challenge of training our HQ-GNN arises from the discretized round function in Eq. (4), whose derivative is either infinite or zero almost everywhere. One popular family of estimators is the so-called Straight-Through Estimators (STE) (Srivastava et al., 2015; Srivastava et al., 2015). In STE, the forward computation of \(\text{round}(\cdot)\) is unchanged, but back-propagation is computed through a surrogate (Srivastava et al., 2015; Srivastava et al., 2015; Srivastava et al., 2015): replacing \(\text{round}(\cdot)\) with an identity function, _i.e._, \(\mathcal{G}_{\mathbf{x_{n}}}=\mathcal{G}_{\mathbf{x_{q}}}\), where \(\mathcal{G}\) denotes the gradient operator. However, STE runs the risk of convergence to poor minima and unstable training (Srivastava et al., 2015). For example, the values \(0.51\) and \(1.49\) round to the same integer \(1\) with different quantization errors. Moreover, STE updates both values equally with the same gradient at integer \(1\), which is likely to be biased by the accumulated quantization errors. In addition, a small decrement (_e.g._, \(-0.2\)) of the value \(0.51\) can change the quantized integer from \(1\) to \(0\), while the same decrement applied to \(1.49\) cannot.
To mitigate the impact of quantized errors, we generalize the STE as (Srivastava et al., 2015):
\[\mathcal{G}_{\mathbf{x_{n}}}=\mathcal{G}_{\mathbf{x_{q}}}\odot\left(1+\delta \cdot\text{sign}(\mathcal{G}_{\mathbf{x_{q}}})\odot(\mathbf{x_{n}}-\mathbf{x_ {q}})\right), \tag{6}\]
where \(\odot\) denotes element-wise product; \(\text{sign}(\cdot)\) is a sign function such that \(\text{sign}(x)=+1\) if \(x\geq 0\), \(-1\) otherwise; \(\delta\) is the scaling factor. Eq. (6) is able to scale up/down the gradient of \(\mathcal{G}_{\mathbf{x_{q}}}\) when the \(\mathbf{x_{n}}\) requires a larger/smaller magnitude for an update. Moreover, Eq. (6) is equivalent to vanilla STE when setting \(\delta=0\). It is thus crucial to determine the scaling factor \(\delta\) during training.
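The backward rule of Eq. (6) amounts to an element-wise rescaling of the incoming gradient by the signed rounding residual. A small NumPy sketch (illustrative values, including the 0.51/1.49 example above) is shown below; setting the scaling factor to zero recovers the vanilla STE:

```python
import numpy as np

def gste_backward(grad_xq, x_n, x_q, delta_scale):
    """Generalized STE of Eq. (6): rescale the incoming gradient by the signed
    rounding residual; delta_scale = 0 recovers the vanilla STE."""
    residual = x_n - x_q                       # elementwise, in [-0.5, 0.5]
    return grad_xq * (1.0 + delta_scale * np.sign(grad_xq) * residual)

x_n = np.array([0.51, 1.49, 2.02])    # normalized values
x_q = np.round(x_n)                   # 1, 1, 2 -> identical STE gradients for 0.51 and 1.49
g   = np.array([1.0, 1.0, -0.3])      # gradient w.r.t. the rounded values

print(gste_backward(g, x_n, x_q, delta_scale=0.0))   # plain STE: unchanged
print(gste_backward(g, x_n, x_q, delta_scale=0.5))   # scaled by the quantization error
```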
Inspired by Hessian-aware quantized networks (Krizhevsky et al., 2017; Krizhevsky et al., 2017), we use second-order information to guide the selection of \(\delta\). Let \(\epsilon=\mathbf{x_{n}}-\mathbf{x_{q}}\) denote the quantized error for round function, where each element of \(\epsilon\) is well bound by a small number, _i.e._, \(|\epsilon_{i}|\leq\frac{0.5}{2^{b}-1}\), with element-wise Taylor expansion, we have:
\[\mathcal{G}_{\mathbf{x_{n}}}= \mathcal{G}_{\mathbf{x_{q}}}+\frac{\mathcal{G}_{\mathbf{x_{n}}}- \mathcal{G}_{\mathbf{x_{q}}}}{\mathbf{x_{n}}-\mathbf{x_{q}}}\odot(\mathbf{x_ {n}}-\mathbf{x_{q}})\] \[= \mathcal{G}_{\mathbf{x_{q}}}+\frac{\mathcal{G}_{\mathbf{x_{q}} +\epsilon}-\mathcal{G}_{\mathbf{x_{q}}}}{\epsilon}\odot(\mathbf{x_{n}}- \mathbf{x_{q}})\] \[\approx \mathcal{G}_{\mathbf{x_{q}}}+\mathcal{G}_{\mathbf{x_{q}}}^{\prime }\odot(\mathbf{x_{n}}-\mathbf{x_{q}}),\]
where the division is element-wise and \(\mathcal{G}_{\mathbf{x_{q}}}^{\prime}=\frac{\partial\mathcal{G}_{\mathbf{x_{q}}}}{\partial\mathbf{x_{q}}}\) denotes the second-order derivative of the task loss with respect to \(\mathbf{x_{q}}\). The above equation can be rewritten as:
\[\mathcal{G}_{\mathbf{x_{n}}}\approx\mathcal{G}_{\mathbf{x_{q}}}\odot\left(1+ \frac{\mathcal{G}_{\mathbf{x_{q}}}^{\prime}}{|\mathcal{G}_{\mathbf{x_{q}}}|} \odot\text{sign}(\mathcal{G}_{\mathbf{x_{q}}})\odot(\mathbf{x_{n}}-\mathbf{x_ {q}})\right), \tag{7}\]
where \(|\cdot|\) denotes the absolute value. Comparing Eq. (6) and Eq. (7) suggests that we can connect \(\delta\) with \(\frac{\mathcal{G}_{\mathbf{x_{q}}}^{\prime}}{|\mathcal{G}_{\mathbf{x_{q}}}|}\), but explicitly forming the Hessian matrix \(\mathbf{H}\) (containing all \(\mathcal{G}_{\mathbf{x_{q}}}^{\prime}\)) is computationally infeasible in practice. Instead, recent quantized networks approximate the second-order information by the average Hessian Trace (Krizhevsky et al., 2017) or top Hessian eigenvalues (Krizhevsky et al., 2017). In this work, we approximate \(\frac{\mathcal{G}_{\mathbf{x_{q}}}^{\prime}}{|\mathcal{G}_{\mathbf{x_{q}}}|}\) by the average Hessian trace normalized by the average gradient magnitude, and use it as the scaling factor:
\[\delta=\frac{\text{Tr}(\mathbf{H})/N}{G}, \tag{8}\]
where \(N\) is the number of diagonal elements in \(\mathbf{H}\) and \(G\) is an average over the absolute values of gradients, _i.e._, \(\mathbb{E}[|\mathcal{G}_{\mathbf{x_{q}}}|]\).
```
Input: A GNN \(f_{\mathrm{gnn}}\), bipartite graph \(\mathbf{A}\), bit-width \(b\), regularizer \(\alpha\).
Output: Model parameters \(\Theta\) of \(f_{\mathrm{gnn}}\).
1  Initialize \(\Theta\);
2  foreach mini-batch do
     /* Forward pass */
3    Compute node embeddings \(\mathbf{e}_{u}\) and \(\mathbf{e}_{i}\) by Eq. (2);
4    Normalize: \(\hat{\mathbf{e}}_{u}=(\mathrm{clip}(\mathbf{e}_{u},l,u)-l)/\Delta\) (same for \(\hat{\mathbf{e}}_{i}\));
5    Quantize: \(\tilde{\mathbf{e}}_{u}=\mathrm{round}(\hat{\mathbf{e}}_{u})\) (same for \(\tilde{\mathbf{e}}_{i}\));
6    Post-scale: \(\mathbf{q}_{u}=\tilde{\mathbf{e}}_{u}\odot\Delta\) (same for \(\mathbf{q}_{i}\));
7    Compute the BPR loss by Eq. (9);
     /* Backward propagation */
8    Compute the gradients \(\mathcal{G}_{\tilde{\mathbf{e}}_{u}}\) and \(\mathcal{G}_{\tilde{\mathbf{e}}_{i}}\) via standard SGD;
9    Adjust the gradients by Eq. (6):
       \(\mathcal{G}_{\hat{\mathbf{e}}_{u}}=\mathcal{G}_{\tilde{\mathbf{e}}_{u}}\odot\big(1+\delta\cdot\mathrm{sign}(\mathcal{G}_{\tilde{\mathbf{e}}_{u}})\odot(\hat{\mathbf{e}}_{u}-\tilde{\mathbf{e}}_{u})\big)\),
       \(\mathcal{G}_{\hat{\mathbf{e}}_{i}}=\mathcal{G}_{\tilde{\mathbf{e}}_{i}}\odot\big(1+\delta\cdot\mathrm{sign}(\mathcal{G}_{\tilde{\mathbf{e}}_{i}})\odot(\hat{\mathbf{e}}_{i}-\tilde{\mathbf{e}}_{i})\big)\);
10   Compute the trace of the Hessian by Hutchinson's method (Krizhevsky et al., 2017);
11   Update GNN parameters \(\Theta\) and the scaling factor \(\delta\) by Eq. (8);
   end foreach
return \(\Theta\)
```
**Algorithm 1** HQ-GNN
We compute the trace of the Hessian via Hutchinson's method (Krizhevsky et al., 2017). Let \(\mathbf{v}\) be a random vector whose elements are i.i.d. sampled from a Rademacher distribution, such that \(\mathbb{E}[\mathbf{v}\mathbf{v}^{\top}]=\mathbf{I}\). Then, we have:
\[\text{Tr}(\mathbf{H}) =\text{Tr}(\mathbf{H}\mathbb{E}[\mathbf{v}\mathbf{v}^{\top}])= \mathbb{E}[\text{Tr}(\mathbf{H}\mathbf{v}\mathbf{v}^{\top}])\] \[=\mathbb{E}[\mathbf{v}^{\top}\mathbf{H}\mathbf{v}]\approx \frac{1}{m}\sum_{i=1}^{m}({\mathbf{v}^{(i)}}^{\top}\mathbf{H}\mathbf{v}^{(i)}),\]
where \(\mathbf{I}\) is the identity matrix. The trace of \(\mathbf{H}\) can be estimated by \(\mathbb{E}[\mathbf{v}^{\top}\mathbf{H}\mathbf{v}]\), where the expectation can be obtained by drawing \(m\) random vectors. Note that we can first compute \(\mathbf{H}\mathbf{v}\), then \(\mathbf{v}^{\top}\mathbf{H}\mathbf{v}\) is a simple inner product between \(\mathbf{v}\) and \(\mathbf{H}\mathbf{v}\). Also, we can obtain \(\mathbf{H}\mathbf{v}\) efficiently without computing an exact Hessian matrix as follows:
\[\frac{\partial(\mathcal{G}_{\mathbf{x_{q}}}^{\top}\mathbf{v})}{\partial\mathbf{x_{q}}}=\frac{\partial\mathcal{G}_{\mathbf{x_{q}}}^{\top}}{\partial\mathbf{x_{q}}}\mathbf{v}+\mathcal{G}_{\mathbf{x_{q}}}^{\top}\frac{\partial\mathbf{v}}{\partial\mathbf{x_{q}}}=\mathbf{H}\mathbf{v},\]
where the first equality is the chain rule, while the second is due to the independence of \(\mathbf{v}\) and \(\mathbf{x_{q}}\). As such, the cost of Hessian matrix-vector multiply is the same as one gradient back-propagation.
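A toy NumPy sketch of Hutchinson's estimator is given below. For simplicity it uses a quadratic loss whose Hessian is known explicitly, so the Hessian-vector product is computed directly rather than by the extra back-propagation described above; the sizes and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 50
A = rng.normal(size=(d, d))
H = A @ A.T                        # known symmetric Hessian of the toy loss 0.5 * x^T H x

def hessian_vector_product(v):
    # In practice Hv comes from one extra back-propagation through grad^T v;
    # for this toy quadratic it is simply H @ v.
    return H @ v

m = 2000
estimates = []
for _ in range(m):
    v = rng.choice([-1.0, 1.0], size=d)    # Rademacher probe vector
    estimates.append(v @ hessian_vector_product(v))

print("Hutchinson estimate:", np.mean(estimates))   # approximately equals the trace
print("exact trace:       ", np.trace(H))
```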
### Model Optimization
#### 3.5.1. Loss function
Based on the \(b\)-bit representations \(\mathbf{q}_{u}\) and \(\mathbf{q}_{i}\) from Eq. (5), we can adopt the inner product to estimate the user's preference towards the target item as: \(\hat{y}_{ui}=\langle\mathbf{q}_{u},\mathbf{q}_{i}\rangle\). Also, we use Bayesian Personalized Ranking loss to optimize the model (Han et al., 2017):
\[\mathcal{L}_{\text{BPR}}(\mathbf{\Theta})=\sum_{(u,i)\in\mathcal{O}^{+},(u,j) \neq\mathcal{O}^{+}}-\ln\sigma\left(\hat{y}_{ui}-\hat{y}_{uj}\right)+\alpha \|\mathbf{\Theta}\|_{F}^{2}, \tag{9}\]
where \(\sigma(\cdot)\) denotes the sigmoid function, \(\mathbf{\Theta}\) denotes the model parameters of GNNs, and \(\alpha\) controls the \(L_{2}\) regularization strength. Finally, we briefly summarize our HQ-GNN in Algorithm 1.
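For reference, a minimal NumPy sketch of the BPR objective in Eq. (9) on a batch of (user, positive item, negative item) triples is given below; the batch construction and the way the \(L_{2}\) term is passed in are illustrative simplifications:

```python
import numpy as np

def bpr_loss(q_u, q_i, q_j, theta_l2, alpha):
    """BPR loss of Eq. (9): -sum log sigmoid(y_ui - y_uj) + alpha * ||Theta||^2,
    with scores given by inner products of the quantized embeddings."""
    y_ui = np.sum(q_u * q_i, axis=1)
    y_uj = np.sum(q_u * q_j, axis=1)
    log_sigmoid = -np.log1p(np.exp(-(y_ui - y_uj)))   # log sigma(y_ui - y_uj)
    return -np.sum(log_sigmoid) + alpha * theta_l2

B, d = 32, 64
rng = np.random.default_rng(7)
q_u, q_i, q_j = (rng.normal(size=(B, d)) for _ in range(3))
print(bpr_loss(q_u, q_i, q_j, theta_l2=1.0, alpha=1e-4))
```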
#### 3.5.2. Complexity
Compared to vanilla GNN, HQ-GNN has an extra time cost to perform gradient adjustments in Eq. (6). The computation of Hessian Trace only requires one gradient back-propagation, which is significantly faster than training the GNN encoder itself (Han et al., 2017). Thus, HQ-GNN has the same training complexity as its GNN encoder. However, during the inference, we can use integer-only node embeddings (without post-scaling) to generate the top-\(k\) candidates, which has both lower memory footprint and faster inference speed compared to the vanilla GNN.
## 4. Experiments
### Experimental Settings
#### 4.1.1. **Datasets**
We evaluate our method on four public datasets (Gowalla et al., 2017; Gowalla et al., 2018; Gowalla et al., 2019): Gowalla, Yelp-2018, Amazon-book, and Alibaba. Their statistics are summarized in Table 1. For each dataset, we randomly select 80% of historical interactions of each user to construct the training set, and treat the remaining as the test set. From the training set, we randomly select 10% of interactions as the validation set to tune the hyper-parameters.
Footnote 3: [https://snap.stanford.edu/data/loc-gowalla.html](https://snap.stanford.edu/data/loc-gowalla.html)
Footnote 4: [https://www.yelp.com/dataset](https://www.yelp.com/dataset)
#### 4.1.2. **Baselines and Evaluations**
To verify the effectiveness of HQ-GNN, we mainly compare with graph-based models: NGCF (Gowalla et al., 2017), LightGCN (Gowalla et al., 2017), HashNet (Han et al., 2017) and HashGNN (Han et al., 2017). For HashNet, HashGNN and HQ-GNN, we can choose any GNN encoder to compute the continuous node embeddings in Eq. (2). The comparison against other methods (_e.g._, factorization machines) is omitted, since most of them are outperformed by LightGCN. We choose the widely-used Recall@k and NDCG@k as the evaluation metrics (Gowalla et al., 2017; Gowalla et al., 2019; Gowalla et al., 2019). We simply set \(k=50\) in all experiments (Han et al., 2017).
#### 4.1.3. **Implementation Details**
For all baselines, the embedding size of user/item is searched among \(\{16,32,64,128\}\). The hyper-parameters (_e.g._, batch size, learning rate) of baselines are initialized as their original settings and are then carefully tuned to achieve the optimal performance. For HQ-GNN, we search \(L_{2}\) regularizer \(\alpha\) within \(\{10^{-5},10^{-4},10^{-3},10^{-2},10^{-1}\}\). In addition, we determine the upper/lower thresholds (Eq. (3)) by exponential moving averages (Gowalla et al., 2017), and set the number of bits \(b=1\) in Eq. (5) for fair comparisons with binary hash methods: HashNet (Han et al., 2017) and HashGNN (Han et al., 2017).
### Experimental Results
#### 4.2.1. **Overall Performance**
We present a comprehensive performance comparison between full-precision GNNs and quantization-aware GNNs. We summarize the results in terms of Recall@50 and NDCG@50 for different datasets in Table 2. From the table, we have two major observations: 1) Among all 1-bit GNNs, our proposed HQ-GNN consistently outperforms both HashNet and HashGNN by a large margin on all four datasets. Clearly, this reveals that our HQ-GNN provides meaningful gradient adjustments for the non-differentiable quantization function. For example, with the LightGCN encoder, HQ-GNN achieves on average a 15.80% improvement in Recall@50 and over a 15.63% improvement in NDCG@50 compared to the state-of-the-art HashGNN. 2) It is not surprising that full-precision GNNs perform better than quantization-aware GNNs in all cases. However, quantization-aware GNNs benefit from both a lower memory footprint and faster inference speed compared to vanilla GNNs.
In terms of memory and inference speed, we have observed similar results as those reported in HashNet (Han et al., 2017) and HashGNN (Han et al., 2017). This is because our HQ-GNN, with \(b=1\), inherits all the benefits of HashGNN. For instance, using binarized embeddings (1 bit) can significantly reduce memory usage as compared to using FP32 embeddings. Moreover, the inference speed of our HQ-GNNs is approximately 3.6 times faster than that of full-precision GNNs because the Hamming distance between two binary embeddings can be calculated efficiently (Han et al., 2017). These features make our HQ-GNN more desirable for large-scale retrieval applications in the industry.
#### 4.2.2. **Compared to STE**
The STE method propagates the same gradient from an output to an input of the discretizer, assuming that the derivative of the discretizer is equal to 1. In contrast, our GSTE method adopts the Hessian to refine the gradients. To evaluate the effectiveness of our GSTE method, we chose LightGCN as the backbone and quantized its embeddings into 1 bit. The performance on different datasets is summarized in Table 3. From the table, it is clear that our GSTE method performs better than STE for 1-bit quantization, with improvements ranging from 14.7% to 24.5%.
Regarding running time, during the training stage, our GSTE method requires computing the trace of Hessian using Hutchinson's method, which is however fast. From Table 3, we can see that our GSTE method is slightly slower than STE, which is negligible in practice. During inference, both our GSTE and STE methods have the same speed as both use 1-bit quantized embeddings for retrieval, and the trace of Hessian is not needed in the inference stage.
The left of Figure 1 also displays the training curves of GSTE and STE, and we clearly observe that training quantized LightGCN with GSTE is better than STE in terms of stability. This highlights the effectiveness of utilizing Hessian information in the training process. The right of Figure 1 shows the impact of quantization levels by varying \(b\) within \(\{1,2,3,4\}\) for both GSTE and STE. As can be seen, aggressive quantization (less than 2-bit precision) can lead to significant degradation in the accuracy. When \(b=4\), HQ-GNN obtains 98.5% performance recovery of LightGCN. Comparing STE and GSTE, our GSTE consistently performs better than STE in all cases. In summary, HQ-GNN strikes a good balance between latency and performance.

\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & Gowalla & Yelp2018 & Amazon-Book & Alibaba \\ \hline \#Users & 29,858 & 31,668 & 52,643 & 106,042 \\ \#Items & 40,981 & 38,048 & 91,599 & 53,591 \\ \#Interactions & 1,027,370 & 1,561,406 & 2,984,108 & 907,407 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Dataset statistics.
## 5. Conclusion
Training graph neural networks on large-scale user-item bipartite graphs has been a challenging task due to the extensive memory requirement. To address this problem, we propose HQ-GNN that explores the issue of low-bit quantization of graph neural networks for large-scale recommendations. Additionally, we introduce a Generalized Straight-Through Estimator to solve the gradient mismatch problem that arises during the training of quantized networks. HQ-GNN is flexible and can be applied to various graph neural networks. The effectiveness of our proposed method is demonstrated through extensive experiments on real-world datasets.
|
2301.02319 | Localized nonlinear excitations of a columnar chain of coronene
molecules | The nonlinear dynamics of a one-dimensional molecular crystal in the form of
a chain of planar coronene molecules is analyzed. Using molecular dynamics, it
is shown that a chain of coronene molecules supports acoustic solitons,
rotobreathers, and discrete breathers. An increase in the size of planar
molecules in a chain leads to an increase in the number of internal degrees of
freedom. This results in an increase in the rate of emission of phonons from
spatially localized nonlinear excitations and a decrease in their lifetime.
Presented results contribute to the understanding of the effect of the
rotational and internal vibrational modes of molecules on the nonlinear
dynamics of molecular crystals. | Alexander V. Savin, Sergey V. Dmitriev | 2023-01-05T22:33:39Z | http://arxiv.org/abs/2301.02319v1 | # Localized nonlinear excitations of a columnar chain of coronene molecules
###### Abstract
The nonlinear dynamics of a one-dimensional molecular crystal in the form of a chain of planar coronene molecules is analyzed. Using molecular dynamics, it is shown that a chain of coronene molecules supports acoustic solitons, rotobreathers, and discrete breathers. An increase in the size of planar molecules in a chain leads to an increase in the number of internal degrees of freedom. This results in an increase in the rate of emission of phonons from spatially localized nonlinear excitations and a decrease in their lifetime. Presented results contribute to the understanding of the effect of the rotational and internal vibrational modes of molecules on the nonlinear dynamics of molecular crystals.
## I Introduction
Molecular crystals can have a quasi-one-dimensional morphology, for example, fullerene nanowhiskers consisting of fullerene molecules [1], a columnar structure of carbon nanotori [2; 3], B\({}_{42}\) molecules [4], \(n\)-coronene molecules [5; 6; 7; 8], columnar discotic liquid crystals [9; 10; 11] and many others. Finite-size particles of molecular crystals have rotational degrees of freedom that can give rise to such counterintuitive effects as negative thermal expansion [12; 13; 14; 15; 16] and auxeticity (negative Poisson's ratio) [17; 18; 19; 20; 21].
Quasi-one-dimensional crystals can support various spatially localized nonlinear excitations, their study is important and is often considered in connection with the transfer of energy, mass and information. If the molecules that make up quasi-one-dimensional crystals, in addition to translational, also have rotational and internal vibrational degrees of freedom, then the variety of localized excitations supported by them increases.
Let us note the most intensively studied spatially localized excitations in nonlinear lattices and crystals.
_Compressive acoustic solitons_ are typically excited in solids or metamaterials under shock loading [22; 23; 24; 25]. Acoustic solitons propagating at a speed exceeding the speed of longitudinal sound were described in carbon nanotube bundles [26], black phosphorene [27], graphene and boron nitride [28]. It is shown that the attenuation of compressive waves in black phosphorene occurs faster than in graphene and boron nitride due to the greater number of degrees of freedom in the translational cell of phosphorene, which provides more channels for energy emission [27].
_Rotobreathers_ are dynamical modes with a single rotating particle while neighboring particles oscillate with the amplitude decreasing exponentially with distance from the rotating particle [29; 30; 31; 32]. The works [33; 34] are devoted to the analysis of the stability of rotobreathers. The effect of rotobreathers on heat capacity [29], thermal conductivity [35; 36], and slow relaxation [37] was analyzed within the framework of one-dimensional rotator lattices. Rotobreathers were considered in a damped driven rotator lattice [38] and in the lattices with geometrical nonlinearities [39; 40]. The method of molecular dynamics [41] was used to describe the precession of a rotating fullerene inside a fullerite crystal. The work [42] shows the effect of C\({}_{60}\) fullerite crystal deformation on the rotational dynamics and shift of the center of mass of a single C\({}_{60}\) molecule. In the works [43; 44; 45] rotobreathers in the form of carbon nanotubes rotating around their axis in a carbon nanotube bundle were studied. The dynamics of a fullerene molecule rotating in a fullerite crystal was studied in [46].
_Discrete breathers_ or _intrinsic localized modes_ are the large-amplitude, spatially localized vibrational modes in defect-free nonlinear lattices [47; 48; 49]. Discrete breathers are ubiquitous in nonlinear lattices and are investigated in models described by the discrete nonlinear Schrodinger equation [50], in Josephson superconducting junctions [51; 52], in granular crystals [53], in a mass-spring chain [54], and in magnetic systems [55; 56; 57]. Interatomic interactions are non-linear, so different crystals support discrete breathers [58; 59; 60; 61]. In real discrete systems, e.g. in crystals, one deals with quasi-breathers that are not exactly periodic single-frequency modes [62]. A discrete breather in the form of a single fullerene molecule oscillating with a large amplitude in a fullerite crystal [46] and a single oscillating carbon nanotube in a nanotube bundle [45] were studied by the method of molecular dynamics.
Most popular approaches to the study of nonlinear excitations in molecular crystals are the use of molecular dynamics [2; 3] and coarse-grained models [63; 7; 64; 5].
The aim of this study is to analyze the effect of internal vibrational degrees of freedom on the robustness
of various spatially localized nonlinear excitations in a quasi-one-dimensional chain of \(n\)-coronene molecules with \(n=2\), \(3\), and \(4\)[5; 6; 7]. As the index \(n\) increases, the size of the molecules and, consequently, the number of internal degrees of freedom also increase.
In Sec. II, the structure of the \(n\)-coronene and the molecular dynamics model used in this study are described. The spectrum of small-amplitude vibrations of the \(n\)-coronene is analyzed in Sec. III. Sections from IV to VI present the results of studying spatially localized nonlinear excitations in the chains of \(n\)-coronene molecules, namely, acoustic solitons, rotobreathers, and discrete breathers, respectively. Our conclusions are formulated in Sec. VII.
## II Model
The \(n\)-coronene molecule C\({}_{6n^{2}}\)H\({}_{6n}\) can be considered as a graphene flake. Therefore, to describe the dynamics of a coronene molecular crystal, one can use the force field previously used for graphene nanoribbons.
To simplify the modeling, valence-bonded CH groups of atoms at the edges of disk molecules will be considered as a single carbon atom of mass \(13m_{p}\), while all other inner carbon atoms have the mass \(12m_{p}\), where \(m_{p}=1.6601\times 10^{-27}\) kg is the proton mass.
The Hamiltonian of one molecule can be written as
\[H_{0}=\sum_{i=1}^{N_{0}}\Big{[}\frac{1}{2}M_{i}(\dot{\mathbf{u}}_{i},\dot{ \mathbf{u}}_{i})+P_{i}\Big{]}, \tag{1}\]
where \(i\) is the number of an atom, \(N_{0}=6n^{2}\) is the number of atoms in the molecule, \(M_{i}\) is the mass of the \(i\)th atom (there are \(6n^{2}-6n\) inner carbon atoms of mass \(12m_{p}\) and \(6n\) edge carbon atoms of mass \(13m_{p}\)), \(\mathbf{u}_{i}=(x_{i}(t),y_{i}(t),z_{i}(t))\) is the three-dimensional vector describing the position of \(i\)th atom at the time \(t\). The term \(P_{i}\) describes the interaction of the carbon atom with the index \(i\) with the neighboring atoms. We emphasize that the inner and edge atoms differ only in their masses, and their interaction with each other is described by the same potential. The potential depends on variations in bond length, bond angles, and dihedral angles between the planes formed by three neighboring carbon atoms and it can be written in the form
\[P=\sum_{\Omega_{1}}U_{1}+\sum_{\Omega_{2}}U_{2}+\sum_{\Omega_{3}}U_{3}+\sum_{ \Omega_{4}}U_{4}+\sum_{\Omega_{5}}U_{5}, \tag{2}\]
where \(\Omega_{j}\), with \(j=1\), \(2\), \(3\), \(4\), \(5\), are the sets of configurations describing different types of interactions between neighbors. Members of these sets are shown in Fig. 2, and all their rotated and mirrored versions should be taken into account.
Potential \(U_{1}(\mathbf{u}_{n},\mathbf{u}_{m})\) describes the energy due to change in the length of a valence bond between atoms with the indexes \(n\) and \(m\), as shown in Fig. 2(a). The potential \(U_{2}(\mathbf{u}_{n},\mathbf{u}_{m},\mathbf{u}_{k})\) describes the deformation energy of the angle between the valence bonds \(\mathbf{u}_{n}\mathbf{u}_{m}\), and \(\mathbf{u}_{m}\mathbf{u}_{k}\), see Fig. 2(b). Potentials \(U_{j}(\mathbf{u}_{n},\mathbf{u}_{m},\mathbf{u}_{k},\mathbf{u}_{l})\), \(j=3\), \(4\), and \(5\), describe the deformation energy associated with a change in the angle between the planes \(\mathbf{u}_{n}\mathbf{u}_{m}\mathbf{u}_{k}\) and \(\mathbf{u}_{l}\mathbf{u}_{k}\mathbf{u}_{m}\), as shown in Figs. 2(c-e), respectively.
We use the potentials employed in the modeling of the dynamics of large polymer macromolecules [65; 66] for the valence bond coupling,
\[U_{1}(\mathbf{u}_{1},\mathbf{u}_{2})\!=\!\epsilon_{1}\{\exp[-\alpha_{0}(\rho -\rho_{0})]\!-\!1\}^{2},\;\rho\!=\!|\mathbf{u}_{2}\!-\!\mathbf{u}_{1}|, \tag{3}\]
where \(\epsilon_{1}\) is the energy of the valence bond and \(\rho_{0}\) is the equilibrium length of the bond; the potential of the valence angle is
\[U_{2}(\mathbf{u}_{1},\mathbf{u}_{2},\mathbf{u}_{3})=\epsilon_{2 }(\cos\varphi-\cos\varphi_{0})^{2}, \tag{4}\] \[\cos\varphi=(\mathbf{u}_{3}-\mathbf{u}_{2},\mathbf{u}_{1}-\mathbf{ u}_{2})/(|\mathbf{u}_{3}-\mathbf{u}_{2}|\cdot|\mathbf{u}_{2}-\mathbf{u}_{1}|),\]
where the equilibrium value of the angle is \(\cos\varphi_{0}=\cos(2\pi/3)=-1/2\); the potential of the dihedral angle is
\[U_{j}(\mathbf{u}_{1},\mathbf{u}_{2},\mathbf{u}_{3},\mathbf{u}_{4 })=\epsilon_{j}(1+z_{j}\cos\phi), \tag{5}\] \[\cos\phi=(\mathbf{v}_{1},\mathbf{v}_{2})/(|\mathbf{v}_{1}|\cdot| \mathbf{v}_{2}|),\]
Figure 1: Vertical chain of \(10\)\(n\)-coronene molecules C\({}_{6n^{2}}\)H\({}_{6n}\): (a) \(n=2\) (coronene C\({}_{24}\)H\({}_{12}\)); (b) \(n=3\) (circumcoronene C\({}_{54}\)H\({}_{18}\)); (c) \(n=4\) (dicircumcoronene C\({}_{96}\)H\({}_{24}\)). Carbon atoms (gray) form planar disk molecules, and hydrogen atoms are located at the edges of the disks (shown in light gray). The vertical axis of the chain is parallel to the \(z\) axis, the planar molecules are parallel to the \(xy\) plane. The positions of neighboring molecules in the chain differ by the shift along the \(z\) axis and the relative rotation of the molecules in the \(xy\) plane (shift \(\Delta z\) and twist \(\Delta\phi\) steps of the chain).
Figure 2: (Color online) Different types of interactions between neighboring atoms belonging to the sets \(\Omega_{j}\), \(j=1\), \(2\), \(3\), \(4\), \(5\). (a) Valence interactions \(j=1\), (b) valence angles \(j=2\), (c-e) different dihedral angles \(j=3\), \(4\), and \(5\), respectively.
\[\mathbf{v}_{1}=(\mathbf{u}_{2}-\mathbf{u}_{1})\times(\mathbf{u}_{3}- \mathbf{u}_{2}),\] \[\mathbf{v}_{2}=(\mathbf{u}_{3}-\mathbf{u}_{2})\times(\mathbf{u}_{3 }-\mathbf{u}_{4}),\]
where the sign \(z_{j}=1\) for \(j=3,4\) (the equilibrium value of the torsional angle \(\phi\) is \(\phi_{0}=\pi\)) and \(z_{j}=-1\) for \(j=5\) (\(\phi_{0}=0\)).
The values of the potential parameters are \(\epsilon_{1}=4.9632\) eV, \(\rho_{0}=1.418\) Å, \(\alpha_{0}=1.7889\) Å\({}^{-1}\), \(\epsilon_{2}=1.3143\) eV, and \(\epsilon_{3}=0.499\) eV. They are found from the frequency spectrum of small-amplitude oscillations of a graphene sheet [67]. According to a previous study [68], the energy \(\epsilon_{4}\) is close to the energy \(\epsilon_{3}\), whereas \(\epsilon_{5}\ll\epsilon_{4}\) (\(|\epsilon_{5}/\epsilon_{4}|<1/20\)). Therefore, we set \(\epsilon_{4}=\epsilon_{3}=0.499\) eV and assume \(\epsilon_{5}=0\); the latter means that we omit the last term in the sum Eq. (2). A more detailed discussion and motivation of our choice of the interaction potentials Eqs. (3)-(5) can be found in an earlier publication [69].
The interaction of two coronene molecules is described by the potential
\[W(\mathbf{X}_{1},\mathbf{X}_{2})=\sum_{i=1}^{N_{0}}\sum_{j=1}^{N_{0}}V(r_{ij}), \tag{6}\]
where the \(3N_{0}\)-dimensional vector \(\mathbf{X}_{k}=\{\mathbf{u}_{k,i}\}_{i=1}^{N_{0}}\) (\(k=1,2\)) defines the coordinates of atoms of the \(k\)-th molecules (vector \(\mathbf{u}_{k,i}\) specifies the coordinates of the \(i\)-th atom of the \(k\)-th molecule), \(r_{ij}=|\mathbf{u}_{2,j}-\mathbf{u}_{1,i}|\) is the distance between atoms. Nonvalence interactions of the carbon atoms are described by the (6,12) Lennard-Jones potential
\[V(r)=\epsilon_{c}\{[(r_{c}/r)^{6}-1]^{2}-1\}, \tag{7}\]
where \(\epsilon_{c}=0.002757\) eV and \(r_{c}=3.807\) Å [70].
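For concreteness, the non-valence interaction of Eqs. (6)-(7) can be evaluated numerically as in the minimal sketch below (illustrative only; the toy hexagonal geometry and the helper names `lj_pair` and `molecule_interaction` are assumptions of this sketch, not part of the paper):

```python
# Minimal sketch of the non-valence interaction: the (6,12) Lennard-Jones
# pair potential, Eq. (7), and the molecule-molecule energy W of Eq. (6)
# summed over all atom pairs.
import numpy as np

EPS_C = 0.002757   # eV
R_C = 3.807        # Angstrom

def lj_pair(r):
    """V(r) = eps_c * {[(r_c/r)^6 - 1]^2 - 1}, Eq. (7)."""
    x = (R_C / r) ** 6
    return EPS_C * ((x - 1.0) ** 2 - 1.0)

def molecule_interaction(X1, X2):
    """W(X1, X2) = sum_{i,j} V(r_ij), Eq. (6); X1, X2 are (N0, 3) arrays."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return lj_pair(d).sum()

# toy usage: two parallel carbon hexagons stacked 3.445 Angstrom apart
phi = np.arange(6) * np.pi / 3
ring = np.stack([1.418 * np.cos(phi), 1.418 * np.sin(phi), np.zeros(6)], axis=1)
print(molecule_interaction(ring, ring + np.array([0.0, 0.0, 3.445])))
```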
The Hamiltonian of a chain of \(N\) molecules (see Fig. 1) can be presented in the form
\[H=\sum_{n=1}^{N}\left[\frac{1}{2}(\mathbf{M}\dot{\mathbf{X}}_{n},\dot{\mathbf{X}}_{n})+P(\mathbf{X}_{n})\right]\] \[+\sum_{n=1}^{N-1}W(\mathbf{X}_{n},\mathbf{X}_{n+1})+\sum_{n=1}^{N-2}W(\mathbf{X}_{n},\mathbf{X}_{n+2}), \tag{8}\]
where the first sum includes the kinetic and potential energies of \(n\)-th molecule. The second and the third sums describe the interaction between nearest and next-nearest molecules, respectively. Here the vector \(\mathbf{X}_{n}=\{\mathbf{u}_{n,i}\}_{i=1}^{N_{0}}\) specifies the coordinates of the atoms of \(n\)-th molecule, \(\mathbf{M}\) is the diagonal matrix of atom masses, \(P(\mathbf{X}_{n})\) is the energy of \(n\)-th molecule, \(W(\mathbf{X}_{n},\mathbf{X}_{k})\) is the interaction energy of \(n\)-th and \(k\)-th molecules.
## III The dispersion curves of small-amplitude oscillations
Let us consider the structure of a symmetric (spiral) stack of planar \(n\)-coronene molecules with the symmetry axis parallel to the \(z\) axis - see Fig. 1. In the ground state of such a chain, the atomic coordinates of each successive molecule are obtained from the coordinates of the previous molecule by translation along the \(z\) axis by a shift \(\Delta z\) and rotation around the same axis by an angle \(\Delta\phi\). These are the shift and twist parameters:
\[x_{n+1,j} =x_{n,j}\cos(\Delta\phi)+y_{n,j}\sin(\Delta\phi),\] \[y_{n+1,j} =-x_{n,j}\sin(\Delta\phi)+y_{n,j}\cos(\Delta\phi), \tag{9}\] \[z_{n+1,j} =z_{n,j}+\Delta z,\] \[\quad j=1,...,N_{0},\ n=0,\pm 1,\pm 2,...\]
Thus, the energy of the ground state is a function of \(3N_{0}\) coordinates of \(N_{0}\) atoms of the first molecule \(\mathbf{X}_{1}=\{\mathbf{u}_{1,j}\}_{j=1}^{N_{0}}\), and the two geometry parameters, \(\Delta z\) and \(\Delta\phi\), where \(\mathbf{u}_{1,j}=(x_{1,j},y_{1,j},z_{1,j})\) is the vector position of \(j\)th atom of the first molecule.
Finding the ground state reduces to the following minimization problem:
\[E=P(\mathbf{X}_{1})+W(\mathbf{X}_{1},\mathbf{X}_{2})+W(\mathbf{X }_{1},\mathbf{X}_{3})\] \[\rightarrow\min:\{\mathbf{u}_{1,j}\}_{j=1}^{N_{0}},\Delta\phi, \Delta z. \tag{10}\]
The minimization problem (10) was solved numerically by the conjugate gradient method. The values of the shift \(\Delta z\) and the twist \(\Delta\phi\) steps of the chain of \(n\)-coronene molecules are presented in Table 1.
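A schematic version of this ground-state search is sketched below; for brevity the molecule is treated as rigid, so only the chain parameters \((\Delta z,\Delta\phi)\) are varied, whereas the full problem (10) also relaxes the \(3N_{0}\) atomic coordinates of the first molecule (the helper names, the toy hexagon, and the restriction to the Lennard-Jones term are assumptions of this sketch):

```python
# Schematic ground-state search over (dz, dphi) by conjugate gradients,
# using only the Lennard-Jones intermolecular energy of Eqs. (6)-(7).
import numpy as np
from scipy.optimize import minimize

EPS_C, R_C = 0.002757, 3.807   # eV, Angstrom

def W(X1, X2):
    """Intermolecular Lennard-Jones energy, Eqs. (6)-(7)."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return (EPS_C * (((R_C / d) ** 6 - 1.0) ** 2 - 1.0)).sum()

def rotate_z(X, a):
    c, s = np.cos(a), np.sin(a)
    return X @ np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]).T

def stack_energy(params, mol):
    """E = W(X1, X2) + W(X1, X3) for a helical stack, cf. Eq. (10)."""
    dz, dphi = params
    shift = np.array([0.0, 0.0, dz])
    return (W(mol, rotate_z(mol, dphi) + shift)
            + W(mol, rotate_z(mol, 2.0 * dphi) + 2.0 * shift))

# toy rigid molecule: a flat carbon hexagon standing in for relaxed coronene
ang = np.arange(6) * np.pi / 3
mol = np.stack([1.418 * np.cos(ang), 1.418 * np.sin(ang), np.zeros(6)], axis=1)
res = minimize(stack_energy, x0=[3.4, np.deg2rad(30.0)], args=(mol,), method="CG")
print("dz, dphi =", res.x)
```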
A vertical chain of molecules is a multistable system. Numerical analysis shows that for \(n\)-coronene molecules with \(n\leq 4\), the spiral structure defined by Eq. (9) is the most energetically favorable ground state.
For analysis of small-amplitude oscillations of spiral chain it is convenient to use local cylindrical coordinates \(\mathbf{v}_{n,j}=(v_{n,j,1},v_{n,j,2},v_{n,j,3})\), given by the following expressions:
\[x_{n,j} =x_{n,j}^{0}+v_{n,j,1}\cos(\phi_{n,j})+v_{n,j,2}\sin(\phi_{n,j}),\] \[y_{n,j} =y_{n,j}^{0}-v_{n,j,1}\sin(\phi_{n,j})+v_{n,j,2}\cos(\phi_{n,j}), \tag{11}\] \[z_{n,j} =z_{n,j}^{0}+v_{n,j,3},\]
with \(\mathbf{u}_{n,j}^{0}=(x_{n,j}^{0},y_{n,j}^{0},z_{n,j}^{0})\), (\(n=0,\pm 1,\pm 2,...\); \(j=1,...,N_{0}\)) being the coordinates of the atoms in the helix ground state, and \(\phi_{n,j}\) being the angular coordinate of the atom (\(n,j\)). With these new coordinates the Hamiltonian of the molecular chain Eq. (8) takes the following
\begin{table}
\begin{tabular}{c c c c c c c} \(n\) & \(\Delta z\) (Å) & \(\Delta\phi\) (\({}^{\circ}\)) & \(\omega_{op}\) (cm\({}^{-1}\)) & \(\omega_{ip}\) (cm\({}^{-1}\)) & \(v_{t}\) (m/s) & \(v_{l}\) (m/s) \\ \hline
2 & 3.445 & 30.0 & 841.6 & 1549.3 & 217 & 3170 \\
3 & 3.411 & 18.6 & 883.7 & 1580.4 & 195 & 3449 \\
4 & 3.396 & 12.6 & 894.0 & 1591.3 & 250 & 3591 \\ \hline \end{tabular}
\end{table}
Table 1: Values of shift \(\Delta z\) and twist \(\Delta\phi\) parameters, maximum frequencies of out-of-plane \(\omega_{op}\) and in-plane \(\omega_{ip}\) vibrations, and velocities of torsion \(v_{t}\) and longitudinal \(v_{l}\) sound for a spiral stack of \(n\)-coronene C\({}_{6n^{2}}\)H\({}_{6n}\) molecules.
form
\[H=\sum_{n}\Big{[}\frac{1}{2}(\mathbf{M}\dot{\mathbf{v}}_{n},\dot{\mathbf{v}}_{n})+ P(\mathbf{v}_{n},\mathbf{v}_{n+1},\mathbf{v}_{n+2})\Big{]}, \tag{12}\]
where \(\mathbf{v}_{n}=\{(v_{n,j,1},v_{n,j,2},v_{n,j,3})\}_{j=1}^{N_{0}}\) is a \(3N_{0}\)-dimensional vector, \(\mathbf{M}\) is \(3N_{0}\)-dimensional diagonal mass matrix.
From the Hamiltonian Eq. (12) the following system of equations of motion can be derived:
\[-\mathbf{M}\ddot{\mathbf{v}}_{n}=P_{1}(\mathbf{v}_{n},\mathbf{v}_ {n+1},\mathbf{v}_{n+2})\] \[+P_{2}(\mathbf{v}_{n-1},\mathbf{v}_{n},\mathbf{v}_{n+1})+P_{3}( \mathbf{v}_{n-2},\mathbf{v}_{n-1},\mathbf{v}_{n}), \tag{13}\]
where \(P_{i}(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})=\partial P/\partial\mathbf{v}_{i}\), \(i=1,2,3\). Within the linear approximation, the system Eq. (13) takes the form
\[-\mathbf{M}\ddot{\mathbf{v}}_{n}=B_{1}\mathbf{v}_{n}+B_{2}\mathbf{v}_{n+1}+B_{2}^{*}\mathbf{v}_{n-1}+B_{3}\mathbf{v}_{n+2}+B_{3}^{*}\mathbf{v}_{n-2}, \tag{14}\]
where the matrix elements are given as
\[B_{1}=P_{11}+P_{22}+P_{33},\ \ B_{2}=P_{12}+P_{23},\ \ B_{3}=P_{13},\]
and the partial derivative matrix is given as
\[P_{ij}=\frac{\partial^{2}P}{\partial\mathbf{v}_{i}\partial\mathbf{v}_{j}}( \mathbf{0},\mathbf{0},\mathbf{0}),\ \ i,j=1,2,3.\]
The solution to the system of linear equations Eq. (14) can be found in the standard form
\[\mathbf{v}_{n}=A\mathbf{w}\exp[i(qn-\omega t)], \tag{15}\]
where \(A\) is the linear mode amplitude, \(\mathbf{w}\) is the eigenvector, \(\omega\) is the phonon frequency with the dimensionless wave number \(q\in[0,\pi]\). Substituting Eq. (15) into the system Eq. (14), we arrive at the following \(3N_{0}\)-dimensional eigenvalue problem:
\[\omega^{2}\mathbf{M}\mathbf{w}=\mathbf{C}(q)\mathbf{w}, \tag{16}\]
where Hermitian matrix
\[\mathbf{C}(q)=B_{1}+B_{2}\exp(iq)+B_{2}^{*}\exp(-iq)\] \[+B_{3}\exp(2iq)+B_{3}^{*}\exp(-2iq).\]
Using the substitution \(\mathbf{w}=\mathbf{M}^{-1/2}\mathbf{e}\), problem Eq. (16) can be rewritten in the form
\[\omega^{2}\mathbf{e}=\mathbf{M}^{-1/2}\mathbf{C}(q)\mathbf{M}^{-1/2}\mathbf{e} \tag{17}\]
where \(\mathbf{e}\) is the normalized eigenvector, \((\mathbf{e},\mathbf{e})=1\).
Thus, to obtain the dispersion curves \(\omega_{j}(q)\), it is necessary to find the eigenvalues and eigenvectors of the Hermitian matrix Eq. (17) of size \(3N_{0}\times 3N_{0}\) for each fixed wavenumber \(0\leq q\leq\pi\). As a result, we obtain \(3N_{0}\) branches of the dispersion relation \(\{\omega_{j}(q)\}_{j=1}^{3N_{0}}\).
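A compact numerical sketch of this step is given below (illustrative only; the force-constant blocks \(B_{1}\), \(B_{2}\), \(B_{3}\) and the per-coordinate masses are assumed to have been computed beforehand from the chain potential at the ground state):

```python
# Sketch of the phonon-branch computation, Eqs. (14)-(17): for each wave
# number q, build the Hermitian matrix C(q), mass-weight it, and diagonalize.
import numpy as np

def dispersion(B1, B2, B3, masses, q_values):
    """Return omega[j, k]: frequency of branch j at wave number q_values[k].

    B1, B2, B3: real (3*N0, 3*N0) force-constant blocks;
    masses: length-3*N0 array (each atomic mass repeated for x, y, z).
    """
    m_inv_sqrt = np.diag(1.0 / np.sqrt(masses))                  # M^{-1/2}
    branches = []
    for q in q_values:
        Cq = (B1
              + B2 * np.exp(1j * q) + B2.conj().T * np.exp(-1j * q)
              + B3 * np.exp(2j * q) + B3.conj().T * np.exp(-2j * q))
        w2 = np.linalg.eigvalsh(m_inv_sqrt @ Cq @ m_inv_sqrt)    # Eq. (17)
        branches.append(np.sqrt(np.clip(w2, 0.0, None)))         # drop tiny negatives
    return np.array(branches).T

# The sound velocities of Table 1 then follow from the slopes of the two
# acoustic branches at small q, e.g. v_l ~ dz * omega_l(q) / q as q -> 0.
```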
The planar structure of the molecules in a spiral chain leads to the division of its small-amplitude vibrations into two classes: out-of-plane vibrations, in which atoms vibrate orthogonally to the molecular plane (all atoms move along the \(z\) axis), and in-plane vibrations (all atoms move in the \(xy\) plane). Two-thirds of the branches correspond to in-plane vibrations, while only one-third corresponds to out-of-plane vibrations. The dispersion curves are shown in Figs. 3 to 5.
Figure 3: Structure of 72 dispersion curves of a spiral chain of coronene molecules C\({}_{24}\)H\({}_{12}\) for (a) out-of-plane and (b) in-plane vibrations. Black dots denote modes leading to the formation of discrete breathers – localized nonlinear oscillations of one molecule in the chain.
Figure 4: Structure of 162 dispersion curves for a spiral chain of circumcoronene molecules C\({}_{54}\)H\({}_{18}\) for (a) out-of-plane and (b) in-plane vibrations. Black dots indicate modes that lead to the formation of discrete breathers – localized nonlinear vibrations of one molecule in the chain.
For the spiral chain of coronene molecules C\({}_{24}\)H\({}_{12}\), the dispersion curves of out-of-plane vibrations, see Fig. 3(a) and Fig. 5(a), lie in the frequency range \(0\leq\omega\leq\omega_{op}\), with the maximum frequency \(\omega_{op}=842\) cm\({}^{-1}\). One dispersion curve \(\omega_{l}(q)\) starts from the origin (\(q=0\), \(\omega=0\)), it describes the displacement of planar molecules along the chain axis without internal deformations (longitudinal acoustic vibrations of the chain). The tangent of this dispersion curve at the origin gives the velocity of longitudinal sound waves
\[v_{l}=\Delta z\lim_{q\to 0}\frac{\omega_{l}(q)}{q}.\]
The dispersion curves of in-plane oscillations, see Fig. 3(b) and Fig. 5(b), lie in the frequency range \(0\leq\omega\leq\omega_{ip}\) with the maximum frequency \(\omega_{ip}=1549\) cm\({}^{-1}\). One dispersion curve \(\omega_{l}(q)\) starts from the origin and describes torsional acoustic oscillations (rotation of planar molecules around the chain axis). The speed of long-wave torsional vibrations (speed of torsional sound) is
\[v_{t}=\Delta z\lim_{q\to 0}\frac{\omega_{t}(q)}{q}.\]
In addition, one dispersion curve approaches the \(q\) axis tangentially. This curve describes the optical bending vibrations of the chain. The frequency spectrum of in-plane oscillations is characterized by the presence of a gap in the low-frequency region. For a chain of coronene molecules, the gap is from 10 to 203 cm\({}^{-1}\) [see Fig. 5(b)], and for a chain of circumcoronene molecules, from 9 to 141 cm\({}^{-1}\) [see Fig. 4(b)].
The values of the maximum frequencies \(\omega_{op}\), \(\omega_{ip}\) and the speeds of sound \(v_{l}\), \(v_{t}\) are given in Table 1. As can be seen from the table, the speed of longitudinal sound is roughly 15 times greater than the speed of torsional sound.
Figure 5: Dispersion curves in the low-frequency region for a spiral chain of coronene molecules C\({}_{24}\)H\({}_{12}\) for (a) out-of-plane and (b) in-plane vibrations (three gray bands show the frequency spectrum of the rotobreathers). The dashed straight lines define the tangents to the dispersion curves emerging from the zero point, corresponding to the velocities of the longitudinal \(v_{l}\) and torsion \(v_{t}\) sound.
Figure 6: Formation of a supersonic acoustic soliton in a spiral chain of (a) coronene, (b) circumcoronene, and (c) dicircumcoronene molecules produced by longitudinal local compression at the end of the chain with amplitude \(a_{z}=0.4\) Å. The distribution of energy in the chain \(E_{n}(t)\) at different times is shown. The number of molecules in the chain is \(N=500\). The dotted lines show the trajectory of motion with the velocity of longitudinal sound \(v_{l}\) to demonstrate the supersonic motion of solitons.
## IV Acoustic solitons
The interaction of neighboring planar molecules is determined by the sum of interactions of all pairs of their atoms Eq. (6), which are described by the Lennard-Jones potential Eq. (7). The Lennard-Jones potential at small interatomic distances is characterized by the hard-type anharmonicity. Therefore, one can expect the possibility of propagation of compressive longitudinal acoustic solitons moving at a speed exceeding the velocity of longitudinal sound \(v_{l}\).
To test the existence of supersonic acoustic solitons, we simulate the propagation of initial local longitudinal compression along a chain of molecules. Consider a spiral chain of \(N=500\) molecules. Let us take the ground state of the chain and at \(t=0\) shift the first two molecules along the \(z\) axis by \(a_{z}\). As a result, local longitudinal compression occurs at the end of the chain. Having fixed the position of these two molecules in the shifted state, let us consider the propagation of local compression along the chain.
To simulate the dynamics of a chain with fixed ends, we numerically integrate the system of equations of motion corresponding to the Hamiltonian of the chain Eq. (8)
\[\mathbf{M}\ddot{\mathbf{X}}_{n} =-\frac{\partial H}{\partial\mathbf{X}_{n}},\ n=3,4,...,N-2, \tag{18}\] \[\dot{\mathbf{X}}_{n} \equiv\mathbf{0},\ \ n=1,2,N-1,N,\]
with the initial conditions
\[\mathbf{X}_{n}(0) =\mathbf{X}_{n}^{0}+a_{z}\mathbf{e}_{z},\ \ n=1,2\] \[\mathbf{X}_{n}(0) =\mathbf{X}_{n}^{0},\ \ n=3,4,...,N, \tag{19}\] \[\dot{\mathbf{X}}_{n}(0) =\mathbf{0},\ \ n=1,2,....,N,\]
where the \(3N_{0}\)-dimensional vector \(\mathbf{X}_{n}=\left\{(x_{n,j},y_{n,j},z_{n,j})\right\}_{j=1}^{N_{0}}\) defines the coordinates of the atoms of \(n\)-th molecule, vectors \(\{\mathbf{X}_{n}^{0}\}_{n=1}^{N}\) defines ground state of molecular chain, \(\mathbf{e}_{z}\) is a unit vector directed along the \(z\) axis, \(a_{z}>0\) is the amplitude of the initial compression of the chain end.
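A schematic integrator for this setup is sketched below (illustrative: the force routine `chain_forces`, the array shapes, and the time step are assumptions of the sketch rather than quantities specified in the paper):

```python
# Sketch of the soliton run, Eqs. (18)-(19): the two molecules at each end are
# clamped, the first two are pre-shifted by a_z along z, and the interior
# molecules evolve under forces F_n = -dH/dX_n from the Hamiltonian (8).
import numpy as np

def run_soliton(X0, masses, chain_forces, a_z=0.4, dt=1e-3, n_steps=40000):
    """X0: (N, N0, 3) ground-state coordinates; masses: (N0,) atomic masses."""
    X = X0.copy()
    X[:2, :, 2] += a_z                       # initial local compression, Eq. (19)
    V = np.zeros_like(X)                     # all molecules start at rest
    frozen = np.zeros(X.shape[0], dtype=bool)
    frozen[:2] = frozen[-2:] = True          # clamped boundary molecules, Eq. (18)
    m = masses[None, :, None]
    F = chain_forces(X)
    for _ in range(n_steps):                 # velocity-Verlet integration
        V[~frozen] += 0.5 * dt * F[~frozen] / m
        X[~frozen] += dt * V[~frozen]
        F = chain_forces(X)
        V[~frozen] += 0.5 * dt * F[~frozen] / m
    return X, V
```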
Numerical integration of the system of equations of motion (18) showed that the initial longitudinal compression of the chain edge with an amplitude \(a_{z}\leq 0.6\) Å for
Figure 7: Distribution of longitudinal compression during the motion of an acoustic soliton along a chain of \(N=500\) molecules of (a) coronene, (b) circumcoronene, (c) dicircumcoronene. The distribution of relative longitudinal displacements \(\rho_{n}\) of chain molecules at time \(t=40\) ps is shown for the amplitude of the initial local compression of the chain end \(a_{z}=0.4\) Å. The vertical dotted lines show the position of the front of the acoustic phonon wave packet propagating with the velocity \(v_{l}\).
Figure 8: Dependence of (a) energy \(E\) of an acoustic soliton and (b) longitudinal compression of the chain \(A_{z}\) produced by an acoustic soliton propagating in a chain of coronene molecules on its dimensionless velocity \(s=v/v_{l}\). Markers show numerical values, solid curves show approximations obtained by the least squares method \(E(s)=3.36(s-1)^{1.7}\) eV and \(A_{z}(s)=0.93(s-1)^{0.5}\) Å.
coronene molecules always leads to the formation of a supersonic acoustic soliton and a subsonic wave packet of long-wavelength longitudinal acoustic phonons - see Fig. 6 (a) and 7 (a). A local area of compression is formed in the chain, which moves along it with a constant supersonic speed \(v>v_{l}\), keeping its shape. When moving, the soliton breaks away from the wave packet of phonons. This allows us to find its energy \(E\) and the longitudinal compression of the chain \(A_{z}\):
\[E=\sum_{n}E_{n},\,A_{z}=\sum_{n}\rho_{n},\,\rho_{n}=\frac{1}{N_{0}}\sum_{j=1}^{N_{0}}(z_{n+1,j}-z_{n,j}-\Delta z),\]
where the summation is carried out only over the soliton localization region.
Dependencies of the soliton energy \(E\) and chain compression \(A_{z}\) produced by the soliton on its dimensionless velocity \(s=v/v_{l}\) are shown in Fig. 8. As can be seen from the figure, with increasing velocity, the soliton energy increases as \((s-1)^{1.7}\), and the compression as \((s-1)^{1/2}\).
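The fitted power laws quoted in the caption of Fig. 8 can be reproduced with a standard least-squares fit, as in the sketch below (the arrays here are synthetic stand-ins for the measured soliton data):

```python
# Sketch of the least-squares power-law fits E(s) and A_z(s) ~ c*(s-1)^p.
import numpy as np
from scipy.optimize import curve_fit

def power_law(s, c, p):
    return c * (s - 1.0) ** p

# synthetic data standing in for the measured soliton energies
s = np.linspace(1.02, 1.2, 10)
E = 3.36 * (s - 1.0) ** 1.7
(c_E, p_E), _ = curve_fit(power_law, s, E, p0=[1.0, 1.0])
print(c_E, p_E)   # recovers ~3.36 and ~1.7 on this synthetic data
```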
In chains of circumcoronene and dicircumcoronene molecules, local longitudinal compression of the chain end also leads to the formation of a supersonic localized compression region. But the motion of this region is accompanied by the emission of phonons. As a result, the energy and velocity of the soliton decrease monotonically, see Figs. 6(b,c) and 7(b,c). The larger the molecule, the more noticeable the emission of phonons. Therefore, it can be concluded that a chain of \(n\)-coronene molecules admits the existence of an exact acoustic soliton of longitudinal compression only for \(n=2\), while for \(n>2\) there is only a soliton-like excitation with a finite lifetime.
## V Rotobreathers
The structure of planar molecules allows their rotation in chains around the \(z\) axis. The \(n\)-coronene molecule has the shape of a regular hexagon, a rotation of one molecule by \(60^{\circ}\) will transfer the chain to an equivalent state. If we fix the positions of all molecules and rotate only one molecule as a rigid body, then the rotation potential \(E(\varphi)\) (dependence of the chain energy on the angle of rotation of one molecule \(\varphi\)) can be obtained. This potential is a periodic function with period \(\pi/3\), see Fig. 9. In the approximation of absolutely rigid valence bonds, free rotation requires overcoming energy barriers of height 0.26, 0.34 and 0.66 eV for the chain of coronene, circumcoronene, and dicircumcoronene molecules, respectively. These barriers are overcome at molecular rotation frequencies above \(\omega_{0}=2.19\), 1.11 and 0.87 cm\({}^{-1}\). Thus, the topology of the chain allows the existence of rotobreathers (localized rotations of molecules).
In the approximation of absolutely rigid molecules, their chains allow the existence of rotobreathers with an infinite frequency spectrum lying above the frequency \(\omega_{0}\). The \(n\)-coronene molecule is not an absolutely rigid body; it has \(3N_{0}-6\) vibrational modes. The presence of internal vibrations in a rotator (in our case, a planar \(n\)-coronene molecule) leads to the appearance of band gaps (lacunae) in the frequency spectrum of the rotobreather [44]. At frequencies within these band gaps, the rotation resonates with the natural oscillations of the rotators and phonons are emitted. Therefore, the presence of internal vibrational modes in molecules should lead to a significant narrowing of the frequency spectrum of the rotobreather.
To find the rotobreather, we simulate the rotation of one molecule at different initial frequencies in a chain of \(N=100\) molecules. A viscous friction at the ends of the chain is introduced, which ensures the absorption of phonons emitted by the rotator. To do this, we numerically integrate the system of equations of motion
\[\mathbf{M}\ddot{\mathbf{X}}_{n}=-\frac{\partial H}{\partial\mathbf{X}_{n}}, \,\,\,\,n=N_{t}+1,...,N-N_{t}, \tag{20}\]
\[\mathbf{M}\ddot{\mathbf{X}}_{n}=-\frac{\partial H}{\partial\mathbf{X}_{n}}- \gamma\mathbf{M}\dot{\mathbf{X}}_{n},\,\,\,\,n\leq N_{t},\,\,n>N-N_{t}\]
with the friction coefficient \(\gamma=1/t_{r}\), \(t_{r}=10\) ps, \(N_{t}=30\).
Let us take the ground state of the chain and excite the rotation of the central molecule \(n_{c}=N/2\) with the frequency \(\omega\), i.e. take the initial conditions in the form
\[\{\mathbf{X}_{n}(0)=\mathbf{X}_{n}^{0}\}_{n=1}^{N},\,\,\dot{ \mathbf{X}}_{n}(0)=\mathbf{0},\,\,\,\,n\neq n_{c} \tag{21}\] \[\{\dot{x}_{n_{c},j}=-\omega y_{n_{c},j}^{0},\,\,\dot{y}_{n_{c},j} =\omega x_{n_{c},j}^{0},\,\,\dot{z}_{n_{c},j}=0\}_{j=1}^{N_{0}}.\]
Thus, we set the rotation of one rotator in the chain with the initial energy
\[E=\frac{1}{2}\omega^{2}\sum_{j=1}^{N_{0}}M_{j}({x_{n_{c},j}^{0}}^{2}+{y_{n_{c },j}^{0}}^{2}).\]
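The excitation of Eq. (21) and the corresponding initial rotator energy can be set up as in the short sketch below (function name, array shapes, and units are illustrative assumptions):

```python
# Sketch of the rotobreather excitation, Eq. (21): rigid rotation of the
# central molecule about the z axis with angular frequency omega.
import numpy as np

def excite_rotation(X0, masses, n_c, omega):
    """X0: (N, N0, 3) ground state; masses: (N0,) atomic masses; omega in rad/ps."""
    V = np.zeros_like(X0)
    x, y = X0[n_c, :, 0], X0[n_c, :, 1]
    V[n_c, :, 0] = -omega * y            # dot{x} = -omega * y
    V[n_c, :, 1] = omega * x             # dot{y} =  omega * x
    energy = 0.5 * omega**2 * np.sum(masses * (x**2 + y**2))
    return V, energy
```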
Figure 9: Change in the energy of the chain \(E\) as the function of the rotation angle \(\varphi\) of one molecule rotating around the \(z\) axis in the chain of coronene, circumcoronene, and dicircumcoronene (curves 1, 2, and 3, respectively). Only one molecule rotates quasi-statically while the rest of the molecules remain in their equilibrium positions.
Friction at the ends of the chain will ensure the absorption of phonons emitted by the rotator. Therefore, depending on the value of the frequency \(\omega\), the rotator either stops, having lost all the energy for phonon emission, or reaches a stationary rotation mode with a constant frequency without phonon emission (rotobreather mode). The change in the rotator energy \(E\) for various initial values of the frequency \(\omega\) in the chain of coronene and circumcoronene molecules is shown in Figs. 10 and 11, respectively.
As can be seen from Fig. 10, for a chain of coronene molecules, there are only three frequency ranges at which a rotation at constant frequency of one molecule can occur without emitting phonons: [3.96, 4.54], [8.28, 9.09], and [16.33, 16.71] cm\({}^{-1}\). Thus, in the chain of coronene molecules, the rotobreather has a frequency spectrum consisting of only three narrow intervals, see also Fig. 5(b), where the frequency spectrum of the rotobreather is shown by gray bands. Rotation with other frequencies leads to the emission of phonons.
Simulation of the dynamics of a rotator in a chain of circumcoronene molecules showed that rotobreathers do not exist in this chain. Here, at all values of the rotation frequency, the rotator emits phonons and completely loses energy, see Fig. 11. There is only one frequency \(\omega=22.6\) cm\({}^{-1}\) at which the radiation becomes less intense, but does not completely disappear. In a chain of dicircumcoronene molecules, the rotation of the rotator at all frequencies leads to an even stronger emission of phonons and no rotobreather is formed. The absence of a rotobreather in the chains of circumcoronene and dicircumcoronene molecules is explained by a denser frequency spectrum of natural vibrations of molecules. Here, in contrast to the coronene molecules, the rotation of the rotator at all frequencies resonates with the natural vibrations of the molecules.
## VI Discrete breathers
An isolated \(n\)-coronene molecule consists of \(N_{0}=6n^{2}\) atoms. It has \(3N_{0}-6\) natural oscillations with non-zero frequencies, \(\{\omega_{j}\}_{j=7}^{3N_{0}}\). The first six eigenmodes have a zero frequency \(\omega_{1}=...=\omega_{6}=0\), they correspond to the motion of a molecule as a rigid body (three translational and three rotational degrees of freedom). Eigenmodes with non-zero frequencies are of two types: \(N_{0}-2\) out-of-plane vibrations, when atoms move orthogonally to the molecular plane, and \(2N_{0}-4\) in-plane vibrations, when atoms move in the molecular plane.
The coronene molecule has 22 out-of-plane vibrations with frequencies 64.6, 117.7,..., 839.2 cm\({}^{-1}\) and
Figure 10: Change in time of the energy of one rotator in the chain of coronene molecules for different values of the initial rotation frequency of central molecule, varying in the range from \(\omega=3\) to 22 cm\({}^{-1}\) with a step of 0.25 cm\({}^{-1}\).
Figure 11: Change in time of the energy of one rotator in the chain of circumcoronene molecules for different values of the initial rotation frequency of central molecule, varying in the range from \(\omega=2\) to 13.75 cm\({}^{-1}\) with a step of 0.25 cm\({}^{-1}\). The dashed line shows the energy corresponding to the rotation frequency \(\omega=22.6\) cm\({}^{-1}\), at which the weakest phonon emission occurs.
44 in-plane vibrations with frequencies 203.1, 236.3,..., 1546.2 cm\({}^{-1}\). The circumcoronene molecule has 52 out-of-plane vibrations with frequencies 32.3, 60.9,..., 881.0 cm\({}^{-1}\) and 104 in-plane vibrations with frequencies 140.5, 162.0,..., 1576.3 cm\({}^{-1}\). Let us check whether the excitation of a high-amplitude natural oscillation of one molecule can lead to the appearance of a discrete breather in the chain - a nonlinear oscillation localized on one molecule.
To find discrete breathers, we simulate high-amplitude natural vibrations of one central molecule in a chain of \(N=100\) molecules. At the ends of the chain, viscous friction is introduced, which ensures the absorption of phonons emitted by vibrations of the central molecule. The system of equations of motion Eq. (20) is integrated numerically with the initial conditions
\[\mathbf{X}_{n}(0)=\mathbf{X}_{n}^{0},\ \dot{\mathbf{X}}_{n}(0)=A\mathbf{e}_{j} \delta_{n,n_{c}},\ n=1,...,N, \tag{22}\]
where \(A\) defines the magnitude of the initial velocity of atoms of the central molecule, \(\mathbf{e}_{j}\) is the unit eigenvector of the \(j\)th eigenmode of an isolated molecule (\(j=7\),...,\(3N_{0}\)), \(n_{c}=N/2\). The value of \(A\) determines the vibrational energy of the molecule and it is chosen sufficiently large to enter the regime of anharmonicity.
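The corresponding velocity kick of Eq. (22) can be written compactly as below (a sketch; the eigenvector \(\mathbf{e}_{j}\) is assumed to have been obtained beforehand by diagonalizing the mass-weighted Hessian of an isolated molecule):

```python
# Sketch of the breather excitation, Eq. (22): the atoms of the central
# molecule receive initial velocities A * e_j; all other molecules start at rest.
import numpy as np

def excite_eigenmode(X0, e_j, n_c, A):
    """X0: (N, N0, 3) ground state; e_j: (N0, 3) eigenvector; A in Angstrom/ps."""
    V = np.zeros_like(X0)
    V[n_c] = A * e_j / np.linalg.norm(e_j)   # velocity kick along the j-th mode
    return V
```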
The dependencies of the vibrational energy of the central molecule on time \(t\) are shown in Fig. 12. Numerical integration of the system of equations of motion Eq. (20) with the initial conditions Eq. (22) showed that three dynamics scenarios are possible: very fast damping of oscillations (see Fig. 12, curve 3), slow damping (curves 1 and 4) and the formation of undamped oscillations (curves 2 and 5). The first two scenarios are typical for out-of-plane vibrations, the last one - for in-plane vibrations.
The frequencies of the resulting discrete breathers are shown by black dots in Figs. 3 and 4. Of all the out-of-plane eigenmodes, only the oscillation with the maximum frequency can lead to the formation of a discrete breather. For a chain of coronene molecules out of 44 in-plane vibrations 24 can lead to the formation of a discrete breather, and for a chain of circumcoronene molecules out of 104 in-plane vibrations 31 produce discrete breathers.
The undamped vibrations are localized strictly on one molecule. The oscillations are anharmonic, their frequency depends on the amplitude. A characteristic feature of localized oscillations (discrete breathers) is a linear decrease in their frequency with increasing energy, see Fig. 13. As the oscillation amplitude increases, the energy of the breather increases and the frequency decreases. Thus, \(n\)-coronene chains support gap discrete breathers with a soft type of anharmonicity. The energy of a discrete breather in a chain of coronene molecules can reach 0.37 eV, and the width of the frequency spectrum can reach 6 cm\({}^{-1}\).
## VII Conclusions
The linear phonon spectrum and nonlinear spatially localized excitations, such as acoustic solitons, rotobreathers, and discrete breathers in chains of \(n\)-coronene molecules, are studied by the method of molecular dynamics. Three members of the \(n\)-coronene family were considered, namely coronene, circumcoronene, and dicircumcoronene (\(n=2\), 3, and 4, respectively). These molecules contain \(N_{0}=24\), 54, and 96 carbon atoms, respectively, and have \(3N_{0}-6\) vibrational degrees of freedom.
The size of molecules plays an important role in chain dynamics. The spectra of low-amplitude vibrations of chains of coronene and circumcoronene molecules are shown in Figs 3 and 4, respectively. It can be seen that the maximum frequencies of out-of-plane and in-plane vibrations are approximately the same for chains of coronene and circumcoronene molecules, but the spectrum of the latter is denser, since the number of degrees
Figure 12: Dependence of the energy of vibrations of the central molecule of a chain of coronene molecules on time at the initial excitation of the \(j\)-th natural vibration: (curve 1) \(j=17\), \(\omega_{j}=236.3\); (curve 2) \(j=21\), \(\omega_{j}=278.8\); (curve 3) \(j=23\), \(\omega_{j}=329.5\); (curve 4) \(j=33\), \(\omega_{j}=435.2\); (curve 5) \(j=47\), \(\omega_{j}=839.2\) cm\({}^{-1}\). The initial atomic velocity used to excite the vibrational mode in the central molecule is \(A=10\) Å/ps.
Figure 13: Dependence of the energy \(E\) on the frequency \(\omega\) for a discrete breather based on the \(j\) eigenmode of the coronene molecule: (a) \(j=47\), \(\omega_{j}=839.2\); (b) \(j=67\), \(\omega_{j}=1470.0\) (c) \(j=69\), \(\omega_{j}=1491.3\) cm\({}^{-1}\).
of freedom is greater. The spectrum of a chain of dicircumcoronene molecules is even denser.
It was found that a chain of coronene molecules supports the propagation of acoustic compressive solitons, which practically do not emit energy when moving at supersonic speed, see Fig. 6(a) and Fig. 7(a). Similar excitations in chains of circumcoronene and dicircumcoronene molecules constantly lose energy by emitting low-amplitude phonons, see Fig. 6(b,c) and Fig. 7(b,c). This is because spiral chains of larger molecules have lower stacking symmetry and, owing to the greater number of vibrational degrees of freedom, more channels through which energy can be radiated.
A similar picture was observed for rotobreathers. Only in a chain of coronene molecules a single molecule can rotate with frequencies in certain ranges [shown in gray in Fig. 5(b)], radiating no energy. In chains of circumcoronene and dicircumcoronene molecules, a molecule rotating at any frequency excites low-amplitude phonons, constantly loses its energy, and eventually stops rotating. The explanation lies in more resonances with a denser phonon spectrum in chains with larger molecules.
As for discrete breathers, they are supported by all three considered molecular chains. Discrete breathers take the form of a single molecule vibrating at large amplitude and radiating no energy. The frequencies of discrete breathers are marked with black dots in Figs. 3 and 4 for chains of coronene and circumcoronene molecules, respectively. A discrete breather with out-of-plane oscillations, see panels (a), is created only by the highest-frequency out-of-plane mode. On the other hand, a number of in-plane vibrational modes create discrete breathers, see panels (b). The frequency of discrete breathers decreases with an increase in their energy, i.e., soft-type anharmonicity is realized, see Fig. 13.
The results presented in this study illustrate the role of the internal degrees of freedom of particles in the nonlinear dynamics of molecular chains.
**ACKNOWLEDGMENTS**
Computational facilities were provided by the Interdepartmental Supercomputer Center of the Russian Academy of Sciences. The work of A.V.S. (statement of the problem, numerical simulations, and writing the manuscript) was supported by the Russian Science Foundation, Grant No. 21-12-00229. S.V.D. thanks the financial support provided by the Grants Council of the President of the Russian Federation grant NSh-4320.2022.1.2 (discussion of the results, writing the manuscript).
|
2310.13555 | Electrically charged black holes in gravity with a background
Kalb-Ramond field | We derive the exact solutions for electrically charged black holes both in
the absence and presence of a cosmological constant in the gravitational theory
with Lorentz violation induced by a background Kalb-Ramond (KR) field. The
corresponding thermodynamic properties are investigated. It is found that the
standard first law of thermodynamics and the Smarr formula remain valid for the
charged KR black holes. Nevertheless, the Lorentz-violating effect influences
their ranges of local thermodynamic stability and the first- and second-order
phase transition points. Furthermore, to examine the impact of Lorentz
violation on the motion of test particles in the spacetime, we analyze the
shadow and the innermost stable circular orbit (ISCO) of these black holes. Our
results reveal that both the shadow and ISCO radii exhibit a high sensitivity
to the Lorentz-violating parameter $\ell$, with a decrease observed as $\ell$
increases. | Zheng-Qiao Duan, Ju-Ying Zhao, Ke Yang | 2023-10-20T14:57:53Z | http://arxiv.org/abs/2310.13555v3 | # Electrically charged black holes in gravity with a background Kalb-Ramond field
###### Abstract
We derive the exact solutions for electrically charged black holes both in the absence and presence of a cosmological constant in the gravity theory with Lorentz violation induced by a background Kalb-Ramond (KR) field. The corresponding thermodynamic properties are investigated. It is found that the standard first law of thermodynamics and the Smarr formula remain valid for the charged KR black holes. Nevertheless, the Lorentz-breaking effect influences their ranges of local thermodynamic stability and the first- and second-order phase transition points. Furthermore, to examine the impact of Lorentz violation on the motion of test particles in the spacetime, we analyze the shadow and the innermost stable circular orbit (ISCO) of these black holes. Our results reveal that both the shadow and ISCO radii exhibit a high sensitivity to the Lorentz-violating parameter \(\ell\), with a decrease observed as \(\ell\) increases.
pacs: 04.70.-s, 04.50.Kd
Introduction
Einstein's general relativity (GR) successfully explains a wide range of gravitational phenomena and is one of the two pillars of modern theoretical physics, along with quantum mechanics. It serves as the foundation for modern cosmology and provides an accurate description of the evolution of our Universe. GR extends the principles of special relativity to include gravity and maintains local Lorentz symmetry at each point of the spacetime manifold. Although current experiments and observations support Lorentz symmetry as a fundamental symmetry of nature, some theoretical studies, particularly string theory [1], loop quantum gravity [2], Horava-Lifshitz gravity [3], and noncommutative field theory [4], suggest the possibility of Lorentz symmetry breaking (LSB) above some energy scale. These studies provide valuable insights into our understanding of the nature of spacetime and the fundamental principles of physics.
A general theoretical framework for studying the LSB is the Standard-Model Extension, which extends the Standard Model of particle physics to include the GR and possible violations of Lorentz symmetry [5]. Within this framework, a simple theory exhibiting LSB phenomenon is the bumblebee model [1; 6; 7; 8; 9]. The theory extends GR by incorporating a vector field \(B_{\mu}\), known as the bumblebee field, which non-minimally couples to gravity. When the bumblebee field acquires a nonzero vacuum expectation value (VEV), it serves as a fixed background field that spontaneously breaks the Lorentz symmetry. The bumblebee model has been intensively studied in various areas recently, including black hole physics [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35], wormholes [36], and gravitational waves [37; 38].
The KR field is a rank-two antisymmetric tensor arising naturally from the spectrum of bosonic string theory [39], and its properties have been extensively studied in various contexts [40; 41; 42; 43]. An alternative approach to trigger the spontaneous LSB involves the consideration of a KR field \(B_{\mu\nu}\) that non-minimally couples to gravity [44]. When the KR field acquires a nonzero VEV, it leads to the spontaneous breaking of Lorentz symmetry. A Schwarzschild-like solution was derived in this theory in Ref. [45]. The corresponding particle motion, gravitational lensing, and energy processes around the Schwarzschild-like black hole were investigated in Ref. [46]. In Ref. [47], a rotating KR black hole solution was obtained and the corresponding deflection of light and shadow cast by the black hole were investigated. In Refs. [48; 49], traversable wormhole solutions were derived, and
the corresponding gravitational lensing by the wormhole was discussed in Ref. [50]. A consequence of LSB on the Bianchi type I cosmology was considered in Ref. [51]. In Ref. [52], a correct Schwarzschild-like solution was derived in this theory and the Schwarzschild-(A)dS-like solution was obtained as well by relaxing the vacuum conditions. The corresponding shadows and quasinormal frequencies of these black holes were investigated in Ref. [53].
In this work, we are interested in constructing electrically charged, static, and spherically symmetric black hole solutions in the gravity theory with Lorentz violation triggered by a nonzero VEV background of the KR field. The layout of the paper is as follows: In Sect. II, we introduce the theory and incorporate an interaction term between the KR field and the electromagnetic field. In Sect. III, we solve the theory to obtain analytical solutions for the electrically charged black holes both in the absence and presence of a cosmological constant. Further, several basic thermodynamic properties of the charged KR black holes are analyzed in Sect. IV. Moreover, the impact of the Lorentz-violating effect on the motion of test particles near these black holes is investigated in Sect. V. Finally, brief conclusions are presented.
## II Lorentz-violating gravity with a background KR field
The KR field, denoted as \(B_{\mu\nu}\), is a rank-two antisymmetric tensor field that obeys the condition \(B_{\mu\nu}=-B_{\nu\mu}\). Its field strength is defined as a 3-form, denoted as \(H_{\mu\nu\rho}\equiv\partial_{[\mu}B_{\nu\rho]}\). The field strength is invariant under the gauge invariance \(B_{\mu\nu}\to B_{\mu\nu}+\partial_{[\mu}\Gamma_{\nu]}\). It is convenient to decompose the KR field to be \(B_{\mu\nu}=\tilde{E}_{[\mu}v_{\nu]}+\epsilon_{\mu\nu\alpha\beta}v^{\alpha} \tilde{B}^{\beta}\) with \(\tilde{E}_{\mu}v^{\mu}=\tilde{B}_{\mu}v^{\mu}=0\), where \(v^{\alpha}\) is a timelike 4-vector [44; 45]. Therefore, the spacelike pseudo-vector fields \(\tilde{E}_{\mu}\) and \(\tilde{B}_{\mu}\) can be interpreted respectively as the pseudo-electric and pseudo-magnetic fields in analogy with Maxwell electrodynamics.
The action for the theory, which includes gravity non-minimally coupled to a self-interacting KR field, is given by [44; 45]
\[S = \frac{1}{2}\int d^{4}x\sqrt{-g}\bigg{[}R-2\Lambda-\frac{1}{6}H^{ \mu\nu\rho}H_{\mu\nu\rho}-V(B^{\mu\nu}B_{\mu\nu}\pm b^{2})+\xi_{2}B^{\rho\mu }B^{\nu}{}_{\mu}R_{\rho\nu}+\xi_{3}B^{\mu\nu}B_{\mu\nu}R\bigg{]} \tag{1}\] \[+\int d^{4}x\sqrt{-g}\mathcal{L}_{\rm M},\]
where \(\Lambda\) represents the cosmological constant, \(\xi_{2,3}\) are the non-minimal coupling constants between gravity and the KR field, and we have set \(8\pi G=1\) for convenience. To achieve the
charged solutions, we consider the matter Lagrangian \(\mathcal{L}_{\rm M}\) to be the electromagnetic field, given by \(\mathcal{L}_{\rm M}=-\frac{1}{2}F^{\mu\nu}F_{\mu\nu}+\mathcal{L}_{\rm int}\), where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) represents the field strength of the electromagnetic field, and \(\mathcal{L}_{\rm int}\) represents the interaction between the electromagnetic field and the KR field.
The potential \(V(B^{\mu\nu}B_{\mu\nu}\pm b^{2})\) depends on \(B^{\mu\nu}B_{\mu\nu}\) in order to maintain the theory's invariance under the observer local Lorentz transformation. As the cosmological constant \(\Lambda\) is counted separately, the potential is set to zero at its minimum. The minimum is determined by the condition \(B^{\mu\nu}B_{\mu\nu}=\mp b^{2}\), with the sign \(\pm\) chosen such that \(b^{2}\) is a positive constant. Correspondingly, the KR field acquires a nonzero VEV, denoted as \(\langle B_{\mu\nu}\rangle=b_{\mu\nu}\). Due to the non-minimal coupling of the KR field to gravity, the nonzero VEV background \(b_{\mu\nu}\) spontaneously breaks particle local Lorentz invariance. In the vacuum configuration, the interaction term \(\xi_{3}B^{\mu\nu}B_{\mu\nu}R=\mp\xi_{3}b^{2}R\) in the action (1) can be absorbed into the Einstein-Hilbert terms by redefining variables.
Furthermore, we assume that the vacuum configuration of the KR field is of the pseudo-electric type, where the only non-vanishing components are given by \(b_{10}=-b_{01}=\tilde{E}(r)\), and the pseudo-electric field \(\tilde{E}(r)\) is determined by the constant norm condition \(b^{\mu\nu}b_{\mu\nu}=\mp b^{2}\)[45]. Consequently, the KR field strength vanishes automatically for this configuration, i.e., \(H_{\lambda\mu\nu}=0\).
In order to achieve the electrically charged black hole solutions, we consider an electrostatic vector potential \(A_{\mu}=-\Phi(r)\delta^{t}_{\mu}\) in the usual manner. However, it is important to note that a consistent charged black hole solution cannot be supported solely by a free electromagnetic field. Therefore, it becomes necessary to include the interaction between the electromagnetic field and the KR field. To incorporate the interaction, one approach is to modify the KR field strength \(H_{\mu\nu\rho}\) by adding a \(U(1)\) electromagnetic Chern-Simons three-form, i.e., \(\tilde{H}_{\mu\nu\rho}=H_{\mu\nu\rho}+A_{[\mu}F_{\nu\rho]}\)[54]. However, for the vacuum KR configuration and the electrostatic vector potential, it is found that all the interactions in the modified kinetic term \(\tilde{H}^{\mu\nu\rho}\tilde{H}_{\mu\nu\rho}=H^{\mu\nu\rho}H_{\mu\nu\rho}+H^{ \mu\nu\rho}A_{[\mu}F_{\nu\rho]}+A^{[\mu}F^{\nu\rho]}A_{[\mu}F_{\nu\rho]}\) still vanish. Therefore, in order to introduce a nontrivial contribution to the spacetime dynamics, we instead consider an interaction term as the form
\[\mathcal{L}_{\rm int}=-\eta B^{\alpha\beta}B^{\gamma\rho}F_{\alpha\beta}F_{ \gamma\rho}, \tag{2}\]
where \(\eta\) is a coupling constant. This interaction term allows for the existence of the electrically charged black hole solutions.
The modified Einstein equations is obtained by varying the action (1) with respect to the metric \(g^{\mu\nu}\), given by
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+\Lambda g_{\mu\nu}=T^{\rm M}_{\mu\nu}+T^{\rm KR }_{\mu\nu}, \tag{3}\]
where \(T^{\rm M}_{\mu\nu}\) is the energy-momentum tensor of the electromagnetic field, derived as
\[T^{\rm M}_{\mu\nu}=\alpha\left(4F_{\mu\alpha}F_{\nu}^{\ \alpha}-g_{\mu\nu}F^{ \alpha\beta}F_{\alpha\beta}\right)+\eta\left(8B^{\alpha\beta}B_{\nu}^{\ \gamma}F_{\alpha\beta}F_{\mu\gamma}-g_{\mu\nu}B^{\alpha\beta}B^{\gamma\rho}F_{ \alpha\beta}F_{\gamma\rho}\right), \tag{4}\]
and \(T^{\rm KR}_{\mu\nu}\) is an effective energy-momentum tensor of the KR field, given by
\[T^{\rm KR}_{\mu\nu} = \frac{1}{2}H_{\mu\alpha\beta}H^{\alpha\beta}_{\nu}-\frac{1}{12}g _{\mu\nu}H^{\alpha\beta\rho}H_{\alpha\beta\rho}+2V^{\prime}B_{\alpha\mu}B^{ \alpha}_{\ \nu}-g_{\mu\nu}V \tag{5}\] \[+\xi_{2}\bigg{[}\frac{1}{2}g_{\mu\nu}B^{\alpha\gamma}B^{\beta}_{ \ \gamma}R_{\alpha\beta}-B^{\alpha}_{\ \mu}B^{\beta}_{\ \nu}R_{\alpha\beta}-B^{\alpha\beta}B_{\nu\beta}R_{\mu \alpha}-B^{\alpha\beta}B_{\mu\beta}R_{\nu\alpha}\] \[+\frac{1}{2}\nabla_{\alpha}\nabla_{\mu}\left(B^{\alpha\beta}B_{ \nu\beta}\right)+\frac{1}{2}\nabla_{\alpha}\nabla_{\nu}\left(B^{\alpha\beta}B _{\mu\beta}\right)-\frac{1}{2}\nabla^{\alpha}\nabla_{\alpha}\left(B_{\mu}^{ \ \gamma}B_{\nu\gamma}\right)\] \[-\frac{1}{2}g_{\mu\nu}\nabla_{\alpha}\nabla_{\beta}\left(B^{ \alpha\gamma}B^{\beta}_{\ \gamma}\right)\bigg{]}.\]
Here, the prime represents the derivative with respect to the argument of the corresponding functions. Note that the total energy-momentum tensor \(T^{\rm KR}_{\mu\nu}+T^{\rm M}_{\mu\nu}\) is conserved due to the Bianchi identities.
The modified Maxwell equation is derived by varying the action (1) with respect to the vector potential \(A^{\mu}\), yielding
\[\nabla^{\nu}\left(F_{\mu\nu}+2\eta B_{\mu\nu}B^{\alpha\beta}F_{\alpha\beta} \right)=0. \tag{6}\]
It reduces to the standard Maxwell equation when the coupling constant \(\eta\) is set to zero.
## III Electrically charged black hole solutions
We consider the metric ansatz for a static and spherically symmetric spacetime given by
\[ds^{2}=-F(r)dt^{2}+G(r)dr^{2}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}. \tag{7}\]
With this ansatz, the pseudo-electric field \(\tilde{E}(r)\) can be further expressed as \(\tilde{E}(r)=|b|\sqrt{\frac{F(r)G(r)}{2}}\). Consequently, the vacuum configuration of the KR field satisfies the constant norm condition \(b^{\mu\nu}b_{\mu\nu}=-b^{2}\).
Under the vacuum configuration, it is advantageous to reformulate the modified Einstein equation (3) as
\[R_{\mu\nu} = T_{\mu\nu}^{\rm M}-\frac{1}{2}g_{\mu\nu}T^{\rm M}+\Lambda g_{\mu \nu}+V^{\prime}\left(2b_{\mu\alpha}b_{\nu}{}^{\alpha}+b^{2}g_{\mu\nu}\right) \tag{8}\] \[+\xi_{2}\bigg{[}g_{\mu\nu}b^{\alpha\gamma}b^{\beta}{}_{\gamma}R_{ \alpha\beta}-b^{\alpha}{}_{\mu}b^{\beta}{}_{\nu}R_{\alpha\beta}-b^{\alpha \beta}b_{\mu\beta}R_{\nu\alpha}-b^{\alpha\beta}b_{\nu\beta}R_{\mu\alpha}\] \[+\frac{1}{2}\nabla_{\alpha}\nabla_{\mu}\left(b^{\alpha\beta}b_{ \nu\beta}\right)+\frac{1}{2}\nabla_{\alpha}\nabla_{\nu}\left(b^{\alpha\beta}b_ {\mu\beta}\right)-\frac{1}{2}\nabla^{\alpha}\nabla_{\alpha}\left(b_{\mu}{}^{ \gamma}b_{\nu\gamma}\right)\bigg{]},\]
where \(T^{\rm M}=g^{\alpha\beta}T_{\alpha\beta}^{\rm M}\).
Further, with the ansatzes of the metric (7) and the electrostatic field, the field equations (8) can be written explicitly as
\[\frac{2F^{\prime\prime}}{F}-\frac{F^{\prime}}{F}\frac{G^{\prime}}{ G}-\frac{F^{\prime 2}}{F^{2}}+\frac{4}{r}\frac{F^{\prime}}{F}+\frac{4\Lambda G}{1- \ell}-\frac{4\left(1-2\eta b^{2}\right)\Phi^{\prime 2}}{(1-\ell)F} = 0, \tag{9a}\] \[\frac{2F^{\prime\prime}}{F}-\frac{F^{\prime}}{F}\frac{G^{\prime}} {G}-\frac{F^{\prime 2}}{F^{2}}-\frac{4}{r}\frac{G^{\prime}}{G}+\frac{4 \Lambda G}{1-\ell}-\frac{4\left(1-2\eta b^{2}\right)\Phi^{\prime 2}}{(1-\ell)F} = 0,\] (9b) \[\frac{2F^{\prime\prime}}{F}-\frac{F^{\prime}G^{\prime}}{FG}-\frac{ F^{\prime 2}}{F^{2}}+\frac{1+\ell}{\ell r}\left(\frac{F^{\prime}}{F}-\frac{G^{ \prime}}{G}\right)-\left(1-\Lambda r^{2}-b^{2}r^{2}V^{\prime}\right)\frac{2G}{ \ell r^{2}}\] \[+\frac{2(1-\ell)}{\ell r^{2}}-\frac{2\left(1-6\eta b^{2}\right) \Phi^{\prime 2}}{\ell F} = 0, \tag{9c}\]
where \(\ell\equiv\xi_{2}b^{2}/2\). In addition, the modified Maxwell equation (6) is written explicitly as
\[\left(1-2\eta b^{2}\right)\bigg{[}\Phi^{\prime\prime}+\frac{\Phi^{\prime}}{2} \left(\frac{4}{r}-\frac{F^{\prime}}{F}-\frac{G^{\prime}}{G}\right)\bigg{]}=0. \tag{10}\]
### Case: \(\Lambda=0\)
When the cosmological constant is absent, we take the assumption that \(V^{\prime}=0\), which corresponds to the case where the VEV is located at the local minimum of the potential. For instance, it can be simply realized by a potential of quadratic form, \(V=\frac{1}{2}\lambda X^{2}\), with \(X\equiv B^{\mu\nu}B_{\mu\nu}+b^{2}\) and \(\lambda\) a coupling constant [55].
By subtracting Eq. (9b) from Eq. (9a), we have the following relation
\[\frac{F^{\prime}}{F}=-\frac{G^{\prime}}{G}. \tag{11}\]
It simply yields
\[G(r)=F^{-1}(r), \tag{12}\]
where we have fixed the integration constant to be 1.
By substituting it into the modified Maxwell equation (10), the electrostatic potential is obtained as
\[\Phi(r)=\frac{c_{1}}{r}+c_{2}, \tag{13}\]
where the integration constant \(c_{2}\) can be set to zero by fixing the zero point of the potential at infinity. However, since the conserved current has been modified to be \(J^{\mu}=\nabla_{\nu}\left(F^{\mu\nu}+2\eta B^{\mu\nu}B^{\alpha\beta}F_{\alpha\beta}\right)\), the integration constant \(c_{1}\) can be determined using Stokes's theorem [56], i.e.,
\[Q = -\frac{1}{4\pi}\int_{\Sigma}dx^{3}\sqrt{\gamma^{(3)}}n_{\mu}J^{\mu} \tag{14}\] \[=-\frac{1}{4\pi}\int_{\partial\Sigma}d\theta d\phi\sqrt{\gamma^{ (2)}}n_{\mu}\sigma_{\nu}\left(F^{\mu\nu}+2\eta B^{\mu\nu}B^{\alpha\beta}F_{ \alpha\beta}\right)\] \[=\left(1-2b^{2}\eta\right)c_{1},\]
where \(\Sigma\) represents a 3-dimensional spacelike region with the induced metric \(\gamma^{(3)}_{ij}\), while its boundary \(\partial\Sigma\) is a two-sphere located at spatial infinity with the induced metric \(\gamma^{(2)}_{ij}=r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\). Accordingly, \(n_{\mu}=(1,0,0,0)\) denotes the unit normal vector associated with \(\Sigma\), and \(\sigma_{\mu}=(0,1,0,0)\) denotes the unit normal vector associated with \(\partial\Sigma\). Thus, the electrostatic potential is given by
\[\Phi(r)=\frac{Q}{\left(1-2b^{2}\eta\right)r}. \tag{15}\]
After subtracting Eq. (9c) from Eq. (9a) and substituting (12) and (15) into it, the metric function \(F(r)\) can be integrated out, resulting in
\[F(r)=\frac{1}{1-\ell}-\frac{2M}{r}+\frac{1+\ell-2(3-\ell)b^{2}\eta}{(1-\ell)^{2}\left(1-2b^{2}\eta\right)^{2}}\frac{Q^{2}}{r^{2}}, \tag{16}\]
where the integration constant has been determined to recover the Schwarzschild-like solution from Ref. [52] when the electric charge \(Q\) vanishes.
Further, by substituting the obtained results into all the field equations, it is found that the solutions are consistent only if
\[\eta=\frac{\ell}{2b^{2}}. \tag{17}\]
Therefore, it is evident that in the case of Lorentz violation in spacetime, the interaction \(\mathcal{L}_{\rm int}\) is indispensable to achieve a charged black hole solution.
With the relation (17), the electrostatic potential \(\Phi(r)\) and the metric function \(F(r)\) can be further simplified as
\[\Phi(r) = \frac{Q}{\left(1-\ell\right)r}, \tag{18}\] \[F(r) = \frac{1}{1-\ell}-\frac{2M}{r}+\frac{Q^{2}}{\left(1-\ell\right)^{ 2}r^{2}}. \tag{19}\]
It is worth noting that the electrostatic potential \(\Phi(r)\) has been modified due to the Lorentz-breaking effect.
Consequently, the Reissner–Nordström-like (RN-like) metric is obtained as
\[ds^{2}=-\left(\frac{1}{1-\ell}-\frac{2M}{r}+\frac{Q^{2}}{\left(1-\ell\right)^{ 2}r^{2}}\right)dt^{2}+\frac{dr^{2}}{\frac{1}{1-\ell}-\frac{2M}{r}+\frac{Q^{2}} {\left(1-\ell\right)^{2}r^{2}}}+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}. \tag{20}\]
The Lorentz-violating effect arising from the nonzero VEV of KR field is characterized by the dimensionless parameter \(\ell\), whose value is constrained to be very small based on the classical gravitational experiments within the Solar System [52]. When the Lorentz-violating parameter \(\ell\) vanishes, it reduces to the standard RN metric.
From the metric (20), the horizon radii read
\[r_{\pm}=\left(1-\ell\right)\left(M\pm\sqrt{M^{2}-\frac{Q^{2}}{(1-\ell)^{3}}} \right). \tag{21}\]
Figure 1: Plots of the metric function \(F(r)\) and the parameter space \((Q/M,\ell)\) for black hole solutions and non-black hole solutions.
When \(\ell=0\), this expression recovers the result of the RN black hole. Fig. 1(a) illustrates that as the Lorentz-violating parameter \(\ell\) increases, the outer event horizon radius \(r_{+}\) decreases, while the inner Cauchy horizon radius \(r_{-}\) increases.
From the expression for the horizon radii (21), it is clear that the horizons exist only when the condition \(Q^{2}/M^{2}\leq(1-\ell)^{3}\) is satisfied, where the equality represents the case of extreme black holes. The corresponding parameter space \((Q/M,\ell)\) for black hole solutions and non-black hole solutions is illustrated in Fig. 1(b), where the colored region represents the black holes with horizons while the blank region represents the naked singularities. Comparing to the case of RN black holes, extremizing the charged KR black holes requires less charge for a positive \(\ell\), while it requires more charge for a negative \(\ell\).
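As a quick independent cross-check of the horizon structure (not part of the original derivation), the radii (21) and the extremality bound \(Q^{2}/M^{2}\leq(1-\ell)^{3}\) can be verified numerically. The short Python sketch below uses arbitrary illustrative values of \(M\), \(Q\) and \(\ell\) in geometric units:

```python
import numpy as np

# Illustrative parameters (arbitrary choice), geometric units G = c = 1
M, Q, ell = 1.0, 0.6, 0.1

def F(r):
    # RN-like metric function, Eq. (19)
    return 1.0/(1.0 - ell) - 2.0*M/r + Q**2/((1.0 - ell)**2*r**2)

disc = M**2 - Q**2/(1.0 - ell)**3
r_plus  = (1.0 - ell)*(M + np.sqrt(disc))   # outer event horizon, Eq. (21)
r_minus = (1.0 - ell)*(M - np.sqrt(disc))   # inner Cauchy horizon, Eq. (21)

print(F(r_plus), F(r_minus))                # both vanish (up to round-off)
print(Q**2/M**2 <= (1.0 - ell)**3)          # horizons exist iff this bound holds
```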
It is worth noting that the metric functions approach \(F(r)=1/G(r)\to 1/(1-\ell)\) as \(r\rightarrow\infty\). It can be straightforwardly verified that not all components of the Riemann tensor vanish in this limit, which indicates that the spacetime is not asymptotically Minkowski.
### Case: \(\Lambda\neq 0\)
When the cosmological constant is present, it has been found that there is no solution that satisfies all the equations of motion under the assumption \(V^{\prime}(X)=0\), where \(X\equiv B^{\mu\nu}B_{\mu\nu}+b^{2}\). Therefore, following the same approach as in Bumblebee gravity [16], we impose a linear potential of the form \(V=\lambda X\), where \(\lambda\) is a Lagrange multiplier field [55]. In this case, the vacuum condition is relaxed to be \(V^{\prime}(X)=\lambda\). The equation of motion of the Lagrange multiplier \(\lambda\) reads \(X=0\), so the on-shell \(\lambda\) guarantees that \(b_{\mu\nu}\) is the vacuum configuration. As a result, the on-shell value of \(\lambda\) is determined by the vacuum field equations (9) and (10). It is worth noting that the off-shell \(\lambda\) should have the same sign as \(X\) in order to keep the potential \(V\) positive [55].
Despite the presence of the cosmological constant, we can still derive the same relationships from the field equations (9a), (9b) and (10), i.e.,
\[G(r) = F^{-1}(r), \tag{22}\] \[\Phi(r) = \frac{Q}{(1-\ell)r}. \tag{23}\]
Furthermore, by subtracting Eq. (9c) from Eq. (9a) and substituting the relations (22) and
(23) into it, we obtain
\[F(r)=\frac{1}{1-\ell}-\frac{2M}{r}+\frac{1+\ell-2(3-\ell)\eta b^{2}}{(1-\ell)^{4} }\frac{Q^{2}}{r^{2}}-\frac{(1-3\ell)\Lambda+(1-\ell)b^{2}\lambda}{3(1-\ell)^{2} }r^{2}. \tag{24}\]
Finally, by substituting Eqs. (22) and (24) into all the field equations, one finds that the solutions are consistent only if
\[\eta = \frac{\ell}{2b^{2}}, \tag{25}\] \[\lambda = \frac{2\ell\Lambda}{(1-\ell)b^{2}}. \tag{26}\]
It is evident that the theory supports an RN-(A)dS-like black hole solution with a non-vanishing cosmological constant only if \(\eta\neq 0\) and \(\lambda\neq 0\).
Consequently, the metric function \(F(r)\) simplifies to
\[F(r)=\frac{1}{1-\ell}-\frac{2M}{r}+\frac{Q^{2}}{(1-\ell)^{2}r^{2}}-\frac{ \Lambda r^{2}}{3(1-\ell)}. \tag{27}\]
As a result, the RN-(A)dS-like metric is achieved as
\[ds^{2} = -\left(\frac{1}{1-\ell}-\frac{2M}{r}+\frac{Q^{2}}{(1-\ell)^{2}r^ {2}}-\frac{\Lambda r^{2}}{3(1-\ell)}\right)dt^{2}+\frac{dr^{2}}{\frac{1}{1- \ell}-\frac{2M}{r}+\frac{Q^{2}}{(1-\ell)^{2}r^{2}}-\frac{\Lambda r^{2}}{3(1- \ell)}} \tag{28}\] \[+r^{2}d\theta^{2}+r^{2}\sin^{2}\theta d\phi^{2}.\]
If the cosmological constant vanishes, the solution reduces to the RN-like one (20). If the electric charge vanishes, it reduces to Schwarzschild-(A)dS metric [52]. Furthermore, it degenerates to the RN-(A)dS metric when the Lorentz-violating parameter \(\ell\) is set to zero.
Figure 2: Plots of the metric function \(F(r)\) for the RN-AdS-like solution (a) and RN-dS-like solution (b).
For the RN-AdS-like black hole solution, it exhibits two horizons known as the outer event horizon and the inner Cauchy horizon. Fig. 2(a) illustrates that as the Lorentz-violating parameter \(\ell\) increases, the outer event horizon radius contracts, while the inner Cauchy horizon radius expands. When the two horizons coincide with each other, it corresponds to the formation of the extreme black hole. The corresponding parameter space \((Q/M,\Lambda M^{2},\ell)\) for the RN-AdS like solutions are depicted in Fig. 3(a), where the colored region represents black holes with horizons, the orange surface represents extreme black holes, and the blank region corresponds to naked singularities.
However, as shown in Fig. 2(b), the RN-dS-like black hole solution exhibits three horizons, i.e., the outermost cosmological horizon, the middle event horizon, and the innermost Cauchy horizon. As the Lorentz-violating parameter \(\ell\) increases, the event horizon radius contracts, while the radii of Cauchy horizon and cosmological horizon enlarge. Fig. 3(b) illustrates the parameter spaces \((Q/M,\Lambda M^{2},\ell)\) for the RN-dS-like solutions, where the colored region corresponds to black hole solutions. Within this figure, two distinct surfaces can be observed. The upper surface represents the formation of the extreme black holes, characterized by the coincidence of the Cauchy horizon and the event horizon. On the other hand, the lower surface corresponds to the situation that the cosmological horizon and the event horizon coincide with each other.
In particular, as illustrated in Fig. 1(b) and Fig. 3, the Lorentz-violating parameter \(\ell\) has
Figure 3: Plots of the parameter space \((Q/M,\Lambda M^{2},\ell)\) for black hole solutions and non-black hole solutions.
a significant impact on the parameter spaces of charged KR black hole solutions.
As \(r\) approaches infinity, the metric functions approximate \(F(r)=1/G(r)\rightarrow-\frac{\Lambda r^{2}}{3(1-\ell)}\). This behavior is consistent with the asymptotic properties of (A)dS spacetime, indicating that the spacetime approaches (A)dS at infinity. As a result of the additional contributions from the modified Einstein equations (8) when \(V^{\prime}(X)\) is nonzero, instead of the bare cosmological constant \(\Lambda\), the effective cosmological constant is given by \(\Lambda_{\rm eff}\equiv\frac{\Lambda}{1-\ell}\) in this case. Similarly, due to the nontrivial contribution arising from the interaction between the electromagnetic field and the KR field (2), it was observed that the bare electric charge \(Q\) is replaced by the effective charge \(Q_{\rm eff}=\frac{Q}{1-\ell}\) in the electrostatic potential \(\Phi(r)\).
## IV Thermodynamics
One of the intriguing aspects of black holes is that they can be viewed as thermodynamic systems, governed by the laws of black hole thermodynamics [57; 58] and exhibiting rich phase structures reminiscent of everyday thermodynamics [59; 60]. In this section, we study some basic thermodynamic properties of the charged KR black hole solutions. Since the thermodynamics of asymptotically dS black holes is complicated and our understanding of it remains limited [61], we focus on the cases of the RN-like and RN-AdS-like black holes.
By solving \(F(r_{+})=0\) in Eq. (27), the mass of the RN-AdS-like black hole can be expressed with the radius of the event horizon \(r_{+}\),
\[M=\frac{r_{+}}{2}\left(\frac{1}{1-\ell}-\frac{r_{+}^{2}\Lambda_{\rm eff}}{3} \right)+\frac{Q_{\rm eff}^{2}}{2r_{+}}. \tag{29}\]
Correspondingly, the mass of the RN-like black hole is obtained by simply setting \(\Lambda_{\rm eff}=0\).
The effective cosmological constant plays the role of a thermodynamic pressure, given by [59]
\[P=-\frac{\Lambda_{\rm eff}}{8\pi}=-\frac{\Lambda}{8\pi(1-\ell)}. \tag{30}\]
In this case, the black hole mass \(M\) can be interpreted as a gravitational version of chemical enthalpy. Since the Lorentz-violating parameter \(\ell\) is a dimensionless constant, the enthalpy can be expressed as a function of entropy \(S\), pressure \(P\) and effective electric charge \(Q_{\rm eff}\), i.e., \(M=M\left(S,P,Q_{\rm eff}\right)\). Therefore, the first law of the RN-AdS-like black hole is given by
\[dM=\mathcal{T}dS+\mathcal{V}dP+\Phi dQ_{\rm eff}, \tag{31}\]
where \(\mathcal{T}\) is the Hawking temperature, \(\mathcal{V}\) the thermodynamic volume, and \(\Phi\) the electrostatic potential expressed as in Eq. (23).
By utilizing the metric (28) and Eq. (29), the temperature of the RN-AdS-like black hole is given by
\[\mathcal{T}=-\left.\frac{1}{4\pi}\frac{\partial g_{tt}}{\partial r}\right|_{r_ {+}}=\frac{1}{4\pi r_{+}}\left(\frac{1}{1-\ell}-r_{+}^{2}\Lambda_{\text{eff}} \right)-\frac{Q_{\text{eff}}^{2}}{4\pi r_{+}^{3}}. \tag{32}\]
The result of the RN-like black hole is obtained by directly setting \(\Lambda_{\text{eff}}=0\).
From the first law (31) and the Hawking temperature (32), the entropy can be shown to satisfy the standard Bekenstein-Hawking area-entropy relation, i.e.,
\[S=\int\,\left(\frac{dM}{\mathcal{T}}\right)_{P,Q_{\text{eff}}}=\int\frac{1}{ \mathcal{T}}\left(\frac{\partial M}{\partial r_{+}}\right)_{P,Q_{\text{eff}} }dr_{+}=\pi r_{+}^{2}=\frac{A_{+}}{4}, \tag{33}\]
where \(A_{+}=4\pi r_{+}^{2}\) is the area of the event horizon.
Moreover, from the first law (31) and the pressure (30), we can also calculate the thermodynamic volume, given by
\[\mathcal{V}=\left(\frac{\partial M}{\partial P}\right)_{S,Q_{\text{eff}}}= \left(\frac{\partial M}{\partial\Lambda}\right)_{S,Q_{\text{eff}}}\left(\frac {\partial\Lambda}{\partial P}\right)_{S,Q_{\text{eff}}}=\frac{4\pi r_{+}^{3} }{3}. \tag{34}\]
Now, with the aforementioned results, it can be straightforwardly verified that the Smarr formula holds in the same form as the RN-AdS black hole, i.e.,
\[M=2\mathcal{T}S-2\mathcal{V}P+\Phi Q_{\text{eff}}. \tag{35}\]
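The Smarr relation can also be checked numerically. The following Python sketch (illustrative only; the values of \(M\), \(Q\), \(\ell\) and \(\Lambda\) are arbitrary) evaluates both sides of Eq. (35) for the RN-AdS-like solution:

```python
import numpy as np

M_bh, Q, ell, Lam = 1.0, 0.3, 0.1, -0.05     # arbitrary illustrative values
Qeff, LamEff = Q/(1 - ell), Lam/(1 - ell)    # effective charge and cosmological constant

# outer horizon: largest positive root of F(r) = 0 with F(r) of Eq. (27)
coeffs = [-LamEff/3, 0.0, 1/(1 - ell), -2*M_bh, Qeff**2]    # coefficients of r^2*F(r)
r_p = max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-8 and r.real > 0)

T   = (1/(4*np.pi*r_p))*(1/(1 - ell) - r_p**2*LamEff) - Qeff**2/(4*np.pi*r_p**3)  # Eq. (32)
S   = np.pi*r_p**2                 # Eq. (33)
P   = -LamEff/(8*np.pi)            # Eq. (30)
V   = 4*np.pi*r_p**3/3             # Eq. (34)
Phi = Qeff/r_p                     # Eq. (23) evaluated at r_+

print(M_bh, 2*T*S - 2*V*P + Phi*Qeff)   # the two numbers coincide, verifying Eq. (35)
```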
In order to analyze the thermodynamical stabilities of the charged KR black holes, we can evaluate their heat capacity, where a positive heat capacity indicates the local stability of the black hole. The heat capacity of the RN-AdS-like black holes is determined by
\[C_{P}=T\left(\frac{\partial S}{\partial\mathcal{T}}\right)_{P}=T\left(\frac{ \partial S}{\partial r_{+}}\right)_{P}\left(\frac{\partial r_{+}}{\partial \mathcal{T}}\right)_{P}=2\pi r_{+}^{2}\frac{\left(r_{+}^{2}\Lambda_{\text{eff }}-\frac{1}{1-\ell}\right)r_{+}^{2}+Q_{\text{eff}}^{2}}{\left(r_{+}^{2} \Lambda_{\text{eff}}+\frac{1}{1-\ell}\right)r_{+}^{2}-3Q_{\text{eff}}^{2}}. \tag{36}\]
It is found that for \(\ell<1+12\Lambda Q^{2}\), the heat capacity is positive within the intervals \(r_{+}\in\left(\sqrt{\frac{\sqrt{1-4(1-\ell)^{2}\Lambda_{\text{eff}}Q_{\text{eff}}^{2}}-1}{-2(1-\ell)\Lambda_{\text{eff}}}},\sqrt{\frac{1-\sqrt{1+12(1-\ell)^{2}\Lambda_{\text{eff}}Q_{\text{eff}}^{2}}}{-2(1-\ell)\Lambda_{\text{eff}}}}\right)\cup\left(\sqrt{\frac{1+\sqrt{1+12(1-\ell)^{2}\Lambda_{\text{eff}}Q_{\text{eff}}^{2}}}{-2(1-\ell)\Lambda_{\text{eff}}}},\infty\right)\), while for \(\ell\geq 1+12\Lambda Q^{2}\), the heat capacity is positive for \(r_{+}\in\left(\sqrt{\frac{\sqrt{1-4(1-\ell)^{2}\Lambda_{\text{eff}}Q_{\text{eff}}^{2}}-1}{-2(1-\ell)\Lambda_{\text{eff}}}},\infty\right)\). As depicted in Fig. 4(a), an increase in the Lorentz-violating parameter \(\ell\) leads to an expansion of the ranges that guarantee local stability.
For the RN-like black holes, the heat capacity is obtained by setting \(\Lambda_{\text{eff}}=0\) in Eq. (36), yielding
\[C_{P}=-2\pi r_{+}^{2}\frac{r_{+}^{2}-(1-\ell)Q_{\text{eff}}^{2}}{r_{+}^{2}-3(1- \ell)Q_{\text{eff}}^{2}} \tag{37}\]
The heat capacity is positive only for horizon radii within the range \(r_{+}\in\left(\sqrt{1-\ell}Q_{\text{eff}},\sqrt{3(1-\ell)}Q_{\text{eff}}\right)\), or equivalently \(r_{+}\in\left(\frac{Q}{\sqrt{1-\ell}},\frac{\sqrt{3}Q}{\sqrt{1-\ell}}\right)\), as shown in Fig. 4(b). Similar to the observed behavior in RN-AdS-like black holes, the range of local stability expands as the Lorentz-violating parameter \(\ell\) increases.
Interestingly, if the electric charge is set to zero, the heat capacity of the RN-AdS-like and RN-like black holes is independent of the Lorentz-violating parameter \(\ell\).
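The stability window of Eq. (37) is also easy to probe numerically; the sketch below (with arbitrary illustrative \(Q\) and \(\ell\)) scans the sign of \(C_{P}\) over a range of horizon radii and compares it with the predicted interval:

```python
import numpy as np

Q, ell = 0.4, 0.2                      # arbitrary illustrative values
Qeff = Q/(1 - ell)

def C_P(rp):
    # heat capacity of the RN-like black hole, Eq. (37)
    return -2*np.pi*rp**2*(rp**2 - (1 - ell)*Qeff**2)/(rp**2 - 3*(1 - ell)*Qeff**2)

r_lo, r_hi = Q/np.sqrt(1 - ell), np.sqrt(3.0)*Q/np.sqrt(1 - ell)   # predicted window
for rp in np.linspace(0.3, 1.5, 13):
    print(f"r+ = {rp:4.2f}   C_P > 0: {C_P(rp) > 0}   predicted: {r_lo < rp < r_hi}")
```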
Furthermore, it is well known that the charged AdS black holes allow for a first-order phase transition between small black holes and large black holes in the canonical ensemble [59]. For the RN-AdS-like black holes, the Gibbs free energy in the canonical ensemble can be calculated as
\[\mathcal{F}=M-\mathcal{T}S=\frac{r_{+}}{4}\left(\frac{1}{1-\ell}+\frac{r_{+}^ {2}\Lambda_{\text{eff}}}{3}\right)+\frac{3Q_{\text{eff}}^{2}}{4r_{+}}. \tag{38}\]
The \(\mathcal{F}-\mathcal{T}\) diagram is plotted in Fig. 5. As depicted in Fig. 5(a), the swallowtail behavior emerges for pressures below the critical pressure \(P_{\text{c}}\). This swallowtail behavior signifies a first-order phase transition. At the critical pressure \(P_{\text{c}}\), the phase transition becomes second-order, while above the critical pressure \(P_{\text{c}}\), no phase transition occurs. The critical point of
Figure 4: Plots of the heat capacity for the RN-AdS-like and RN-like black holes.
second-order phase transition can be analytically determined as
\[P_{\rm c} = \frac{1}{96\pi(1-\ell)^{2}Q_{\rm eff}^{2}}=\frac{1}{96\pi Q^{2}}, \tag{39}\] \[\mathcal{T}_{\rm c} = \frac{1}{3\sqrt{6}\pi(1-\ell)^{\frac{3}{2}}Q_{\rm eff}}=\frac{1}{3\pi\sqrt{6(1-\ell)}Q}, \tag{40}\] \[r_{\rm c+} = \sqrt{6(1-\ell)}Q_{\rm eff}=\sqrt{\frac{6}{1-\ell}}\,Q. \tag{41}\]
It is evident that the critical pressure \(P_{\rm c}\) is independent of the Lorentz-violating parameter \(\ell\). However, both the critical temperature \(\mathcal{T}_{\rm c}\) and the critical size \(r_{\rm c+}\) of the black holes increase with \(\ell\).
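These critical values can be confirmed by treating \(\mathcal{T}\) of Eq. (32) as a function of \(r_{+}\) at fixed pressure and locating its inflection point. A minimal numerical sketch (arbitrary illustrative \(Q\) and \(\ell\); finite differences for the derivatives) reads:

```python
import numpy as np

Q, ell = 0.3, 0.1                       # arbitrary illustrative values
Qeff = Q/(1 - ell)

def T_of_r(rp, P):
    LamEff = -8*np.pi*P                 # Eq. (30)
    return (1/(4*np.pi*rp))*(1/(1 - ell) - rp**2*LamEff) - Qeff**2/(4*np.pi*rp**3)  # Eq. (32)

P_c = 1/(96*np.pi*Q**2)                 # Eq. (39)
r_c = np.sqrt(6/(1 - ell))*Q            # Eq. (41)

h = 1e-4
dT  = (T_of_r(r_c + h, P_c) - T_of_r(r_c - h, P_c))/(2*h)
d2T = (T_of_r(r_c + h, P_c) - 2*T_of_r(r_c, P_c) + T_of_r(r_c - h, P_c))/h**2
print(dT, d2T)                                               # both ~ 0: inflection point
print(T_of_r(r_c, P_c), 1/(3*np.pi*np.sqrt(6*(1 - ell))*Q))  # reproduces T_c of Eq. (40)
```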
The intersection point of the swallowtail represents the first-order phase transition point, corresponding to the coexistence phase of a large black hole and a small black hole. As illustrated in Fig. 5(b), it can be observed that, for a fixed charge and pressure below the critical pressure \(P_{\rm c}\), the first-order phase transition temperature increases with the Lorentz-breaking parameter \(\ell\).
## V Orbital motion of test particles
In this section, our focus is on studying the effects induced by Lorentz violation on the motion of massless photons and massive particles in the vicinity of the charged KR black holes.
Figure 5: Plots of the free energy for the RN-AdS-like and RN-like black holes.
The motion of a test particle with mass \(m\) along its geodesics is governed by the Lagrangian
\[\mathcal{L}=-\frac{1}{2}g_{\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta}=\eta/2, \tag{42}\]
where the dot represents the derivative with respect to the affine parameter \(\lambda\), and the constant \(\eta\) takes the value \(\eta=1\) for massive particles and \(\eta=0\) for massless photons.
Since the spacetime is spherically symmetric, without loss of generality, we restrict ourselves to the study of equatorial geodesics with \(\theta=\pi/2\). Consequently, the Lagrangian density (42) leads to the equation
\[F(r)\left(\frac{dt}{d\lambda}\right)^{2}-F(r)^{-1}\left(\frac{dr}{d\lambda} \right)^{2}-r^{2}\left(\frac{d\phi}{d\lambda}\right)^{2}=\eta. \tag{43}\]
Further, by taking into account two conserved quantities of the static and spherically symmetric spacetime, i.e., the energy per unit mass \(E=\frac{\partial\mathcal{L}}{\partial\dot{t}}=F(r)\frac{dt}{d\lambda}\) and the angular momentum per unit mass \(L=-\frac{\partial\mathcal{L}}{\partial\dot{\phi}}=r^{2}\frac{d\phi}{d\lambda}\), we can derive
\[\dot{r}^{2}=E^{2}-F(r)\left(\eta^{2}+\frac{L^{2}}{r^{2}}\right)=E^{2}-V_{\rm eff }^{2}, \tag{44}\]
with the effective potential defined by \(V_{\rm eff}=\sqrt{F(r)\left(\eta^{2}+\frac{L^{2}}{r^{2}}\right)}\). Therefore, through an analysis of the effective potential, we can gain insights into the motion of particles in the vicinity of black holes.
### Shadow of massless photons
For the massless photons, it is interesting to study the formation of a black hole shadow, which is a result of the interaction between the intense gravitational field of the black hole and the surrounding light rays. When the light rays pass near to a black hole, they are deflected very strongly by its gravitational field. The photons with small orbital angular momentum are trapped by the black hole, and only those photons with large orbital angular momentum can escape from it. As a result, a distant observer observes a dark zone in the sky, known as the black hole shadow.
For the photon, where \(\eta=0\), we can rewrite Eq. (44) as
\[\dot{r}^{2}=E^{2}-\frac{L^{2}}{r^{2}}F(r)=E^{2}-V_{\rm eff}^{2}, \tag{45}\]
with the effective potential given by \(V_{\rm eff}=L\sqrt{F(r)}/r\).
For the circular photon orbit, the effective potential satisfies the conditions [62]
\[V_{\rm eff}=E,\quad\frac{\partial V_{\rm eff}}{\partial r}=0,\quad\mbox{and}\ \ \frac{ \partial^{2}V_{\rm eff}}{\partial r^{2}}<0. \tag{46}\]
Consequently, the radius of the circular photon orbit is determined by the implicit equation
\[r_{\rm ph}=2\frac{F(r_{\rm ph})}{F^{\prime}(r_{\rm ph})}. \tag{47}\]
Due to the inherent spherical symmetry, the photons can occupy all circular orbits, resulting in the formation of the photon sphere. Using the metric function (27), the radius \(r_{\rm ph}\) of the photon sphere can be obtained as
\[r_{\rm ph}=\frac{3(1-\ell)M}{2}\left[1+\sqrt{1-\frac{8Q^{2}}{9(1-\ell)^{3}M^{2 }}}\right]. \tag{48}\]
When \(\ell\to 0\), it recovers the result of the RN-(A)dS black hole [63], i.e.,
\[r_{\rm ph}=\frac{3M}{2}\left(1+\sqrt{1-\frac{8Q^{2}}{9M^{2}}}\right). \tag{49}\]
The shadow radius of the RN-(A)dS-like black hole observed by a static observer at the position \(r_{\rm o}\) is given by [64]
\[r_{\rm sh}=\sqrt{\frac{F(r_{\rm o})}{F(r_{\rm ph})}}r_{\rm ph}=\sqrt{\frac{ \frac{1}{1-\ell}-\frac{2M}{r_{\rm o}}+\frac{Q^{2}}{(1-\ell)^{2}r_{\rm o}^{2}}- \frac{\Lambda r_{\rm o}^{2}}{3(1-\ell)}}{\frac{1}{1-\ell}-\frac{2M}{r_{\rm ph }}+\frac{Q^{2}}{(1-\ell)^{2}r_{\rm ph}^{2}}-\frac{\Lambda r_{\rm ph}^{2}}{3(1- \ell)}}}r_{\rm ph}. \tag{50}\]
By substituting the photon sphere radius (48) into this equation, we can obtain an explicit formula for the shadow radius of the RN-(A)dS-like black hole. Since the exact expression is somewhat cumbersome, we present some numerical results to illustrate the findings, as shown in Fig. 6.
For the RN-like black hole, \(F(r_{\rm o})\) approaches \(\frac{1}{1-\ell}\) for the observer located at infinity. In this case, the formula (50) simplifies to
\[r_{\rm sh}=\frac{3\sqrt{3}(1-\ell)M\left(1+\sqrt{1-\frac{8Q^{2}}{9(1-\ell)^{3}M^{2}}}\right)^{2}}{2\sqrt{2}\sqrt{1-\frac{2Q^{2}}{3(1-\ell)^{3}M^{2}}+\sqrt{1-\frac{8Q^{2}}{9(1-\ell)^{3}M^{2}}}}}. \tag{51}\]
If we further set the charge \(Q\) to zero, we obtain the shadow radius of the Schwarzschild-like black hole, i.e.,
\[r_{\rm sh}=3\sqrt{3}(1-\ell)M. \tag{52}\]
From Eqs. (50), (51), and (52), it is evident that the Lorentz-violating parameter deforms the sizes of the black hole shadows. As shown in Fig. 6, the shadow size of these black holes decreases with the Lorentz-violating parameter \(\ell\).
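The monotonic decrease of the shadow size with \(\ell\) is simple to reproduce numerically from Eqs. (48) and (50). The following Python sketch is illustrative only: the values of \(M\), \(Q\) and \(\ell\) are arbitrary, and the observer is placed at spatial infinity in the \(\Lambda=0\) case:

```python
import numpy as np

def F(r, M, Q, ell):
    # RN-like metric function, Eq. (19)
    return 1/(1 - ell) - 2*M/r + Q**2/((1 - ell)**2*r**2)

def r_photon(M, Q, ell):
    # photon-sphere radius, Eq. (48)
    return 1.5*(1 - ell)*M*(1 + np.sqrt(1 - 8*Q**2/(9*(1 - ell)**3*M**2)))

def r_shadow(M, Q, ell):
    # Eq. (50) with F(r_o) -> 1/(1-ell) for an observer at infinity (Lambda = 0)
    rph = r_photon(M, Q, ell)
    return np.sqrt((1/(1 - ell))/F(rph, M, Q, ell))*rph

M = 1.0
print(r_shadow(M, 0.0, 0.0), 3*np.sqrt(3)*M)    # recovers the Schwarzschild value
for ell in (0.0, 0.1, 0.2):
    print(ell, r_shadow(M, 0.3, ell))           # the shadow radius shrinks as ell grows
```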
The dependence of the shadow radius on the additional parameters \(Q\) and \(\Lambda\) is illustrated in Fig. 7. It is evident that as the electric charge \(Q\) increases, both the RN-AdS-like and RN-dS-like black holes exhibit a reduction in their shadow radii. Moreover, when the magnitude of the cosmological constant \(\Lambda\) increases, the shadow radius of the RN-AdS-like black hole increases, whereas the shadow radius of the RN-dS-like black hole decreases. In particular, it reveals that the shadow radius is more sensitive to variations in the Lorentz-violating parameter \(\ell\) and the cosmological constant \(\Lambda\) than to the electric charge \(Q\).
When \(\ell=0\), the shadow radius (51) reduces to the result of the RN-(A)dS black hole [65], i.e.,
\[r_{\rm sh}=\frac{\left(3M+\sqrt{9M^{2}-8Q^{2}}\right)^{2}}{2\sqrt{2}\sqrt{3M^{ 2}-2Q^{2}+M\sqrt{9M^{2}-8Q^{2}}}}. \tag{53}\]
If we further set the charge \(Q\) to zero, the shadow radius will further degenerate into the result for the Schwarzschild black hole, i.e., \(r_{\rm sh}=3\sqrt{3}M\).
Figure 6: Plots of the shadow cast by the RN-AdS-like and RN-dS-like black holes observed in the celestial coordinates.
### ISCO of massive particles
The motion of massive particles in the vicinity of black holes provides an effective description of extreme-mass-ratio-inspiral (EMRI) systems. Among the particle trajectories, there exists a special type known as the stable circular orbits. These orbits correspond to particles orbiting at a local minimum of the effective potential, satisfying the conditions \(V_{\rm eff}=E\), \(\frac{dV_{\rm eff}}{dr}=0\), and \(\frac{d^{2}V_{\rm eff}}{dr^{2}}>0\)[66]. The minimum radius of these circular orbits, corresponding to the critical case \(\frac{d^{2}V_{\rm eff}}{dr^{2}}=0\), is the so-called innermost stable circular orbit (ISCO), which plays an important role in the study of realistic astrophysics.
For massive particles, where \(\eta=1\), Eq. (44) becomes
\[\dot{r}^{2}=E^{2}-F(r)\left(1+\frac{L^{2}}{r^{2}}\right)=E^{2}-V_{\rm eff}^{2}, \tag{54}\]
with the effective potential given by \(V_{\rm eff}=\sqrt{F(r)\left(1+\frac{L^{2}}{r^{2}}\right)}\).
From the conditions \(V_{\rm eff}=E\) and \(\frac{dV_{\rm eff}}{dr}=0\), we have
\[E^{2} = \frac{2F(r)^{2}}{2F(r)-rF^{\prime}(r)}=\frac{\left(\frac{1}{1- \ell}-\frac{2M}{r}+\frac{Q^{2}}{(1-\ell)^{2}r^{2}}-\frac{\Lambda r^{2}}{3(1- \ell)}\right)^{2}}{\frac{1}{1-\ell}-\frac{3M}{r}+\frac{2Q^{2}}{(1-\ell)^{2}r^ {2}}}, \tag{55}\] \[L^{2} = \frac{r^{3}F^{\prime}(r)}{2F(r)-rF^{\prime}(r)}=\frac{\left( \frac{M}{r}-\frac{Q^{2}}{(1-\ell)^{2}r^{2}}-\frac{\Lambda r^{2}}{3(1-\ell)} \right)r^{2}}{\frac{1}{1-\ell}-\frac{3M}{r}+\frac{2Q^{2}}{(1-\ell)^{2}r^{2}}}. \tag{56}\]
Figure 7: Plots of the shadow cast by the RN-AdS-like and RN-dS-like black holes observed in the celestial coordinates.
Consequently, with the critical condition \(\frac{d^{2}V_{\rm eff}}{dr^{2}}=0\), and Eqs. (55) and (56), the radius of ISCO can be solved from the equation,
\[r_{\rm ISCO}=\frac{3F(r_{\rm ISCO})F^{\prime}(r_{\rm ISCO})}{2F^{ \prime}(r_{\rm ISCO})^{2}-F(r_{\rm ISCO})F^{\prime\prime}(r_{\rm ISCO})}. \tag{57}\]
With the metric function \(F(r)\) of the RN-(A)dS-like black hole (27), it yields
\[\frac{12Q^{4}}{(1-\ell)^{4}r_{\rm ISCO}^{4}}-\left(\frac{9M}{r_{ \rm ISCO}}-\frac{4\Lambda r_{\rm ISCO}^{2}}{1-\ell}\right)\frac{3Q^{2}}{(1- \ell)^{2}r_{\rm ISCO}^{2}}-\left(\frac{1}{1-\ell}-\frac{6M}{r_{\rm ISCO}} \right)\frac{3M}{r_{\rm ISCO}}\] \[+\left(\frac{4}{1-\ell}-\frac{15M}{r_{\rm ISCO}}\right)\frac{ \Lambda r_{\rm ISCO}^{2}}{1-\ell}=0. \tag{58}\]
It is evident that the first term of the equation dominates for small \(r_{\rm ISCO}\) and it is positive. For the RN-like black hole, the dominant term for large \(r_{\rm ISCO}\) is \(-\frac{3M}{(1-\ell)r_{\rm ISCO}}\), which is negative. Thus, the equation always possesses roots in this case. For the RN-(A)dS-like black hole, the dominant term is \(\frac{4\Lambda r_{\rm ISCO}^{2}}{(1-\ell)^{2}}\), whose sign coincides with that of \(\Lambda\). As a result, the equation always possesses roots when the cosmological constant is negative. However, a root may not exist when the cosmological constant is positive. Therefore, we numerically determined the parameter space for the existence of the ISCO for the RN-dS-like black hole, and the results for different values of the Lorentz-violating parameter \(\ell\) are presented in Fig. 8. It reveals that the parameter space expands with an increase in the Lorentz-violating parameter \(\ell\).
For the RN-like black hole (20), Eq. (58) can be solved analytically, yielding
\[r_{\rm ISCO}=2(1-\ell)M\frac{1-\frac{3Q^{2}}{4(1-\ell)^{3}M^{2}} +\Xi^{\frac{1}{3}}+\Xi^{\frac{2}{3}}}{\Xi^{\frac{1}{3}}}, \tag{59}\]
where \(\Xi\equiv 1-\frac{9Q^{2}}{8(1-\ell)^{3}M^{2}}+\frac{Q^{4}}{4(1-\ell)^{6}M^{4 }}+\frac{Q^{2}}{8(1-\ell)^{3}M^{2}}\sqrt{5-\frac{9Q^{2}}{(1-\ell)^{3}M^{2}}+ \frac{4Q^{4}}{(1-\ell)^{6}M^{4}}}\). If we further set the electric charge \(Q\) to be zero, the ISCO radius of the Schwarzschild-like black hole is
Figure 8: Plot of the parameter space of the existence of ISCO for the RN-dS-like black hole.
given by
\[r_{\rm ISCO}=6(1-\ell)M. \tag{60}\]
As shown in Fig. 9, the ISCO radii of the RN-like and Schwarzschild-like black holes decrease with both the Lorentz-violating parameter \(\ell\) and the electric charge \(Q\).
However, for the RN-(A)dS-like black hole, the ISCO radius cannot be determined analytically. Instead, we present some numerical results in Fig. 10. These results indicate that the ISCO radius of the RN-(A)dS-like black hole shrinks with an increase in the Lorentz-violating parameter \(\ell\) and the charge \(Q\), similar to the behavior observed in the case of the RN-like black hole. Moreover, Fig. 10(a) shows that the ISCO radius of the RN-AdS-like black hole decreases as the magnitude of the cosmological constant increases. However, Fig. 10(b) indicates that the ISCO radius of the RN-dS-like black hole expands with an increase in the magnitude of the cosmological constant.
It is worth noting that the results presented in Figs. 9 and 10 reveal that the ISCO radius exhibits a higher sensitivity to variations in the Lorentz-violating parameter \(\ell\) compared to the cosmological constant \(\Lambda\) and electric charge \(Q\).
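For completeness, the ISCO condition (57) can also be solved numerically for the RN-like case and compared with the closed forms (59) and (60). The sketch below is illustrative only: the values of \(M\), \(Q\), \(\ell\) are arbitrary and SciPy's standard `brentq` root finder is used:

```python
import numpy as np
from scipy.optimize import brentq

def isco_radius(M, Q, ell):
    # solve Eq. (57) for the RN-like metric function, Eq. (19)
    F   = lambda r: 1/(1 - ell) - 2*M/r + Q**2/((1 - ell)**2*r**2)
    dF  = lambda r: 2*M/r**2 - 2*Q**2/((1 - ell)**2*r**3)
    d2F = lambda r: -4*M/r**3 + 6*Q**2/((1 - ell)**2*r**4)
    g   = lambda r: r*(2*dF(r)**2 - F(r)*d2F(r)) - 3*F(r)*dF(r)   # Eq. (57) rearranged
    return brentq(g, 2*(1 - ell)*M, 20*(1 - ell)*M)

M = 1.0
print(isco_radius(M, 0.0, 0.1), 6*(1 - 0.1)*M)   # Schwarzschild-like limit, Eq. (60)
for ell in (0.0, 0.1, 0.2):
    print(ell, isco_radius(M, 0.3, ell))         # the ISCO radius shrinks as ell grows
```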
solutions, a non-minimal interaction term (2) between the electromagnetic field and KR field was introduced. In the absence of the Lorentz violation (\(\ell=0\)), these solutions reduce to the RN and RN-(A)dS metrics, respectively. It was found that the Lorentz-violating parameter \(\ell\) has a significant impact on the parameter spaces of black hole solutions. For instance, in contrast to the RN black hole, extremizing the charged KR black holes requires a smaller amount of electric charge when \(\ell\) is positive, while it necessitates a greater amount of electric charge when \(\ell\) is negative.
The thermodynamic properties of the RN-like and RN-AdS-like black holes were investigated. Our findings revealed that the standard first law of thermodynamics and the Smarr formula still hold, except for replacing the bare cosmological constant \(\Lambda\) with the effective cosmological constant \(\Lambda_{\text{eff}}\) and the bare charge \(Q\) with the effective charge \(Q_{\text{eff}}\). The locally stable ranges of the RN-like and RN-AdS-like black holes increase with the Lorentz-violating parameter \(\ell\). Furthermore, the phase transition of the RN-AdS-like black holes was investigated. It was found that the first-order phase transition temperature increases with the Lorentz-breaking parameter \(\ell\). Additionally, for the second-order phase transition, the critical pressure \(P_{\text{c}}\) is independent of the parameter \(\ell\). However, both the critical temperature \(\mathcal{T}_{\text{c}}\) and the critical size \(r_{\text{c+}}\) increase with the parameter \(\ell\).
Furthermore, in order to explore the impact of Lorentz violation on the motion of test particles near the black holes, we studied the shadow of the massless photons and ISCO
Figure 10: Plots of the ISCO radius for the RN-(A)dS like black hole for different parameters.
of the massive particles for the charged KR black holes. Our results demonstrate that the shadow and ISCO radii of these black holes decrease with the Lorentz-violating parameter \(\ell\). In particular, the radii exhibit a high sensitivity to variations in \(\ell\).
###### Acknowledgements.
We thank Si-Jiang Yang, Yun-Zhi Du, and Wen-Di Guo for helpful discussions. This work was supported by the National Natural Science Foundation of China under Grant No. 12005174.
|
2308.09747 | Pre-equilibrium photons from the early stages of heavy-ion collisions | We use QCD kinetic theory to compute photon production in the chemically
equilibrating Quark-Gluon Plasma created in the early stages of high-energy
heavy-ion collisions. We do a detailed comparison of pre-equilibrium photon
rates to the thermal photon production. We show that the photon spectrum
radiated from a hydrodynamic attractor evolution satisfies a simple scaling
form in terms of the specific shear viscosity $\eta/s$ and entropy density
$dS/d\zeta \sim {\scriptstyle \left(T\tau^{1/3}\right)^{3/2}}_\infty$. We
confirm the analytical predictions with numerical kinetic theory simulations.
We use the extracted scaling function to compute the pre-equilibrium photon
contribution in $\sqrt{s_{NN}}=2.76\,\text{TeV}$ 0-20\% PbPb collisions. We
demonstrate that our matching procedure allows for a smooth switching from
pre-equilibrium kinetic to thermal hydrodynamic photon production. Finally, our
publicly available implementation can be straightforwardly added to existing
heavy ion models. | Oscar Garcia-Montero, Aleksas Mazeliauskas, Philip Plaschke, Sören Schlichting | 2023-08-18T18:00:03Z | http://arxiv.org/abs/2308.09747v2 | # Pre-equilibrium photons from the early stages of heavy-ion collisions
###### Abstract
We use QCD kinetic theory to compute photon production in the chemically equilibrating Quark-Gluon Plasma created in the early stages of high-energy heavy-ion collisions. We do a detailed comparison of pre-equilibrium photon rates to the thermal photon production. We show that the photon spectrum radiated from a hydrodynamic attractor evolution satisfies a simple scaling form in terms of the specific shear viscosity \(\eta/s\) and entropy density \(dS/d\zeta\sim\left(T\tau^{1/3}\right)^{3/2}_{\infty}\). We confirm the analytical predictions with numerical kinetic theory simulations. We use the extracted scaling function to compute the pre-equilibrium photon contribution in \(\sqrt{s_{NN}}=2.76\,\text{TeV}\) 0-20% PbPb collisions. We demonstrate that our matching procedure allows for a smooth switching from pre-equilibrium kinetic to thermal hydrodynamic photon production. Finally, our publicly available implementation can be straightforwardly added to existing heavy ion models.
###### Contents
* I Introduction
* II Photon production in QCD kinetic theory
* II.1 Elastic processes
* II.2 Inelastic processes
* II.3 Comparison to equilibrium AMY rates
* III Scaling laws for photon spectrum
* III.1 Scaling for evolution along a hydrodynamic attractor
* III.2 Scaling for ideal Bjorken expansion
* IV Photon production from non-equilibrium QGP evolution
* IV.1 Instantaneous rates
* IV.2 Scaling of the time-integrated photon spectrum
* V Phenomenology of the pre-equilibrium photons
* V.1 Photons from the pre-equilibrium stage
* V.2 Comparison to experimental data
* VI Summary and Outlook
* A Quark and gluon collision integrals in QCD kinetic theory
* A.1 Elastic collision integrals
* A.1.1 Elastic collision integrals
* A.1.2 Inelastic collision integrals
* B Direct photon production
* B.1 Prompt photons
* B.2 QGP radiation and hadron gas
## I Introduction
High-energy heavy-ion collisions produce an extremely hot and dense state of deconfined QCD matter. During the early stages of the collision, the QCD matter goes through a stage of kinetic and chemical equilibration, and the hydrodynamization of the Quark-Gluon Plasma (QGP) is swiftly achieved. Despite significant progress in the theoretical understanding of the early pre-equilibrium evolution [1; 2], this stage is veiled from direct experimental observation using hadronic observables by the memory loss of the equilibrating medium and by the complicated nature of hadronization. However, electromagnetic probes, i.e., photons and dileptons, provide a unique tool to extract information about the pre-equilibrium epoch as they can escape the deconfined medium without rescattering [3; 4].
Electromagnetic probes are produced during a heavy-ion collision through three main channels: hard scatterings in the first instants of the collision (prompt contribution), medium induced radiation (medium contribution), and the late-time hadronic decays (decay contribution). The sum of the first two channels (called the direct photons and dileptons) can be isolated by the subtraction of the decay products from the total yield.
By now it has been well established, that in order to describe the experimentally measured direct photon spectra in heavy ion collisions, it is essential to include the in-medium radiation. Specifically, the medium-induced photon radiation dominates over the prompt photon production at low and intermediate transverse momenta [5]. Beyond providing an additional source of photon production, the anisotropic expansion of the QGP correlates the photons to the collective flow of hadrons. This results in the well-known measurement of a non-zero photon elliptic flow [6; 7], i.e., the second Fourier component in the |
2305.17585 | Curious multisection identities by index factorization | This manuscript introduces a general multisection identity expressed
equivalently in terms of infinite double products and/or infinite double
series, from which several new product or summation identities involving
special functions including Gamma, hyperbolic trigonometric, polygamma, zeta
and Jacobi theta functions, are derived. It is shown that a parameterized
version of this multisection identity exists, a specialization of which
coincides with the standard multisection identity. | C. Vignat, M. Milgram | 2023-05-27T21:54:25Z | http://arxiv.org/abs/2305.17585v2 | # Curious multisection identities by index factorization
###### Abstract.
This manuscript introduces a general multisection identity expressed equivalently in terms of infinite double products and/or infinite double series, from which several new product or summation identities involving special functions including Gamma, hyperbolic trigonometric, polygamma, zeta and Jacobi theta functions, are derived. It is shown that a parameterized version of this multisection identity exists, a specialization of which coincides with the standard multisection identity.
## 1. Introduction
In the recent article [1], the second author derived the curious identity
\[\prod_{j\geq 1}\left(\frac{\tan\left(\frac{a}{2^{j}}\right)}{\frac{a}{2^{j}}}\right)^{2^{j-1}}=\frac{a}{\sin a} \tag{1.1}\]
along with several other identities of the same flavor. Upon closer inspection, the first author noticed that this identity is the specialization of a more general relationship that can be stated as the even more curious identity
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j}}}{a_{(2n).2^{j}}}\right)^{2^{j}}=\prod_{m\geq 1}a_{m} \tag{1.2}\]
that holds for any sequence of entities \(\{a_{m}\}\) such that the infinite product \(\prod_{m\geq 1}a_{m}\) is convergent. In (1.2) we have explicitly written each index \(m\) as the product of the two components of its factored form, that is, \(m=(2n).2^{j}\) or \(m=(2n-1).2^{j}\). The identity (1.1) is the specialization \(a_{m}=\left(1-\frac{a^{2}}{m^{2}\pi^{2}}\right)^{-1}\) of (1.2) ( see (3.38) and (8.23) below). Identity (1.2) can be viewed as a _structural identity_: it holds as a consequence of one way the terms are grouped by the components of the factorization of their index in the product, rather than of the specific values of the individual elements \(a_{m}\).
Other examples of structural identities are:
* for an arbitrary sequence \(\{a_{m}\}\,,\) (1.3) \[\prod_{m\geq 1}a_{2m-1}a_{2m}=\prod_{m\geq 1}a_{m},\] which is a simple application of the dissection principle;
* for an arbitrary summable double sequence \(\{a_{m,n}\}_{(m,n)\in\mathbb{Z}^{2}}\), (1.4) \[\sum_{(m,n)\in\mathbb{Z}^{2}}a_{m,n}=\sum_{m<n}a_{m,n}+\sum_{m>n}a_{m,n}+\sum _{m\in\mathbb{Z}}a_{m,m}\] corresponding to an obvious multisection of the two-dimensional integer lattice. For similar additive rather than multiplicative multisections, see [2] and [3].
In the following, each extension or specialization of (1.2) will be provided along with its double series equivalent expression; in the case of (1.2), this is
\[\sum_{j\geq 0,n\geq 1}2^{j}b_{(2n-1).2^{j}}-2^{j}b_{(2n).2^{j}}=\sum_{m\geq 1}b _{m}, \tag{1.5}\]
where \(\{b_{m}\}\) is an arbitrary summable sequence.
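Identity (1.5) is easy to test numerically. The following short Python sketch is illustrative only: the sequence \(b_{m}=1/m^{2}\) and the truncation orders are arbitrary choices, and the truncated double sum is compared with \(\sum_{m\geq 1}1/m^{2}=\pi^{2}/6\):

```python
import math

b = lambda m: 1.0/m**2          # any summable sequence works; 1/m^2 is an arbitrary choice

J, N = 20, 20_000               # truncation orders for j and n
lhs = sum(2**j*(b((2*n - 1)*2**j) - b(2*n*2**j))
          for j in range(J) for n in range(1, N + 1))
print(lhs, math.pi**2/6)        # the two values agree to within the truncation error
```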
The remainder of this work is subdivided into a series of sections, the first of which, Section 2, develops the underlying abstractions and notation, followed by Section 3 that generalizes the previous abstractions with a series of specializations and examples. In the following two Sections 4 and 5 the groundwork is laid for an extension of this work, firstly into the realm of finite multisections and secondly into the field of multiply indexed entities, where a specialized application is used to prove a relationship between Lambert series. In the next Section 6 it is shown how a generating function approach is relevant; this is followed by Section 7 where an application involving
the q-calculus is developed. Section 8 presents a large number of examples that involve special functions as special cases, followed by Section 9 where two Gamma function identities are presented. Section 10 summarizes the paper and suggests directions for future investigation.
## 2. **Abstractions and Notation**
Let us now introduce some further notation:
* the \(b-\)_valuation_\(\nu_{b}\left(m\right)\) of a positive integer \(m\) is defined as the integer (2.1) \[\nu_{b}\left(m\right)=\max\left\{k\in\mathbb{N}:m=0\mod b^{k}\right\},\] representing the largest exponent in the factorization of the integer \(m\) relative to an integer \(b\), (not to be confused with any of the element(s) \(b_{m}\),) which will be referred to as the _base_. For example:
* \(\nu_{2}(3)=0\), because \(3=3.2^{0}\) is not an integral multiple of \(2\);
* \(\nu_{3}(6)=1\) because \(1\) is the largest exponent of \(3\) in the factorization \(6=2.3\);
* \(\nu_{5}(100)=2\) because of the decomposition \(100=4.5^{2}\) relative to base \(5\)
* and \(\nu_{2}(100)=2\) because of the decomposition \(100=25.2^{2}\) relative to base \(2\). Notice that as a consequence of this definition, for any positive integer \(m\), there is a unique representation
\[m=k.b^{\nu_{b}\left(m\right)},\ b\nmid k. \tag{2.2}\]
* for a multiset \(S\) that contains \(j\) occurrences of the integer \(m\), we use the notation (2.3) \[S=\{\ldots,m^{\left\{j\right\}},\ldots\},\] the notation \(m^{\left\{0\right\}}\) meaning by convention that the integer \(m\) does not appear in the multiset \(S\).
All identities in this article are based on the following principle: define the multisets \(C_{b}\) and \(D_{b}\) as
\[C_{b}=\cup_{0<k<b}\{(bn-k)\,.b^{j},n\geq 1,j\geq 0\} \tag{2.4}\]
and
\[D_{b}=\{(bn)\,.b^{j},n\geq 1,j\geq 0\}. \tag{2.5}\]
For example, in the case \(b=2\), \(C_{2}\) collects all integers, and
\[D_{2}=\{2,4,4,6,8,8,8,10,12,12,14,16,16,16,16,\ldots\}=\{2,4^{\left\{2\right\} \right\},6,8^{\left\{3\right\}},10,12^{\left\{2\right\}},14,16^{\left\{4\right\} },\ldots\}. \tag{2.6}\]
In the case \(b=3\), \(C_{3}\) collects all integers and
\[D_{3}=\{3,6,9^{\left\{2\right\}},12,15,18^{\left\{2\right\}},21,24,27^{\left\{ 3\right\}},\ldots\}. \tag{2.7}\]
Defining the multiset
\[E_{b}=\{m^{\left\{\nu_{b}\left(m\right)\right\}},m\in\mathbb{N}\}, \tag{2.8}\]
we compute
\[E_{2}=\{m^{\left\{\nu_{2}\left(m\right)\right\}},m\in\mathbb{N}\}=\{2,4^{ \left\{2\right\}},6,8^{\left\{3\right\}},10,12^{\left\{2\right\}},14,16^{ \left\{4\right\}},\ldots\} \tag{2.9}\]
and
\[E_{3}=\{m^{\left\{\nu_{3}\left(m\right)\right\}},m\in\mathbb{N}\}=\{3,6,9^{ \left\{2\right\}},12,15,18^{\left\{2\right\}},21,24,27^{\left\{3\right\}}, \ldots\}, \tag{2.10}\]
and it appears that \(D_{2}=E_{2}\) and \(D_{3}=E_{3}\) while\({}^{1}\)\(C_{2}=C_{3}=\mathbb{N}\).
Footnote 1: Notice that the equality \(C_{b}=\mathbb{N}\) is a simple consequence of the fact that each set \(\{(bn-k)\,.b^{j},n\geq 1,j\geq 0\}\) is the set of integers having, in their base \(b\) representation, \(j\) trailing zeros and their \(j+1\)st digit equal to \(b-k\). Hence these sets form a partition of \(\mathbb{N}\).
This result is in fact true for any base \(b\): as will be shown in Section 3.1, for an arbitrary base \(b\geq 2\),
\[C_{b}=\mathbb{N}\text{ and }D_{b}=E_{b}. \tag{2.11}\]
More precisely, any integer \(m\geq 1\) appears once in \(C_{b}\) (i.e. for \(j=\nu_{b}\left(m\right)\)) and \(\nu_{b}\left(m\right)\) times\({}^{2}\) in \(D_{b}\) (i.e. once for each value of \(j\) such that \(0\leq j\leq\nu_{b}\left(m\right)-1\)).
Footnote 2: if \(\nu_{b}\left(m\right)=0\), this means that \(m\) does not appear in the multiset \(D_{b}\)
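A direct computational check of (2.11) (illustrative only; the bases and the cutoff are arbitrary choices) can be performed by counting multiplicities:

```python
from collections import Counter

def nu(b, m):
    # b-valuation of m, Eq. (2.1): the largest k such that b^k divides m
    k = 0
    while m % b == 0:
        m //= b
        k += 1
    return k

def multisets(b, M, J=40):
    C, D = Counter(), Counter()
    for j in range(J):
        for n in range(1, M + 1):
            for k in range(1, b):
                if (b*n - k)*b**j <= M:
                    C[(b*n - k)*b**j] += 1    # elements of C_b, Eq. (2.4)
            if (b*n)*b**j <= M:
                D[(b*n)*b**j] += 1            # elements of D_b, Eq. (2.5)
    return C, D

for b in (2, 3, 5):
    C, D = multisets(b, 1000)
    assert all(C[m] == 1 for m in range(1, 1001))          # C_b = N: each m appears once
    assert all(D[m] == nu(b, m) for m in range(1, 1001))   # D_b = E_b: m appears nu_b(m) times
print("C_b = N and D_b = E_b verified up to 1000 for b = 2, 3, 5")
```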
As a consequence, for two arbitrary functions \(\varphi\) and \(\chi\) such that the following (creatively chosen) infinite products exist, we have the general formula (see (3.25) below)
\[\prod_{j\geq 0,n\geq 1}\left(\prod_{k=1}^{b-1}a_{(nb-k)\,.b^{j}}^{\varphi(j)}\right) a_{(nb)\,.b^{j}}^{\chi(j)}=\prod_{m\geq 1}a_{m}^{\varphi(\nu_{b}\left(m\right))+\sum_{k=0}^{ \nu_{b}\left(m\right)-1}\chi(k)} \tag{2.12}\]
whose validity follows from the equality of the collected exponent of each element \(a_{m}\) on both sides of the identity and the fact that multiplication is associative. Since this identity is structural, i.e. a consequence of the fact that the two multisets of summation or multiplication indices \(C_{b}\cup D_{b}\) and \(\mathbb{N}\cup E_{b}\) coincide, it also translates into a sum form\({}^{3}\): for an arbitrary sequence \(b_{m}\) such that the following sums exist, it is given by (see (3.26) below)
Footnote 3: this remark allows us to obtain a sum form without assuming that \(b_{m}=\log(a_{m})\)
\[\sum_{j\geq 0,n\geq 1}\left(\left(\sum_{k=1}^{b-1}\varphi\left(j\right)b_{(nb-k),b^{j}}\right)+\chi\left(j\right)b_{(nb),b^{j}}\right)=\sum_{m\geq 1}\left( \varphi\left(\nu_{b}\left(m\right)\right)+\sum_{k=0}^{\nu_{b}\left(m\right)-1} \chi\left(k\right)\right)b_{m}. \tag{2.13}\]
Different choices of the functions \(\varphi\) and \(\chi\), for example such that
\[\varphi\left(\nu_{b}\left(m\right)\right)+\sum_{k=0}^{\nu_{b}\left(m\right)-1 }\chi\left(k\right)=1, \tag{2.14}\]
produce some of the identities that will be studied in this article.
We close this introduction by noting that: (i) the product form
\[\prod_{n\in C_{b}\cup D_{b}}a_{n}=\prod_{n\in\mathbb{N}\cup E_{b}}a_{n} \tag{2.15}\]
and the sum form
\[\sum_{n\in C_{b}\cup D_{b}}b_{n}=\sum_{n\in\mathbb{N}\cup E_{b}}b_{n} \tag{2.16}\]
in formulas (2.12) and (2.13) respectively can be replaced by any symmetric form, such as for example
\[\sum_{\begin{subarray}{c}n_{1}<n_{2}\\ n_{1},n_{2}\in C_{b}\cup D_{b}\end{subarray}}b_{n_{1}}b_{n_{2}}=\sum_{\begin{subarray}{c}n_{1}<n_{2}\\ n_{1},n_{2}\in\mathbb{N}\cup E_{b}\end{subarray}}b_{n_{1}}b_{n_{2}} \tag{2.17}\]
and (ii) at a fundamental level we are effectively introducing a form of multiplicative telescoping\({}^{4}\) similar to that noted in [1, Eq. (2.40)]. For another example see Remark 6.1 below.
Footnote 4: cancellation between different elements in a product
## 3. **Generalizations**
Playing with (1.2) suggested the following generalization.
**Proposition 3.1**.: For an arbitrary value \(q\in\mathbb{C}\) and for an arbitrary sequence \(\{a_{m}\}\) such that \(\prod_{m\geq 1}a_{m}\) exists, we have
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1),2^{j}}}{a_{(2n),2^{j}}^{q-1}} \right)^{q^{j}}=\prod_{m\geq 1}a_{m} \tag{3.1}\]
and its series version: for an arbitrary sequence \(\{b_{m}\}\) such that \(\sum_{m\geq 1}b_{m}\) exists,
\[\sum_{j\geq 0,n\geq 1}q^{j}b_{(2n-1),2^{j}}-q^{j}\left(q-1\right)b_{(2n),2^{j} }=\sum_{m\geq 1}b_{m}. \tag{3.2}\]
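Before turning to the proof, identity (3.2) can be sanity-checked numerically for a non-integer value of \(q\); in the sketch below (illustrative only) the choices \(b_{m}=1/m^{2}\), \(q=3/2\) and the truncation orders are arbitrary, and with this choice of \(b_{m}\) the double sum converges absolutely for \(|q|<4\):

```python
import math

b = lambda m: 1.0/m**2          # arbitrary summable sequence, so that sum_m b_m = pi^2/6
q = 1.5                         # arbitrary parameter value

J, N = 30, 20_000
lhs = sum(q**j*b((2*n - 1)*2**j) - q**j*(q - 1)*b(2*n*2**j)
          for j in range(J) for n in range(1, N + 1))
print(lhs, math.pi**2/6)        # both sides of (3.2) agree within the truncation error
```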
A proof of this identity is provided in Section 3.1. A proof in the particular case \(b_{m}=t^{m}\) can be found in Section 6. Specializations of identity (3.1) are given next.
**Example 3.1.1**.: For \(q=0\), identity (3.1) reduces to the usual dissection identity
\[\prod_{n\geq 1}a_{2n-1}a_{2n}=\prod_{n\geq 1}a_{n}. \tag{3.3}\]
Proof.: Notice that
\[q^{j}=\begin{cases}1&\text{if }j=0\\ 0&\text{else}\end{cases} \tag{3.4}\]
so that the left-hand side reduces to
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1),2^{j}}}{a_{(2n),2^{j}}^{q-1}} \right)^{q^{j}}=\prod_{n\geq 1}\frac{a_{(2n-1)}}{a_{2n}^{-1}}=\prod_{n\geq 1}a_{2n-1 }a_{2n}=\prod_{n\geq 1}a_{n}. \tag{3.5}\]
**Remark 3.1**.: The fact that we recover the usual dissection identity for \(q=0\) shows that identity (3.1) can be considered as a parameterized extension of the usual dissection formula.
**Example 3.1.2**.: The case \(q=1\) of identity (3.1) produces
\[\prod_{j\geq 1,n\geq 1}a_{(2n-1).2^{j}}=\prod_{n\geq 1}a_{2n}. \tag{3.6}\]
Proof.: Start from
\[\prod_{m\geq 1}a_{m}=\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j }}}{a_{2n.2^{j}}^{q-1}}\right)^{q^{j}} =\prod_{j\geq 0,n\geq 1}a_{(2n-1).2^{j}}\] \[=\prod_{n\geq 1}a_{2n-1}\left(\prod_{j\geq 1,n\geq 1}a_{(2n-1).2^{j }}\right) \tag{3.7}\]
from which we deduce
\[\prod_{j\geq 1,n\geq 1}a_{(2n-1).2^{j}}=\prod_{n\geq 1}a_{2n}. \tag{3.8}\]
This result is interpreted as follows: consider an arbitrary even number \(m\). By (2.2), there is a unique way to write
\[m=p.2^{\nu_{2}(m)} \tag{3.9}\]
with \(p\) an odd number: first compute its valuation \(\nu_{2}\left(m\right)\) according to (2.1) and then consider
\[p=\frac{m}{2^{\nu_{2}(m)}} \tag{3.10}\]
which by definition is an odd number.
**Example 3.1.3**.: In the case \(q=-1\), (3.1) produces
\[\prod_{j\geq 0,n\geq 1}\frac{a_{(2n-1).2^{2j}}a_{2n.2^{2j}}^{2}}{a_{(2n-1).2^{2j+1}}a_{2n.2^{2j+1}}^{2}}=\prod_{m\geq 1}a_{m}. \tag{3.11}\]
Proof.: For \(q=-1\), we have
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j}}}{a_{2n.2^{j}}^{q-1}} \right)^{q^{j}}=\prod_{j\geq 0,n\geq 1}\left(a_{(2n-1).2^{j}}a_{2n.2^{j}}^{2} \right)^{\left(-1\right)^{j}}. \tag{3.12}\]
Separating the terms with even values of \(j\) (with a \((+1)\) exponent) from those with odd values of \(j\) (with a \((-1)\) exponent) produces the result.
The previous results can be extended to the base \(3\) case as follows.
**Corollary 3.1.1**.: _For any \(q\in\mathbb{C},\)_
\[\prod_{n\geq 1,j\geq 0}\left(\frac{a_{(3n-1).3^{j}}a_{(3n-2).3^{j}}}{a_{3n.3^{j} }^{q-1}}\right)^{q^{j}}=\prod_{m\geq 1}a_{m} \tag{3.13}\]
_The case \(q=1\) produces_
\[\prod_{n\geq 1,j\geq 0}a_{(3n-1).3^{j}}a_{(3n-2).3^{j}}=\prod_{m\geq 1}a_{m} \tag{3.14}\]
_from which we deduce the identity_
\[\prod_{n\geq 1,j\geq 1}a_{(3n-1).3^{j}}a_{(3n-2).3^{j}}=\prod_{m\geq 1}a_{3m}. \tag{3.15}\]
_The case \(q=0\) produces, as previously, the usual multisection identity_
\[\prod_{n\geq 1}a_{3n}a_{3n-1}a_{3n-2}=\prod_{m\geq 1}a_{m}. \tag{3.16}\]
The arbitrary base \(b\geq 2\) case follows.
**Proposition 3.2**.: For any integer \(b\geq 2\) and any \(q\in\mathbb{C}\),
\[\prod_{n\geq 1,j\geq 0}\left(\frac{a_{(bn-1).b^{j}}a_{(bn-2).b^{j}}\dots a_{(bn-(b-1)).b^{j}}}{a_{bn.b^{j}}^{q-1}}\right)^{q^{j}}=\prod_{m\geq 1}a_{m} \tag{3.17}\]
The case \(q=1\) produces
\[\prod_{n\geq 1,j\geq 0}a_{(bn-1).b^{j}}a_{(bn-2).b^{j}}\dots a_{(bn-(b-1)).b^{j}}=\prod_{m\geq 1}a_{m} \tag{3.18}\]
from which we deduce the identity
\[\prod_{n\geq 1,j\geq 1}a_{(bn-1).b^{j}}a_{(bn-2).b^{j}}\dots a_{(bn-(b-1)).b^{j}}=\prod_{m\geq 1}a_{bm}. \tag{3.19}\]
The case \(q=0\) produces, as previously, the usual multisection identity
\[\prod_{n\geq 1}a_{bn}a_{bn-1}a_{bn-2}\dots a_{bn-(b-1)}=\prod_{m\geq 1}a_{m}. \tag{3.20}\]
### A general case
A general version of identity (1.2) is given next.
**Proposition 3.3**.: For two functions \(\varphi\) and \(\chi\), and with \(\nu_{2}\left(m\right)\) being the \(2-\)valuation of \(m\), assuming that the infinite products are convergent, then
\[\prod_{j\geq 0,n\geq 1}a_{(2n-1).2^{j}}^{\varphi(j)}a_{(2n).2^{j}}^{\chi(j)}=\prod_{m\geq 1}a_{m}^{\varphi(\nu_{2}\left(m\right))+\sum_{k=0}^{\nu_{2}\left(m\right)-1}\chi\left(k\right)} \tag{3.21}\]
or equivalently
\[\sum_{j\geq 0,n\geq 1}\varphi\left(j\right)b_{(2n-1).2^{j}}+\chi\left(j\right)b_{(2n).2^{j}}=\sum_{m\geq 1}\left(\varphi\left(\nu_{2}\left(m\right)\right)+\sum_{k=0}^{\nu_{2}\left(m\right)-1}\chi\left(k\right)\right)b_{m}. \tag{3.22}\]
The base \(3\) case is
\[\prod_{j\geq 0,n\geq 1}a_{(3n-2).3^{j}}^{\varphi(j)}a_{(3n-1).3^{j}}^{\varphi(j)}a_ {(3n).3^{j}}^{\chi(j)}=\prod_{m\geq 1}a_{m}^{\varphi(\nu_{3}\left(m\right))+ \sum_{k=0}^{\nu_{3}\left(m\right)-1}\chi\left(k\right)} \tag{3.23}\]
or
\[\sum_{j\geq 0,n\geq 1}\varphi\left(j\right)b_{(3n-2).3^{j}}+\varphi\left(j\right)b_{(3n-1).3^{j}}+\chi\left(j\right)b_{(3n).3^{j}}=\sum_{m\geq 1}\left(\varphi\left(\nu_{3}\left(m\right)\right)+\sum_{k=0}^{\nu_{3}\left(m\right)-1}\chi\left(k\right)\right)b_{m}. \tag{3.24}\]
and the arbitrary base \(b\) case, with \(b\geq 2\), is
\[\prod_{j\geq 0,n\geq 1}\left(\prod_{k=1}^{b-1}a_{(nb-k).b^{j}}^{\varphi(j)}\right)a_{(nb).b^{j}}^{\chi(j)}=\prod_{m\geq 1}a_{m}^{\varphi(\nu_{b}\left(m\right))+\sum_{k=0}^{\nu_{b}\left(m\right)-1}\chi\left(k\right)} \tag{3.25}\]
or equivalently
\[\sum_{j\geq 0,n\geq 1}\left(\left(\sum_{k=1}^{b-1}\varphi\left(j\right)b_{(nb-k).b^{j}}\right)+\chi\left(j\right)b_{(nb).b^{j}}\right)=\sum_{m\geq 1}\left(\varphi\left(\nu_{b}\left(m\right)\right)+\sum_{k=0}^{\nu_{b}\left(m\right)-1}\chi\left(k\right)\right)b_{m}. \tag{3.26}\]
Proof.: The term \(a_{m}\) appears once in the sequence
\[\left\{a_{(2n-1).2^{j}}\right\}_{j\geq 0,n\geq 1} \tag{3.27}\]
for \(j=\nu_{2}\left(m\right)\) and appears \(\nu_{2}\left(m\right)\) times in the sequence
\[\left\{a_{(2n).2^{j}}\right\}_{j\geq 0,n\geq 1} \tag{3.28}\]
for \(j=0,1,\dots,\nu_{2}\left(m\right)-1\) successively and therefore appears with equal cumulative exponent (or coefficient) on both sides of (3.25) or (3.26) respectively. More generally, the term \(a_{m}\) appears once in the sequence
\[\left\{a_{(bn-k).b^{j}}\right\}_{j\geq 0,n\geq 1} \tag{3.29}\]
for \(j=\nu_{b}\left(m\right)\) and appears \(\nu_{b}\left(m\right)\) times in the sequence
\[\left\{a_{\left(bn\right).b^{j}}\right\}_{j\geq 0,n\geq 1} \tag{3.30}\]
for \(j=0,1,\ldots,\nu_{b}\left(m\right)-1\) successively.
**Example 3.3.1**.: Here are a few specializations of the previous formula
1. In the case \(\varphi\left(j\right)=\chi\left(j\right)=j,\) (3.31) \[\prod_{j\geq 0,n\geq 1}\left(a_{\left(2n-1\right).2^{j}}a_{\left(2n \right).2^{j}}\right)^{j}=\prod_{m\geq 1}a_{m}^{\frac{\nu_{2}\left(m\right) \left(\nu_{2}\left(m\right)+1\right)}{2}}\] or equivalently (3.32) \[\sum_{j\geq 0,n\geq 1}j\left(b_{\left(2n-1\right).2^{j}}+b_{\left(2n \right).2^{j}}\right)=\sum_{m\geq 1}\frac{\nu_{2}\left(m\right)\left(\nu_{2} \left(m\right)+1\right)}{2}b_{m}.\]
2. In the case \(\varphi\left(j\right)=j,\chi\left(j\right)=2j,\) (3.33) \[\prod_{j\geq 0,n\geq 1}\left(a_{\left(2n-1\right).2^{j}}a_{\left(2n \right).2^{j}}^{2}\right)^{j}=\prod_{m\geq 1}a_{m}^{\nu_{2}^{2}\left(m\right)}\] or equivalently (3.34) \[\sum_{j\geq 0,n\geq 1}j\left(b_{\left(2n-1\right).2^{j}}+2b_{\left(2n \right).2^{j}}\right)=\sum_{m\geq 1}\nu_{2}^{2}\left(m\right)b_{m}.\]
3. In the case \(\varphi\left(j\right)=\chi\left(j\right)=1,\) (3.35) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2^{j}}a_{\left(2n\right).2^{j}}= \prod_{m\geq 1}a_{m}^{\nu_{2}\left(m\right)+1}\] or equivalently (3.36) \[\sum_{j\geq 0,n\geq 1}b_{\left(2n-1\right).2^{j}}+b_{\left(2n\right).2^{j}}= \sum_{m\geq 1}\left(\nu_{2}\left(m\right)+1\right)b_{m}.\]
4. The case \(\varphi\left(j\right)=q^{j},\chi\left(j\right)=\left(1-q\right)q^{j}\) corresponds to the parameterized identity (3.37) \[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{\left(2n-1\right).2^{j}}}{a_{\left(2n \right).2^{j}}^{q-1}}\right)^{q^{j}}=\prod_{m\geq 1}a_{m}\] which is (3.1), and the case \(\varphi\left(j\right)=2^{j},\chi\left(j\right)=-2^{j}\) corresponds to the original identity (3.38) \[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{\left(2n-1\right).2^{j}}}{a_{\left(2n \right).2^{j}}}\right)^{2^{j}}=\prod_{m\geq 1}a_{m}\] which is (1.2).
5. A last case is the choice \(\chi\left(j\right)=j^{2p}\) for an integer \(p\geq 1\) and \(\varphi\left(j\right)=-\frac{B_{2p+1}\left(j\right)}{2p+1}\) with \(B_{2p+1}\left(x\right)\) the Bernoulli polynomial of degree \(2p+1,\) defined by the generating function \(\sum_{n\geq 0}B_{n}(x)\frac{z^{n}}{n!}=\frac{z\,e^{xz}}{e^{z}-1}.\) This choice produces the identity (3.39) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2^{j}}^{\frac{B_{2p+1}\left(j\right)}{2p+1}}=\prod_{j\geq 0,n\geq 1}a_{\left(2n\right).2^{j}}^{j^{2p}}\] or (3.40) \[\sum_{j\geq 0,n\geq 1}\frac{B_{2p+1}\left(j\right)}{2p+1}b_{\left(2n-1\right).2^{j}}=\sum_{j\geq 0,n\geq 1}j^{2p}b_{\left(2n\right).2^{j}}\]
**Example 3.3.2**.: Some additional interesting examples are:
1. the case \(\varphi\left(j\right)=-j^{2}+j+1\), \(\chi\left(j\right)=2j\) produces (3.41) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2^{j}}^{-j^{2}+j+1}a_{\left(2n\right).2^{j}}^{2j}=\prod_{m\geq 1}a_{m}\] or (3.42) \[\sum_{n\geq 1,j\geq 0}\left(-j^{2}+j+1\right)b_{\left(2n-1\right).2^{j}}+\left(2j\right)b_{\left(2n\right).2^{j}}=\sum_{m\geq 1}b_{m}.\]
2. the telescoping choice \(\varphi\left(k\right)=\frac{1}{k+1}\), \(\chi\left(k\right)=\frac{1}{\left(k+1\right)\left(k+2\right)}\) produces (3.43) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2^{j}}^{\frac{1}{j+1}}a_{\left(2n\right).2^{j}}^{\frac{1}{\left(j+1\right)\left(j+2\right)}}=\prod_{m\geq 1}a_{m}\] or (3.44) \[\sum_{j\geq 0,n\geq 1}\frac{1}{j+1}b_{\left(2n-1\right).2^{j}}+\frac{1}{\left(j+1\right)\left(j+2\right)}b_{\left(2n\right).2^{j}}=\sum_{m\geq 1}b_{m}\]
3. the base 3 case of the previous identity is (3.45) \[\prod_{j\geq 0,n\geq 1}a_{\left(3n-1\right).3^{j}}^{\frac{1}{j+1}}\,a_{\left(3n-2\right).3^{j}}^{\frac{1}{j+1}}\,a_{\left(3n\right).3^{j}}^{\frac{1}{\left(j+1\right)\left(j+2\right)}}=\prod_{m\geq 1}a_{m}\] or (3.46) \[\sum_{j\geq 0,n\geq 1}\frac{1}{j+1}b_{\left(3n-1\right).3^{j}}+\frac{1}{j+1}b_{\left(3n-2\right).3^{j}}+\frac{1}{\left(j+1\right)\left(j+2\right)}b_{\left(3n\right).3^{j}}=\sum_{m\geq 1}b_{m}\]
4. the choice \(\varphi\left(k\right)=1-\frac{k\left(k+3\right)}{4\left(k+1\right)\left(k+2\right)}\), \(\chi\left(k\right)=\frac{1}{\left(k+1\right)\left(k+2\right)\left(k+3\right)}\) produces (3.47) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2^{j}}^{1-\frac{j\left(j+3\right)}{4\left(j+1\right)\left(j+2\right)}}a_{\left(2n\right).2^{j}}^{\frac{1}{\left(j+1\right)\left(j+2\right)\left(j+3\right)}}=\prod_{m\geq 1}a_{m}\] or (3.48) \[\sum_{j\geq 0,n\geq 1}\left(1-\frac{j\left(j+3\right)}{4\left(j+1\right)\left(j+2\right)}\right)b_{\left(2n-1\right).2^{j}}+\frac{1}{\left(j+1\right)\left(j+2\right)\left(j+3\right)}b_{\left(2n\right).2^{j}}=\sum_{m\geq 1}b_{m}\]
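The exponent combinations \(\varphi\left(\nu\right)+\sum_{k=0}^{\nu-1}\chi\left(k\right)\) claimed in Examples 3.3.1 and 3.3.2 can be verified exactly with rational arithmetic; the following short Python sketch (illustrative only, not part of the text) checks the first values of \(\nu\) for each pair:

```python
from fractions import Fraction as Fr

# pairs (phi, chi, claimed exponent as a function of nu) from Examples 3.3.1 and 3.3.2
pairs = [
    (lambda j: Fr(j),            lambda j: Fr(j),                          lambda v: Fr(v*(v + 1), 2)),  # (3.31)
    (lambda j: Fr(j),            lambda j: Fr(2*j),                        lambda v: Fr(v*v)),           # (3.33)
    (lambda j: Fr(1),            lambda j: Fr(1),                          lambda v: Fr(v + 1)),         # (3.35)
    (lambda j: Fr(-j*j + j + 1), lambda j: Fr(2*j),                        lambda v: Fr(1)),             # (3.41)
    (lambda j: Fr(1, j + 1),     lambda j: Fr(1, (j + 1)*(j + 2)),         lambda v: Fr(1)),             # (3.43)
    (lambda j: 1 - Fr(j*(j + 3), 4*(j + 1)*(j + 2)),
                                 lambda j: Fr(1, (j + 1)*(j + 2)*(j + 3)), lambda v: Fr(1)),             # (3.47)
]
for phi, chi, claimed in pairs:
    assert all(phi(v) + sum(chi(k) for k in range(v)) == claimed(v) for v in range(30))
print("exponent identities verified for nu = 0, ..., 29")
```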
**Remark 3.2**.: We close this section with two specializations
* the specialization \(\chi\left(k\right)=0\) produces generating functions for the valuation function \(\nu_{b}(n)\) of the type (3.49) \[\prod_{j\geq 0,n\geq 1}\left(\prod_{k=1}^{b-1}a_{\left(nb-k\right),b^{j}}^{ \varphi\left(j\right)}\right)=\prod_{m\geq 1}a_{m}^{\varphi\left(\nu_{b}(m) \right)}\] or (3.50) \[\sum_{j\geq 0,n\geq 1}\varphi\left(j\right)\sum_{k=1}^{b-1}b_{\left(nb-k \right),b^{j}}=\sum_{m\geq 1}\varphi\left(\nu_{b}\left(m\right)\right)b_{m};\] for example, in the case \(b=2\), (3.51) \[\prod_{j\geq 0,n\geq 1}a_{\left(2n-1\right).2j}^{\varphi\left(j\right)}=\prod_{m \geq 1}a_{m}^{\varphi\left(\nu_{2}\left(m\right)\right)}\] or equivalently (3.52) \[\sum_{j\geq 0,n\geq 1}\varphi\left(j\right)b_{\left(2n-1\right).2j}=\sum_{m \geq 1}\varphi\left(\nu_{2}\left(m\right)\right)b_{m}.\]
* the choice \(b_{m}=t^{m}\) in the previous identity produces a generating function for the sequence \(\left\{\varphi\left(\nu_{2}\left(m\right)\right)\right\}\) in the form (3.53) \[\sum_{j\geq 0}\varphi\left(j\right)\frac{t^{2^{j}}}{1-t^{2^{j+1}}}=\sum_{m \geq 1}\varphi\left(\nu_{2}\left(m\right)\right)t^{m}\]
* the choice \(b_{m}=\frac{1}{m^{s}}\) produces the identity between Dirichlet series (3.54) \[\left(1-2^{-s}\right)\zeta\left(s\right)\sum_{j\geq 0}\frac{\varphi\left(j \right)}{2^{sj}}=\sum_{m\geq 1}\frac{\varphi\left(\nu_{2}\left(m\right)\right)}{m^{s}};\] choosing \(\varphi\left(j\right)=\cos\left(2\pi jx\right)\) produces (3.55) \[\sum_{m\geq 1}\frac{\cos\left(2\pi x\nu_{2}\left(m\right)\right)}{m^{s}}= \frac{2^{s}-1}{2^{s+1}}\frac{\cos\left(2\pi x\right)-2^{s}}{\cos\left(2\pi x \right)-\frac{2^{s}+2^{-s}}{2}}\zeta\left(s\right).\] The specializations \(x=\frac{1}{4}\) and \(x=\frac{1}{2}\) successively produce, for \(s>1\), the identities (3.56) \[\sum_{m\geq 1}\frac{\cos\left(\frac{\pi}{2}\nu_{2}\left(m\right)\right)}{m^{s} }=\frac{4^{s}-2^{s}}{4^{s}+1}\zeta\left(s\right)\] and (3.57) \[\sum_{m\geq 1}\frac{\left(-1\right)^{\nu_{2}\left(m\right)}}{m^{s}}= \frac{2^{s}-1}{2^{s}+1}\zeta\left(s\right),\] an identity that should be compared to the Dirichlet series (see entry A007814 in [4]) (3.58) \[\sum_{m\geq 1}\frac{\nu_{2}(m)}{m^{s}}=\frac{\zeta(s)}{2^{s}-1}\]
* the specialization \(\varphi\left(k\right)=0\) produces identities of the type (3.59) \[\prod_{j\geq 0,n\geq 1}a_{(nb).b^{j}}^{\chi\left(j\right)}=\prod_{m\geq 1}a_{m}^{\sum_{k=0}^{\nu_{b}\left(m\right)-1}\chi\left(k\right)}\] or (3.60) \[\sum_{j\geq 0,n\geq 1}\chi\left(j\right)b_{(bn).b^{j}}=\sum_{m\geq 1}\left(\sum_{k=0}^{\nu_{b}\left(m\right)-1}\chi\left(k\right)\right)b_{m};\] for example, if \(b=2\), (3.61) \[\prod_{j\geq 0,n\geq 1}a_{(2n).2^{j}}^{\chi\left(j\right)}=\prod_{m\geq 1}a_{m}^{\sum_{k=0}^{\nu_{2}\left(m\right)-1}\chi\left(k\right)}\] or equivalently (3.62) \[\sum_{j\geq 0,n\geq 1}\chi\left(j\right)b_{(2n).2^{j}}=\sum_{m\geq 1}\left(\sum_{k=0}^{\nu_{2}\left(m\right)-1}\chi\left(k\right)\right)b_{m}.\] The specialization \(b_{m}=\frac{1}{m^{s}}\) and \(\chi\left(j\right)=t^{j}\) yields the Dirichlet series (3.63) \[\sum_{m\geq 1}\frac{t^{\nu_{2}\left(m\right)}}{m^{s}}=\left(\frac{2^{s}-1}{2^{s}-t}\right)\zeta\left(s\right),\] the special case \(t=-1\) of which reduces to (3.57) while the case \(t=e^{2\pi ix}\) reduces to (3.55).
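Both specializations are easy to test numerically. The following sketch, in plain Python with truncated sums and arbitrarily chosen parameter values, checks (3.53) with \(\varphi(j)=j^{2}\) and (3.63) with \(s=2\), taking \(\zeta(2)=\pi^{2}/6\) for the right-hand side; it is an illustration only, not part of the derivation.

```python
# Numerical sanity check of (3.53) and (3.63); truncated sums, illustrative parameters.
import math

def nu2(m):
    """2-adic valuation of the positive integer m."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

# (3.53) with phi(j) = j^2 and t = 0.3
t = 0.3
phi = lambda j: j * j
lhs = sum(phi(j) * t**(2**j) / (1 - t**(2**(j + 1))) for j in range(20))
rhs = sum(phi(nu2(m)) * t**m for m in range(1, 400))
print(lhs, rhs)      # the two values agree (about 0.1237)

# (3.63) with s = 2 and t = 1/2, using zeta(2) = pi^2/6
s, t = 2, 0.5
lhs = sum(t**nu2(m) / m**s for m in range(1, 10**6))
rhs = (2**s - 1) / (2**s - t) * math.pi**2 / 6
print(lhs, rhs)      # agreement to roughly five digits
```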
**Remark 3.3**.: Identity (3.57) is in fact easy to recover directly from the observation that \(\nu_{2}(m)=0\) for \(m\) odd, while \(\nu_{2}(m)=j\) in the factorization \(m:=(2n-1).2^{j}.\) Employing this fact allows the odd terms of the sum in (3.55) to be evaluated, leading to
\[\sum_{m=1}^{\infty}\!\!\frac{\cos(2\,\pi\,x\,\nu_{2}(2\,m))}{\left(2\,m\right)^ {s}}=-\frac{\left(\cos(2\,\pi\,x)\left(2^{s}-1\right)-1+2^{-s}\right)\zeta(s) }{2\,\cos(2\,\pi\,x)\,2^{s}-4^{s}-1}. \tag{3.64}\]
Finally, take the limit \(s\to 1\), with \(x=1/6\) to find
\[\sum_{m=1}^{\infty}\!\!\frac{\cos(\pi\,\nu_{2}\left(2\,m\right)/3)}{m}=\frac{ \ln(2)}{3}\,. \tag{3.65}\]
## 4. **A finite version**
A sum or product over a finite range for the index \(j\) of (3.51) is stated next.
**Proposition 4.1**.: For any integer \(J\geq 1,\) the following identity holds
\[\prod_{j\geq J,n\geq 1}a_{(2n-1).2^{j}}^{\varphi(j)}=\prod_{p\geq 1}a_{p.2^{J}}^{ \varphi(J+\nu_{2}(p))}. \tag{4.1}\]
Proof.: Choosing an integer \(J\geq 1\) and replacing the function \(\varphi\left(j\right)\) with its truncated version
\[\varphi_{J}\left(j\right)=\begin{cases}\varphi\left(j\right)&j\geq J\\ 0&\text{else}\end{cases} \tag{4.2}\]
in (3.51) produces the identity
\[\prod_{j\geq J,n\geq 1}a_{(2n-1).2^{j}}^{\varphi(j)}=\prod_{p\geq 1}a_{p.2^{J}}^{\varphi\left(\nu_{2}\left(p.2^{J}\right)\right)}. \tag{4.3}\]
Indeed, using
\[\prod_{j\geq 0,n\geq 1}a_{(2n-1).2^{j}}^{\varphi_{J}\left(j\right)}=\prod_{m \geq 1}a_{m}^{\varphi_{J}\left(\nu_{2}\left(m\right)\right)} \tag{4.4}\]
we look for the values of the index \(m\) such that \(\nu_{2}\left(m\right)\geq J,\) which are exactly
\[m=2^{J}p,\text{ }p\in\mathbb{N} \tag{4.5}\]
so that
\[\prod_{j\geq J,n\geq 1}a_{(2n-1).2^{j}}^{\varphi(j)}=\prod_{p\geq 1}a_{p.2^{J}}^{\varphi\left(\nu_{2}\left(p.2^{J}\right)\right)}. \tag{4.6}\]
Since moreover \(\nu_{2}\left(p.2^{J}\right)=J+\nu_{2}\left(p\right),\) we deduce the result.
**Corollary 4.1.1**.: _The specialization \(\varphi\left(j\right)=1\) produces_
\[\prod_{j\geq J,n\geq 1}a_{(2n-1).2^{j}}=\prod_{p\geq 1}a_{p.2^{J}} \tag{4.7}\]
_or equivalently_
\[\prod_{0\leq j<J,n\geq 1}a_{(2n-1).2^{j}}=\prod_{p\geq 1,p\neq 0\mod 2^{J}}a_{p} \tag{4.8}\]
_or its sum version_
\[\sum_{0\leq j<J,n\geq 1}b_{(2n-1).2^{j}}=\sum_{p\geq 1,p\neq 0\mod 2^{J}}b_{p}\,. \tag{4.9}\]
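With \(b_{m}=t^{m}\), both sides of (4.9) reduce to elementary geometric-type sums, so the identity can be tested directly. The sketch below, with the arbitrary choices \(J=3\), \(t=0.7\) and generous truncations, is purely illustrative.

```python
# Numerical check of (4.9) with b_m = t**m; truncations chosen so the tails are negligible.
J, t = 3, 0.7
lhs = sum(t**((2 * n - 1) * 2**j) for j in range(J) for n in range(1, 200))
rhs = sum(t**p for p in range(1, 400) if p % 2**J != 0)
print(lhs, rhs)   # both equal t/(1-t) - t**8/(1-t**8), about 2.2722
```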
## 5. **The case of a double-indexed sequence**
**Proposition 5.1**.: Consider a double-indexed sequence \(\left\{a_{p,q}\right\}\). We have
\[\prod_{k\geq 0,m\geq 1}\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},(2m-1).2^{k}}a_{(2n).2^{j},(2m).2^{k}}}{a_{(2n).2^{j},(2m-1).2^{k}}a_{(2n-1).2^{j},(2 m).2^{k}}}\right)^{2^{j+k}}=\prod_{p\geq 1,q\geq 1}a_{p,q}. \tag{5.1}\]
The sum version is, for an arbitrary sequence \(\left\{b_{p,q}\right\},\)
\[\sum_{j,k\geq 0}\sum_{m,n\geq 1}2^{j+k}\left[b_{(2n-1).2^{j},(2m-1).2^{k}}+b_{(2 n).2^{j},(2m).2^{k}}-b_{(2n).2^{j},(2m-1).2^{k}}-b_{(2n-1).2^{j},(2m).2^{k}} \right]=\sum_{p,q\geq 1}b_{p,q}. \tag{5.2}\]
Proof.: For a fixed value of \(q\) we have
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},q}}{a_{(2n).2^{j},q}} \right)^{2^{j}}=\prod_{p\geq 1}a_{p,q}. \tag{5.3}\]
Denote
\[B_{q}=\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},q}}{a_{(2n).2^{j},q}} \right)^{2^{j}} \tag{5.4}\]
so that
\[\prod_{q\geq 1}B_{q}=\prod_{p\geq 1,q\geq 1}a_{p,q}. \tag{5.5}\]
Using
\[\prod_{q\geq 1}B_{q}=\prod_{k\geq 0,m\geq 1}\left(\frac{B_{(2m-1).2^{k}}}{B_{(2m).2^{k}}}\right)^{2^{k}} \tag{5.6}\]
we deduce
\[\prod_{p,q\geq 1}a_{p,q} =\prod_{k\geq 0,m\geq 1}\left(\frac{\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},(2m-1).2^{k}}}{a_{(2n).2^{j},(2m-1).2^{k}}}\right)^{2^{j}}}{\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},(2m).2^{k}}}{a_{(2n).2^{j},(2m).2^{k}}}\right)^{2^{j}}}\right)^{2^{k}}\] \[=\prod_{k\geq 0,m\geq 1}\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j},(2m-1).2^{k}}\,a_{(2n).2^{j},(2m).2^{k}}}{a_{(2n).2^{j},(2m-1).2^{k}}\,a_{(2n-1).2^{j},(2m).2^{k}}}\right)^{2^{j+k}} \tag{5.7}\]
This result is now used to produce a multisection identity between Lambert series.
**Corollary 5.1.1**.: _The two Lambert series_
\[f\left(q\right)=\sum_{n\geq 1}n\frac{q^{n}}{1-q^{n}}\text{ and }g\left(q\right)=\sum_{n\geq 1}n \frac{q^{n}}{1+q^{n}} \tag{5.8}\]
_are related as_
\[f\left(q\right)=\sum_{j,k\geq 0}2^{2j+k}\left[g\left(q^{2^{j+k}}\right)-4g \left(q^{2^{j+k+1}}\right)\right]. \tag{5.9}\]
Proof.: Writing the Lambert series \(f\) as the double sum
\[f\left(q\right)=\sum_{m,n}nq^{nm},\qquad|q|<1, \tag{5.10}\]
the dissection formula (5.2) yields
\[f\left(q\right) =\sum_{j,k\geq 0}\sum_{n,m\geq 1}2^{j+k}\left[\left(2n-1\right)2^{j }q^{(2n-1).2^{j},(2m-1).2^{k}}-\left(2n-1\right)2^{j}q^{(2n-1).2^{j},(2m).2^ {k}}\right.\] \[+\left.\left(2n\right).2^{j}q^{(2n).2^{j}.(2m).2^{k}}-\left(2n \right)2^{j}q^{(2n).2^{j}.(2m-1).2^{k}}\right]. \tag{5.11}\]
Each of the four sums over \(n,m\geq 1\) is computed as follows
\[\sum_{n,m\geq 1}\left(2n-1\right)q^{(2n-1).2^{j}.(2m-1).2^{k}}=\sum_{n\geq 1} \left(2n-1\right)\frac{q^{(2n-1)2^{j+k}}}{1-q^{(2n-1)2^{j+k+1}}}, \tag{5.12}\]
\[\sum_{n,m\geq 1}\left(2n-1\right)q^{(2n-1).2^{j}.(2m).2^{k}}=\sum_{n\geq 1} \left(2n-1\right)\frac{q^{(2n-1)2^{j+k+1}}}{1-q^{(2n-1)2^{j+k+1}}}, \tag{5.13}\]
\[\sum_{n,m\geq 1}\left(2n\right)q^{(2n).2^{j}.(2m).2^{k}}=\sum_{n\geq 1}2n\frac{q^{ n.2^{j+k+2}}}{1-q^{n.2^{j+k+2}}}, \tag{5.14}\]
\[\sum_{n,m\geq 1}\left(2n\right)q^{(2n).2^{j}.(2m-1).2^{k}}=\sum_{n\geq 1} \left(2n\right)\frac{q^{n.2^{j+k+1}}}{1-q^{n.2^{j+k+2}}}. \tag{5.15}\]
With the notation \(q_{j,k}=q^{2^{j+k}},\) this yields
\[f\left(q\right) =\sum_{j,k\geq 0}2^{2j+k}\sum_{n\geq 1}\left[\left(2n-1\right)\frac{q _{j,k}^{2n-1}-q_{j,k}^{2\left(2n-1\right)}}{1-q_{j,k}^{2\left(2n-1\right)}}+ \left(2n\right)\frac{q_{j,k}^{4n}-q_{j,k}^{2n}}{1-q_{j,k}^{4n}}\right]\] \[=\sum_{j,k\geq 0}2^{2j+k}\sum_{n\geq 1}\left[\left(2n-1\right)\frac{q _{j,k}^{2n-1}}{1+q_{j,k}^{2n-1}}-\left(2n\right)\frac{q_{j,k}^{2n}}{1+q_{j,k}^ {2n}}\right]. \tag{5.16}\]
This inner sum coincides with the difference between the odd part \(h_{o}\) and the even part \(h_{e}\) of the function \(h\left(q\right)=\sum_{n\geq 1}n\frac{q_{j,k}^{n}}{1+q_{j,k}^{n}},\) this difference being also equal to \(h\left(q\right)-2h_{e}\left(q\right),\) i.e. to
\[\sum_{n\geq 1}\left[n\frac{q_{j,k}^{n}}{1+q_{j,k}^{n}}-4n\frac{q_{j,k}^{2n}}{1+q _{j,k}^{2n}}\right], \tag{5.17}\]
so that finally
\[f\left(q\right) =\sum_{j,k\geq 0}2^{2j+k}\sum_{n\geq 1}\left[n\frac{q^{n.2^{j+k}}}{ 1+q^{n.2^{j+k}}}-4n\frac{q^{n.2^{j+k+1}}}{1+q^{n.2^{j+k+1}}}\right]\] \[=\sum_{j,k\geq 0}2^{2j+k}\left(g\left(q^{2^{j+k}}\right)-4g \left(q^{2^{j+k+1}}\right)\right). \tag{5.18}\]
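A direct numerical check of (5.9) is straightforward. The sketch below truncates both Lambert series at a few hundred terms and the double sum at \(j,k<8\), which is ample for the arbitrary value \(q=0.6\); it is offered only as an illustration.

```python
# Numerical check of Corollary 5.1.1, eq. (5.9); truncations chosen generously.
def f(q, N=400):
    return sum(n * q**n / (1 - q**n) for n in range(1, N))

def g(q, N=400):
    return sum(n * q**n / (1 + q**n) for n in range(1, N))

q = 0.6
lhs = f(q)
rhs = sum(2**(2 * j + k) * (g(q**(2**(j + k))) - 4 * g(q**(2**(j + k + 1))))
          for j in range(8) for k in range(8))
print(lhs, rhs)   # the printed values agree
```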
We notice that the previous result can be extended in a straightforward way to the following family of Lambert series; we skip the details.
**Corollary 5.1.2**.: _For an arbitrary real number \(\mu\geq 1,\) the two Lambert series_
\[f_{\mu}\left(q\right)=\sum_{n\geq 1}n^{\mu}\frac{q^{n}}{1-q^{n}}\text{ and }g_{\mu}\left(q\right)=\sum_{n\geq 1}n^{\mu}\frac{q^{n}}{1+q^{n}} \tag{5.19}\]
_are related as_
\[f_{\mu}\left(q\right)=\sum_{j,k\geq 0}2^{\left(\mu+1\right)j+k}\left[g_{\mu}\left(q^{2^{j+k}}\right)-2^{\mu+1}g_{\mu}\left(q^{2^{j+k+1}}\right)\right]. \tag{5.20}\]
## 6. **A generating functional approach**
### The base 2 case
#### 6.1.1. Two proofs of Teixeira's identity
The series version of (1.2) applied to the case \(a_{m}=t^{m}\) is
\[\sum_{m\geq 1}t^{m}=\sum_{k\geq 0,n\geq 1}2^{k}t^{\left(2n-1\right)2^{k}}-2^{k}t^ {\left(2n\right)2^{k}}\qquad|t|<1. \tag{6.1}\]
Summing over \(n\) in the right hand-side and using the simplification
\[\sum_{k\geq 0}2^{k}\frac{t^{2^{k}}\left(1-t^{2^{k}}\right)}{1-t^{2^{k+1}}}=\sum_ {k\geq 0}2^{k}\frac{t^{2^{k}}}{1+t^{2^{k}}} \tag{6.2}\]
produces
\[\frac{t}{1-t}=\sum_{k\geq 0}2^{k}\frac{t^{2^{k}}}{1+t^{2^{k}}}. \tag{6.3}\]
This identity is well-known and appears in Problem 10, Chapter II of [5, p. 118], where it is attributed to F.G. Teixeira [6]. We produce here two proofs of Teixeira's identity (6.3).
First proof.: A simple proof of (6.3) is as follows: define
\[\varphi\left(t\right)=\frac{t}{1-t}, \tag{6.4}\]
notice that this function satisfies
\[\varphi\left(t\right)+\varphi\left(-t\right)=2\varphi\left(t^{2}\right) \tag{6.5}\]
and iterate this identity (see [1, Appendix A] for convergence details).
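Teixeira's identity is also easy to confirm numerically; the following short check (plain Python, arbitrary value \(t=0.8\)) truncates the right-hand side, whose terms decay doubly exponentially.

```python
# Quick numerical confirmation of Teixeira's identity (6.3).
t = 0.8
lhs = t / (1 - t)
rhs = sum(2**k * t**(2**k) / (1 + t**(2**k)) for k in range(40))
print(lhs, rhs)   # both equal 4.0 up to rounding
```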
Second proof.: Another proof appeals to the theory of Lambert series: first, recall the definition of the Dirichlet \(\eta\) function
\[\eta\left(s\right)=\sum_{n\geq 1}\frac{\left(-1\right)^{n-1}}{n^{s}}=\left(1-2^ {1-s}\right)\zeta\left(s\right). \tag{6.6}\]
Then consider the following result provided in [7, Problem 11, p.300].
**Proposition** (Borwein).: _Given two sequences \(\left\{a_{n}\right\}\) and \(\left\{b_{n}\right\}\), then_
\[\eta\left(s\right)\sum_{n\geq 1}\frac{a_{n}}{n^{s}}=\sum_{n\geq 1}\frac{b_{n}}{n^ {s}} \tag{6.7}\]
_if and only if_
\[\sum_{n\geq 1}a_{n}\frac{x^{n}}{1+x^{n}}=\sum_{n\geq 1}b_{n}x^{n}. \tag{6.8}\]
Let us apply this result to the sequence \(\left\{a_{n}\right\}\) defined as
\[a_{n}=\begin{cases}2^{l}\text{ if }n=2^{l}\\ 0\text{ else }\end{cases} \tag{6.9}\]
to obtain, on the left-hand side of (6.8),
\[\sum_{n\geq 1}a_{n}\frac{x^{n}}{1+x^{n}}=\sum_{l\geq 0}2^{l}\frac{x^{2^{l}}}{1+x^{2^{l}}} \tag{6.10}\]
and, on the left-hand side of (6.7),
\[\eta(s)\sum_{n\geq 1}\frac{a_{n}}{n^{s}}=\eta(s)\sum_{l\geq 0}\frac{1}{2^{l(s-1) }}=\zeta(s). \tag{6.11}\]
We deduce from (6.7) that
\[\sum_{n\geq 1}\frac{b_{n}}{n^{s}}=\zeta(s) \tag{6.12}\]
so that \(b_{n}=1,\ n\geq 1\) and Teixeira's identity (6.3) follows.
**Remark 6.1**.: A simple interpretation of Teixeira's identity (6.3) (providing a simple analogue of the identities discussed here) follows:
\[\frac{t}{1-t}=\sum_{n\geq 1}t^{n} \tag{6.13}\]
is the generating function \(\sum_{n\geq 1}c_{n}t^{n}\) of the sequence \(\left\{c_{n}\right\}=\left\{1,1,\dots\right\}:\) each integer \(n\) is counted with a unit weight. The first term \(\left(j=0\right)\) in the right-hand side of Teixeira's identity (6.3) is
\[\frac{t}{1+t}=\sum_{n\geq 1}\left(-1\right)^{n}t^{n} \tag{6.14}\]
so that all even integers are weighted by \(+1\) but the odd ones are weighted by \(\left(-1\right).\) The extra terms
\[\sum_{j\geq 1}2^{j}\frac{t^{2^{j}}}{1+t^{2^{j}}} \tag{6.15}\]
will correct these negative weights so that the final sum has all positive unit weights. More precisely:
\[\frac{t}{1+t} =t-t^{2}+t^{3}-t^{4}+t^{5}-t^{6}+t^{7}-t^{8}+t^{9}-t^{10}+t^{11}-t^ {12}\] \[\quad+t^{13}-t^{14}+t^{15}-t^{16}+t^{17}-t^{18}+t^{19}-t^{20}+O \left(t^{21}\right) \tag{6.17}\] \[\frac{t}{1+t}+2\frac{t^{2}}{1+t^{2}} =t+t^{2}+t^{3}-3t^{4}+t^{5}+t^{6}+t^{7}-3t^{8}+t^{9}+t^{10}+t^{11}- 3t^{12}\] \[\quad+t^{13}+t^{14}+t^{15}-3t^{16}+t^{17}+t^{18}+t^{19}-3t^{20}+O \left(t^{21}\right)\] (6.18) \[\frac{t}{1+t}+2\frac{t^{2}}{1+t^{2}}+4\frac{t^{4}}{1+t^{4}} =t+t^{2}+t^{3}+t^{4}+t^{5}+t^{6}+t^{7}-7t^{8}+t^{9}+t^{10}+t^{11}+ t^{12}\] \[\quad+t^{13}+t^{14}+t^{15}-7t^{16}+t^{17}+t^{18}+t^{19}+t^{20}+O \left(t^{21}\right) \tag{6.16}\]
so that all powers of \(t\) in
\[\sum_{j=0}^{J}2^{j}\frac{t^{2^{j}}}{1+t^{2^{j}}} \tag{6.19}\]
have unit weight except those with power \(k2^{J+1},\ k\geq 1,\) which have weight \(1-2^{J+1}.\) As \(J\to\infty,\) only terms with unit weight remain.
#### 6.1.2. A generalization of Teixeira's identity
Now consider, for an arbitrary \(q\in\mathbb{C}\) such that \(q\neq 0,\) the functions
\[\varphi\left(z\right)=\frac{z}{1-z},\ \chi\left(z\right)=\frac{z-\left(q-1 \right)z^{2}}{1-z^{2}}; \tag{6.20}\]
they satisfy the identity
\[\frac{1}{q}\left(\varphi\left(z\right)-\chi\left(z\right)\right)=\varphi \left(z^{2}\right). \tag{6.21}\]
We deduce
\[\varphi\left(z\right)=\chi\left(z\right)+q\varphi\left(z^{2}\right)=\chi \left(z\right)+q\chi\left(z^{2}\right)+q^{2}\varphi\left(z^{4}\right)=\ldots \tag{6.22}\]
so that
\[\frac{z}{1-z} =\sum_{k\geq 0}q^{k}\chi\left(z^{2^{k}}\right)=\sum_{k\geq 0}q^{k} \frac{z^{2^{k}}-\left(q-1\right)z^{2^{k+1}}}{1-z^{2^{k+1}}}\] \[=\sum_{k\geq 0,n\geq 1}q^{k}\left(z^{\left(2n-1\right).2^{k}}- \left(q-1\right)z^{\left(2n\right).2^{k}}\right) \tag{6.23}\]
which is the special case \(a_{n}=z^{n}\) of the generalized identity
\[\sum_{n\geq 1}a_{n}=\sum_{k\geq 0,n\geq 1}q^{k}\left(a_{\left(2n-1\right).2^{k}} -\left(q-1\right)a_{\left(2n\right).2^{k}}\right). \tag{6.24}\]
### The arbitrary base \(b\) case
Consider the functions
\[\varphi\left(z\right)=\frac{z}{1-z},\ \chi\left(z\right)=\frac{z+z^{2}+\cdots+z^{b-1}- \left(q-1\right)z^{b}}{1-z^{b}}; \tag{6.25}\]
they satisfy
\[\varphi\left(z\right)-\chi\left(z\right)=q\varphi\left(z^{b}\right) \tag{6.26}\]
and, by iterating, we deduce
\[\varphi\left(z\right)=\chi\left(z\right)+q\chi\left(z^{b}\right)+q^{2}\chi \left(z^{b^{2}}\right)+\cdots=\sum_{j\geq 0}q^{j}\chi\left(z^{b^{j}}\right) \tag{6.27}\]
so that
\[\frac{z}{1-z} =\sum_{j\geq 0}q^{j}\frac{z^{b^{j}}}{1-z^{b^{j+1}}}+q^{j}\frac{z^{2.b^{j}}}{1-z^{b^{j+1}}}+\cdots+q^{j}\frac{z^{\left(b-1\right).b^{j}}}{1-z^{b^{j+1}}}-\left(q-1\right)q^{j}\frac{z^{b.b^{j}}}{1-z^{b^{j+1}}}\] \[=\sum_{j\geq 0,n\geq 1}q^{j}z^{\left(bn-1\right).b^{j}}+q^{j}z^{\left(bn-2\right).b^{j}}+\cdots+q^{j}z^{\left(bn-\left(b-1\right)\right).b^{j}}-\left(q-1\right)q^{j}z^{\left(bn\right).b^{j}}. \tag{6.28}\]
## 7. **A q-calculus application**
This section produces a \(q-\)calculus application of our main identity.
**Proposition 7.1**.: With the \(q-\)Pochhammer symbol
\[\left(a;q\right)_{\infty}=\prod_{k\geq 0}\left(1-aq^{k}\right), \tag{7.1}\]
we have the identities, for \(q\in\mathbb{C}\) such that \(\left|q\right|<1\),
\[\prod_{j\geq 0}\left(\frac{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}{ \left(q^{2^{j+1}};q^{2^{j+1}}\right)_{\infty}}\right)^{2^{j}}=\left(q;q\right)_ {\infty}, \tag{7.2}\]
and more generally, for any complex number \(p\),
\[\prod_{j\geq 0}\left(\frac{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}{ \left(q^{2^{j+1}};q^{2^{j+1}}\right)_{\infty}^{p-1}}\right)^{p^{j}}=\left(q;q \right)_{\infty} \tag{7.3}\]
and
\[\prod_{j\geq 0}\left(\frac{\left(q^{3^{j}};q^{3^{j+1}}\right)_{\infty} \left(q^{2.3^{j}};q^{3^{j+1}}\right)_{\infty}}{\left(q^{3^{j+1}};q^{3^{j+1}} \right)_{\infty}^{p-1}}\right)^{p^{j}}=\left(q;q\right)_{\infty}. \tag{7.4}\]
The special case \(p=1\) produces respectively
\[\prod_{j\geq 0}\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}=\left(q;q\right)_{\infty} \tag{7.5}\]
and
\[\prod_{j\geq 0}\left(q^{3^{j}};q^{3^{j+1}}\right)_{\infty}\left(q^{2.3^{j}};q^{ 3^{j+1}}\right)_{\infty}=\left(q;q\right)_{\infty}. \tag{7.6}\]
Proof.: In the identity
\[\prod_{n\geq 1,j\geq 0}\left(\frac{a_{\left(2n-1\right),2^{j}}}{a_{2n.2^{j}}} \right)^{2^{j}}=\prod_{n\geq 1}a_{n}, \tag{7.7}\]
set \(a_{n}=1-q^{n}\) so that the right-hand side is \(\left(q;q\right)_{\infty}\). Moreover,
\[\prod_{n\geq 1,j\geq 0}\left(1-q^{\left(2n-1\right).2^{j}}\right)=\prod_{n\geq 0,j\geq 0}\left(1-q^{\left(2n+1\right).2^{j}}\right)=\prod_{j\geq 0}\left(q^{2^{j}};q ^{2^{j+1}}\right)_{\infty} \tag{7.8}\]
and
\[\prod_{n\geq 1,j\geq 0}\left(1-q^{2n.2^{j}}\right)=\prod_{n\geq 0,j\geq 0}\left(1 -q^{\left(2n+2\right).2^{j}}\right)=\prod_{j\geq 0}\left(q^{2^{j+1}};q^{2^{j+1}} \right)_{\infty} \tag{7.9}\]
so that
\[\prod_{j\geq 0}\left(\frac{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}{ \left(q^{2^{j+1}};q^{2^{j+1}}\right)_{\infty}}\right)^{2^{j}}=\left(q;q\right) _{\infty}. \tag{7.10}\]
From the more general case (3.1), we deduce
\[\prod_{j\geq 0}\left(\frac{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}{ \left(q^{2^{j+1}};q^{2^{j+1}}\right)_{\infty}^{p-1}}\right)^{p^{j}}=\left(q;q \right)_{\infty}. \tag{7.11}\]
For example, \(p=3\) produces
\[\prod_{j\geq 0}\left(\frac{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}{ \left(q^{2^{j+1}};q^{2^{j+1}}\right)_{\infty}^{2}}\right)^{3^{j}}=\left(q;q \right)_{\infty}. \tag{7.12}\]
In the base 3 case, using (3.13) we have
\[\prod_{n\geq 1,j\geq 0}\left(1-q^{(3n-1).3^{j}}\right)=\prod_{n\geq 0,j\geq 0} \left(1-q^{(3n+2).3^{j}}\right)=\prod_{j\geq 0}\left(q^{2.3^{j}};q^{3^{j+1}}\right)_{\infty} \tag{7.13}\]
and
\[\prod_{n\geq 1,j\geq 0}\left(1-q^{(3n-2).3^{j}}\right)=\prod_{n\geq 0,j\geq 0}\left(1-q^{(3n+1).3^{j}}\right)=\prod_{j\geq 0}\left(q^{3^{j}};q^{3^{j+1}}\right)_{\infty} \tag{7.14}\]
and
\[\prod_{n\geq 1,j\geq 0}\left(1-q^{3n.3^{j}}\right)=\prod_{n\geq 0,j\geq 0} \left(1-q^{(3n+3).3^{j}}\right)=\prod_{j\geq 0}\left(q^{3^{j+1}};q^{3^{j+1}} \right)_{\infty} \tag{7.15}\]
so that
\[\prod_{j\geq 0}\left(\frac{\left(q^{3^{j}};q^{3^{j+1}}\right)_{\infty} \left(q^{2.3^{j}};q^{3^{j+1}}\right)_{\infty}}{\left(q^{3^{j+1}};q^{3^{j+1}} \right)_{\infty}^{p-1}}\right)^{p^{j}}=\left(q;q\right)_{\infty}. \tag{7.16}\]
This result can be extended to the more general case of the \(q-\)Pochhammer \(\left(a;q\right)_{\infty}\) defined in (7.1) by considering the sequence
\[a_{n}=1-aq^{n-1},\ n\geq 1. \tag{7.17}\]
**Proposition 7.2**.: The following multisection formula holds:
\[\prod_{j\geq 0}\left(\frac{\left(aq^{2^{j}-1};q^{2^{j+1}}\right)_{\infty}}{\left(aq^{2^{j+1}-1};q^{2^{j+1}}\right)_{\infty}}\right)^{2^{j}}=\left(a;q\right)_{\infty}. \tag{7.18}\]
Proof.: We have
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a_{(2n-1).2^{j}}}{a_{(2n).2^{j}}}\right)^{2^{j}} =\prod_{j\geq 0,n\geq 1}\left(\frac{1-aq^{(2n-1).2^{j}-1}}{1-aq^{(2n).2^{j}-1}}\right)^{2^{j}}\] \[=\prod_{j\geq 0}\left(\frac{\prod_{n\geq 0}\left(1-aq^{(2n+1).2^{j}-1}\right)}{\prod_{n\geq 0}\left(1-aq^{(2n+2).2^{j}-1}\right)}\right)^{2^{j}}\] \[=\prod_{j\geq 0}\left(\frac{\prod_{n\geq 0}\left(1-aq^{2^{j}-1}q^{n.2^{j+1}}\right)}{\prod_{n\geq 0}\left(1-aq^{2^{j+1}-1}q^{n.2^{j+1}}\right)}\right)^{2^{j}}\] \[=\prod_{j\geq 0}\left(\frac{\left(aq^{2^{j}-1};q^{2^{j+1}}\right)_{\infty}}{\left(aq^{2^{j+1}-1};q^{2^{j+1}}\right)_{\infty}}\right)^{2^{j}}. \tag{7.19}\]
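Both (7.5) and (7.18) can be spot-checked with truncated products. The sketch below uses plain Python, truncates every \(q\)-Pochhammer symbol at 200 factors, and picks the arbitrary values \(q=0.5\) and \(a=0.3\); it is illustrative only.

```python
# Truncated-product check of (7.5) and (7.18); illustrative values only.
def qp(a, q, N=200):
    """Truncated q-Pochhammer symbol (a; q)_infinity."""
    out = 1.0
    for k in range(N):
        out *= 1.0 - a * q**k
    return out

q = 0.5
lhs = 1.0
for j in range(20):
    lhs *= qp(q**(2**j), q**(2**(j + 1)))
print(lhs, qp(q, q))                      # check of (7.5)

a = 0.3
lhs = 1.0
for j in range(20):
    lhs *= (qp(a * q**(2**j - 1), q**(2**(j + 1)))
            / qp(a * q**(2**(j + 1) - 1), q**(2**(j + 1))))**(2**j)
print(lhs, qp(a, q))                      # check of (7.18)
```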
**Remark 7.1**.: A combinatorial interpretation of identity (7.5) is as follows: rewrite (7.5) as
\[\frac{1}{\left(q;q\right)_{\infty}}=\prod_{j\geq 0}\frac{1}{\left(q^{2^{j}};q^{2^ {j+1}}\right)_{\infty}}. \tag{7.20}\]
The left-hand side is the generating function for the number of partitions \(p(n)\) of the integer \(n\). More generally, the infinite product
\[\prod_{i\in I}\frac{1}{1-q^{i}} \tag{7.21}\]
is the generating function for the number of partitions \(p_{I}(n)\) of \(n\) with parts belonging to the set \(I.\) Hence
\[\frac{1}{\left(q^{2^{j}};q^{2^{j+1}}\right)_{\infty}}=\prod_{i\in S_{j}}\frac{1}{1-q^{i}} \tag{7.22}\]
is the generating function for the number of partitions of \(n\) with parts in the set \(S_{j}=\{2^{j}+\mathbb{N}.2^{j+1}\}\). It is easy to check that the set of subsets \(\{S_{j}\}_{j\geq 0}\) forms a partition of \(\mathbb{N}\), i.e.
\[\cup_{j\geq 0}S_{j}=\mathbb{N}\text{ and }S_{j}\cap S_{k}=\varnothing,\ j\neq k. \tag{7.23}\]
In fact, the partition \(\cup_{j\geq 0}S_{j}=\mathbb{N}\) has an elementary interpretation in base \(2\): since \(S_{j}=\{n\in\mathbb{N}:\nu_{2}(n)=j\}\), this partition decomposes the set of integers \(n\) according to their \(2-\)valuation.
## 8. **Applications**
We begin with a simple application of Example (3.1). Let
\[a(m)=1+x^{2}/m^{2} \tag{8.1}\]
so that
\[\prod_{j\geq 0,n\geq 1}\left(\frac{a((2n-1).2^{j})}{(a(2n).2^{j})^{q-1}} \right)^{q^{j}}=\prod_{m=1}^{\infty}(1+x^{2}/m^{2})=\sinh(\pi x)/x\pi \tag{8.2}\]
which, after substitution in the series equivalent case (3.2), becomes
\[\sum_{j\geq 0,n\geq 1}q^{j}\left(\ln(1+\frac{x^{2}}{(2n-1)^{2}2^{2j}})-(q-1) \ln(1+\frac{x^{2}}{2^{2j+2}n^{2}})\right)=\ln(\frac{\sinh(\pi x)}{x\pi})\,. \tag{8.3}\]
### By differentiation
It is interesting to differentiate (8.3) once with respect to \(x\), and, since there is no overall \(q\) dependence, letting \(q=1\) solves the double summation
\[\sum_{j\geq 0,n\geq 1}1/\big{(}n(n-1)2^{2+2j}+x^{2}+4^{j}\big{)}=(\pi x\coth( \pi x)-1)/2x^{2}. \tag{8.4}\]
However, since the inner sum is known [8], [9], that is
\[\sum_{n=1}^{\infty}1/(n(n-1)2^{2+2j}+x^{2}+4^{j})=\pi\tanh(\pi x/2^{(1+j)})/ \big{(}2^{j+2}x\big{)} \tag{8.5}\]
we eventually reproduce a listed (telescoping-based) identity [10, Eq. (43.6.4)]
\[\sum_{j=0}^{\infty}\tanh(x/2^{1+j})/2^{j}=2(x\coth(x)-1)/x. \tag{8.6}\]
Continuing, by differentiating (8.3) with respect to \(q\) and letting \(q=1\), we obtain
\[\sum_{j\geq 0,n\geq 1}j\ln(1+\frac{x}{2^{2j}\,(2n-1)^{2}})=\sum_{j\geq 0,n\geq 1 }\ln(1+\frac{x}{n^{2}\,2^{2j+2}}). \tag{8.7}\]
Again, differentiating (8.7) with respect to \(x\) yields the transformation
\[\sum_{j=1}^{\infty}\!\!\frac{j}{2^{j}}\tanh(\frac{x}{2^{1+j}})=\frac{2}{x}\! \sum_{j=0}^{\infty}(\frac{x}{2^{1+j}}\coth(\frac{x}{2^{1+j}})-1). \tag{8.8}\]
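Both (8.6) and (8.8) converge quickly and can be verified in a few lines. The sketch below evaluates them at the arbitrary point \(x=2\) using only the standard library (writing \(\coth y\) as \(1/\tanh y\)); it is a check, not part of the argument.

```python
# Numerical check of (8.6) and (8.8) at x = 2; illustrative only.
import math

x = 2.0

lhs = sum(math.tanh(x / 2**(1 + j)) / 2**j for j in range(60))
rhs = 2 * (x / math.tanh(x) - 1) / x
print(lhs, rhs)                      # check of (8.6)

lhs = sum(j * math.tanh(x / 2**(1 + j)) / 2**j for j in range(1, 60))
rhs = (2 / x) * sum((x / 2**(1 + j)) / math.tanh(x / 2**(1 + j)) - 1
                    for j in range(60))
print(lhs, rhs)                      # check of (8.8)
```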
### By power series expansion
In the following, extensive use is made of the identity [11]
\[\prod_{k=1}^{\infty}\!\left(1+\left(\frac{x}{k+b}\right)^{n}\right)=\frac{ \Gamma(1+b)^{n}}{b^{n}+x^{n}}\!\prod_{k=1}^{n}\!\!\frac{1}{\Gamma\left(b-x\, \mathrm{e}^{\mathrm{i}\,\pi(2\,k+1)/n}\right)}\,, \tag{8.9}\]
valid for \(x\in\mathbb{C}\). See also [12, Eq (27)]. Consider the case
\[a(m)=1-x^{k}/m^{k},\hskip 28.452756ptm>1 \tag{8.10}\]
and recall for example, the simple identity
\[\ln\!\left(1-\frac{x^{k}}{\left(\left(3n-2\right)3^{j}\right)^{k}}\right)=-\sum \limits_{p=1}^{\infty}\frac{1}{p}\left(\frac{x^{k}}{\left(\left(3n-2\right)3^{ j}\right)^{k}}\right)^{p},\qquad|x|<3. \tag{8.11}\]
From (3.15) and (8.11) we have, after the (convergent) sums are reordered,
\[\sum\limits_{j\geq 1,n\geq 1}\left(\ln\!\left(a\!\left(\left(3n-1 \right).3^{j}\right)\right)+\ln\!\left(a\!\left(\left(3n-2\right).3^{j} \right)\right)\right)\] \[\qquad\qquad=\sum\limits_{p\geq 1,j\geq 1}\frac{\left(x/3^{j} \right)^{kp}}{p}\!\sum\limits_{n=1}^{\infty}\!\left(\left(\frac{1}{\left(3n-1 \right)^{k}}\right)^{p}+\left(\frac{1}{\left(3n-2\right)^{k}}\right)^{p} \right)\quad|x|<3. \tag{8.12}\]
The sum indexed by \(j\) is trivially evaluated, and the sum indexed by \(n\) is recognizable as a series representation of a special case of the Hurwitz Zeta function \(\zeta\left(s,a\right)\equiv\overset{\infty}{\underset{n=0}{\sum}}1/(n+a)^{s}\), leading to the identity
\[\sum\limits_{p=1}^{\infty}\!\!\frac{x^{kp}}{\left(3^{kp}-1\right)p}\left(\zeta\left(kp,\frac{2}{3}\right)+\zeta\left(kp,\frac{1}{3}\right)\right)=\ln\!\left(-x^{k}\overset{k-1}{\underset{j=0}{\prod}}\!\Gamma\!\left(-x\,{\rm e}^{2i\pi j/k}\right)\right),\quad|x|<1, \tag{8.13}\]
where the right-hand side arises from (8.9), and we have replaced \(x:=3x\). Because \(k\) and \(p\) are both integers and \(k>1\), (8.13) can also be rewritten using [13, Eq. 25.11.12] as
\[\sum\limits_{p=1}^{\infty}\!\!\frac{\left(\psi\!\left(k\,p-1,\frac{2}{3} \right)+\psi\!\left(k\,p-1,\frac{1}{3}\right)\right)\left(-x\right)^{kp}}{ \left(3^{k\,p}-1\right)\Gamma\!\left(k\,p+1\right)}=\frac{1}{k}\ln\left(-x^{k} \overset{k-1}{\underset{j=0}{\prod}}\!\Gamma\!\left(-x\,{\rm e}^{\frac{2\,i \,\pi\,j}{k}}\right)\right)\qquad|x|<1, \tag{8.14}\]
where \(\psi(n,x)\) is the polygamma function. For the case (8.13) using \(k=2\), we find
\[\sum\limits_{p=1}^{\infty}\!\!\frac{x^{2\,p}\left(\psi\!\left(2\,p-1,\frac{2} {3}\right)+\psi\!\left(2\,p-1,\frac{1}{3}\right)\right)}{\left(3^{2\,p}-1\right) \Gamma\!\left(2\,p+1\right)}=\frac{1}{2}\ln\!\left(\frac{x\,\pi}{\sin\left( \pi\,x\right)}\right)\qquad|x|<1, \tag{8.15}\]
for \(k=3\) we obtain
\[\sum\limits_{p=1}^{\infty}\!\!\frac{\left(\psi\!\left(3\,p-1,\frac{2}{3} \right)+\psi\!\left(3\,p-1,\frac{1}{3}\right)\right)\left(-1\right)^{p}\,x^{3 \,p}}{\left(3^{3\,p}-1\right)\Gamma\!\left(3\,p+1\right)}=\frac{1}{3}\ln\! \left(x^{2}\,\left|\Gamma\!\left(-\frac{i\,\sqrt{3}\,x}{2}+\frac{x}{2}\right) \right|^{2}\Gamma(1-x)\right) \tag{8.16}\]
and by setting \(x:=i\,x\) and \(k:=2k\) in (8.14) we discover
\[\sum\limits_{p=1}^{\infty}\!\!\frac{\left(-1\right)^{p}\,x^{2\,k\,p}\left(\psi \!\left(2\,k\,p-1,\frac{2}{3}\right)+\psi\!\left(2\,k\,p-1,\frac{1}{3}\right) \right)}{\Gamma\!\left(2\,k\,p+1\right)\left(3^{2\,k\,p}-1\right)}=\frac{\ln \!\left(x^{2\,k\,\prod\limits_{j=0}^{2\,k\,\prod\limits_{j=0}^{2\,k-1}}\Gamma\! \left(-i\,x\,{\rm e}^{\frac{i\,\pi\,j}{k}}\right)\right)}}{2\,k} \tag{8.17}\]
corresponding to the related case
\[a(m)=1+x^{k}/m^{k},\qquad\quad m>1\,. \tag{8.18}\]
Similarly let \(x:=ix\) and \(k:=2k+1\), to find
\[\sum\limits_{p=1}^{\infty}\!\!\frac{\left(\psi\!\left(\left(2\,k+1 \right)p-1,\frac{2}{3}\right)+\psi\!\left(\left(2\,k+1\right)p-1,\frac{1}{3} \right)\right)x^{\left(2\,k+1\right)p}\,{\rm e}^{-\frac{i\,\pi\,p}{2}}\left(-1 \right)^{k\,p}}{\left(3^{2\,k+1\right)p}-1\right)\Gamma\!\left((2\,k+1\right)p +1)}=\frac{1}{2}\ln\!\left(\frac{x\,\pi}{\sinh\left(\pi\,x\right)}\right)\,. \tag{8.19}\]
Choosing \(k=1\) in (8.17), or \(k=0\) in the real part of (8.19), gives
\[\sum_{p=1}^{\infty}\!\!\frac{\left(-1\right)^{p}x^{2\,p}\left(\psi\!\left(2\,p-1,\frac{2}{3}\right)+\psi\!\left(2\,p-1,\frac{1}{3}\right)\right)}{\left(3^{2\,p}-1\right)\Gamma\!\left(2\,p+1\right)}=\frac{1}{2}\ln\!\left(\frac{\pi\,x}{\sinh\left(\pi\,x\right)}\right). \tag{8.20}\]
We continue with another simple example that utilizes the methods applied above. Let
\[a(m)=1/(1+x^{k}/m^{k}). \tag{8.21}\]
Then, according to (1.2)
\[\prod_{j=0}^{\infty}\!\!\left(\prod_{n=1}^{\infty}\!\!\!\left(1+\frac{x^{s}}{ \left(2^{j+1}\right)^{s}n^{s}}\right)\!\right)\!\left/\!\left(1+\frac{x^{s}}{ \left(2^{j+1}\right)^{s}\left(n-\frac{1}{2}\right)^{s}}\right)\right)^{2^{j}}= \prod_{m=1}^{\infty}\!\!\frac{1}{1+\frac{x^{s}}{m^{s}}}\,, \tag{8.22}\]
a result that can be tested numerically for \(s\geq 2\). In the case that \(s=n\) using (8.9) we find the general identity
\[\prod_{j=0}^{\infty}\!\!\left(\frac{\left(1+\left(-\frac{2^{j}}{x}\right)^{n} \right)}{\pi^{\frac{n}{2}}}\prod_{k=1}^{n}\!\frac{\Gamma\left(-\frac{1}{2}- \frac{x}{2^{j+1}}\,\mathrm{e}^{i\,\pi(2\,k+1)/n}\right)}{\Gamma\left(-\frac{x }{2^{j+1}}\,\mathrm{e}^{i\,\pi(2\,k+1)/n}\right)}\right)^{2^{j}}=\prod_{m=1}^{ \infty}\!\!\frac{1}{1+x^{n}/m^{n}} \tag{8.23}\]
If \(n=2\) in (8.23), after some simplification involving [13, Eqs. (5.4.3) and (5.4.4)], we arrive at the original curious and provocative result (1.1). In the case that \(n=4\) and \(x\in\mathbb{C}\), after further simplification and the redefinition \(x:=2\sqrt{2}\,x/\pi\), we find
\[\prod_{j=1}^{\infty}\!\!\left(\frac{2^{2\,j}}{2\,x^{2}}\tan\!\left(\frac{ \left(1+i\right)x}{2^{j}}\right)\tan\!\left(\frac{\left(1-i\right)x}{2^{j}} \right)\right)^{2^{j}}=\frac{16\,x^{4}}{\left(2\left(\cosh^{2}\left(x\right) \right)-1-\cos(2\,x)\right)^{2}}\,. \tag{8.24}\]
### A sum involving \(\zeta(k)\)
With the slight variation \(a(m)=1+x^{k}/m^{k}\) applied to (3.2) using \(q=1\) and analyzed as above, we find the identities
\[\sum_{j=1}^{\infty}\frac{(-1)^{j+1}\,\,x^{2\,j\,k}\,\zeta(2\,j\,k)}{j}=\ln\! \left(\frac{(-i)^{k}}{\pi^{k}\,x^{k}}\!\prod_{j=1}^{k}\!\sin\!\left(\left(-1 \right)^{\frac{2\,j-1}{2\,k}}\,\pi\,x\right)\right) \tag{8.25}\]
and
\[\sum_{j=1}^{\infty}\frac{(-1)^{j+1}\,\,x^{(2\,k+1)\,j}\,\zeta((2\,k+1)\,j)}{j} =\ln\!\left(\frac{1}{\Gamma(x+1)}\!\prod_{j=1}^{2\,k}\!\frac{1}{\pi}\,\sin\left( \pi\left(-1\right)^{\frac{j(2+2\,k)}{2\,k+1}}\,x\right)\Gamma\left(\left(-1 \right)^{\frac{j(2+2\,k)}{2\,k+1}+1}\,x\right)\right) \tag{8.26}\]
where the right-hand sides of both are available from (8.9). In the case \(k=1\), (8.26) reduces to
\[\sum_{k=1}^{\infty}\frac{x^{k}\,\zeta(3\,k)}{k}=\ln\!\left(\Gamma\!\left(1-x^{ \frac{1}{3}}\right)\Gamma\!\left(1+\frac{x^{\frac{1}{3}}\left(1+i\,\sqrt{3} \right)}{2}\right)\Gamma\!\left(1+\frac{x^{\frac{1}{3}}\left(1-i\,\sqrt{3} \right)}{2}\right)\right)\qquad|x|<1, \tag{8.27}\]
an identity known to Mathematica. See also [14, section 3.4]
### An infinite product study
Consider the case
\[a(m)=1+\mathrm{e}^{-\pi(2\,m+1)} \tag{8.28}\]
given that [10, Eq. (89.21.4)]
\[\prod_{m=1}^{\infty}\!\left(1+\mathrm{e}^{-\pi(2\,m+1)}\right)=\frac{2^{\frac{1}{4}}\,\mathrm{e}^{-\frac{\pi}{24}}}{1+\mathrm{e}^{-\pi}}\,. \tag{8.29}\]
Apply (8.28) to (3.2) with \(q=1\) (see Example (3.1.2)) to obtain
\[\sum_{j\geq 0,k\geq 1}\frac{\mathrm{e}^{-k\,\pi}\left(-1\right)^{k}}{k\,\sinh(2\,k \,\pi\,2^{j})}=-2\,\ln\!\left(\frac{2^{\frac{1}{4}}\,\mathrm{e}^{-\pi/24}}{1+ \mathrm{e}^{-\pi}}\right) \tag{8.30}\]
after expanding the logarithmic terms in analogy to (8.11). The outer sum (over \(j\)) can be evaluated by applying the listed identity [10, Eq. (25.1.1)] after setting \(x:=ix\) and evaluating the limit \(n\to\infty\) in that identity, to yield
\[\sum_{j=0}^{\infty}\!\frac{1}{\sinh(2\,k\,\pi\,2^{j})}=\coth(2\,k\,\pi)-1+ \operatorname{csch}(2\,k\,\pi)\, \tag{8.31}\]
in which case (8.30) reduces, after some simplification, to
\[\sum_{k=1}^{\infty}\!\frac{(-1)^{k}\,\,\mathrm{e}^{-k\,\pi}\coth(k\,\pi)}{k}=\ln \!\left(\frac{(1+\mathrm{e}^{-\pi})\,\,\mathrm{e}^{\pi/12}}{\sqrt{2}}\right)\,. \tag{8.32}\]
Comparing (8.32) with a naive expansion of \(\ln\left(1+\mathrm{e}^{-\pi(2\,m+1)}\right)\) in (8.29), verifies the simple transformation
\[\sum_{k=1}^{\infty}\!\!\frac{(-1)^{k}\ \mathrm{e}^{-2\,k\,\pi}}{k\,\sinh(k\,\pi)}= \sum_{k=1}^{\infty}\!\!\frac{(-1)^{k}\ \mathrm{e}^{-k\,\pi}\coth(k\,\pi)}{k}+\ln\! \left(1+\mathrm{e}^{-\pi}\right), \tag{8.33}\]
so that, comparing (8.32) and (8.33) identifies
\[\sum_{k=1}^{\infty}\!\!\frac{(-1)^{k}\ \mathrm{e}^{-2\,k\,\pi}}{k\,\sinh(k\, \pi)}=\ln\!\left(\frac{(1+\mathrm{e}^{-\pi})^{2}\ \mathrm{e}^{\pi/12}}{\sqrt{2}}\right)\,. \tag{8.34}\]
Similarly, letting \(q=2\) in (3.2) leads to
\[\sum_{j=0}^{\infty}\!\!2^{j}\!\sum_{k=1}^{\infty}\!\!\frac{(-1)^{k}}{k}\!\sum_{n=1}^{\infty}\!\left(-\mathrm{e}^{-k\left(\left(2\,n-1\right)2^{j+1}+1\right)\pi}+\mathrm{e}^{-k\left(4\,n\,2^{j}+1\right)\pi}\right)=\ln\!\left(\frac{2^{\frac{1}{4}}\ \mathrm{e}^{-\frac{\pi}{24}}}{1+\mathrm{e}^{-\pi}}\right)\,. \tag{8.35}\]
The innermost sum (over \(n\)) can be written in terms of hyperbolic functions, eventually producing the double sum identity
\[\sum_{k=1}^{\infty}\!\!\frac{(-1)^{k}}{k}\!\sum_{j=0}^{\infty}\!\!\frac{2^{j} \mathrm{e}^{-k\,\pi\left(2^{j}+1\right)}}{\cosh\left(k\,\pi\,2^{j}\right)}=\ln \!\left(\frac{(1+\mathrm{e}^{-\pi})^{2}\ \mathrm{e}^{\pi/12}}{\sqrt{2}}\right) \tag{8.36}\]
after transposing the two sums. Noting the equality of the right-hand sides of (8.34) and (8.36), suggests that the inner sum of (8.36) (over \(j\)) equates to the equivalent terms in the summand of (8.34), i.e.
\[\sum_{j=0}^{\infty}\!\!\frac{2^{j}\ \mathrm{e}^{-k\,\pi\left(2^{j}+1\right)}}{ \cosh(k\,\pi\,2^{j})}=\frac{\mathrm{e}^{-2\,k\,\pi}}{\sinh(k\,\pi)}\,, \tag{8.37}\]
a proof of which can be found in Appendix A.
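Identity (8.37) is also easily confirmed numerically before turning to the proof. In the sketch below the terms beyond \(j=7\) are astronomically small, so the sum is truncated there, which also keeps \(\cosh\) within floating-point range; the value \(k=1\) is arbitrary.

```python
# Numerical check of (8.37) for k = 1; terms decay double-exponentially.
import math

k = 1
lhs = sum(2**j * math.exp(-k * math.pi * (2**j + 1)) / math.cosh(k * math.pi * 2**j)
          for j in range(8))
rhs = math.exp(-2 * k * math.pi) / math.sinh(k * math.pi)
print(lhs, rhs)   # both are about 1.6e-4
```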
### Application of (3.43)
Consider the case (3.43) using \(a_{m}=1+x^{2}/m^{2}\), giving
\[\prod_{j=0,n=1}^{\infty}\!\left(1+\frac{1}{\left(2\,n-1\right)^{2}\,2^{2\,j}} \right)^{\frac{1}{j+1}}\left(1+\frac{1}{4\,n^{2}\,2^{2\,j}}\right)^{\frac{1}{ (j+1)(j+2)}}=\frac{\sinh(\pi)}{\pi}\,. \tag{8.38}\]
From (8.9) we have
\[\prod_{n=1}^{\infty}\!\left(1+\frac{1}{\left(2\,n-1\right)^{2}\left(2^{j} \right)^{2}}\right)=\cosh\!\left(\frac{\pi}{2^{j+1}}\right) \tag{8.39}\]
and
\[\prod_{n=1}^{\infty}\!\left(1+\frac{1}{4\,n^{2}\left(2^{j}\right)^{2}}\right) =\frac{2^{j+1}}{\pi}\sinh\!\left(\frac{\pi}{2^{j+1}}\right) \tag{8.40}\]
leading to the identity
\[\prod_{j=0}^{\infty}\!\left(\cosh^{\frac{1}{j+1}}\left(\pi\,x/2^{j+1}\right) \right)\left(\frac{\sinh\!\left(\pi\,x/2^{j+1}\right)2^{j+1}}{\pi\,x}\right)^{ \frac{1}{(j+1)(j+2)}}=\frac{\sinh(\pi\,x)}{\pi\,x} \tag{8.41}\]
or its equivalent
\[\sum_{j=1}^{\infty}\!\!\frac{\ln\!\left(\cosh\!\left(\pi\,x/2^{j+1}\right) \right)+\ln\left(\frac{2^{j+1}}{\pi\,x}\sinh\left(\pi\,x/2^{j+1}\right)\right)/( j+2)}{j+1}=\frac{1}{2}\ln\!\left(\frac{2}{\pi\,x}\sinh\left(\frac{\pi\,x}{2} \right)\right)\,. \tag{8.42}\]
Differentiating (8.42) with respect to \(x\) yields
\[\sum_{j=1}^{\infty}\!\!\frac{2^{-j-1}}{j+1}\left(\tanh\!\left(\pi\,x\,2^{-j-1} \right)+\frac{1}{j+2}\coth\left(\pi\,x\,2^{-j-1}\right)\right)=\frac{\coth\! \left(\frac{\pi\,x}{2}\right)}{4} \tag{8.43}\]
and further differentiating produces
\[\sum_{j=1}^{\infty}\!\frac{2^{-2\,j-2}}{j+1}\left(\operatorname{sech}^{2}\left( \pi\,x\,2^{-j-1}\right)-\frac{1}{j+2}\!\operatorname{csch}^{2}\left(\pi\,x\,2^ {-j-1}\right)\right)=-\frac{\operatorname{csch}\!\left(\frac{\pi\,x}{2}\right)^ {2}}{8}\,. \tag{8.44}\]
In (8.41) setting \(x:=ix\) gives
\[\prod_{j=1}^{\infty}\!\!\left(2\,\cos\!\left(\pi\,x\,2^{-j-1}\right)\right)^{ \frac{1}{j+1}}\left(\frac{\sin\!\left(\pi\,x\,2^{-j-1}\right)}{2\,\pi\,x}\right) ^{\frac{1}{(j+1)(j+2)}}=\sqrt{\frac{2}{\pi\,x}\sin\!\left(\frac{\pi\,x}{2} \right)}\,,\qquad|x|<4. \tag{8.45}\]
Further, employing \(a_{m}=1+x^{3}/m^{3}\) with \(x\in\Re\), produces the identity
\[\prod_{j=0}^{\infty}\!\left(\frac{\pi^{\frac{3}{2}}}{\Gamma\!\left( \frac{1}{2}+2^{-j-1}\,x\right)|\Gamma\!\left(\frac{1}{2}-2^{-j-2}\,x\left(1-i \,\sqrt{3}\right)\right)|^{2}}\right)^{\frac{1}{j+1}}\left(\frac{1}{\Gamma(1+ 2^{-j-1}\,x)\left|\Gamma\!\left(1-2^{-j-2}\,x\left(1-i\,\sqrt{3}\right)\right) |^{2}}\right)^{\frac{1}{(j+1)(j+2)}}\\ =\frac{1}{\Gamma(x+1)\left|\Gamma\!\left(1-\frac{x}{2}\left(1-i \,\sqrt{3}\right)\right)|^{2}}\,\qquad. \tag{8.46}\]
### A recurring family
Consider the identity (3.40) using \(b(m)=\exp(-zm),\ Re(z)>0\). The sums over \(n\) on both sides can be evaluated in closed form, leading to the transformation
\[\sum_{j=1}^{\infty}\!\frac{B_{2p+1}(j)\,\mathrm{e}^{z2^{j}}}{\left(\mathrm{e} ^{z\,2^{j+1}}-1\right)\left(2\,p+1\right)}=\sum_{j=1}^{\infty}\!\!\frac{j^{2 \,p}}{\mathrm{e}^{z\,2^{j+1}}-1}\,, \tag{8.47}\]
since \(B_{2p+1}(0)=0\). We notice that, with \(f(j)=B_{2p+1}(j)/(2p+1)\) and \(g(j)=1/\!\left(1-e^{z\,2^{j}}\right)\) and with the forward difference operator \(\Delta f(j)=f(j+1)-f(j)\), we have
\[\Delta f(j)=j^{2p},\ \Delta g(j)=\frac{e^{z\,2^{j}}}{e^{z\,2^{j+1}}-1} \tag{8.48}\]
so that (8.47) can be interpreted as a summation by parts identity
\[\sum_{j\geq 1}f(j)\Delta g(j)=-\sum_{j\geq 1}g(j+1)\Delta f(j). \tag{8.49}\]
More interesting is the case \(b(m)=\exp(-x\,m^{2}),\ x>0\) applied to (3.40), leading to
\[\sum_{j=1}^{\infty}\!\!\frac{B_{2p+1}(j)}{2\,p+1}\vartheta_{2}\!\left(0,\mathrm{e}^{-x\,2^{2+2j}}\right)=\sum_{j=1}^{\infty}\!\!j^{2\,p}\left(\vartheta_{3}\!\left(0,\mathrm{e}^{-x\,2^{2+2j}}\right)-1\right) \tag{8.50}\]
by recognizing that
\[\sum_{n=1}^{\infty}\!\!\mathrm{e}^{-4\,x\,n^{2}2^{2\,j}}=\frac{1}{2}\left( \vartheta_{3}\!\left(0,\mathrm{e}^{-x\,2^{2+2j}}\right)-1\right) \tag{8.51}\]
and
\[\sum_{n=1}^{\infty}\!\!\mathrm{e}^{-x\,(2\,n-1)^{2}\,4^{j}}=\frac{1}{2} \vartheta_{2}\!\left(0,\mathrm{e}^{-x\,2^{2+2j}}\right) \tag{8.52}\]
coincide with the basic [13, section 20.2] definitions of the Jacobi theta functions \(\vartheta_{2}\) and \(\vartheta_{3}\). Identity (8.50) can also be interpreted as a summation by parts identity since
\[\Delta\vartheta_{3}(0,q)=\vartheta_{2}(0,q), \tag{8.53}\]
a special case \(z=0\) of the more general but classic identity
\[\vartheta_{3}(z,q)=\vartheta_{3}(2z,q^{4})+\vartheta_{2}(2z,q^{4}) \tag{8.54}\]
that can be found in [15, Example 1 p. 464].
Continuing with the choice \(b(m)=\exp(-x\,m^{2})\), the two identities (8.51) and (8.52) recur in many other identities presented in previous sections. Applied to (3.44) we find
\[\sum_{j=0}^{\infty}\left(\frac{\vartheta_{2}\!\left(0,\mathrm{e}^{-x\,2^{2+2j} }\right)}{j+1}+\frac{\vartheta_{3}\!\left(0,\mathrm{e}^{-x\,2^{2+2j}}\right)- 1}{\left(j+1\right)\left(j+2\right)}\right)=\vartheta_{3}\!\left(0,\mathrm{e} ^{-x}\right)-1\,, \tag{8.55}\]
and applied to (3.42) we obtain
\[\sum_{j=0}^{\infty}\Bigl{(}\bigl{(}-j^{2}+j+1\bigr{)}\ \vartheta_{2}\Bigl{(}0, \mathrm{e}^{-x\,2^{2+2j}}\Bigr{)}+2\,j\,\Bigl{(}\vartheta_{3}\Bigl{(}0,\mathrm{ e}^{-x\,2^{2+2j}}\Bigr{)}-1\Bigr{)}\Bigr{)}=\vartheta_{3}\bigl{(}0,\mathrm{e}^{-x} \bigr{)}-1\,. \tag{8.56}\]
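These theta-function identities lend themselves to the same kind of numerical spot check. The sketch below evaluates (8.55) at the arbitrary value \(x=0.05\), computing \(\vartheta_{2}\) and \(\vartheta_{3}\) directly from their defining series with a crude truncation; it is only an illustration.

```python
# Numerical check of (8.55); theta functions summed directly from their series.
import math

def theta2(q, N=60):
    return 2 * sum(q**((n + 0.5)**2) for n in range(N))

def theta3(q, N=60):
    return 1 + 2 * sum(q**(n * n) for n in range(1, N))

x = 0.05
lhs = sum(theta2(math.exp(-x * 2**(2 + 2 * j))) / (j + 1)
          + (theta3(math.exp(-x * 2**(2 + 2 * j))) - 1) / ((j + 1) * (j + 2))
          for j in range(15))
rhs = theta3(math.exp(-x)) - 1
print(lhs, rhs)   # the two values agree
```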
## 9. **Additional identities**
We list in this section two additional identities as a consequence of the dissection identity
\[\prod_{m\geq 1}a_{m}=\prod_{j\geq 0,n\geq 1}a_{(2n-1).2^{j}} \tag{9.1}\]
when applied to a product of Gamma functions. The reader is referred to [16] for more dissection identities on products of Gamma functions.
**Proposition 9.1**.: The function
\[f\left(a,z\right)=\frac{\Gamma^{2}\left(a+1\right)}{\Gamma\left(a+1-\imath z \right)\Gamma\left(a+1+\imath z\right)} \tag{9.2}\]
satisfies the identity
\[f\left(a,z\right)=\prod_{j\geq 0}f\left(\frac{a}{2^{j+1}}-\frac{1}{2},\frac{z }{2^{j+1}}\right). \tag{9.3}\]
Proof.: The function \(f\left(a,z\right)\) has the infinite product representation
\[f\left(a,z\right)=\prod_{m\geq 1}\left(1+\left(\frac{z}{m+a}\right)^{2}\right). \tag{9.4}\]
We deduce from the identity (9.1) that
\[f\left(a,z\right) =\prod_{j\geq 0,n\geq 1}\left(1+\left(\frac{z}{\left(2n-1\right).2 ^{j}+a}\right)^{2}\right)\] \[=\prod_{j\geq 0,n\geq 1}\left(1+\left(\frac{z}{n.2^{j+1}+a-2^{j}} \right)^{2}\right)\] \[=\prod_{j\geq 0,n\geq 1}\left(1+\left(\frac{\frac{z}{2^{j+1}}}{n+ \left(\frac{a}{2^{j+1}}-\frac{1}{2}\right)}\right)^{2}\right)\] \[=\prod_{j\geq 0}f\left(\frac{a}{2^{j+1}}-\frac{1}{2},\frac{z }{2^{j+1}}\right). \tag{9.5}\]
Explicitly,
\[\frac{\Gamma^{2}\left(a+1\right)}{\Gamma\left(a+1-\imath z\right)\Gamma\left(a+1+\imath z\right)}=\prod_{j\geq 0}\frac{\Gamma^{2}\left(\frac{a}{2^{j+1}}+\frac{1}{2}\right)}{\Gamma\left(\frac{a}{2^{j+1}}+\frac{1}{2}-\imath\frac{z}{2^{j+1}}\right)\Gamma\left(\frac{a}{2^{j+1}}+\frac{1}{2}+\imath\frac{z}{2^{j+1}}\right)}. \tag{9.6}\]
If \(z=x,\ x\in\Re\), because \(\Gamma\left(\overline{z}\right)=\overline{\Gamma\left(z\right)}\), (9.6) can be rewritten
\[\frac{\Gamma^{2}\left(a+1\right)}{\left|\Gamma\left(a+1+\imath x\right) \right|^{2}}=\prod_{j\geq 0}\frac{\Gamma^{2}\left(\frac{a}{2^{j+1}}+\frac{1}{2} \right)}{\left|\Gamma\left(\frac{a}{2^{j+1}}+\frac{1}{2}+\imath\frac{x}{2^{j+ 1}}\right)\right|^{2}}. \tag{9.7}\]
With \(a=0\), we have
\[f\left(0,z\right)=\frac{\sinh\left(\pi z\right)}{\pi z},\ f\left(-\frac{1}{2},z\right)=\cosh\left(\pi z\right) \tag{9.8}\]
and we deduce
\[\frac{\sinh\left(\pi z\right)}{\pi z}=\prod_{j\geq 0}\cosh\left(\frac{\pi z}{2^{j +1}}\right), \tag{9.9}\]
reproducing the listed identity [10, Eq. (92.1.3)].
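As a quick numerical illustration of (9.9) (that is, of the specialization \(a=0\) of Proposition 9.1), one may run the following sketch at an arbitrary real point:

```python
# Numerical check of (9.9) at an arbitrary real point z.
import math

z = 1.7
lhs = math.sinh(math.pi * z) / (math.pi * z)
rhs = 1.0
for j in range(60):
    rhs *= math.cosh(math.pi * z / 2**(j + 1))
print(lhs, rhs)   # the two values agree to machine precision
```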
When written in sum form, (9.6) becomes
\[\sum_{j=0}^{\infty}\biggl{(}2\,\ln\biggl{(}\Gamma\biggl{(}\frac{a}{ 2^{j+1}}+\frac{1}{2}\biggr{)}\biggr{)}- \ln\biggl{(}\Gamma\biggl{(}\frac{a}{2^{j+1}}+\frac{1}{2}-\frac{i\,z}{2^{j+1 }}\biggr{)}\biggr{)}-\ln\biggl{(}\Gamma\biggl{(}\frac{a}{2^{j+1}}+\frac{1}{2}+ \frac{iz}{2^{j+1}}\biggr{)}\biggr{)}\biggr{)}\] \[=\ln\Biggl{(}\frac{\Gamma(a+1)^{2}}{\Gamma(-i\,z+a+1)\,\Gamma(i\, z+a+1)}\Biggr{)}\,, \tag{9.10}\]
so that, by either differentiating with respect to \(a\) or \(z\), and demanding that \(z:=x\in\mathbb{R}\), we respectively find two Euler sums:
\[\sum_{j=0}^{\infty}2^{-j}\left(\psi\biggl{(}a\,2^{-j-1}+\frac{1}{2}\biggr{)}- Re\biggl{(}\psi\biggl{(}\frac{i\,2^{-j}\,x}{2}+\frac{1}{2}+a\,2^{-j-1}\biggr{)} \biggr{)}\right)=2\,\psi(a+1)-2\,Re(\psi(i\,x+a+1)) \tag{9.11}\]
and
\[\sum_{j=0}^{\infty}2^{-j}\,Im\biggl{(}\psi\biggl{(}\frac{i\,2^{-j}\,x}{2}+ \frac{1}{2}+a\,2^{-j-1}\biggr{)}\biggr{)}=2\,Im(\psi(i\,x+a+1)) \tag{9.12}\]
where again \(\psi(x)\) is the digamma function.
**Proposition 9.2**.: The function
\[g\left(a,b,z\right)=\frac{b}{\left(b-z\right)}\frac{\Gamma\left(a-\sqrt{a^{2} -b}\right)\Gamma\left(a+\sqrt{a^{2}-b}\right)}{\Gamma\left(a-\sqrt{a^{2}-b+z} \right)\Gamma\left(a+\sqrt{a^{2}-b+z}\right)} \tag{9.13}\]
satisfies the identity
\[g\left(a,b,z\right)=\prod_{j\geq 0}g\left(\frac{a}{2^{j+1}}-\frac{1}{2},\frac{1}{4}-\frac{a}{2^{j+1}}+\frac{b}{2^{2j+2}},\frac{z}{2^{2j+2}}\right). \tag{9.14}\]
Proof.: The function \(g\left(a,b,z\right)\) has infinite product representation
\[g\left(a,b,z\right)=\prod_{m\geq 1}\left(1-\frac{z}{m^{2}+2am+b}\right). \tag{9.15}\]
We deduce
\[g\left(a,b,z\right) =\prod_{n\geq 1,j\geq 0}\left(1-\frac{z}{\left(2n-1\right)^{2}2^{2j}+2a\left(2n-1\right)2^{j}+b}\right)\] \[=\prod_{n\geq 1,j\geq 0}\left(1-\frac{z}{2^{2j+2}n^{2}-2^{2j+2}n+2^{2j}+2^{j+2}an+b-2^{j+1}a}\right)\] \[=\prod_{n\geq 1,j\geq 0}\left(1-\frac{z}{2^{2j+2}n^{2}+n\left(-2^{2j+2}+2^{j+2}a\right)+\left(2^{2j}+b-2^{j+1}a\right)}\right)\] \[=\prod_{n\geq 1,j\geq 0}\left(1-\frac{\frac{z}{2^{2j+2}}}{n^{2}+n\left(\frac{a}{2^{j}}-1\right)+\left(\frac{1}{4}+\frac{b}{2^{2j+2}}-\frac{a}{2^{j+1}}\right)}\right)\] \[=\prod_{j\geq 0}g\left(\frac{a}{2^{j+1}}-\frac{1}{2},\frac{1}{4}-\frac{a}{2^{j+1}}+\frac{b}{2^{2j+2}},\frac{z}{2^{2j+2}}\right). \tag{9.16}\]
Explicitly,
\[\frac{b}{\left(b-z\right)}\frac{\Gamma\left(a-\sqrt{a^{2}-b}\right)\Gamma\left(a+\sqrt{a^{2}-b}\right)}{\Gamma\left(a-\sqrt{a^{2}-b+z}\right)\Gamma\left(a+\sqrt{a^{2}-b+z}\right)}=\prod_{j\geq 0}\frac{\frac{1}{4}-\frac{a}{2^{j+1}}+\frac{b}{2^{2j+2}}}{\frac{1}{4}-\frac{a}{2^{j+1}}+\frac{b-z}{2^{2j+2}}}\,\frac{\Gamma\left(\frac{a}{2^{j+1}}-\frac{1}{2}-\frac{\sqrt{a^{2}-b}}{2^{j+1}}\right)\Gamma\left(\frac{a}{2^{j+1}}-\frac{1}{2}+\frac{\sqrt{a^{2}-b}}{2^{j+1}}\right)}{\Gamma\left(\frac{a}{2^{j+1}}-\frac{1}{2}-\frac{\sqrt{a^{2}-b+z}}{2^{j+1}}\right)\Gamma\left(\frac{a}{2^{j+1}}-\frac{1}{2}+\frac{\sqrt{a^{2}-b+z}}{2^{j+1}}\right)} \tag{9.17}\]
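Proposition 9.2 can be spot-checked directly from the product representation (9.15). The sketch below truncates each product and uses the arbitrary values \(a=1.3\), \(b=2\), \(z=0.7\), for which every denominator \(m^{2}+2am+b\) stays away from zero; it is meant only as a rough numerical illustration.

```python
# Truncated-product check of (9.14), using the representation (9.15).
def g(A, B, Z, N=200000):
    out = 1.0
    for m in range(1, N):
        out *= 1.0 - Z / (m * m + 2.0 * A * m + B)
    return out

a, b, z = 1.3, 2.0, 0.7
lhs = g(a, b, z)
rhs = 1.0
for j in range(12):
    rhs *= g(a / 2**(j + 1) - 0.5,
             0.25 - a / 2**(j + 1) + b / 2**(2 * j + 2),
             z / 2**(2 * j + 2))
print(lhs, rhs)   # the two values agree to several digits
```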
## 10. **Conclusion**
The equality of the multisets \(D_{b}=\{(bn)\,.\,b^{j},n\geq 1,j\geq 0\}\) and \(E_{b}=\{m^{\left(\nu_{b}(m)\right)},m\in\mathbb{N}\}\) allowed us to express identities of the multisection type for a variety of special functions. The versatility of this approach is due to the fact that it exploits the equivalence of two summation domains (multisets) regardless of the values of the entities summed over these domains.
We conclude this study with several directions of research:
The first direction would consider multidimensional versions of this approach, i.e. multidimensional multisets allowing summation over lattices in \(\mathbb{R}^{q}\). This new context would allow to study multivariate summations of interesting special functions such as the multidimensional Riemann theta functions. We refer the reader to [17] for an excellent reference in this domain.
A second possibility would require further development to extend the results reported here to finite series and products. Since matrix multiplication is essentially a multiple sum of products, it may be possible to reduce the arithmetic load inherent in large matrix multiplication (in specialized cases) into a single sum, which is effectively what has been done here. Let us notice that similar results for finite sums associated with the Hurwitz-Lerch zeta function [19] were recently obtained; they are however derived using a completely different method, namely contour integration in the complex plane.
Finally, the principle that underlies the results presented in this article can be enunciated as follows: for an arbitrary function \(f\) such that the following series converge,
\[\sum_{n,j\geq 1}f\left(n.\alpha\left(j\right)\right)=\sum_{m\geq 1}\beta \left(m\right)f\left(m\right) \tag{10.1}\]
where \(\beta\left(m\right)\) is the number of integers of the form \(\alpha\left(j\right)\) that divide \(m.\) For example,
- with \(\alpha\left(j\right)=2^{j+1}\), the function \(\beta\left(m\right)\) enumerates the number of distinct powers of \(2\) that divide \(m\), which is also the \(2-\) valuation \(\nu_{2}(m)\)
- with \(\alpha\left(j\right)=j\), the function \(\beta\left(m\right)\) coincides with the number-of-divisors function \(\sigma_{0}\left(m\right)\)
- with \(\alpha\left(j\right)=j^{2}\) then \(\beta\left(m\right)\) is the number of square integers that divide \(m\) (entry A046951 in [4]).
## Appendix A Proof of identity (8.37)
Identity (8.37) claims that
(A.1) \[\sum_{j\geq 0}2^{j}\frac{e^{-k\pi\left(2^{j}+1\right)}}{\cosh\left(k\pi 2^{j}\right)}=\frac{e^{-2k\pi}}{\sinh\left(k\pi\right)}.\]
Denoting \(q=e^{-k\pi}\), this is
(A.2) \[\sum_{j\geq 0}2^{j+1}\frac{q^{2^{j}+1}}{q^{2^{j}}+q^{-2^{j}}}=\frac{2q^{2}}{q^{-1}-q}\]
or equivalently
(A.3) \[\sum_{j\geq 0}2^{j+1}\frac{1}{q^{-2^{j+1}}+1}=\frac{2q^{2}}{1-q^{2}}.\]
Substituting \(q\mapsto\frac{1}{q}\) produces
(A.4) \[\sum_{j\geq 0}2^{j+1}\frac{1}{q^{2^{j+1}}+1}=\frac{2}{q^{2}-1}.\]
We know from (6.3) that
(A.5) \[\frac{q}{1-q}=\sum_{j\geq 0}2^{j}\frac{q^{2^{j}}}{q^{2^{j}}+1}.\]
Substituting \(q\mapsto\frac{1}{q}\) produces
(A.6) \[\frac{1}{q-1}=\sum_{j\geq 0}2^{j}\frac{1}{q^{2^{j}}+1}\]
so that
(A.7) \[\sum_{j\geq 0}2^{j+1}\frac{1}{q^{2^{j+1}}+1}=\frac{1}{q-1}-\frac{1}{1+q}= \frac{2}{q^{2}-1},\]
which is (A.4).
|
2302.04238 | Computational Models of Solving Raven's Progressive Matrices: A
Comprehensive Introduction | As being widely used to measure human intelligence, Raven's Progressive
Matrices (RPM) tests also pose a great challenge for AI systems. There is a
long line of computational models for solving RPM, starting from 1960s, either
to understand the involved cognitive processes or solely for problem-solving
purposes. Due to the dramatic paradigm shifts in AI researches, especially the
advent of deep learning models in the last decade, the computational studies on
RPM have also changed a lot. Therefore, now is a good time to look back at this
long line of research. As the title -- ``a comprehensive introduction'' --
indicates, this paper provides an all-in-one presentation of computational
models for solving RPM, including the history of RPM, intelligence testing
theories behind RPM, item design and automatic item generation of RPM-like
tasks, a conceptual chronicle of computational models for solving RPM, which
reveals the philosophy behind the technology evolution of these models, and
suggestions for transferring human intelligence testing and AI testing. | Yuan Yang, Mathilee Kunda | 2023-02-08T18:09:01Z | http://arxiv.org/abs/2302.04238v1 | # Computational Models of Solving Raven's Progressive Matrices: A Comprehensive Introduction
###### Abstract
As being widely used to measure human intelligence, Raven's Progressive Matrices (RPM) tests also pose a great challenge for AI systems. There is a long line of computational models for solving RPM, starting from 1960s, either to understand the involved cognitive processes or solely for problem-solving purposes. Due to the dramatic paradigm shifts in AI researches, especially the advent of deep learning models in the last decade, the computational studies on RPM have also changed a lot. Therefore, now is a good time to look back at this long line of research. As the title--"a comprehensive introduction"--indicates, this paper provides an all-in-one presentation of computational models for solving RPM, including the history of RPM, intelligence testing theories behind RPM, item design and automatic item generation of RPM-like tasks, a conceptual chronicle of computational models for solving RPM, which reveals the philosophy behind the technology evolution of these models, and suggestions for transferring human intelligence testing and AI testing.
keywords: Raven's Progressive Matrices, Intelligence Tests, AI Testing +
Footnote †: journal: Journal of Artificial Intelligence
## 1 Introduction
Most AI researchers, if not all, must have ruminated on fateful questions that are disturbing but cannot yet be answered, such as "how far are we
on the way to achieve the human-level AI?" and "how long does it take for us to fully understand the fundamental mechanism of intelligence?" Some are more pessimistic, like "will human-level AI be realized in my lifetime?" Though these questions cannot be answered for now, every AI researcher is glad to see these questions being raised and attempts being made to answer them, because, whether optimistic or pessimistic, these questions represent the conscience of AI research.
Works to answer these questions are mainly centered around comparing AI systems and humans on daily tasks that are considered indicators of intelligence. Among these works, the most direct way is to evaluate AI systems on human intelligence tests. The scope of intelligence tests is larger than that of the ability tests used in clinical settings. For example, the SAT and MAT can be considered intelligence tests. In addition, many developers and publishers do not name their tests intelligence tests, because some people consider the word "intelligence" elitist and racist, and prefer to use more accurate words, like "tests of learning abilities", "assessments of memory and attention", and "developmental motor scales". Intelligence tests are usually classified into two categories--single-format tests and battery-type tests. A single-format test contains items of the same format, while a battery-type test contains multiple subtests of different formats. As current AI systems require the problem format to be clearly defined, evaluations of AI systems on intelligence tests are mainly carried out on single-format tests or on a subtest of battery-type tests. Raven's Progressive Matrices (RPM) are a family of single-format tests that have been used to test AI systems in a substantial number of works. Meanwhile, RPM has also become an impetus for developing more intelligent systems that can solve RPM as well as humans do. This research line dates back to the 1960s, and it ranges across multiple disciplines, such as AI, cognitive science, neuroscience, and psychometrics. However, no work has yet examined this research line across its entire temporal and disciplinary span or established its theoretical depth. Given the recent developments in this research line, we believe now is a good time to do such a work.
We will start this work by reviewing the basics of RPM in the context of human intelligence testing in Section 2. The purpose of this section is to answer the two theoretical questions that one would first ask about RPM--what RPM measures and how RPM measures it. The answers go well beyond the ones like "it measures human intelligence" and "it asks participants to solve problems". By answering these two questions, we intend to explain the rationale of using RPM as a human intelligence measure. We believe this is necessary for analyzing the rationale of using RPM as an AI measure, and, more generally, for establishing the theoretical foundation of AI testing.
We extend the discussion to the entire problem domain represented by RPM in Section 3. This domain includes several more tasks that are similar to RPM and also used for human intelligence testing and AI testing. To distinguish them from original RPM, we refer to them as RPM-like tasks. In these tasks, while items for human intelligence testing are mostly handcrafted by human experts, algorithmically-generated items are more and more useful in some special testing scenarios such as computer-based, adaptive, large-scale and/or repeated testing. Algorithmically generated items are also a realistic incentive for studies of deep learning models for solving RPM-like problems. Thus, in the second half of this section, we also reviewed the important works for algorithmic generation of matrix reasoning items, which exactly replicate the format of original RPM. In this section, we intend to (a) provide our readers with different choices of tasks and problem/data sets for different research purposes, (b) provide practical guidance for building algorithmic item generators, and (c) pave the way for the discussion of learning models in the following sections.
In Section 4, we propose a framework to collate all computational models for solving RPM and RPM-like tasks. We refer to this framework as a conceptual chronicle because it emphasizes the conceptual connections between computational models and the underlying logic for technological development. It is neither like the reviews that use specific taxonomies of reviewed works nor the ones that compile the reviewed works into a chronological order. Instead, it simulates the process of how a beginner's understanding of this field would
naturally evolve as she knows more and more about this field. In a sense, it is more like chapter organizations of textbooks. We believe such a presentation is the best way for readers to gain a coherent understanding of this field.
In Section 5, we zoom away from the computational models and address more general topics of AI testing. We first tackle the fundamental issue in this research field--i.e., the validity of using intelligence tests and similar tests to evaluate AI systems. The discussion is based on the initial idea that AI systems could be measured by these tests as human intelligence is measured by them. Unless this issue is properly resolved, the practice of applying these tests on AI systems would be restricted into pure problem solving for specific problems, rather than deepening our understanding of human intelligence and AI. Secondly, on the flip side, we also discuss the implications of human intelligence manifested on intelligence tests for building AI systems. The generalization ability and robustness of human intelligence on intelligence tests are far better than what current AI systems could achieve. We believe such a discussion is crucial for future works in this research field.
## 2 Raven's Progressive Matrices
For readers who are not familiar with RPM, Figure 1 shows some examples of RPM items. The original RPM tests contain items of four formats, as shown in Figure 1. The items are presented as multiple-choice problems. The context can be a single image with one piece missing (Figure 1(a)), or a 2\(\times\)2 or 3\(\times\)3 matrix
Figure 1: RPM examples of different formats and stimuli.
with the last entry missing (Figure 1(c), 1(b) and 1(d)). To solve an RPM item, one needs to select an answer from the answer set to complete the context matrix. In the original RPM tests, the answer sets contain 6 choices for single-image and 2\(\times\)2 matrix items and 8 choices for 3\(\times\)3 matrix items.
Given the different perceptual stimuli that populate the matrix, an item requires different cognitive abilities and skills. For example, the items in Figure 1(a) and 1(b) tap into cognitive abilities of perceptual processing. In particular, Figure 1(a) requires processing perceptual continuity to interpolate the missing piece in (or match the answer choices to) the context image, while Figure 1(b) requires processing perceptual progression to extrapolate the missing image. The other two items, in Figure 1(c) and 1(d), differ from the first two because they require not only perceptual processing abilities, such as perceptual decomposition and organization, but also abstract inductive reasoning, which involves constructing abstract symbols from raw perceptual stimuli and reasoning about these symbols.
Figure 1 represents the most typical designs of the original RPM. It should be pointed out that RPM-like tasks are not restricted to these designs and that various designs have been used in RPM-like tasks to test different cognitive abilities and verify cognitive theories (more details in Section 3).
It has been claimed that RPM is the best single-format intelligence test available. This claim is based on the statistical evidence that test scores on RPM are highly correlated with those of all other common intelligence tests. RPM can be considered to sit at the center of the map of all intelligence tests (Snow et al., 1984), implying that the underlying trait behind RPM is also central to the traits measured by other tests. For this reason, while RPM receives much attention in clinical settings, it also receives a great deal of attention in research settings, especially in the communities of cognitive science and artificial intelligence.
### What Does RPM Measure?
What exactly does RPM measure? This simple question has probably haunted many researchers who are not psychologists or cognitive scientists for the first several years of their research on RPM. The answer may be quite straightforward to some researchers--it measures intelligence. But others simply do not understand why these items, which seem to drop in from the sky, can tell anything about a person's intelligence. The question is probably better rephrased as "why and how does solving these problems composed of simple geometric patterns measure a person's intelligence?"
The answer is not a simple one, given the complex nature of human intelligence testing. First of all, RPM represents a type of intelligence test that is theory-motivated. That is, the test development is inspired and guided by abstract theories about intelligence, which involve factors that are not observable. In contrast, our stereotypical impression of tests is of ones related to daily experience and pragmatic purposes. For example, the SAT contains sections on writing, verbal comprehension, and mathematics because competence in them is necessary for students to perform well in college and graduate; the Armed Services Vocational Aptitude Battery contains sections on electronics, auto, shop, mechanical comprehension, and assembling objects, because these knowledge and skills are necessary for technical positions in the army. The development of these tests starts off with clear purposes and an understanding of what specific behavior should be measured.
However, RPM, as an intelligence test, is meant to measure intelligence--a factor that is not clearly defined, directly observable, or measurable. Thus, theories have been constructed to explain the relation between intelligence and observable, measurable behavior. When RPM is introduced to someone without clarifying these theories, she would naturally raise the question at the beginning of this subsection. In particular, John C. Raven, the author of RPM (Raven, 1936, 1941), had studied with Charles Spearman, who noticed that a person's performances on tests of different cognitive abilities are correlated and thus hypothesized that a single factor--general intelligence \(g\)1--underlies all cognitive abilities. Spearman further pointed out that the \(g\) factor comprises two abilities--_eductive ability_ and _reproductive ability_. Eductive ability is the ability to make meaning out of confusion and generate high-level, usually nonverbal, schemata that make it easy to handle complexity. Note that the process of "eduction" is more often referred to as inductive reasoning. Reproductive ability is the ability to absorb, recall, and reproduce learned information and skills.
Footnote 1: Spearman referred to \(g\) as general cognitive ability because he thought the word intelligence had been abused by many people.
To test eductive and reproductive abilities, Raven developed RPM and the Mill Hill Vocabulary Scale, respectively. In contrast to the pragmatic tests, the development of these tests started off from the author's personal understanding of these abilities. It is important to point out, however, that the development of theory-motivated tests is not idiosyncratic, because the developer needs to prove that the test indeed measures what it is expected to measure. The proof is usually achieved by collecting statistical evidence that the test score is correlated with certain measurable behavior and with other tests, which are determined by the purpose of the test and the interpretation of the test score. For example, if the test is for recruitment, the test score should be correlated with future job performance; if the test is a general intelligence test, the test score should be correlated with cognitive ability tests and medical data such as fMRI data of the brain. In the terminology of psychometrics, the developer needs to validate the test to make sure it measures what it is expected to measure. The studies of RPM validity, however, would fill a book of their own. Here we simply note that RPM is a well-validated test of general intelligence.
Readers might have already noticed that there are two abilities under the umbrella of \(g\) and, correspondingly, two tests. What about reproductive ability and its test? Why is RPM, rather than the other test, considered the best single-format test of general intelligence? Is eductive ability more important than reproductive ability? In his theory of general intelligence, Spearman did not treat these two abilities as separate factors. On the contrary, he believed that there is only a single factor--\(g\)--underlying all cognitive abilities, and eductive and reproductive abilities are two "analytically distinguishable components" of \(g\) (Raven, 2008). Eductive and reproductive abilities are better treated as two interwoven general cognitive processes, through either of which \(g\) can be measured. Since the test scores of RPM are the most highly correlated with those of other intelligence tests, RPM is considered the most effective single-format intelligence test.
Now is a good point to compare these with two other related concepts that pervade the literature on intelligence and with which our readers are probably more familiar. In his theory of general intelligence, Cattell (Cattell, 1941, 1943, 1963, 1987) proposed that there are two general factors (emerging from factorial analysis) underlying intellectual performance--fluid intelligence and crystallized intelligence. Fluid intelligence, \(g_{f}\), is the ability to discriminate and perceive complex relationships when there is no recourse to answers already stored in memory. Crystallized intelligence, \(g_{c}\), consists of judgmental, discriminatory reasoning habits long established in a particular field, originally through the operation of fluid intelligence, but no longer requiring insightful perception for their successful operation. The definitions of fluid and crystallized intelligence resemble those of eductive and reproductive abilities. Moreover, fluid and crystallized intelligence are frequently used as synonyms of eductive and reproductive abilities in the literature. But these two sets of terms are conceptually different. In particular, Spearman considered eductive and reproductive abilities to be components, while Cattell treated fluid and crystallized intelligence as factors. When we say components of a system, we mean that the components must work together for the system to work; if either the eductive or the reproductive component fails, the whole system fails. But when we say factors (especially in factorial analysis), we mean different dimensions that each exert a separable influence on the experimental outcome and can thus be studied separately. We can calculate what percentage of the variation in the data is caused by which factor (using procedures from analysis of variance), but it is conceptually wrong to do so in component systems because the components' influences are not separable. Note that this does not mean that factors are completely independent, because two factors can still correlate and jointly contribute to a proportion of the variation. A good example is height and weight, which are correlated but are still two different concepts and factors. As factors, their private and shared contributions to athletic ability can be determined statistically if we collect data on athletes. Therefore, when we use these two sets of terms interchangeably, we need to be clear about which theoretical assumption we are making and, if necessary, draw different conclusions from the experiment.
Besides the conceptual issue behind the terminology, another issue is that the boundary between theory-motivated and pragmatic tests is not so clear in practice. As more and more research is conducted on a pragmatic test, theories will be invented to explain human responses to the test. Similarly, once a theory-motivated test is proven to be a valid measure of some mental trait, it can also be used for pragmatic purposes. For example, RPM was once used for military recruitment in the UK during World War II (Burke, 1958).
### A Brief History of RPM
This paper would be incomplete if we did not say something about the history of RPM, which is almost 100 years long. Admittedly, not every detail of this history is relevant to our research on RPM in the context of AI. However, the development of RPM in human intelligence testing may provide enlightenment for the future of AI testing, which is still largely undefined. We introduce the whole family of RPM tests in this subsection2, and discuss the motivation behind each RPM test and the connections between them.
Footnote 2: This subsection is mainly based on the manuals of RPM tests. For readability, we will not insert citations of the manuals in this subsection. Otherwise, it would be everywhere.
Raven (1936) developed the first RPM test in the 1930s when he was studying with Lionel Penrose, who was a geneticist and psychiatrist. This test was used to study the genetic and environmental determinants of intellectual defect. Like other genetic studies, this study required a large population of subjects, including adult parents and children of all ages, tested at different places such as home, school, and the workplace. It was therefore infeasible to administer full-length intelligence tests, such as the Binet tests and Wechsler tests, which require hours for a session. In addition, because some subjects at the time were illiterate and many workplaces were too noisy for verbal questions, the items had to be nonverbal and as self-evident as possible. These practical requirements together led to the design of the first RPM test.
As we have mentioned, the development of RPM was theoretically inspired by Spearman's theory of intelligence. Although the theory is instructive for understanding intelligence, the overarching \(g\) factor is a latent variable that is neither directly observable nor measurable. This makes its measurement inherently complicated, because one needs to identify measurable activities and decide how they relate to the latent variable; for example, \(g\) can be calculated by weighting scores on multiple cognitive ability tests. To simplify its measurement, Raven mentioned in his personal notes that he intended to develop "a series of overlapping homogeneous problems whose solutions required different abilities" (Carpenter et al., 1990). In particular, these items are homogeneous in the types of perceptual stimulus and abstract relations, but their difficulty varies over a wide range. If these homogeneous items are arranged evenly in increasing order of difficulty, together they form a ruler of intelligence. That is, a subject is less likely to be able to solve an item if she cannot solve the items before it. As the test is administered to more and more people and more data are collected, the item difficulty is determined more accurately (relative to people's ability to solve it, through psychometric procedures). The outcome of this multi-ability, homogeneous, increasing-difficulty design is that we can measure the latent variable \(g\) with a single test of a single format. Intuitively, the RPM tests make the \(g\) factor directly measurable and the scores more interpretable, just as we use a tape measure to measure height and a thermometer to measure temperature.
RPM is a family of progressive matrices tests that includes three main tests--Standard Progressive Matrices (SPM), Coloured Progressive Matrices (CPM), and Advanced Progressive Matrices (APM)--and each test has multiple versions consisting of different items. The first RPM test is the SPM test published in 1938 (Raven, 1941), which is the ancestor of all the following RPM tests. Including the first version, all the SPM tests are composed of 60 items, which are organized into 5 sets (A, B, C, D, and E) according to their difficulty. The item difficulty increases within each set and from Set A through Set E. Meanwhile, each set has a distinct theme manifested by the perceptual stimuli and conceptual relations of the items in that set.
To spread the scores and achieve better precision at the lower and upper ends of the ability range, the first versions of CPM and APM were developed and published in 1947. CPM reused Sets A and B of the 1938 SPM and placed a transitional set of 12 items--Set Ab--between them. The items in this set were constructed to be intermediate in difficulty between Set A and Set B. Thus, CPM has 36 items organized into three sets. As the name indicates, CPM is printed in color to appear more interesting, as it is often administered to children under 11. CPM can also be administered to people with intellectual disabilities, the elderly, and people with brain injury. Unlike SPM and APM, CPM was published in two forms--the book form (i.e., a paper-and-pencil test) and the board form. In the board form, each item is a board with a part removed and movable pieces as answer choices to complete the board. The board form has been shown to be equivalent to the book form, tapping the same cognitive processes. Moreover, the board form has a practical advantage: it can be administered without verbal instruction, because the administrator can demonstrate the expected response by manipulating the board and answer pieces. This is important for people who are deaf or unable to communicate for other reasons.
The APM was originally drafted in 1943 for use by the British War Office Selection Boards, who needed a more difficult RPM test that could provide better discrimination at higher ability levels than SPM. The APM test was published in 1947 and consists of two sets--Set I and Set II. Set I comprises 12 items covering all themes, sampled from the full SPM test. In practice, Set I can be used to familiarize people with the test, to sort people into the "dull" 10%, "average" 80%, and "bright" 10%, and to decide whether SPM or Set II should be used next. The 1947 Set II consisted of 48 items, which resembled the items in Sets C, D, and E of SPM in presentation and argument. In 1962, 12 items making no contribution to the score distribution were dropped from Set II and the remaining 36 items were rearranged.
In recent decades, there has been a significant and steady increase in many intelligence test scores, including SPM scores. Among all RPM tests, SPM is designed to cover the widest ability range, but this increase has made SPM less discriminative at the upper levels of the ability range (i.e., a ceiling effect). In 1998, a new SPM test--SPM plus--was published to restore its discriminative power at the upper levels while keeping its discriminative power at the lower levels unchanged. In particular, SPM plus includes all the items in Sets A and B of SPM and replaces moderately difficult items in Sets C, D, and E with more difficult ones.
As a result of its simple, self-evident format, its insensitivity to culture and language, and its centrality among intelligence tests, RPM has been the most widely studied single-format intelligence test and has large amounts of testing data available for research. This, however, raises the concern that the test has become too well known and that participants could be coached to solve the items or could memorize the answers. This is problematic when important decisions (such as educational opportunities and job recruitment) are made based on the test results. Therefore, parallel versions of CPM and SPM were developed in 1998. These versions are designed to be parallel to the classic tests on an item-to-item and overall-score basis, so that existing data on the classic SPM and CPM can be used to analyze data from the parallel versions.
The administration procedure of RPM tests is relatively flexible compared to other intelligence tests. RPM tests can be administered both individually and in groups. In an individual test, one administrator guides one participant through the test. In a group test, one administrator proctors the participants as in an ordinary school exam. Individual tests introduce emotional factors that are not present in group testing or self-administration, and thus the scores are slightly lower than in group tests, in which participants work on their own. But individual tests allow the administrator to make sure the participant understands what to do and to observe the participant to collect more data, such as whether the participant uses a trial-and-error strategy. Thus, individual testing is recommended when important decisions are to be made based on the test result. In both group and individual tests, instructions can be given verbally or using gestures such as pointing, nodding, and shaking the head. In most cases, RPM tests are given in an untimed manner or with sufficient time to attempt every item, since statistical evidence shows that the validity of scores is reduced when the test is timed. Moreover, it has been argued that RPM is neither a speed test, nor a power test, nor a combination of the two. One exception is that, after familiarization with Set I of APM, Set II was administered with a time limit to measure the speed of intellectual work.
To sum up, RPM is a large family of tests, including SPM, parallel SPM, SPM plus, CPM (in two forms), parallel CPM, and APM. All the RPM tests used today have gone through many revisions as more and more data have been collected in different countries and from different groups of people. There also exist different procedures for administering the tests, which yield qualitatively different results. When studying RPM in the context of artificial intelligence, it is important to state which RPM test is used and how it is administered.
### What Does RPM Measure, Exactly?
At the beginning of this section, we tried to answer the question "what does RPM measure" from a theoretical perspective. In short, RPM measures eductive ability, which is a component of general intelligence (i.e., the \(g\) factor or general cognitive ability), and can thus be used as an index of general intelligence. However, this answer is still too abstract and does not touch on the concrete items in RPM tests. Indeed, the answer at the beginning could apply to almost every test of eductive ability, fluid intelligence, or general intelligence. To tell the whole story of RPM, we further reify the answer by inspecting
the concrete items and the administration procedures.
We indicated in the previous subsection that the test design is the outcome of an iterative process, in which the revised tests are repeatedly administered to people so that data can be collected to further revise the test. Since RPM is also a theory-motivated test, the test design is also determined by the theory of intelligence and how it is implemented in the test. We take SPM as an example. To protect the secrecy of RPM tests, we created several new items (Figure 2) that simulate the item series in SPM. As mentioned, there are five sets in SPM (Sets A, B, C, D, and E). The eight items in Figure 2 simulate the way the item design varies from the first item of Set A to the last item of Set E. At the beginning of Set A, a participant will see an item similar to the one in Figure 2(a). The role of this item is to convey the very basic idea of the test. This item is a good starting point in that no prior knowledge is required to solve it and its solution is self-evident to almost every participant. In the standard administration procedure, this item is used as a teaching trial. The administrator explicitly tells the participant (possibly in a nonverbal way) that "only one of the answer choices can complete the pattern correctly" and which one is correct for this item.

Figure 2: Example SPM item series.
Note that in every administration procedure in the manuals of RPM tests (individual or group, verbal or nonverbal), the administrator only tells the participant which answer choice is correct, but never explains why it is correct or the thinking process used to solve it. This point is extremely important for the testing to be valid. The teaching trials are meant to help the participant with the format of the test (i.e., one needs to select an answer to complete the pattern), but not with the content of the test (i.e., what the pattern is and how it is completed). The content part is exactly what the test measures--eductive ability. An even stronger but related argument (Raven, 2008) is that it is not correct to describe RPM items as "problems to solve". The instruction that an answer has to be selected does not mean that the item is a problem. Instead, only when the participant has made some meaning out of the item can she see it as a problem to solve. This meaning-making part is the core of RPM items, and it is what measures eductive ability.
After the teaching-trial items, the participant will see an item similar to the one in Figure 2(b). This item plays an important transitional role that shifts the participant's attention from the test format to the test content. In particular, this item explicitly exhibits the nature of the test content--relational reasoning. That is, to solve the following items, the participant needs to consider the relations between the objects rather than, for example, repeating the raw perceptual input as in the teaching trials. In addition, the transitional role also lies in the appearance of the items: neither the teaching-trial items nor the transitional items are presented as matrices, but the transitional items are one step closer to the matrix structure of the following items (see Figure 2(c) through 2(h)), because the relations in the transitional items hold in both the horizontal and vertical directions. These transitional items are necessary because they ensure that the participant gives valid responses to the following items based on the understanding accumulated from the previous ones.
After the transitional items, the test moves on to 2\(\times\)2 items like the ones in Figure 2(c) and 2(d), in which geometric objects are separated into disconnected matrix entries. These 2\(\times\)2 matrices start with ones that rely more on low-level perceptual processing (Figure 2(c)) and are relatively easy. After the participant is familiar with the 2\(\times\)2 matrix format, the test gradually moves on to items that involve more abstract relations (Figure 2(d)) and are thus more difficult than the perceptual items.
The four items in Figure 2(a) through 2(d) represent the test design in the first two sets of SPM. The following three sets follow the same logic--each item is like a rung of a ladder that makes it possible for the participant to step onto the next rung, and the maximum height the participant can reach depends on her strength for climbing the ladder. As with a real ladder, a rung cannot be too far from the previous one. For example, the participant will find an item similar to the one in Figure 2(e), which is used to introduce the 3\(\times\)3 structure. This item differs from some items in Sets A and B only in matrix size, while the underlying perceptual processing remains the same. After the participant becomes familiar with the 3\(\times\)3 structure, SPM moves on, as in Sets A and B, from perceptual items to items that involve more abstract relational concepts, such as number (Figure 2(f)), binary logical operations (Figure 2(g)), and ternary permutation (Figure 2(h)). Moreover, the number of relations per item also gradually increases in the last three sets of SPM. For example, the items in Figure 2(e), 2(f), and 2(g) each contain only one relation, while the item in Figure 2(h) contains two relations--a permutation of object shape and a permutation of filling texture.
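To make an abstract relational concept such as ternary permutation concrete, the following minimal sketch encodes it symbolically; the shapes, the matrix, and the helper function are our own illustrative choices, not items or code from any RPM test.

```python
# A toy symbolic encoding of a ternary permutation (Latin-square) rule: every
# row of a 3x3 matrix contains each shape exactly once, so the missing entry
# is the shape absent from its row.

SHAPES = {"circle", "square", "triangle"}

def complete_permutation(row):
    """Return the shape missing from a row that should be a permutation of SHAPES."""
    missing = SHAPES - set(row)
    assert len(missing) == 1, "the row must already contain two distinct shapes"
    return missing.pop()

matrix = [
    ["circle",   "square",   "triangle"],
    ["square",   "triangle", "circle"],
    ["triangle", "circle",   None],       # bottom-right entry is to be completed
]

print(complete_permutation([x for x in matrix[2] if x is not None]))  # -> "square"
```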
The example series in Figure 2 epitomizes the design of SPM. Through this example, we can see that the motivation behind the test design is to provide an ability ladder for the participant to climb. The rungs/items are distributed evenly so that the ladder is climbable. Furthermore, the ladder is climbable for people at every ability level, since it starts from the "ground"--i.e., the initial trivial items requiring no prior knowledge--and guides the participant in the expected direction through conceptually connected items. Once the "field of thought" is established, how far the participant can go depends on her ability in this field.
In a sense, SPM is different from the problem-solving tests that everyone has taken at school. Instead, SPM is a miniature that simulates a collection of all tests from elementary level to college level, because one needs to graduate from every level sequentially. Although the durations of these two types of testing are vastly different, both of them measure the learning potential of the participant. Note that the word "potential" here is more suitable than "ability" because "potential" denotes a latent quality that develops under the influence of environmental factors. Since environmental factors can be better controlled in intelligence tests than in the education system, SPM is probably a better measure of learning potential. Moreover, potential is more than ability, since the desire to learn and the courage to tackle new problems are also part of potential.
In general, RPM is much more than problem solving. Even the word "test" is misleading because of our stereotypical impression of tests. RPM tests are a system for evaluating eductive ability by measuring learning potential. However, the common practice in many AI studies of using RPM or RPM-like tests as purely problem-solving tests and making extravagant claims about the corresponding abilities of AI systems has been a serious misuse of these tests.
## 3 RPM-Like Tasks
In this section, we extend our discussion to the entire problem domain represented by RPM, which includes RPM-like tasks that inherit the basic elements of the original RPM tests and implement them in richer ways. Such RPM-like items can be found in almost every modern intelligence test. In contrast to the theoretical analysis in the last section, we take a more pragmatic approach in this section to describe these tasks. In particular, we surveyed four intelligence tests3 that are widely used in clinical settings and/or frequently related to RPM in the literature--Cattell's Culture Fair Intelligence Test (CFIT), the Cognitive Assessment System-Second Edition (CAS2), the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV), and the Leiter International Performance Scale-Revised (Leiter-R). Through this survey, we summarize five tasks in the problem domain--matrix reasoning, figure series, analogy making, contrastive classification, and open classification. In addition, we survey methods for algorithmically generating matrix reasoning items, which are a prerequisite for the discussion, in the following sections, of data-driven AI models for solving RPM-like tasks. As we have mentioned, the items in intelligence tests are mostly handcrafted and thus limited in number, far below the needs of current data-driven models. This section provides an overview of existing RPM-like items and suggestions for algorithmically creating new RPM-like datasets for different research purposes.
### RPM-like Tasks in Intelligence Tests
Although the theories of intelligence behind the four intelligence tests are different, the RPM-like tasks in these tests are consistent to some degree in terms of what is measured. For example,
* the RPM-like tasks in CFIT measures the general cognitive ability, i.e., the \(g\) factor, and stresses that the \(g\) factor "reaches its purest expression, i.e., high \(g\) loading, whenever complex relationships have to be perceived" (Cattell, 1950);
* the RPM-like tasks in CAS2 measures the simultaneous processing ability in the PASS theory of intelligence (Das et al., 1994), i.e., the ability to "integrates stimuli into (conceptually) interrelated groups or a whole" (Naglieri et al., 2014);
* the RPM-like tasks in WAIS-IV "involves fluid intelligence, broad visual intelligence, classification and spatial ability, knowledge of part-whole relationships, simultaneous processing, and perceptual organization" (Wechsler et al., 2008);
* the RPM-like tasks in Leiter-R measure "fluid reasoning, deductive and inductive reasoning, and the ability to perceive fragments as a whole, generate rules out of partial information, perceive sequential patterns, and form new concepts" (Roid and Miller, 1997).
From these descriptions, we can see that the RPM-like tasks in these tests all more or less involve measuring eductive ability or fluid intelligence. Given this internal connection between RPM-like tasks, it is unsurprising to see common elements shared between them. As for perceptual elements, to distinguish eductive ability (or fluid intelligence) from reproductive ability (or crystallized intelligence), the elements must not be unique to particular cultural groups. There are not many choices satisfying this requirement--for example, elements from nature like the sun and moon, parts of the human body (hands and feet), and common shapes and colors. Similarly, common conceptual elements, such as symmetry, topological relations, and number concepts, are also frequently used to create RPM-like items. It is by now very hard for test developers to design novel elements for RPM-like items, because most of the appropriate elements have already been used in intelligence tests. If one comes up with novel perceptual and conceptual elements that can be used in RPM-like tasks, it will be a great contribution to intelligence test development. Exploration of proper perceptual and conceptual elements for RPM-like tasks is also helpful for building and evaluating AI systems working in this problem domain.
In addition to perceptual and conceptual elements, there are different formats in which to present these elements. According to these formats, we classify the RPM-like tasks in the four surveyed intelligence tests into five groups--matrix reasoning, figure series, analogy making, contrastive classification, and open classification. These formats are just as interesting as the perceptual and conceptual elements, as each format is a delicate way to present the same set of elements so that they are instantly perceived as a problem to be solved, but not a trivial one.
#### 3.1.1 Matrix Reasoning
Since the four tests are battery-type tests, they all have multiple subtests, including the RPM-like subtests. Therefore, to keep the whole test at a reasonable length, the RPM-like subtests are briefer than the original RPM tests. In particular, these RPM-like subtests do not necessarily implement the "ladder" design mentioned in Section 2.3, which is an important feature of the original RPM tests. Nevertheless, three of the four surveyed tests contain subtests that replicate the matrix format of the original RPM: Test 3 of Scales 2 and 3 of CFIT, Matrices of CAS2, and Matrix Reasoning of WAIS-IV. To distinguish them from the other RPM-like tasks discussed in later sections, we refer to them as matrix reasoning. Figure 3 summarizes the matrix reasoning task in a diagram.
As shown in Figure 3, a matrix reasoning item has two parts--the context of the multi-choice problem (Part A) and the answer choices (Part B). Part A provides the contextual information through a background and a matrix as foreground. Examples of backgrounds can be found in the items of Figure 2(a) and 2(b). The matrix varies in size from 1\(\times\)1 to 4\(\times\)4 in most tests and has at least one missing entry. To increase the difficulty, some entries can be intentionally hidden but need not be completed. As indicated by the Configuration in Figure 3, the locations and numbers of these two types of entries can also be customized for each item. Part B consists of 5 to 8 answer choices in most tests. The reason we separate the answer choices from the context is not only that their functions are different, but also that where the answer choices are located relative to the context influences the distribution of chosen answers, according to human experimental data. Therefore, this is a design choice that needs to be considered in test development. This is also a noteworthy point when evaluating AI on RPM-like tasks: it requires further investigation whether AI systems behave differently when answer choices are located differently relative to the context and to each other.

Figure 3: A diagrammatic summary of Matrix Reasoning Task
Although the matrix reasoning tasks replicate the format of the original RPM (with slight modifications such as hidden entries and different locations of missing entries), their content is more diverse than that of the original RPM. For example, the difficulty of the original RPM lies mainly in extracting conceptual relations, and the requirement for perceptual processing is relatively low; but, owing to different underlying theories of intelligence, some RPM-like items are designed to load more heavily on perceptual processing abilities--for example, mentally rotating complex 3D objects--with the abstract conceptual relations built on top of such demanding perceptual processing.
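To fix ideas, the following minimal sketch encodes the two-part structure just described as a data structure; the field names and the example item are our own illustrative choices and do not come from any of the surveyed tests.

```python
# A toy encoding of a matrix reasoning item: a context matrix (Part A) with
# missing entries (None) and optionally hidden entries, plus answer choices
# (Part B), one of which is correct.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MatrixReasoningItem:
    matrix: List[List[Optional[str]]]                             # Part A; None marks a missing entry
    hidden: List[Tuple[int, int]] = field(default_factory=list)   # hidden entries that need not be completed
    choices: List[str] = field(default_factory=list)              # Part B; 5 to 8 choices in most tests
    answer_index: int = 0                                          # index of the correct choice

    def missing_cells(self) -> List[Tuple[int, int]]:
        """Cells that the solver must actually complete."""
        return [(r, c) for r, row in enumerate(self.matrix)
                for c, v in enumerate(row)
                if v is None and (r, c) not in self.hidden]

item = MatrixReasoningItem(
    matrix=[["small circle", "large circle"],
            ["small square", None]],
    choices=["large square", "small circle", "large triangle", "small square"],
    answer_index=0,
)
print(item.missing_cells())  # -> [(1, 1)]
```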
#### 3.1.2 Figure Series
Essentially, what makes RPM items meaningful testing questions is the relations between figures and how these relations are arranged in the 2D structure of matrices. There is no particular reason for using the matrix structure. That is, as long as the spatial structure makes sense for the relations, one can use any suitable spatial structure (one could use a circular structure if the relation proceeds and comes back to itself, like modulo addition \(+1\:mod\:N\) and the circle of musical keys). Thus, it is not surprising to see a more fundamental structure--the series--used in RPM-like tasks, such as Test 1 of Scales 2 and 3 of CFIT, Sequential Order and Repeated Pattern of Leiter-R, and part of Matrix Reasoning of WAIS-IV. We refer to RPM-like items of this structure as figure series. A diagrammatic summary of figure series items is given in Figure 4.
Figure series items have several characteristics that distinguish them from other RPM-like tasks. First, the series structure dictates that one or more relations repeat themselves along the series. Note that a relation is not necessarily binary; it could involve more than two consecutive entries in the series. Second, to provide sufficient contextual information, figure series are usually longer than a row or column of a matrix reasoning item. Third, there can be one or more missing entries in the series, and a missing entry is not necessarily the last one.
Figure series could also be considered a special case of matrix reasoning obtained by restricting the dimensions of the matrix, but it is conceptually different from matrix reasoning. In matrix reasoning, there can be multiple distinct relations along the rows and columns of the matrix, and in most cases the row relations are different from the column ones. One needs to figure out the relations in both the row and column directions and assemble them to uniquely determine the answer. In figure series, multiple relations repeat themselves in a single direction.
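As an illustration of relations repeating along a single direction, here is a minimal sketch under a purely symbolic encoding of the stimuli; the orientation attribute and the constant rotation step are invented for this example.

```python
# A toy figure series: the same relation ("rotate by +45 degrees") repeats
# along the series, and a missing (not necessarily final) entry can be
# recovered by re-applying the relation inferred from the visible entries.

def rotation_series(start_deg, step_deg, length):
    """Generate a series of orientations by repeatedly applying one relation."""
    return [(start_deg + i * step_deg) % 360 for i in range(length)]

series = rotation_series(start_deg=0, step_deg=45, length=6)   # [0, 45, 90, 135, 180, 225]
observed = series[:3] + [None] + series[4:]                    # entry at index 3 is missing

step = observed[1] - observed[0]                               # infer the repeating relation
recovered = (observed[2] + step) % 360
print(recovered == series[3])  # -> True
```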
#### 3.1.3 Analogy Making
Figure 4: A diagrammatic summary of figure series task

Besides modifying the format of the original RPM (as in figure series), its context can also be viewed from different angles. One important view comes from a key human cognitive ability--analogy making. That is, by viewing the matrix entries as analogs, analogies can be drawn between rows, between columns, or between diagonal lines. The correct answer is then the one that makes the best analogies out of the matrix. Therefore, the nonverbal analogy-making task can be considered a close relative of RPM. A classic example of this task is the geometric analogy problems (see the images in Lovett et al. (2009)) published in the 1942 edition of the Psychological Test for College Freshmen of the American Council on Education. Such analogy-making items can also be found in the intelligence tests we surveyed, such as Design Analogy of Leiter-R and part of Matrix Reasoning of WAIS-IV. A diagrammatic summary of this task is given in Figure 5.
In the analogy-making task, the context is explicitly separated into two parts, Part A and Part A' in Figure 5, which are composed of analogs from two different domains. Part A and Part A' correspond to the base and target domains of a general analogy-making situation, where the base domain is usually a familiar one and the target domain is an unfamiliar one to be understood through knowledge of the base domain. The analogy-making task simulates this situation by arranging the analogs in Part A and Part A' in the same way and removing one or more analogs from Part A'. Note that, although the analogs in Figure 5 are listed in series, this does not mean that the same relations repeat themselves along the series as in figure series. The analogs can be arranged in any spatial layout as long as the layout makes sense for the relations between analogs. Since the analogs usually form two series in most intelligence tests, the analogy-making task resembles the figure series task, but the two tasks are conceptually different and require different cognitive abilities. The analogy-making task is also conceptually different from the matrix reasoning task, even when we artificially separate the rows or columns of a matrix into two parts. This is because, to make an "interesting" analogy, the base and target domains must be perceptually distant from each other and higher-order relations must be extracted from both domains. In matrix reasoning, this means that the rows (or columns) must be sufficiently perceptually different. These conditions are not always satisfied in matrix reasoning items, especially when there exist relations in both the horizontal and vertical directions.

Figure 5: A diagrammatic summary of analogy making task
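The following minimal sketch, under a toy numeric encoding of the analogs (our own assumption), illustrates the base/target structure: a relation is induced from the fully specified base series and re-applied in the target series to select the answer.

```python
# Toy analogy making: the relation (a constant additive change in a numeric
# attribute, e.g., the number of dots per entry) is extracted from the base
# series (Part A) and transferred to the target series (Part A').

def extract_relation(base):
    """Infer a single constant additive change from the base series."""
    diffs = {b - a for a, b in zip(base, base[1:])}
    assert len(diffs) == 1, "the base series must follow one constant relation"
    return diffs.pop()

def solve_analogy(base, target_prefix, choices):
    delta = extract_relation(base)          # relation learned in the base domain
    expected = target_prefix[-1] + delta    # re-applied in the target domain
    return choices.index(expected)

# Base: 1, 3, 5 dots (+2 each step); target starts with 2, 4, so the answer is 6.
print(solve_analogy(base=[1, 3, 5], target_prefix=[2, 4], choices=[5, 6, 8]))  # -> 1
```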
#### 3.1.4 Contrastive Classification
Classification has long been used to probe human and artificial intelligence. It requires the participant to extract abstract concepts such that the given stimuli can be classified under them. When the stimuli are like those in RPM, classification can be regarded as an RPM-like task, as both require reasoning about the relations between multiple visual stimuli. In intelligence tests, classification tasks can be presented in a contrastive manner. That is, two groups of stimuli are presented, and the two groups represent two contrastive but related concepts--for example, large-small, concave-convex, and high-low. Note that contrastive classification is not limited to antonym pairs, for it also uses concept pairs like pentagon-hexagon and more arbitrary concepts like topological structures. The advantage of being contrastive is obvious: it allows the use of complex and diverse concepts (rather than simple concepts describing perceptual attributes), which makes the test intellectually interesting to participants; meanwhile, the complex and diverse concepts do not make the item too open-ended, as the concept is uniquely determined by a single difference between the two groups.
Figure 6: A diagrammatic summary of contrastive classification task
The most representative contrastive classification task is the Bongard Problems, which require the participant to verbally describe the conceptual difference between the two groups. In most intelligence tests, contrastive classification appears as multi-choice problems, in which an answer choice is selected as a member of a conceptual group, i.e., identifying instances of the concepts drawn from the two groups. Contrastive classification is usually presented in two manners--explicit and implicit. In explicit items (Figure 6a), the two stimulus groups are explicitly separated, as in the Bongard Problems and Test 2 of Scale 1 of CFIT. Explicit contrastive classification tasks are also used to evaluate AI systems, for example, the SVRT and PSVRT datasets (Stabinger et al., 2021). In implicit contrastive classification (Figure 6b), the stimuli from the two conceptual groups are mixed together and the participant needs to separate them into two groups, as in the famous Odd-One(s)-Out tests and Test 2 of Scales 2 and 3 of CFIT. Note that, in contrastive classification tasks, the spatial layout of the stimuli is less important than in matrix reasoning and figure series. The only requirement is that, in explicit contrastive classification, group membership is clearly indicated.
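As a minimal sketch of the "uniquely determined by a single difference" idea, the following toy attribute-based encoding (our own, not the Bongard Problems) finds the discriminating attribute of an explicit contrastive item and classifies a candidate accordingly.

```python
# Toy explicit contrastive classification: the discriminating concept is the
# single attribute whose values are disjoint between the two groups; a
# candidate stimulus is then classified by that attribute alone.

def discriminating_attribute(group_a, group_b):
    """Return the attribute that takes disjoint value sets in the two groups."""
    for attr in group_a[0]:
        values_a = {x[attr] for x in group_a}
        values_b = {x[attr] for x in group_b}
        if not values_a & values_b:
            return attr
    return None

group_a = [{"shape": "pentagon", "fill": "solid"}, {"shape": "pentagon", "fill": "hollow"}]
group_b = [{"shape": "hexagon",  "fill": "solid"}, {"shape": "hexagon",  "fill": "hollow"}]

attr = discriminating_attribute(group_a, group_b)          # -> "shape"
candidate = {"shape": "hexagon", "fill": "solid"}
group = "B" if candidate[attr] in {x[attr] for x in group_b} else "A"
print(attr, group)  # -> shape B
```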
#### 3.1.5 Open Classification
Classification is naturally not contrastive in our daily cognitive activities. That is, the object to classify is not always accompanied by instances of a contrasting concept. Instead of being contrastive, the real-life setting of classification is based more on perceptual and conceptual similarity. Thus, we refer to this variant as open classification. In particular, the concepts involved in an open classification item can be completely unrelated, and there could even be only a single concept. For example, in the verbal similarity subtest of WAIS-IV, one would see an item like "in what way are dolphins and elephants alike?"4. A possible answer is that they are both animals, and a better answer is that they are both mammals. Different answers are scored differently: the more specific the answer, the higher the score. As shown by this example, verbal open classification items require a certain amount of prior knowledge to be intellectually interesting. When open classification is in nonverbal form, it can be considered an RPM-like task. In the intelligence tests we surveyed, examples of nonverbal open classification include Test 4 of Scales 2 and 3 of CFIT and the Classification subtest of Leiter-R.
Similar to contrastive classification, open classification can be presented in explicit or implicit form, as summarized in Figure 7. Explicit open classification (Figure 7(a)) consists of two parts. Part A provides instances of multiple concepts (not necessarily contrastive or even related), with each instance representing a distinct concept. Part B consists of instances to be classified into the concepts of Part A by matching them to the instances in Part A. Implicit open classification (Figure 7(b)) is similar to the verbal open classification example, except that the dolphins and elephants are replaced by nonverbal stimuli; the response format and scoring are also similar to the dolphin-elephant example.
Figure 7: A diagrammatic summary of open classification task
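A minimal sketch of explicit open classification under a toy attribute encoding (the exemplars, attributes, and overlap score below are all our own illustrative assumptions): each Part B instance is matched to the Part A exemplar with which it shares the most attribute values.

```python
# Toy explicit open classification: assign each instance to the concept
# exemplar with the highest attribute overlap.

def match_to_concepts(exemplars, instances):
    def overlap(a, b):
        return sum(a[k] == b.get(k) for k in a)
    return [max(range(len(exemplars)), key=lambda i: overlap(exemplars[i], x))
            for x in instances]

exemplars = [{"shape": "circle", "size": "large", "fill": "solid"},    # concept 1
             {"shape": "square", "size": "small", "fill": "hollow"}]   # concept 2
instances = [{"shape": "circle", "size": "large", "fill": "hollow"},
             {"shape": "square", "size": "small", "fill": "solid"}]

print(match_to_concepts(exemplars, instances))  # -> [0, 1]
```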
#### 3.1.6 Summary
The five categories of RPM-like tasks that we summarized from the intelligence tests are by no means comprehensive. Their purpose is to expand our attention to the entire problem domain represented by RPM, so that AI research stays closer to the nature of the problem domain rather than focusing only on solving the original RPM or specific tests. The problem domain is far larger and more diverse than the approximately 100 original RPM items; it extends to all visual stimuli, and relations among them, that are appropriate for testing people with certain prior knowledge and experience.
In intelligence test item writing, good "taste" is extremely important. First, a good item has to make it straightforward for the participant to realize that the item is a problem to solve. This point seems to say nothing, because any intelligence test item is a problem to solve; but the word "problem" here should not be understood literally. In particular, the item is a problem to solve not because the administrator tells the participant so, or because the participant knows that a test is composed of problems. Instead, the participant should realize this by observing the item and forming a conjecture that there are underlying patterns. This conjecture is more of a feeling than a complete understanding of the solution or patterns; it is based on a rough idea of what should be attended to in order to solve the item. The characteristic of giving the participant this feeling is important because it makes the item intellectually interesting and attractive, and the participant is thus motivated to solve it. Without this characteristic, the participant may give invalid responses, for example, random responses without thinking.
The second point in item writing is that the scope of the item content should allow a large range of difficulty. Specifically, it should allow creating rather difficult items to test highly intelligent individuals. This point in itself is not an issue, because there exists a huge number of sophisticated abstract relations and patterns if one delves into any specific field. But, when combined with the first point--being straightforward as a problem--it poses a great challenge, because the two points contradict each other in many cases. A master of item writing is one who can reconcile these two points and achieve a combined effect: when the participant sees the item, she immediately understands in what way it is a problem to solve and invests effective intellectual effort in solving it; and when a correct answer is reached, it comes as an aha moment in which the participant strongly believes that the problem is solved. In this sense, the five categories of RPM-like items mentioned above are masterpieces of item writing. But this does not mean that the problem domain is limited to these categories, and more effort is needed to further explore it.
### Algorithmic Item Generation of Matrix Reasoning
Algorithmic Item Generation (AIG) refers to approaches that use computer algorithms to automatically create testing items. AIG was initially introduced to address the increased demand for testing items in special testing settings:
* Large-scale testing, for example, repeated tests in academic settings and longitudinal experiments, where many parallel forms are needed due to the retest effect.
* Adaptive testing, in which the next items are determined by the responses to previous items, which is a more efficient and reliable testing form, but also requires larger item banks.
* Computer-based and internet-based testing, which makes standardized tests more accessible to the public and brings the exposure control issue to a new level.
For AIG to work, test developers must have a deep understanding of what is measured and the corresponding problem domain, from which items are generated. In addition, test developers also need to examine the testing properties of generated items, such as validity and reliability, as they are examined in handcrafted tests. AIG has been studied and used in different areas, such as
psychometrics, cognitive science, and education. It can be applied to a wide range of testing items, from domain-general tests, such as human IQ tests, to domain-specific tests, such as medical licensing tests (Gierl et al., 2012).
As RPM-like tasks are used more and more in human intelligence testing and AI testing, the demand for RPM-like items has been increasing rapidly. In particular, since data-driven AI systems were applied to RPM-like tasks, the scale of this demand has changed from hundreds to millions, which is impossible for human item writers to satisfy. Thus, AIG of RPM-like items has been receiving more and more attention. However, AIG of RPM-like items has been studied separately in different research fields. In this subsection, we aggregate these works from different fields and systematically explore how AIG of RPM-like items works in both human intelligence testing and AI testing. To allow a thorough discussion of technical details and theoretical implications, we focus on the matrix reasoning task, which is the most widely studied RPM-like task in both human intelligence and AI. In the rest of this subsection, we first review the AIG works on matrix reasoning for human testing and then switch to those for AI testing.
#### 3.2.1 Algorithmically Generating Matrix Reasoning Items for Human Intelligence Testing
Human intelligence tests consist of items that are carefully handcrafted by strictly following the procedures of psychometrics and theories of human intelligence. In particular, handcrafted items must go through iterations of evaluation and calibration for good psychometric properties before being included in the final item bank. The attrition rate can be up to 50% (Embretson, 2004). A variety of AIG efforts have been made to free item writers from this onerous work. In the following, we discuss typical AIG works on matrix reasoning for human intelligence testing. The title of each reviewed work is followed by a keyword capturing its most outstanding characteristic. The technical details of the works are summarized in Table 1.
generation" is more often "automatic item generation" in literature. The word "automatic" alludes to the usage of computer. But the algorithms and the theories of what to measure that support the algorithms are the very essence of AIG, rather than the computer. As it will be shown in this first reviewed work, computer is not necessary. Hornke and Habon (1986) conducted one of the earliest studies, if not the earliest, on AIG of matrix reasoning items. They created a procedure for item generation, hired university students to manually execute the procedure, and created 648 3\(\times\)3 items. Each step in this procedure has finite clearly defined options so that the student can choose between them randomly. Although the diversity and complexity of these items are not comparable to ones handcrafted by human experts, no one had ever "automatically" created so many items before Hornke and Habon (1986).
Hornke and Habon considered the item-writing task as the reverse of solving, which can be decomposed into three types of cognitive operations addressing three independent dimensions of the solving process. To generate items, they designed a procedure that sequentially makes choices along these three dimensions by selecting from finite sets of options:
* Variation rules of geometric elements: eight options are provided (see the first 8 matrices in Figure 8 for examples)--identity, addition, subtraction, intersection, exclusive union (or symmetric difference), progression, variation of open/closed gestalts (i.e. permutation of three hollow/solid shapes).
* Analogical directions: a variation rule proceeds in row or column direction.
* Perceptual organizations: this dimension addresses how multiple variation rules are combined into a stimulus in a matrix entry. Three options are provided (see the last 3 matrices in Figure 8 for examples): separation, integration, and embedding. Separation means that separate geometric elements are used for different variation rules; integration means that different attributes of a single geometric element are used for different variation rules; and embedding means that different parts of a single geometric element are used for different variation rules.

Figure 8: Example items created by following Hornke and Habon’s AIG procedure.
In their experiment, the hired students were given a set of geometric shapes (e.g., differently sized squares and triangles) and instructed to create items by jointly sampling the three dimensions and the geometric shapes from the given set. The students were told to combine at most two variation rules per item, so the resulting item bank contained only 1-rule and 2-rule items. Human experiments on this item bank showed that the cognitive operations corresponding to these three dimensions explained approximately 40% of the variance in item difficulty. As for the unexplained 60%, other early studies (Mulholland et al., 1980) indicated that the numbers of elements and rules were also major sources of difficulty. Although this "human-based" AIG work looks a bit primitive given the computational power available today, the way it decomposes the generation process has had a long-lasting influence on subsequent works.
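To make the procedure concrete, here is a minimal sketch of a Hornke-and-Habon-style item-specification sampler; the option lists and the output format are our own rendering of the three dimensions (reading "open/closed gestalts" as two rule options), not code from the original study, and actual rendering of item images is omitted.

```python
# Sample an abstract 3x3 matrix-reasoning item specification by choosing, for
# each rule, a variation rule, an analogical direction, and a geometric
# element, plus one perceptual organization for the whole item.

import random

VARIATION_RULES = ["identity", "addition", "subtraction", "intersection",
                   "exclusive union", "progression",
                   "variation of open gestalts", "variation of closed gestalts"]
DIRECTIONS = ["row", "column"]
PERCEPTUAL_ORGANIZATIONS = ["separation", "integration", "embedding"]
ELEMENTS = ["square", "triangle", "circle", "line segment"]

def sample_item_spec(max_rules=2, seed=None):
    rng = random.Random(seed)
    n_rules = rng.randint(1, max_rules)    # the students combined at most two rules per item
    return {
        "rules": [{"rule": rng.choice(VARIATION_RULES),
                   "direction": rng.choice(DIRECTIONS),
                   "element": rng.choice(ELEMENTS)} for _ in range(n_rules)],
        "perceptual_organization": rng.choice(PERCEPTUAL_ORGANIZATIONS),
    }

print(sample_item_spec(seed=0))
```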
Cognitive Design System Approach--Combination of Cognitive Modeling and Psychometrics

Embretson (1995, 1998, 2004) introduced the Cognitive Design System Approach. Unlike other AIG works that focus on generating items, this approach focuses on human testing by integrating cognitive modeling and psychometric models and theories (such as IRT models) into a procedure similar to how human experts create and validate intelligence tests. A matrix reasoning item bank was generated as a demonstration.
This approach starts with cognitive modeling, at the information-processing level, of the solving process of an existing cognitive ability test. In the demonstration, Embretson reused the cognitive model proposed by Carpenter et al. (1990), which has also been used in many other AIG works on matrix reasoning. However, Embretson also pointed out that this cognitive model did not include perceptual encoding or decision processes. Thus, Embretson incorporated three extra binary perceptual stimulus features--object overlay, object fusion, and object distortion--into the generation procedure, which represent three different types of mental decomposition of the complete gestalt
into its basic parts. Object overlay and fusion are similar to separation and embedding in Figure 8, while object distortion refers to perceptually altering the shape of the corresponding elements (e.g., bending, twisting, or stretching). A software program--ITEMGEN--was developed based on this approach.
Once the cognitive models are determined, the stimulus features are determined accordingly. The approach then integrates these features into psychometric models to estimate item properties (e.g., item difficulty and item discrimination), formulated as parameterized functions of the stimulus features. The function parameters are initially set by fitting the psychometric models to human data on the existing cognitive ability test. Thereafter, the item properties of newly generated items (produced by manipulating the stimulus features) can be predicted by these functions. The predictions are compared with empirical analyses of the newly generated items to further adjust the parameters. Once the functions are sufficiently predictive, the psychometric model can be integrated into an adaptive testing system to replace a fixed item bank and generate items with the expected properties in real time. To sum up, the Cognitive Design System Approach is more than the construction of an item generator; it also takes into account the psychometric properties of the generated items.
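The core idea of predicting item properties from stimulus features can be sketched as follows; this is not Embretson's ITEMGEN implementation, and the feature names, weights, and the linear form (in the spirit of a linear logistic test model) are invented here purely for illustration.

```python
# Toy item-difficulty prediction: difficulty is a parameterized (here, linear)
# function of stimulus features, so a newly generated item's difficulty can be
# estimated before it is ever administered to people.

FEATURE_WEIGHTS = {            # would be calibrated against human response data
    "n_rules": 0.9,
    "n_elements": 0.3,
    "object_overlay": 0.4,     # binary perceptual features
    "object_fusion": 0.6,
    "object_distortion": 0.8,
}
INTERCEPT = -2.0

def predicted_difficulty(features):
    return INTERCEPT + sum(FEATURE_WEIGHTS[k] * v for k, v in features.items())

new_item = {"n_rules": 2, "n_elements": 3, "object_overlay": 1,
            "object_fusion": 0, "object_distortion": 0}
print(round(predicted_difficulty(new_item), 2))  # -> 1.1
```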
MatrixDeveloper--4-by-4 Matrices

MatrixDeveloper (Hofer, 2004) is an unpublished software program for generating matrix reasoning items. It has been used in a series of psychometric studies of algorithmically generated matrix reasoning items (Freund et al., 2008; Freund and Holling, 2011a,b,c). According to the limited description in these studies, MatrixDeveloper is similar to the Cognitive Design System Approach in terms of variation rules (e.g., the five rules of the cognitive model of Carpenter et al. (1990)) and perceptual organizations (i.e., overlay, fusion, and distortion). The difference is that it generates 4\(\times\)4 matrix items, which are uncommon for the matrix reasoning task. Theoretically, such items can accommodate more variation rules than 3\(\times\)3 or 2\(\times\)2 matrices, so that the differential effects of variation rules can be studied more thoroughly.
GeomGen--Perceptual Organization. The early cognitive models of solving handcrafted matrix reasoning items tend to characterize the items by the numbers of elements and rules and the types of rules, for example, (Mulholland et al., 1980; Bethell-Fox et al., 1984; Carpenter et al., 1990). This characterization is consistent with the firsthand experience of working on the items and with direct measures of human behavior (such as accuracy, response time, verbal protocols, and eye-tracking). In addition, the rationale of this characterization could be explained through the working memory theory of Baddeley and Hitch. However, for creating new items, we need to consider at least one more factor--perceptual organization (Primi, 2001), which describes how geometric elements and rules are perceptually integrated to render the item image. For example, the third dimension in the procedure of Hornke and Habon (1986) is one specific way to deal with perceptual organization. More generally, perceptual organization involves the Gestalt grouping/mapping of elements using Gestalt principles such as proximity, similarity, and continuity. This factor is less clearly defined, and no systematic description of it has ever been proposed; nonetheless, to create new items, one has to adopt some formalized way to manipulate perceptual organization.
Arendasy (2002) and Arendasy and Sommer (2005) proposed a generator program--GeomGen--that adopted a binary perceptual organization, which was reused and extended in many following works. The perceptual organization in GeomGen provides two options--classical view and normal view. In classical view, the appearance of the geometric elements changes while their numbers and positions remain constant across matrix entries. In normal view, the numbers and positions of elements change while their appearance remains constant across the matrix entries. An obvious difference between the two views is how the correspondence between elements from two matrix entries is established. This difference is important because it leads to items that require different cognitive processes at the very first step of correspondence finding, before the rules between matrix entries are considered.
The taxonomy of perceptual organization in GeomGen is only one specific way to define perceptual organization, by no means the unique way. For example, Primi (2001) proposed another important taxonomy--harmonic versus nonharmonic--which, together with the GeomGen taxonomy, forms a more comprehensive description of perceptual organization that is adopted in many subsequent AIG works.
Primi (2001) describes the distinction as follows: "visually harmonic items display perceptual and conceptual combinations that represent congruent relationships between elements, whereas nonharmonic organizations tend to portray competitive or conflicting combinations between visual and conceptual aspects that must be dealt with in reaching a solution." Primi (2001) mentioned that, in the practice of AIG, nonharmonic items could be derived from harmonic ones by manipulating the geometric elements to cause misleading Gestalt groupings, as shown in Figure 9. The correct Gestalt groupings/mappings (i.e., element correspondences) are obvious in harmonic items, whereas nonharmonic items require extra cognitive effort to resolve the conflict between competing Gestalt groupings and mappings.
Figure 9: An example of deriving nonharmonic items from harmonic items.

In summary, the contributions of all the aforementioned factors--the number of elements, the number of rules, the type of rules, and perceptual organization--to item complexity could be explained by their effect on the central executive component of working memory, but the ways they exert their influence differ. The number of elements and rules relates to short-term memory management and goal (or strategy) management, whereas the type of rules and perceptual organization relate to selective encoding and short-term memory management (Primi, 2001). According to the AIG literature on matrix reasoning, the type of rules and perceptual organization are less investigated and might be important for understanding the solving process of matrix reasoning and item difficulty. Several human studies came to the same conclusion (Primi, 2001; Arendasy and Sommer, 2005; Meo et al., 2007), while other researchers might have different opinions on this (Embretson, 1998; Carpenter et al., 1990).
_Sandia Matrix Generation Software--High-Fidelity SPM Generator._ The previous works study AIG mainly from the perspective of cognitive science and psychometrics; fewer details about algorithms and software development were given. In practice, however, we are also interested in how these ideas are implemented and, especially, in the accessibility of the generator software. Matzen et al. (2010) provided a representative example of this that could "recreate" the 3\(\times\)3 SPM with high fidelity--the Sandia Matrix Generation Software.
Matzen et al. (2010) identified two basic types of 3\(\times\)3 items in SPM--element transformation problems and logic problems. An element transformation refers to a progressive variation of a certain attribute of an element. There could be multiple variations in different directions, for example, a color variation in the row direction and a size variation in the column direction. However, in any single direction, only one attribute varies. This is because, on the one hand, it is so in the original SPM, and, on the other, multiple attributes varying in the same direction do not increase the complexity of the problem (to human participants) compared with a single attribute. The attributes considered for transformation problems are shape, shading, orientation, size, and number, each of which takes values from an ordered categorical domain. The logic problems involve operations such as addition/subtraction, conjunction (AND), disjunction (OR), or exclusive disjunction (XOR) of elements. Each generated item is either a transformation item or a logic item, but not both.
In addition, the Sandia Matrix Generator generates answer choices in the manner of the original SPM problems. An incorrect answer choice could be (a) an entry in the matrix, (b) a random transformation of an entry in the matrix, (c) a random transformation of the correct answer, (d) a random transformation of an incorrect answer, (e) a combination of features sampled from the matrix, or (f) a combination of novel features that did not appear in the matrix.
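These distractor strategies can be sketched for entries coded as attribute-value dictionaries. The snippet below is an illustrative approximation of strategies (a) to (e); the attribute names and domains are assumptions, the novel-feature strategy (f) is omitted for brevity, and it is not the actual Sandia implementation.

```python
import random

DOMAINS = {
    "shape": ["triangle", "square", "circle", "star"],
    "shading": [0, 1, 2, 3],
    "size": [1, 2, 3, 4],
}

def random_transform(entry):
    """Return a copy of an entry with one randomly chosen attribute altered."""
    entry = dict(entry)
    attr = random.choice(list(entry))
    entry[attr] = random.choice([v for v in DOMAINS[attr] if v != entry[attr]])
    return entry

def make_distractor(matrix_entries, correct, existing_distractors):
    """Create one incorrect choice using one of the strategies (a)-(e)."""
    strategy = random.choice(["matrix_entry", "transformed_entry",
                              "transformed_correct", "transformed_distractor",
                              "feature_mix"])
    if strategy == "matrix_entry":
        return dict(random.choice(matrix_entries))
    if strategy == "transformed_entry":
        return random_transform(random.choice(matrix_entries))
    if strategy == "transformed_correct":
        return random_transform(correct)
    if strategy == "transformed_distractor" and existing_distractors:
        return random_transform(random.choice(existing_distractors))
    # Feature mix (also the fallback): sample each attribute from the matrix entries.
    return {a: random.choice([e[a] for e in matrix_entries]) for a in DOMAINS}
```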
The item difficulty was studied through an item bank of 840 generated items. The problem set contained problems with 1, 2, or 3 rules (in the row, column, or diagonal direction); note that the original SPM does not contain 3-rule problems. The generated problem set and the original SPM were given to the same group of college students. Experimental data showed that the generated items and the original SPM had very similar item difficulty. In particular, the data further showed that item difficulty was strongly affected by the number of rules, the analogical directions, and the problem type (i.e., transformation problems versus logic problems).
CSP Generator--First-Order Logic Representation. A more important goal for AIG is to give a general formal description of the generating process, rather than to develop ever more specific generator software. Wang and Su (2015) made such an effort to formalize the generating process of matrix reasoning items through first-order logic, and turned AIG into a constraint satisfaction problem (CSP) by formulating the "validity"5 of RPM items as a set of first-order logic propositions.
Footnote 5: Not exactly the same as the definition of validity in psychometrics.
In particular, a variation rule is represented as an instantiation of Equations (1) and (2),
\[\exists\alpha\ \forall i\in\{1,2,3\}\ \exists o_{i1},o_{i2},o_{i3}\ P(\alpha,o_{i1},o_{i2},o_{i3}) \tag{1}\]
\[\begin{split}P(\alpha,o_{i1},o_{i2},o_{i3})=\ &Unary(\tau(o_{i1},\alpha),\tau(o_{i2},\alpha),\tau(o_{i3},\alpha))\ \wedge\\ &Binary(\tau(o_{i1},\alpha),\tau(o_{i2},\alpha),\tau(o_{i3},\alpha))\ \wedge\\ &Ternary(\tau(o_{i1},\alpha),\tau(o_{i2},\alpha),\tau(o_{i3},\alpha))\end{split} \tag{2}\]
where \(\alpha\) is a geometric attribute, \(o_{ij}\) is a geometric element in the entry of Row \(i\) and Column \(j\), \(\tau(o_{ij},\alpha)\) is the value of attribute \(\alpha\) of \(o_{ij}\), and \(P\) is a predicate that describes the variation pattern of attribute \(\alpha\) in each row. In Equation (2), the predicate \(P\) further equals a conjunction of three predicates--\(Unary\), \(Binary\), and \(Ternary\)--representing three categories of relations commonly used in matrix reasoning, as illustrated in Figure 10.
An interesting observation about Figure 10 is that, mathematically, the unary relation is a special case of the binary relation, which is in turn a special case of the ternary relation. That is, the ternary relation is theoretically sufficient to generate all the items. However, interpreting the same variation as a unary, binary, or ternary relation requires different working memory abilities and thus leads to different difficulties. Therefore, these three categories are cognitively different and need to be separately included in a generator program to achieve better control over psychometric properties.
Figure 10: Three categories of relations commonly used in matrix reasoning (Wang and Su, 2015).

Equations (1) and (2) represent only the variation pattern of a single attribute \(\alpha\). There could be multiple variation patterns of different attributes in a matrix, i.e., multiple different instantiations of Equations (1) and (2). Meanwhile, it is also possible that some attributes are not assigned any instantiation of Equations (1) and (2). In this case, they could be given either constant values or random values across matrix entries. Random values may cause distracting effects in the generated items, similar to the nonharmonic perceptual organizations in (Primi, 2001).
To generate an item through Equations (1) and (2), the generator program samples values from finite domains to determine (a) the number of rules (i.e., the number of instantiations of Equations (1) and (2)), (b) the attribute \(\alpha\) for each rule, (c) the values of \(\tau(o_{ij},\alpha)\), and (d) the specific types of the \(Unary\), \(Binary\), and \(Ternary\) relations. The matrix image is rendered from the instantiations of Equations (1) and (2), and each incorrect answer choice is generated by breaking one instantiation (i.e., using values that do not satisfy it).
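A minimal sketch of how such an instantiation can be checked against a matrix helps make the formulation concrete. Here a matrix is a list of rows, each entry is an attribute-value dictionary, and the three relation categories are represented by simple Python predicates; the particular relations chosen (progression, sum, same set) are illustrative stand-ins, not Wang and Su's implementation.

```python
def unary_progression(a, b, c):
    """Unary-style relation: each value follows from the previous by a fixed step."""
    return (b - a) == (c - b) != 0

def binary_sum(a, b, c):
    """Binary-style relation: the third value is a function of the first two."""
    return a + b == c

def ternary_same_set(rows):
    """Ternary-style relation: every row holds the same (unordered) set of values."""
    return all(set(r) == set(rows[0]) for r in rows)

def rule_holds(matrix, attribute, relation):
    """Check whether an instantiated rule for one attribute holds in every row."""
    rows = [[entry[attribute] for entry in row] for row in matrix]
    if relation is ternary_same_set:
        return ternary_same_set(rows)
    return all(relation(*r) for r in rows)

matrix = [
    [{"size": 1}, {"size": 2}, {"size": 3}],
    [{"size": 2}, {"size": 3}, {"size": 4}],
    [{"size": 3}, {"size": 4}, {"size": 5}],
]
print(rule_holds(matrix, "size", unary_progression))   # True
```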
The generated items and the APM test were also given to a small group of university students. The experimental data showed that the overall difficulty and the rule-wise difficulty (by number of rules) were similar to those of the items in APM. However, as the authors pointed out, their generator could not synthesize all the items in APM because some underlying transformations were hard to implement. When the items were created with distracting attributes, the generated items became much more difficult for human subjects.
#### IMak Package--Open Source.
Although there have already been many works on AIG of matrix reasoning, the generator software and the source code are usually not easily available to the public. This makes it hard to reproduce and build upon these works. Blum and Holling (2018) recognized this point and released their generator as an R package--the IMak package--that is globally available via the Comprehensive R Archive Network. The source code and detailed documentation of their work come with the R package. New items can be obtained with just three lines of R code in the R interpreter--one for downloading the package, one for importing the package, and one for generating the items.
The authors' purpose in developing the IMak package is to study the effect
of the types of variation rules on item difficulty. The generator was thus designed to manipulate the types of rules while keeping other factors constant, and the generated items therefore look quite different from those of the generators mentioned above. For example, Figure 11 shows some example items that we created through this package, each of which exemplifies a basic rule type. In the current release (version 2.0.1), the geometric elements are limited to the main shape (the broken circle plus the polyline in it), the trapezium that is tangent to the main shape, and the dot at one of the corners of the polyline. Furthermore, the size and shape of these elements are fixed for all generated items, but their position, orientation, and existence vary according to 5 basic rules.
As shown in Figure 11, there are 5 basic rules in IMak. All the rules are in the outward analogical direction (i.e., row and column). For example, in Figure 11(a), the main shape is rotated counterclockwise by 45 degrees in the first row;
Figure 11: Example items generated through the IMak package. Each item exemplifies a single basic rule. The correct answer is set to the first answer choice for demonstration.
the main shape is rotated counterclockwise by 90 degrees in the first column. The correct answer is then a counterclockwise rotation of the main shape by 135 (45 + 90) degrees compared with the top-left one. Similarly, all the other 4 examples follow the same analogical direction. Each item can contain up to 4 rules (because main shape rotation and reflection conflict with each other). This design seems to excessively simplify the RPM-like problems, but it serves the very purpose of studying the differential effect of rules while fixing other factors.
Besides open-source accessibility and the special design of geometric elements, IMak has four other distinctive features that are inspiring for subsequent works. Firstly, IMak generates items in the 2\(\times\)2 format. Influenced by the famous work of Carpenter et al. (1990) on RPM, the vast majority of AIG works generate only 3\(\times\)3 matrices, so 2\(\times\)2 items have largely been neglected in AIG works on matrix reasoning. Secondly, the answer set contains two extra meta-choices, "no correct answer" and "I don't know", which encourage subjects to solve the items more constructively rather than by eliminating responses. Thirdly, the variation of one element can depend on the variation of another element. For example, the dot's movement depends on the variation of the main shape, since the dot only moves along the polyline in the main shape. This kind of variation rule is rare in matrix reasoning items but common in real-world problem solving, and it represents an extra complexity factor of matrix reasoning.
Last but not least, IMak uses a rule-dependent strategy to generate incorrect answer choices. For 1-rule items, 4 distinct values of the attribute of the rule are sampled, including the correct value; since all other attributes remain constant in the matrix, another random attribute is chosen and sampled for 2 values. The resulting 8 (4\(\times\)2) combinations make up the 8 options in the answer set. For 2-rule items, 3 values are sampled for each of the 2 attributes of the 2 rules, resulting in 9 combinations, one of which is discarded. For 3-rule items, 2\(\times\)2\(\times\)2 combinations are sampled in the same way. For 4-rule items, 2\(\times\)2\(\times\)2\(\times\)2 combinations are sampled in the same way, and half of them are discarded.
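A minimal sketch of this combination scheme is shown below for items whose entries are attribute-value dictionaries. The function and attribute names are assumptions for illustration, not the IMak R code; the sketch keeps the correct combination and fills the remaining slots from the other sampled combinations.

```python
import random
from itertools import product

def sample_values(domain, correct_value, k):
    """Sample k distinct values of an attribute, always including the correct one."""
    others = random.sample([v for v in domain if v != correct_value], k - 1)
    return [correct_value] + others

def rule_dependent_answer_set(rule_attrs, extra_attr, correct, domains):
    """Build 8 answer choices by combining sampled values of the varying attributes."""
    per_attr = {1: [4], 2: [3, 3], 3: [2, 2, 2], 4: [2, 2, 2, 2]}[len(rule_attrs)]
    sampled = {a: sample_values(domains[a], correct[a], k)
               for a, k in zip(rule_attrs, per_attr)}
    if len(rule_attrs) == 1:           # 1-rule items also vary one extra attribute
        sampled[extra_attr] = sample_values(domains[extra_attr], correct[extra_attr], 2)
    combos = [dict(zip(sampled, values)) for values in product(*sampled.values())]
    correct_combo = {a: correct[a] for a in sampled}
    rest = [c for c in combos if c != correct_combo]
    random.shuffle(rest)               # surplus combinations, if any, are discarded
    return [dict(correct, **c) for c in [correct_combo] + rest[:7]]
```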
In a human experiment, 23 generated items were administered to 307 participants from Germany, Indonesia, and Argentina. Reliability, validity, and unidimensionality were initially supported by the experimental results. In particular, item difficulty could be partly predicted from the number and type of rules based on psychometric models. In summary, open-source software is the recommended way to publish AIG work, especially for research purposes, as it can be shared across research groups around the world. More importantly, such studies should not be restricted to a fixed set of items but should also report how the generator is designed.
#### 3.2.2 Algorithmically Generating Matrix Reasoning Items for AI Testing
We now review two AIG works of matrix reasoning that were designed specially for AI testing. The datasets generated in these two works are extremely influential on data-driven AI models for solving RPM-like tasks, because almost all such models were tested on one or both of these two datasets. In addition, we also review the works that address the context-blind issue of these algorithmically generated datasets, which is a special and important issue for data-driven AI models.
Procedurally Generated Matrices. Based on the five rules in (Carpenter et al., 1990), Barrett et al. (2018) continued the first-order logic approach of Wang and Su (2015) and created a large-scale (1.2M items) dataset of matrix reasoning items--Procedurally Generated Matrices (PGM). Since the generator program and source code are not publicly available, our discussion is based on the description in (Barrett et al., 2018) and our observation of the dataset.
In PGM, an instantiation of Equations (1) and (2) in the first-order logic approach was denoted by a triplet \([r,o,a]\) of relation \(r\), object \(o\), and attribute \(a\). These three factors are not independent. In particular, Figure 12 summarizes their dependencies in the generator of PGM. Figure 12 consists of 29 paths from
the left to the right, corresponding to 29 \([r,o,a]\) triplets7.
Footnote 7: This number—29—equals the number of triplets mentioned in the work of Barrett et al. (2018), which, however, did not provide a list of the 29 triplets. Therefore, we could only conjecture that the 29 triplets here are the ones used in PGM.
As shown in Figure 12, the objects in PGM are classified into two disjoint subsets--shape and line. In the shape subset, closed shapes are arranged in a 3\(\times\)3 grid (with fixed positions) inside each matrix entry (not to be confused with the 3\(\times\)3 matrix itself). In the line subset, line drawings span the whole area of a matrix entry and are always centered in the entry. A PGM item can include both shapes and line drawings, with the shapes superimposed on the line drawings, but the reasoning about the two is completely independent. Thus, in Table 1, we split PGM into two rows to describe it more clearly.
Figure 12: Left: The dependencies among relations, objects, and attributes used to generate the Procedurally Generated Matrices (PGM) dataset (Barrett et al., 2018). Each path from left to right corresponds to a \([r,o,a]\) triplet representing a variation pattern in the matrices. As one can check, there are 29 paths, i.e., \([r,o,a]\) triplets, in the graph. Note that Barrett et al. (2018) did not differentiate between “shape_type” and “line_type” and referred to both of them as “type”. But these two are treated as two distinct attributes in PGM’s implementation. Right: The dependencies among relations, nodes, and attributes used to generate the RAVEN dataset. Note that we listed “distraction” as a rule in this graph to indicate that uniformity and orientation are distracting attributes. The paths from constant through number and position to layout are treated as a single rule in RAVEN’s implementation. Therefore, there are 15 paths/rules in the graph.

The generation procedure of a PGM item can be described in 5 steps: (a) sample 1 to 4 triplets from the 29 triplets described in Figure 12 (number triplets and position triplets cannot be selected simultaneously); (b) determine the analogical direction for each triplet: row or column; (c) sample attribute values for each triplet from their domains (different sampling methods are implemented for different rules and attributes); (d) determine the attribute values for unspecified attributes (either constant or random); and (e) render all attribute values into a pixel image of the matrix.
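Steps (a) and (b) of this procedure can be sketched as a simple sampler over an abbreviated triplet list. The triplets below are a stand-in for the 29 paths of Figure 12 (which are not fully listed in the original paper), so the specific names are assumptions.

```python
import random

# An abbreviated stand-in for the 29 [relation, object, attribute] triplets.
TRIPLETS = [
    ("progression", "shape", "size"),
    ("progression", "shape", "number"),
    ("XOR", "shape", "position"),
    ("AND", "line", "type"),
    ("consistent_union", "shape", "color"),
]

def sample_item_spec(max_rules=4):
    """Sample 1-4 triplets such that number and position triplets never co-occur,
    and assign each triplet an analogical direction (row or column)."""
    n = random.randint(1, min(max_rules, len(TRIPLETS)))
    while True:
        chosen = random.sample(TRIPLETS, n)
        attrs = {attr for _, _, attr in chosen}
        if not {"number", "position"} <= attrs:
            break
    return [(triplet, random.choice(["row", "column"])) for triplet in chosen]

print(sample_item_spec())
```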
The relations used in the PGM dataset, which are also referred to as rules in other literature, stem from the 5 rules of APM summarized in (Carpenter et al., 1990), as follows:
* Constant in a row.
* Quantitative pairwise progression.
* Figure addition or subtraction, i.e., set union and set difference (not arithmetic addition and subtraction), which can also be considered as the logical operators OR and XOR.
* Distribution-of-three-values, i.e. the consistent union.
* Distribution-of-two-values, i.e. the logical operator XOR.
Comparing the PGM relations with the above rules, we find that they are almost equivalent. The "constant in a row" corresponds to the without-distraction mode in PGM. The "distribution-of-three-values" corresponds to the consistent union in PGM. The "figure addition or subtraction" and "distribution-of-two-values" correspond to the logical operators OR and XOR in PGM. However, PGM has one more relation--AND--in addition to the 5 rules in (Carpenter et al., 1990), making it more complete.
Relational and Analogical Visual rEasoNing. The spatial configuration, as an important dimension of perceptual organization, is highly restricted in PGM--a 3\(\times\)3 grid for the shape subset, all-centered for the line subset, and superimposing a shape item on a line item. To enrich the spatial configuration of AIG of matrix reasoning, Zhang et al. (2019) developed a new generator and generated the Relational and Analogical Visual rEasoNing (RAVEN) dataset. In particular, RAVEN includes 7 hardcoded spatial configurations, as shown in Figure 13.
The source code of RAVEN's generator is available online8. The following discussion of RAVEN is thus based on an inspection of the generator's source code.
Footnote 8: https://github.com/WellyZhang/RAVEN
Figure 13: 7 hardcoded spatial configurations—center, 2x2Grid, 3x3Grid, Left-Right, UpDown, Out-InCenter, and Out-In2x2Grid—are used to arrange objects in each matrix entry in the RAVEN dataset. Each configuration is represented by the bounding boxes that objects could occupy. The position and size of each bounding box are hardcoded in the generator program. An example matrix for each configuration is given in the first row (image obtained by running the generator code). Note that not every bounding box has to be occupied, but every object has to be in one of the bounding boxes.

The 7 configurations are derived from a more general symbolic representation framework for images--the Attributed Stochastic Image Grammar (A-SIG). In A-SIG, an image is described by a tree structure in which the conceptual granularity becomes finer and finer from the root toward the leaves. To generate RAVEN, the tree structure is predefined as a general A-SIG tree, as shown in Figure 14, which consists of 5 conceptual levels--scene, structure, component, layout, and entity--and a stochastic tree-traversal process is used to generate images. The main idea of an A-SIG tree is that, while traversing the tree, if the current node has dashed edges to its child nodes, a single random child node is expanded; if the current node has solid edges to its child nodes, all its child nodes are expanded. Attributes and their value domains are attached to nodes so that images can later be generated by sampling from these domains after the tree structure is determined. Such a stochastic traversal from the root to the leaves generates the skeleton of a class of images, i.e., a spatial configuration. However, the 7 configurations in RAVEN were hardcoded in the language of A-SIG rather than generated through this stochastic traversal, which could otherwise have made RAVEN more diverse in spatial configuration.
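The stochastic traversal rule itself is simple enough to sketch. In the toy grammar below, a dashed edge ("or") expands a single random child and a solid edge ("and") expands all children; the node names are illustrative and do not reproduce the actual RAVEN grammar.

```python
import random

# Each node: (name, edge_type_to_children, children). Edge types: "or" for dashed
# edges (expand one random child), "and" for solid edges (expand all children).
GRAMMAR = ("scene", "or", [
    ("singleton", "and", [("center_layout", "and", [])]),
    ("grid", "or", [
        ("2x2_layout", "and", []),
        ("3x3_layout", "and", []),
    ]),
])

def traverse(node):
    """Stochastically expand a node into the skeleton of one spatial configuration."""
    name, edge, children = node
    if not children:
        return name
    if edge == "or":
        return {name: [traverse(random.choice(children))]}
    return {name: [traverse(child) for child in children]}

print(traverse(GRAMMAR))
```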
To compare with the PGM dataset, we represent PGM items also in A-SIG, as shown in Figure 15. The line configuration of PGM is basically the same as the center configuration of RAVEN except that the entity types (shapes) are different. The shape configuration of PGM is almost the same as the 3x3Grid configuration of RAVEN except that the bounding box sizes are slightly different. The shape-over-line configuration of PGM is also conceptually similar to the double-component configurations of RAVEN. The general difference between PGM and RAVEN lies in the layout and entity nodes. As shown in Figure 15, the PGM dataset is not able to separate the concepts of "entity" and "entity layout" by using triplets \([r,o,a]\). That is, the object \(o\) takes the roles of both the layout and entity nodes, but cannot play both roles effectively at the same time.

Figure 14: The general A-SIG tree and 7 specific A-SIG trees used in the RAVEN dataset (image adapted from (Zhang et al., 2019) by adding more technical details from the source code of the generator). The root node denotes the scene that the image describes. The structure nodes are the containers of different spatial structures. A structure is composed of components that could be overlaid with each other. Each component has its own layout and, more importantly, variation rules, which are independent of other components. The layout node, as its name indicates, contains the attributes specifying the number and positions of geometric objects. Entities represent geometric objects with attributes, not including number and position.
RAVEN inherited all five rules from (Carpenter et al., 1990). Moreover, the "addition-and-subtraction" rule is extended in RAVEN to contain not only figure addition and subtraction (i.e., the set operations "OR and XOR") but also arithmetic addition and subtraction, which were not discussed in (Carpenter et al., 1990). Since these two operations are conceptually different, we refer to the arithmetic addition and subtraction as "arithmetic" and to the figure addition and subtraction as "OR and XOR". In addition, the "distribution-of-three-values" and "distribution-of-two-values" from (Carpenter et al., 1990) are merged into a single rule in RAVEN by considering the latter as a special case of the former with a null value for one of the three values. Therefore, RAVEN has a slightly different rule set compared with PGM. Similarly, we can represent the variation rules of RAVEN as triplets \([r,n,a]\), where \(n\) represents nodes (layout or entity) in A-SIG trees, and \(r\) and \(a\) are relations and attributes, the same as in PGM. Figure 12 then shows the dependencies among \(r\), \(n\), and \(a\).

Figure 15: The spatial configurations of PGM represented in A-SIG to compare with RAVEN. There are 3 spatial configurations in PGM—line, shape, and shape-over-line. The example matrix is given for each configuration at the bottom (images taken from the PGM dataset).
The PGM and RAVEN generators share two similarities. First, their choices of attributes, attribute domains, and rule types are similar. For example, both forbid the number rule and the position rule from co-occurring in an item because these two attributes would probably conflict with each other. Second, although RAVEN has more spatial configurations, these configurations are not structurally different from PGM (as can be seen from the comparison of their A-SIG trees). Meanwhile, PGM and RAVEN differ in two aspects. First, they differ in the number of rules in an item. In PGM, 1 to 4 triplets are sampled from the 29 triplets. In contrast, in a RAVEN item, every attribute is governed by a rule except the two distracting attributes (uniformity and orientation); thus, there are 4 rules (for number/position, type, size, and color, respectively) in each RAVEN item. Second, the rules in RAVEN are all row-wise, while the rules in PGM are either row-wise or column-wise.
Context-Blind Issue. The answer sets in RAVEN were generated in a similar way to the first-order logic approach. That is, each incorrect answer choice is created by modifying a single attribute of the correct answer. RAVEN is slightly different from (Wang and Su, 2015) because RAVEN has only 5 attributes (not including the distracting attributes) whereas (Wang and Su, 2015) has 15 attributes. Hence, in (Wang and Su, 2015), every incorrect answer has a unique attribute on which it differs from the correct one; but RAVEN
has to reuse some of the 5 attributes to generate 7 incorrect answers, i.e. an attribute is given different values to generate multiple incorrect answers.
This method of creating incorrect answer choices reaches the maximum level of distracting and confusing effect, because one must identify all the variation rules to solve the problem; ignoring any rule would leave multiple plausible choices. However, this design has a major drawback--it fails the context-blind test for multiple-choice problems. In a matrix reasoning item, the incomplete matrix is the context of the multiple-choice problem and provides the information for solving it. Failing the context-blind test means that it is possible for human participants or computational models to solve the item while remaining blind to the context.
Two works (Hu et al., 2021; Benny et al., 2021) separately pointed out the context-blind issue of RAVEN. They provided evidence that data-driven AI models can achieve high accuracies (from 70%+ to 90%+) when given access only to the answer sets of RAVEN. The context-blind performance of some data-driven AI models is even better than their normal performance with full access to the items. This implies that data-driven AI models are capable of capturing the statistical regularities in the answer sets. The reason for this context-blind issue obviously lies in the generating process of the answer set. In particular, since each incorrect answer choice is a variant obtained by modifying a single attribute of the correct answer choice, the correct answer must be the one that possesses every common feature among all the choices (or, equivalently, the one most similar to every other choice).
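The leaked regularity can be made explicit with a tiny heuristic: with answer choices coded as attribute-value dictionaries, simply pick the choice that agrees most often with the other choices, never looking at the matrix. This is a schematic illustration of the statistical shortcut, not the mechanism learned by any particular neural model.

```python
def context_blind_guess(choices):
    """Pick the choice most similar to all other choices, ignoring the matrix.

    For answer sets built by perturbing one attribute of the correct answer,
    this tends to recover the correct answer without any access to the context.
    """
    def similarity(a, b):
        return sum(a[k] == b[k] for k in a)

    scores = [sum(similarity(c, other) for other in choices if other is not c)
              for c in choices]
    return scores.index(max(scores))
```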
Both Hu et al. (2021) and Benny et al. (2021) proposed their own solutions to this issue--the Impartial-RAVEN and RAVEN-FAIR datasets. These two datasets have the same context matrices as the original RAVEN and regenerated the answer sets in different ways. The similarity and difference between these three versions can be clearly illustrated by putting them in simple graphs. If we represent each answer choice as a vertex and each modification of an attribute as an edge, then the answer sets of the three versions can be depicted by the graphs in Figure 16. The answer set of the original RAVEN is created by modifying an
attribute of the correct answer. Thus, its graph is a star centered at the correct answer (the solid vertex). And what the aforementioned computational models in the context-blind test captured was the unique center of the star structure.
Hu et al. (2021) proposed the Impartial-RAVEN, in which the answer set can be represented by a 3-regular graph in Figure 16. To create such a graph, three independent attributes are randomly chosen from the five attributes of RAVEN, and three values of the three attributes are sampled from the three attribute value domains, respectively, so that the newly sampled values are different from the ones of the correct answer. Then, by assigning new values to these attributes combinatorially, we would have \(2^{3}=8\) answer choices, including the correct one. The relations among these 8 answer choices form the 3-regular graph in Figure 16.
Benny et al. (2021) proposed a less regulated procedure to generate answer sets. Starting from an initial answer set consisting of only the correct answer, an answer choice is randomly selected from the current answer set, and then an attribute of the selected answer choice is randomly altered to create a new answer choice; this process is repeated until there are 8 answer choices. This procedure results in tree structures similar to the one in Figure 16.
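Both regeneration schemes can be sketched for choices coded as attribute-value dictionaries. The attribute domains and function names are assumptions; the sketch mirrors the descriptions above rather than the released code of either dataset.

```python
import random
from itertools import product

def impartial_style_answer_set(correct, domains):
    """Impartial-RAVEN style: vary 3 attributes combinatorially (2^3 = 8 choices)."""
    attrs = random.sample(list(domains), 3)
    alt = {a: random.choice([v for v in domains[a] if v != correct[a]]) for a in attrs}
    choices = []
    for bits in product([0, 1], repeat=3):
        choice = dict(correct)
        for use_alt, a in zip(bits, attrs):
            if use_alt:
                choice[a] = alt[a]
        choices.append(choice)          # bits == (0, 0, 0) is the correct answer
    return choices

def fair_style_answer_set(correct, domains):
    """RAVEN-FAIR style: repeatedly perturb one attribute of a random existing choice."""
    choices = [dict(correct)]
    while len(choices) < 8:
        base = dict(random.choice(choices))
        a = random.choice(list(domains))
        base[a] = random.choice([v for v in domains[a] if v != base[a]])
        if base not in choices:
            choices.append(base)
    return choices
```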
These two enhanced versions of RAVEN were tested by context-blindly training the baseline model in (Zhang et al., 2019) and the CoPINet model in (Zhang et al., 2019). The accuracy decreased to below 20%. Ideally, a human subject or computational model that context-blindly works on the RAVEN items should perform only as well as a random guess, i.e., \(1/8=12.5\%\), which implies that the answer set per se does not provide any useful information for solving the item. However, in the practice of item writing, to maintain a certain level of distracting and confusing effect of incorrect answer choices, the majority of incorrect answer choices must share some similarities among themselves, with the correct one, and with the context matrix, which raises the performance of random guessing a bit. On the flip side, without this design, it would be quite easy for subjects to find the correct answer, because the incorrect answers would be perceptually very distinct from the context and the other answer choices. Therefore, a reasonable context-blind performance would be slightly higher than random guessing; the balance is determined by the item writer's judgment.

Figure 16: The answer sets of three versions of the RAVEN dataset depicted as graphs. Each vertex is an answer choice and two adjacent vertices differ by one attribute.
A subtle difference between the two enhancements of RAVEN can be found by comparing their graphs in Figure 16. If we consider a single trial (in a probabilistic sense) in which we context-blindly give a participant (or an AI model) an item from Impartial-RAVEN and an item from RAVEN-FAIR, the probability that this participant solves the Impartial-RAVEN item would be almost the same as the probability of solving the RAVEN-FAIR item. However, if we repeat this with different items again and again, the performance on RAVEN-FAIR would probably exceed the performance on Impartial-RAVEN, assuming that the participant is intelligent enough to figure out the graph structures behind the answer sets and thus makes an educated guess by selecting the "center" (or the max-degree vertex) of the trees in a probabilistic sense. In this case, we would say that RAVEN-FAIR is context-blind valid at the item level, but not at the dataset level.
#### 3.2.3 Summary
In this subsection, we reviewed AIG works on matrix reasoning items. We classified the works into two groups by their purposes--human intelligence testing or AI testing. The works in the first group aim not only at generating items but also at achieving good psychometric properties. As in the classical studies on intelligence tests, these works are usually based on cognitive models and psychometric models. The choices of stimulus features are thus determined by
the cognitive and psychometric models. Particularly, the factors--the number of elements, the number of rules, the type of elements, types of rules, analogical directions, and perceptual organizations--are usually considered in this line of research. Among these factors, the types of elements and rules and perceptual organization are the less investigated ones due to the difficulty in defining and formalizing them.
The works in the second group can be seen as a continuation of the first group, but the psychometric aspects are less emphasized. For example, in a human experiment on PGM, in which 18 items were administered to human participants, participants without prior experience failed almost all the items, whereas participants with prior experience scored above 80%. Such a result is definitely not what a psychometrician would expect from a test of eductive ability, fluid intelligence, or general intelligence. Instead, it appears to be the result of a test of reproductive ability or crystallized intelligence. Generally speaking, this result implies that the datasets for AI testing do not necessarily qualify for human intelligence testing.
More importantly, this gives rise to another interesting question--how do we assess the performance of data-driven AI models on the large datasets such as PGM and RAVEN? On one hand, some data-driven AI models indeed perform well on AIG items that pose great challenges to human subjects; on the other hand, training on the large-scale datasets specially prepares the AI models for a highly restricted subset of the problem domain, but human subjects, who are not trained at all, or just trained on several examples from this subset, could perform well in the entire problem domain.
Similar questions were asked when AI systems first entered the area of human testing (Detterman, 2011), and efforts have been made to address them. Bringsjord and Schimanski (2003) and Bringsjord (2011) address this issue by incorporating AI testing into a general concept--psychometric AI. Hernandez-Orallo et al. (2016) proposed that (a) instead of collecting items, we should collect item generators, and (b) the generated items should be administered to machines and humans (and even other animals) alike (universal psychometrics).
All these propositions are constructive and, meanwhile, suggest much higher requirements for AIG studies.
Current AIG datasets are far below the level of flexibility and diversity that human item writers can achieve. For example, the spatial configurations in PGM and RAVEN are fixed; inter-element variation, in which the variation of one element depends on the variation of another element, is also very rare; so are perceptually and conceptually ambiguous analogies. A more promising methodology for AIG of RPM-like tasks for AI testing is to study the problem domain and human cognition, rather than to construct ad hoc generator programs. Huge uncharted territories lie in complexity factors such as the types of elements and rules and perceptual organization, and in how the nature of the problem changes as different administration/evaluation protocols are used for human subjects and AI models.
## 4 Computational Models for Solving RPM and RPM-Like Tasks
In previous sections, we established a basic understanding of the problem domain represented by RPM, which lays the foundation for discussing the core topic of this article--computational models for solving these problems. As in the previous discussion, we start from the origin of the research, keep the prerequisite knowledge to a minimum, and unfold our discussion in a manner that reveals the philosophy behind the technical development in the simplest language.
The ultimate purpose of this section is to help our readers develop a solid understanding rather than to enumerate as many previous works as possible in chronological order or in an arbitrary taxonomy. Therefore, we use a narrative that simulates how a novice's understanding of the solution to the problem domain would naturally evolve if not influenced by external conditions (such as computational power) and other relevant research works. This narrative is not real history but is specially designed to reduce the complexity of understanding. In particular, the computational models that arise late in
this narrative might arise early in reality, and vice versa. Examples like this are common in scientific research: the original concepts behind some cutting-edge technologies might have been there for decades before these technologies are implemented, but some alternatives to the original concepts, due to being easy to implement, might have already been implemented before the cutting-edge technologies; when we look back at these concepts, we rearrange the order to make the concepts more coherent and understandable. Thus, this narrative is a conceptual chronicle for understanding rather than a real chronicle for recording.
In this conceptual chronicle, we divide the development of computational models for solving RPM into five stages--the imagery-based approach, logical reasoning, neuro-symbolic reasoning, the learning approach, and data manipulation. With hindsight, we find that an upward-spiral pattern looms out of these five stages. That is, researchers make progress while visiting the same places again and again with better and better understanding. The places could be specific research questions or a type of approach to answering the research questions. The conceptual chronicle starts from a straightforward approach (the imagery-based approach) that is specific to the problem domain but very effective; it then moves on to more and more general approaches (logical reasoning, neuro-symbolic reasoning, and the learning approach); when these approaches are still incapable of solving the problem domain perfectly, it returns to the study of the problem domain per se and solves the problem in a similar way to the first approach, but with a completely different set of techniques. The same upward-spiral trajectory could be described differently (e.g., different methodologies alternately dominating the research of intelligence), but the pattern that it revisits the same places again and again until the entire problem domain is perfectly solved remains unchanged.
In the rest of this section, we use the acronyms of the computational models for simplicity; please refer to Table 2 for their full names.
\begin{table}
\begin{tabular}{l l l} \hline \hline Acronym & Full Name & Article \\ \hline - & Gestalt Algorithm & (Hunt, 1974) \\ ASTI & Affine and Set Transformation Induction & (Kunda, 2013) \\ ASTI+ & Affine and Set Transformation Induction Plus & (Yang et al., 2020) \\ - & Fractal Model & (McGreggor et al., 2014) \\ - & FAIRMAN & (Carpenter et al., 1990) \\ - & BETTERMAN & (Carpenter et al., 1990) \\ CogSketch+SME & CogSketch and Structual Mapping Engine & (Lovett et al., 2009) \\ - & Anthropomorphic Solver & (Strannegard et al., 2013) \\ - & ANALOGY & (Evans, 1964) \\ - & Analytic Algorithm & (Hunt, 1974) \\ ALANS2 & ALgebra-Aware Neuro-Semi-Symbolic & (Zhang et al., 2020) \\ PrAE & Probabilistic Abduction and Execution & (Zhang et al., 2021) \\ VAE-GPP & Variational Autoencoder and Gaussian Process & (Shi et al., 2021) \\ Priors & & \\ TRIVR & Two-Stage Rule-Induction Visual Reasoning & (He et al., 2021a) \\ NVSA & Neural-Vector-Symbolic Architecture & (Hersche et al., 2022) \\ Pairwise-ADV\({}^{*}\) & Pairwise Attribute Difference Vector & (Mekik et al., 2017) \\ Triple-ADV\({}^{*}\) & Triple Attribute Difference Vector & (Mekik et al., 2018) \\ DeepIQ & Deep IQ & (Mandziuk and Zychowski, 2019) \\ CNN+MLP & - & (Hoshen and Werman, 2017) \\ CNN+decoder\({}^{*}\) & - & (Hoshen and Werman, 2017) \\ ResNet+MLP & - & (Barrett et al., 2018) \\ Wide-ResNet+MLP & - & (Barrett et al., 2018) \\ WReN & Wild Relation Network & (Barrett et al., 2018) \\ LEN & Logic Embedding Network & (Zheng et al., 2019) \\ MXGNet & Multiplex Graph Network & (Wang et al., 2020) \\ multi-layer RN & multi-layer Relation Network & (Jahrens and Martinetz, 2018, 2019, 2020) \\ SRAN & Stratified Rule-Aware Network & (Hu et al., 2021) \\ MRNet & Multi-Scale Relation Network & (Benny et al., 2021) \\ Rel-Base & Basic Relational Reasoning & (Spratley et al., 2020) \\ Rel-AIR & Attend-Infer-Repeat Relational Reasoning & (Spratley et al., 2020) \\ CNN+LSTM+MLP & - & (Barrett et al., 2018) \\ Double-LSTM & - & (Sekh et al., 2020) \\ ESBN & Emergent Symbol Binding Network & (Sinha et al., 2020) \\ NTM & Neural Turing Machine & (Sinha et al., 2020) \\ ARNe & Attention Relation Network & (Hahne et al., 2019) \\ \(\text{HTR}^{*}\) & Hierarchical Transformer Reasoning & (An and Cho, 2020) \\ NI & Neural Interpreter & (Rahaman et al., 2021) \\ SCL & Scattering Compositional Learner & (Wu et al., 2021) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Computational Models for Solving RPM and RPM-like Tasks
### Stage 1: Imagery-Based Approach
Visual mental imagery refers to mental images that play a functional role in human cognition (Kosslyn et al., 2006). The most important characteristic of mental imagery is that humans can experience it in the absence of concurrent sensory input. Try to answer the question "how many windows are there in your house?" when you are not home (use another building if you are). Most people answer this question by imagining their houses. This imaginary house is a mental image. Some people count the windows by mentally walking in and around their houses, while others do so by mentally rotating their houses. Whether walking in and around the house or rotating it, they inspect and manipulate this mental representation as they would inspect and manipulate the real object. Furthermore, mental imagery can be unrealistic; for example, some people rotate their houses upward or downward without the house falling apart.
\begin{table}
\begin{tabular}{l l l} \hline \hline Acronym & Full Name & Article \\ \hline
4 VAE+WReN & 4 variants of VAE plus WReN & (Steenbrugge et al., 2018; van Steenkiste et al., 2019) \\ generative-MRNet\({}^{*}\) & - & (Pekar et al., 2020) \\ LoGe & Logic-Guided Generation & (Yu et al., 2021) \\ MCPT & Multi-label Classification with Pseudo Target & (Zhuo and Kankanhalli, 2020) \\ PRD & Pairwise Relations Discriminator & (Kiat et al., 2020) \\ LABC & Learning Analogies by Contrasting & (Hill et al., 2019) \\ CoPINet & Contrastive Perceptual Inference Network & (Zhang et al., 2019b) \\ DCNet & Dual-Contrast Network & (Zhuo and Kankanhalli, 2021) \\ ACL & Analogical Contrastive Learning & (Kim et al., 2020) \\ Meta-ACL & Meta Analogical Contrastive Learning & (Kim et al., 2020) \\ MLCL & Multi-Label Contrastive Learning & (Makłinski and Mandziuk, 2020) \\ FRAR & Feature Robust Abstract Reasoning & (Zheng et al., 2019) \\ - & Continual Learning & (Hayes and Kanan, 2021) \\ DRT & Dynamic Residual Tree & (Zhang et al., 2019a) \\ - & GAN & (Hua and Kunda, 2019) \\ - & Structural Affinity Method & (Shegheva, 2018) \\ PGM & Procedurally Generated Matrices & (Barrett et al., 2018) \\ RAVEN & Relational and Analogical Visual Reasoning & (Zhang et al., 2019a) \\ \hline \hline \end{tabular} \({}^{*}\)No Acronym was given in the original article. We created a name to clearly refer to it in our discussion.
\end{table}
Table 2: (continued from previous page)
For this reason, the ability to use mental imagery is important for creativity. This is another important characteristic of mental imagery.
Evidence from psychology and neuroscience (Kunda et al., 2013) suggests that mental imagery is frequently used by human participants to solve RPM items. Intuitively, a human participant would inspect objects in the matrix, compare them by mentally superimposing one on another, mentally transform the objects, and mentally estimate perceptual similarity. Without turning to more sophisticated techniques and terminology, this description is the most immediate one that one can think of to describe the solving process (although they might not use the term "mental imagery"). For this reason, imagery-based computational models (Hunt, 1974; Kunda et al., 2009, 2013, 2010; Yang et al., 2020; Yang, Yuan et al., 2022) were constructed to solve RPM and RPM-like items. In general, these models represent matrix entries by pixel images, apply predefined pixel-level operations on the images (e.g., affine transformations and set operations) and calculate pixel-level similarities between the images (e.g., Jaccard index and Hausdorff distance).
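A minimal sketch of this style of processing is shown below: entries are binary images, a handful of pixel-level transformations stand in for mental operations, and the Jaccard index measures perceptual similarity. It is only a schematic of the shared idea, not the ASTI or fractal models themselves, and it assumes square images so the rotations preserve shape.

```python
import numpy as np

def jaccard(img_a, img_b):
    """Jaccard index between two binary images (True/1 marks foreground pixels)."""
    a, b = img_a.astype(bool), img_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# A few pixel-level stand-ins for "mental" transformations.
TRANSFORMS = {
    "identity": lambda img: img,
    "rot90": lambda img: np.rot90(img),
    "flip_lr": lambda img: np.fliplr(img),
    "flip_ud": lambda img: np.flipud(img),
}

def induce_transform(entry_a, entry_b):
    """Find the transformation that best maps entry A onto entry B."""
    scores = {name: jaccard(t(entry_a), entry_b) for name, t in TRANSFORMS.items()}
    return max(scores, key=scores.get)

def constructive_matching(last_entry, transform_name, choices):
    """Apply the induced transformation to the last entry and pick the closest choice."""
    predicted = TRANSFORMS[transform_name](last_entry)
    return max(range(len(choices)), key=lambda i: jaccard(predicted, choices[i]))
```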
Although these systems have proven to be effective for solving RPM items, they appeared relatively late in the development of computational models for solving RPM and are still an underexplored approach in the AI community. This is partly because directly working on raw perceptual input data, especially applying various operations on pixel images and computing similarities, requires computational power that was not available at the beginning of this line of research. Another reason might be that the theory of mental imagery has been studied mostly in cognitive psychology and has received less attention in the AI community. Nonetheless, it is still a promising approach for general problem solving.
Before we dive into Stage 2, it is worth chewing on the idea of the imagery-based approach a bit. The imagery-based approach provides an "in-place" solution, i.e., solving a visual reasoning problem "visually" without introducing auxiliary devices such as preprocessing of the raw perceptual input. There is
nothing wrong with being parsimonious, because parsimony is a general principle of problem solving (Occam's razor). On the flip side, a tacit consensus in artificial intelligence is that a certain degree of abstraction is desirable. That is, the approach must include steps that transform the raw perceptual input into a more abstract form that reduces the complexity of problem solving. Abstraction is even deemed a hallmark of valuable AI techniques--the more abstract, the more intelligent the approach is. According to this criterion, the imagery-based approach is not intelligent at all. This could be another reason that the imagery-based approach has received less attention in the AI community. Because experiments also show that mental imagery plays an important role in human cognition, this brings us to a dilemma about the criteria of being intelligent in problem solving. Note that the "abstract" end of this dilemma is not proficiency in using the abstracted information but the process or ability to abstract.
The most valuable contribution of imagery-based models is not in problem solving but in bringing this dilemma to light. Being mindful of this dilemma and of the criteria of being intelligent would put future AI systems in a more promising direction. Although this dilemma is of vital importance to AI research, there is no simple answer to it. A possible solution is to understand imagery and abstraction as two factors, correlated or independent, rather than as two options contradicting each other. A simple analogy clarifies this idea: in graduate math classes, instructors undoubtedly teach knowledge that is quite abstract; experienced instructors are able to convey this abstract knowledge in very visual language so that it is more accessible to students. A stronger claim is that abstract concepts are always associated with some imagery representation in human thinking. This claim might not be correct in all cases, but it does point out an important feature of human intelligence. This solution also resembles the psychometric treatment of human intelligence: AI systems can be evaluated along both dimensions of the two factors, as human intelligence is measured, and being intelligent means that the system needs to score high in both dimensions.
Another similar solution is to view abstraction and mental imagery as two distinct and necessary cognitive processes that complement and cooperate with each other. Which of them manifests depends on the task and the subject; that a subject does not show one of them does not mean that the subject does not possess it. Conditional arguments like this are quite common in the study of human intelligence. For example, mental information-processing speed (measured by special tasks) is strongly correlated with general intelligence test performance for people who score lower on the test, while processing speed is not correlated with performance for people who score higher. But this does not mean that highly intelligent people cannot think fast. Either way, the dilemma is resolved by stressing that these two options are not exclusive of each other.
### Stage 2: Logical Reasoning
Based on the discussion at the end of the last subsection, the reason why we choose logical reasoning as the second stage in this conceptual chronicle is obvious. Computational models using logical reasoning work on abstract representations of RPM-like items. For example, an entry image \(A\) in a matrix could be described by a series of propositions such as "triangle(\(A\))=True, triangle-large(\(A\))=False, triangle-on-the-left(\(A\))=True, square(\(A\))=True, square-small(\(A\))=True, and so on". In this example, the representations are restricted to Boolean expressions, but we can use more expressive formal logic like "color(\(A\))=green, number-objects(\(A\))=3, texture(\(A\))=dotted, and so on". The abstract representations in these models are either manually constructed or obtained through a preprocessing module. For example, the earliest computational model for solving RPM-like items--ANALOGY (Evans, 1964)--consists of two modules, the first of which constructs such representations9, whereas the influential models--FAIRMAN and BETTERMAN (Carpenter et al., 1990)--use handcrafted logic
representations.
Each computational model in this stage has a customized formal system for representing RPM-like items. This system is either specially designed for the specific problem set of interest or reuses some standard system such as the region connection calculus or scalable vector graphics. On top of the formal representation system, the three main components of logical reasoning are implemented. In the context of RPM, the entry images in the matrix are the premises and the answer choices are possible consequences, whereas the rules are to be determined. The models in this stage split into two branches according to how the rules are determined.
#### 4.2.1 Rule Matching
The first branch is rule matching, in which the model hardcodes a finite set of predefined rules and matches rows and columns against each of them. For example, a predefined rule describing the number of objects could be "number-objects(\(A\))+number-objects(\(B\))=number-objects(\(C\))", in which \(A\), \(B\), and \(C\) are entry images in a row or column of a 3\(\times\)3 matrix. If a rule applies to the first row(s) or column(s), it is reproduced on the last row or column to generate the formal representation of the missing entry. Many computational models have been constructed this way to solve RPM-like items (Hunt, 1974; Carpenter et al., 1990; Ragni and Neubert, 2012, 2014). From the current point of view this might look surprising, because such models seemingly cannot generalize beyond the predefined rules. However, this is not true from the perspective of problem solving. Readers who are skeptical about this can make analogies to other cases: consider how many rules one needs to derive the integers or the real numbers, and consider also the expressive power of these number systems. The reason why these number systems can be represented concisely and completely is that, when we discuss them in math, the symbols (i.e., the elements of these systems) do not need to bind with concrete entities. In computational models of logical reasoning, the formal representation system is in charge of binding symbols with entities. Thus, it is one-sided to argue that rule matching models are not generalizable without referring to the formal representation system. If the geometric visual stimuli are extremely simple (as in most general intelligence tests) or the formal representation system is extremely powerful, rule matching models will generalize to the whole problem domain, just as a few rules can produce the entire integers or reals. This conditional argument echoes the discussion about abstraction and imagery at the end of the last subsection, as both deal with the level of abstraction and the operations that can be implemented at that level.
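A minimal rule-matching sketch over symbolic entry descriptions, in the spirit of the example rule above, might look as follows. The rule set and the attribute name are assumptions for illustration, not FAIRMAN, BETTERMAN, or any ACT-R model.

```python
# Each matrix entry is a dict of symbolic attributes, e.g. {"number_objects": 3}.
RULES = {
    "constant":    lambda a, b, c: a == b == c,
    "progression": lambda a, b, c: (b - a) == (c - b) != 0,
    "sum":         lambda a, b, c: a + b == c,
}

def matched_rules(matrix, attribute):
    """Return the hardcoded rules that hold for the attribute in the first two rows."""
    return [name for name, rule in RULES.items()
            if all(rule(*[entry[attribute] for entry in row]) for row in matrix[:2])]

def predict_missing(matrix, attribute):
    """Reproduce a matched rule on the last row to predict the missing value."""
    a, b = (entry[attribute] for entry in matrix[2][:2])
    for name in matched_rules(matrix, attribute):
        if name == "constant":
            return a
        if name == "progression":
            return b + (b - a)
        if name == "sum":
            return a + b
    return None

matrix = [[{"number_objects": 1}, {"number_objects": 2}, {"number_objects": 3}],
          [{"number_objects": 2}, {"number_objects": 3}, {"number_objects": 5}],
          [{"number_objects": 3}, {"number_objects": 4}]]   # missing third entry
print(predict_missing(matrix, "number_objects"))             # 7, via the sum rule
```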
Another observation on the rule matching models is that most of the works in this branch are for cognitive modeling rather than problem solving. For example, the models in (Ragni and Neubert, 2012, 2014) are implemented on the ACT-R cognitive architecture. The purpose of cognitive modeling based on a cognitive architecture lies at the information-processing level, i.e., modeling how information is exchanged between multiple cognitive function modules. How the information is processed exactly inside each module is not really the focus of cognitive modeling; this corresponds to how a human or computational model comes up with a rule that happens to solve the item at hand. Thus, the use of predefined rules is understandable from the perspective of cognitive modeling. In summary, the rule matching approach, though it might not be able to solve all possible items in the problem domain, fulfills its duty perfectly in problem solving and in cognitive modeling.
#### 4.2.2 Rule Induction
In contrast to rule matching, the second branch--rule induction--is mainly studied for problem solving (Bohan and O'Donoghue, 2000; Davies and Goel, 2001; Ragni et al., 2007; Schwering et al., 2007; Strannegard et al., 2013) with an exception that the analogy-making models are closely related to both cognitive modeling and problem solving (Tomai et al., 2005; Lovett et al., 2007, 2009). Rule induction means that the models need to discover the rules, i.e., how an entry is transformed to the next one or how the entries in a row or column are related, in a more open manner. In particular, the rules are represented as
the identical and different parts between the logical structures of entry images, and/or how the different parts are changed to identical parts by transformations. The rules are also logical representations in nature. After the rules are discovered, the models have two options--they can either reproduce the rules on the last row or column to generate the answer and compare it to the answer choices, or insert each answer choice into the last row or column to induct a rule for the last row or column and compare it to the previously inducted rules. These two options correspond to the two strategies--constructive matching and response elimination--commonly adopted by human participants (Bethell-Fox et al., 1984). The latter one is more often adopted by analogy-making models as it is similar to how an analogy is drawn by humans.
Rule induction is a larger topic that goes beyond the traditional format of logic reasoning. For example, the geometric objects and rules can be represented in a vector-symbolic architecture (Rasmussen and Eliasmith, 2011), in which geometric objects are represented as vectors and rules are inducted and applied through operations on the vectors, such as circular convolution. If we take a closer look at the details of calculation, we would find that the calculation is a different way to implement the rule induction in the models mentioned above (of course, vector-symbolic architecture has its own advantages and purposes). Another example is that reinforcement learning methods can be used to train an agent to induct the rules in matrix reasoning items (Raudies and Hasselmo, 2017), i.e., when the agent forms a correct rule (action in reinforcement learning) while attending to a certain row or column in the matrix (state in reinforcement learning), the algorithm rewards the agent.
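As a concrete illustration of the vector operations mentioned above, the following numpy sketch shows binding and approximate unbinding via circular convolution and circular correlation, in the style of holographic reduced representations. The role/filler names and the dimensionality are illustrative assumptions; the sketch does not reproduce how Rasmussen and Eliasmith induct rules, only the binding primitive they build on.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # dimensionality of the vector-symbolic space

def circ_conv(a, b):
    """Bind two vectors with circular convolution (computed via FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    """Approximately unbind: circular correlation inverts the binding."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Random vectors standing for a role ("shape") and a filler ("triangle").
role = rng.normal(0, 1 / np.sqrt(D), D)
filler = rng.normal(0, 1 / np.sqrt(D), D)

bound = circ_conv(role, filler)      # a distributed encoding of shape=triangle
recovered = circ_corr(role, bound)   # query the binding with the role vector

# The recovered vector is close to the original filler (up to noise):
cos = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(f"cosine(filler, recovered) = {cos:.2f}")  # typically around 0.7, far above chance (~0)
```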
The boundary between rule matching and rule induction is not always so clear in practice. To what extent a model is performing rule matching or rule induction depends on how the potential rules are provided to the model: if only several specific rules are provided, it is rule matching; if a huge rule space is provided by specifying some "bases" or "generators" of it, it is rule induction; and there are many cases in between. One can even argue that a rule induction model is in nature a rule matching model because it matches to the whole or a subspace of the rule space in some implicit way, and, thus, there is no such distinction between rule matching and rule induction. Nonetheless, there are indeed examples of rule induction that no one would consider as rule matching. For example, consider a free group in abstract algebra, in which finding an element satisfying a specific condition could be very difficult even though the generators of this free group look simple. Other similar examples can be found in the problems of program synthesis and inductive programming.
### Stage 2.5: Neuro-Symbolic Reasoning
The reason why we use a decimal in the title is that this stage is an intermediate stage that shares features with both its predecessor and successor. Since the influence of its predecessor and successor is stronger than that of this stage itself, this stage is relatively short and rapidly transitions into its successor.
The models of neuro-symbolic reasoning consist of two modules--a neural perception frontend and a symbolic reasoning backend. The neural perception frontend (implemented as neural networks in most cases) extracts/approximates the distributions over the values of each entry in the predefined formal representation system. The symbolic reasoning backend performs probability calculation according to a predefined set of rules. In a sense, neuro-symbolic reasoning can be considered as a special case of rule matching in logical reasoning. The probability formulae in the backend are determined by the predefined rules and the output of the reasoning, such as the probability that a rule exists in rows or columns, that the missing entry contains a certain value in its representation, or that a certain answer choice is correct. Similarly, different implementations of frontend and backend have been used to construct probabilistic reasoning models, such as ALANS2, PrAE, VAE-GPP, TRIVR, LoGe, and NVSA (Zhang et al., 2020, 2021; Shi et al., 2021; He et al., 2021; Yu et al., 2021; Hersche et al., 2022).
Compared to logical reasoning, neuro-symbolic reasoning clearly requires that a dedicated neural processing module is used to construct the formal abstract representation of each entry image. In addition, it also takes into account the uncertainty in perception by using probability to represent and reason. Technically speaking, neuro-symbolic reasoning is only a small step forward compared to logical reasoning. The reason why it is listed as a separate stage is that it is a natural watershed between **knowledge-based approaches** and **data-driven approaches**, because the neural perception frontend requires training data while imagery-based and logical reasoning are knowledge-based. In the next two subsections, we will elaborate on data-driven approaches.
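As a minimal illustration of this frontend-backend split, the sketch below assumes a hypothetical neural frontend has already produced per-entry distributions over a single attribute, and a symbolic backend computes the probability that a "constant value" rule holds in a row under an independence assumption. The numbers, attribute, and rule are illustrative and do not reproduce any of the cited models.

```python
import numpy as np

# Hypothetical frontend output: for each entry of a row, a distribution over
# the attribute "number of objects" taking values 1..4 (each row sums to 1).
row_dists = np.array([
    [0.05, 0.85, 0.05, 0.05],   # entry A: probably 2 objects
    [0.10, 0.80, 0.05, 0.05],   # entry B: probably 2 objects
    [0.05, 0.75, 0.10, 0.10],   # entry C: probably 2 objects
])

def p_constant_rule(dists):
    """Probability that all entries share the same attribute value,
    assuming the entries are conditionally independent given the images."""
    return float(np.sum(np.prod(dists, axis=0)))

print(f"P(constant-number rule holds) = {p_constant_rule(row_dists):.3f}")
```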
### Stage 3: Learning Approach
An obvious characteristic of the first three stages is that they all rely on predefined representation systems of geometric objects and relations/rules between geometric objects. To reduce the reliance on such explicit prior knowledge, the learning approach has been introduced into the field of RPM-like tasks. This section reviews the learning models, especially deep learning, for solving RPM-like tasks. We divide the learning approach into four types according to the structures of learning models. For each type, we provide a high-level functional description that applies to all the models of the type while trying not to complicate the discussion with too many technical details. The purpose of this taxonomy is to reveal the structural evolution of the learning models (from Type 1 to Type 4), analyze the reason why it evolves this way, and, more importantly, provide guidance for research works in this field.
#### 4.4.1 Type 1
A natural solution to reduce the reliance on the predefined representation system of geometric objects and rules is similar to the upgrade from logical reasoning to neuro-symbolic reasoning: instead of approximating the distribution of attribute values of entries, we can directly approximate the conditional distribution of possible rules given multiple matrix entries (through standard or customized neural network approximators). Therefore, the rules serve only as labels to distinguish between different rules, and no formal representations of geometric objects and rules are involved in computation.
Two typical examples of Type 1 are Pairwise-ADV and Triple-ADV (Mekik et al.), which approximate the distributions of binary and ternary rules, respectively. A binary rule variable indicates whether a binary rule applies to two adjacent entries, for example, whether the objects in the two entries are of the same color, while a ternary rule variable denotes whether a ternary rule applies to three adjacent entries, such as the number of objects in Entry C equals the sum of geometric objects in Entries A and B. Another example of Type 1 is DeepIQ (Mandziuk and Zychowski, 2019), in which the variable of rules between two adjacent entries is an ordered categorical variable (rather than binary), for example, the objects in the two entries differ by 3 units in their sizes. The random variables used in these two examples play a role similar to the different formal representation systems in the logical reasoning approach; they are functionally equivalent.
The parallelism heuristic--spatial parallelism implies abstract conceptual parallelism--is commonly used to determine the combinations of matrix entries to present to the distribution approximator. According to the parallelism heuristic, the rule distributions in parallel rows or columns should be the same or similar; thus, probability metrics, such as KL-divergence, or general similarity metrics,
Figure 17: Type 1
like Euclidean distance, are used to measure the similarity; the answer choice is chosen so that it gives a last row/column whose rule distribution is most similar to the ones of the context rows/columns.
A diagram of Type 1 is given in Figure 17. Note that an entry-wise encoder is used to process each input entry individually, which is similar to the perception frontend in probabilistic reasoning. But, unlike the perception frontend, the entry-wise encoder does not necessarily output distributions over the predefined representations of geometric objects. The entry-wise encoder can output any latent representation that can be used to approximate the rule distributions. After the entry-wise encoder encodes every entry in an input sequence, the embeddings of these entries are further aggregated and processed by the rule distribution approximator; the rule distributions of different sequences are finally compared to select the answer choice. The entry-wise encoder and rule distribution approximator can be implemented based on various neural network modules, such as CNN, ResNet and MLP. In practice, these two modules are jointly trained given the ground-truth rule labels of entry sequences.
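A minimal PyTorch sketch of this Type-1 data flow is given below: an entry-wise encoder, a rule-distribution approximator over a row of three entries, and answer selection by comparing rule distributions with KL divergence under the parallelism heuristic. All module sizes are illustrative, the networks are untrained here, and the sketch only shows how the pieces connect.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_RULES, EMB = 8, 64

entry_encoder = nn.Sequential(            # one entry image -> embedding
    nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(EMB), nn.ReLU())

rule_head = nn.Sequential(                # three embeddings -> rule distribution
    nn.Linear(3 * EMB, 128), nn.ReLU(), nn.Linear(128, N_RULES))

def rule_dist(row_images):                # row_images: (3, 1, 32, 32)
    emb = entry_encoder(row_images)       # (3, EMB)
    return F.softmax(rule_head(emb.flatten().unsqueeze(0)), dim=-1)  # (1, N_RULES)

def select_answer(context_rows, last_two_entries, choices):
    """Pick the choice whose completed last row has the rule distribution
    closest (in KL divergence) to the context rows' average distribution."""
    target = torch.stack([rule_dist(r) for r in context_rows]).mean(0)
    kls = []
    for c in choices:
        row = torch.cat([last_two_entries, c.unsqueeze(0)], dim=0)
        kls.append(F.kl_div(rule_dist(row).log(), target, reduction="sum"))
    return int(torch.argmin(torch.stack(kls)))

# Toy usage with random "images" of a 3x3 matrix and 4 answer choices.
ctx = [torch.randn(3, 1, 32, 32) for _ in range(2)]
prefix, answers = torch.randn(2, 1, 32, 32), [torch.randn(1, 32, 32) for _ in range(4)]
print(select_answer(ctx, prefix, answers))
```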
#### 4.4.2 Type 2
Unlike the approaches in Stages 1, 2, and 2.5, Type 1 has avoided composing computing streams that explicitly rely on the predefined formal representation systems. But it still relies on the ground-truth rule labels and the parallelism
Figure 18: Type 2
heuristic. This issue is solved in Type 2, which is free of both dependencies, as shown in Figure 18. This type converts an RPM into a classification problem, where the class labels are the correctness of each answer choice. In particular, when only one answer choice is included in the input, it is a binary classification problem; when all answer choices are included, it is a multi-class problem.
Readers might have noticed a difference between Figure 17 and Figure 18--the entry-wise encoder has been replaced by an encoder (not necessarily entry-wise). As the name indicates, the encoder takes as input multiple entries, and the relational information between entries is thus encoded into its output. This difference gives rise to the difference between perceptual and conceptual processing. In RPM, perceptual processing is the processing of each single matrix entry, whereas conceptual processing generally involves reasoning about the relations between multiple matrix entries, i.e., the rules that govern the variation of multiple matrix entries. If one wishes to explicitly separate these two types of processing, one would have a module that attends to each entry individually and another module to aggregate the outputs of the first module, as in Type 1. This design choice is important for building computational models for visual abstract reasoning tasks. By changing the name to "encoder", we imply that Type 2 does not necessarily require an explicit separation of perceptual and conceptual processing.
Hoshen and Werman (2017) implemented the first Type-2 model using a
Figure 19: Type 2+
CNN encoder and an MLP classifier, and tested it on simple figural series and RPM-like tasks. This CNN+MLP model has since been used as a baseline to evaluate later works. Influenced by popular works in image classification, the early attempts to solve RPM-like tasks with learning models mostly follow the structure of Type 2, for example, the Wild-ResNet+MLP model (Barrett et al., 2018) and the ResNet+MLP model (Zhang et al., 2019), respectively representing the binary and multi-class versions of Type 2. In the work of Hoshen and Werman (2017), they also proposed the generative counterpart of the CNN+MLP model, by replacing the MLP classifier with a deconvolutional module to generate the predicted answer image (no answer choice is provided as input in this case). We include this modification by upgrading Type 2 to Type 2+, as shown in Figure 19.
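The following sketch illustrates the Type-2 formulation in the spirit of such CNN+MLP baselines: the context panels and one answer choice are stacked as channels and scored jointly. The architecture and sizes are illustrative assumptions, not the configurations of the cited works.

```python
import torch
import torch.nn as nn

# Minimal Type-2 sketch: the 8 context entries and one answer choice are
# stacked as channels of a single input and scored by a CNN encoder plus an
# MLP head (binary formulation). Sizes and hyperparameters are illustrative.

class CnnMlpScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 9 stacked panels -> feature vector
            nn.Conv2d(9, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, context, choice):
        # context: (B, 8, H, W); choice: (B, 1, H, W)
        x = torch.cat([context, choice], dim=1)          # (B, 9, H, W)
        return self.head(self.encoder(x)).squeeze(-1)    # one score per choice

model = CnnMlpScorer()
context = torch.randn(2, 8, 80, 80)              # batch of 2 items
choices = torch.randn(2, 8, 1, 80, 80)           # 8 answer choices per item
scores = torch.stack([model(context, choices[:, i]) for i in range(8)], dim=1)
print(scores.argmax(dim=1))                      # predicted answer indices
```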
#### 4.4.3 Type 3
By following the formulation of image classification, Type 2 eliminates the reliance on the ground-truth rule labels and the parallelism heuristic. However, visual abstract reasoning is a conceptually different task from image classification. In particular, abstract high-order relations need to be built upon raw perceptual visual input. The reason why a human participant thinks a visual
Figure 20: Type 3
abstract reasoning item is difficult is not because she cannot recognize the simple geometric objects in the item, but because the abstract concepts and relations could be complex, diverse, and hard to extract from the simple geometric objects. In image classification, by contrast, concrete concepts are built upon complex visual stimuli, for example, recognizing daily objects in various backgrounds. Therefore, without further customization, the standard learning models for image classification are not able to give a satisfying solution to visual abstract reasoning.
By comparing Type 2 with its predecessors, which perform well on RPM (but rely on predefined formal representation systems of geometric objects and rules), we find that a questionable design choice in Type 2 is that it does not separate perceptual and conceptual processing, a separation which has been proved to be beneficial for visual abstract reasoning in many later works. This observation leads us to Type 3, as shown in Figure 20. Note that one can argue that Type 3 is a special case of Type 2, by regarding everything before the classifier head as a single module. But models based on this specification generally perform better than the typical models of Type 2.
After the entry-wise encoder encodes every entry, these entry embeddings go through a combinatorial process, in which subsets of these entry embeddings are selected and fed into the next module subset by subset. In Figure 20, we use two trapezoids of opposite orientations for the entry-wise encoder and this combinatorial process to indicate that the amount of information is compressed and decompressed (i.e., the number of combinations is more than what are combined). As the name "combinatorial heuristics" indicates, Type 3 explicitly relies on some heuristics to take combinations, which include but are not limited to the aforementioned parallelism heuristic. Essentially, these heuristics inform the model of which entry embeddings, together as a group, would make an instance of a rule. Each group is individually processed by a singleton rule encoder to produce a rule embedding for the group. At last, all rule embeddings are aggregated for classification.
A typical example of Type 3 is the WReN model (Barrett et al., 2018). WReN takes as input all context entries and one answer choice (thus solving
binary classification). The entry-wise encoder is a small CNN (plus tagging the entry embeddings with one-hot position vectors indicating the entries' positions in the matrix). For combinatorial heuristics, WReN considers all binary rules (i.e., relations between every two entries). Note that WReN does not use the parallelism heuristic, which is commonly used in other models; but the position-tagged entry embedding compensates for this, because the rule encoder can easily find the non-parallel sequences through position tags and output a specific rule embedding to indicate this for the following processing. The groups aggregator in WReN is simply a summation.
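A minimal sketch of this WReN-style computation is given below: position-tagged entry embeddings, a shared rule encoder g applied to every pair, summation, and a scoring MLP f. Dimensions and modules are illustrative and deliberately simplified relative to the original model.

```python
import torch
import torch.nn as nn
from itertools import combinations

EMB, HID = 64, 128

g = nn.Sequential(nn.Linear(2 * (EMB + 9), HID), nn.ReLU(), nn.Linear(HID, HID))
f = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(), nn.Linear(HID, 1))

def score(entry_embeddings):
    """entry_embeddings: (9, EMB) -- 8 context entries plus one answer choice."""
    tags = torch.eye(9)                                    # one-hot position tags
    tagged = torch.cat([entry_embeddings, tags], dim=-1)   # (9, EMB + 9)
    pairs = [torch.cat([tagged[i], tagged[j]])             # all binary relations
             for i, j in combinations(range(9), 2)]
    pair_embs = g(torch.stack(pairs))                      # (36, HID), shared encoder
    return f(pair_embs.sum(dim=0))                         # scalar score for this choice

# Toy usage: pick the highest-scoring of 8 candidate completions.
context = torch.randn(8, EMB)
choices = torch.randn(8, EMB)
scores = torch.stack([score(torch.cat([context, c.unsqueeze(0)])) for c in choices])
print(scores.argmax().item())
```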
Following WReN, a series of models of Type 3 have been created, using different entry-wise encoders, combinatorial heuristics, rule encoders and groups aggregators. For example, LEN (Zheng et al., 2019) considers only ternary rules for combinatorial heuristics, i.e., groups every three entries together for rule encoding, and applies gating variables to each group in the aggregator instead of tagging positions of entries (unsurprisingly, the experiment results showed that all gating variables but the ones of rows and columns were zeroed); MXGNet (Wang et al., 2020) also considers ternary rules, uses CNN or R-CNN as entry-wise encoder, relies on the parallelism heuristic for combinatorial heuristics (instead of gating variables), and employs a graph-learning-based rule encoder that
Figure 21: Type 3+
regards the 3 entries as a graph and computes the graph embedding as the rule embedding.
Different from the previous Type-3 models, multi-layer RN (Jahrens and Martinetz, 2020, 2019, 2018) extends the relation encoding in WReN into a multi-layer form. That is, the relation embeddings of entry groups are not aggregated into a single embedding for classification, but into multiple embeddings, which are further fed into another combinatorial module and rule encoder. Therefore, one could visualize multi-layer RN as a Type-3 model repeating the middle three modules as many times as needed. Intuitively, higher-order relations can be better extracted through this multi-layer design.
The SRAN model (Hu et al., 2021) adopts a more complicated encoding scheme by using multiple encoders and multiple rule encoders, where entries of two context rows/columns of 3\(\times\)3 matrices (6 in total) are encoded entry-wise, 3-entries-wise, and 6-entries-wise by three different encoders, and the resulting entry-embeddings, 3-entry-embeddings and 6-entry-embeddings are sequentially integrated by three rule encoders into a single rule embedding, representing the rule of these two context rows/columns. The encoding scheme of SRAN, though complicated, does not deviate too much from Type 3. But, instead of using rule embeddings to solve the item as a classification problem, SRAN directly uses similarity metrics of rule embeddings to select the answer, as in Type 1, which is also a common practice (just a different way to present the same supervising signal). Thus, it gives us a more complete Type 3+, as shown in Figure 21.
MRNet (Benny et al., 2021) is another Type-3 model using multiple entry encoders and multiple rule encoders, which process the input at multiple resolutions, determined by different layers' output in a CNN entry-wise encoder. The computational streams of different resolutions proceed separately and are aggregated at the end for classification.
Both these two models--SRAN and MRNet--are examples of using multiple entry encoders and multiple rule encoders. Another model--NSM (Shekhar and Taylor, 2021)--would be a better example to show the flexibility of Type 3. In particular, NSM solves the analogy-making task through two different
rule encoders--an LSTM rule encoder and a modular network encoder--for the base domain and the target domain, respectively. Moreover, the structure of the modular network depends on the output of the LSTM rule encoder. These examples imply that, to build a Type-3 model, one can use not only multiple encoders but also different types of encoders, and that the multiple encoders can be assembled in more complex ways, rather than being parallel.
Readers might have noticed the words "group" and "map" in the diagrams of Figures 20 and 21. By these words, we intend to call attention to a mechanism that is pervasive in information processing for visual abstract reasoning, i.e., which pieces of information should be grouped together and thus aggregated later, and which pieces of information should be mapped (Footnote 10) and thus processed in the same way (Footnote 11). These two types of decisions are interdependent on each other; more precisely, they are better viewed as two aspects of the same cognitive process. These decisions have to be made repeatedly at every level in information processing. Unfortunately, there might not be a centralized or universal theory for this grouping-mapping mechanism. As one can see in these Type-3 models, they all resort to some specific heuristics, which might not always be correct for visual abstract reasoning tasks.
Footnote 10: or aligned, or corresponded; we use “map” to resonate with structure-mapping theory of analogy making; i.e., if two entities in the base and target domains are mapped to each other, then they are analogous to each other.
Footnote 11: i.e., processed by the same module to force the analogical relation between them.
#### 4.4.4 Type 4
Now, it is a good time to look back at the path that we have walked down in reviewing data-driven approaches and summarize how we got here:
* From neuro-symbolic reasoning to Type 1: we eliminate the need of predefined representation systems of geometric objects and rules, but introduce the need of ground-truth rule labels. The parallelism heuristic is also inherited.
* From Type 1 to Type 2: we eliminate the need of the ground-truth rule labels and the parallelism heuristic, but the models do not perform well, because we use models for image classification, which is a fundamentally different task from visual abstract reasoning; the problem is treated purely as an image-classification problem.
* From Type 2 to Type 3: we separate perceptual and conceptual processing to make the model more suitable for visual abstract reasoning. Although the models perform reasonably well, specific grouping-mapping mechanisms (or combinatorial heuristics) are needed for solving different RPM-like tasks.
In this path, every time we want to eliminate the need of some prior knowledge, we introduce one or more neural network modules to learn it from annotated data. This general solution in Stage 3 makes the procedural aspect, i.e., the process of computing, of solving RPM less of a problem, as the procedure can be interpolated from the input and expected output through learning. The critical research point of the learning approach that determines the outcome of learning is thus shifted to the structural aspect. In visual abstract reasoning, the structural aspect includes the hierarchical structure of processing, for example, the separation of perceptual and conceptual processing. With another layer of analogical processing (i.e., higher-order processing involving multiple relations), it would make a more complete hierarchy. In another dimension, the structural
Figure 22: Type 4
aspect also includes the grouping-mapping mechanism we mentioned above. If one cannot abstract the task into these factors in the structural aspect and identify the "atomic" ones that can be easily solved through learning, the resulting learning model will not be effective and generalizable in the entire problem domain.
As indicated above, a remaining factor unsolved in Type 3 is the combinatorial heuristics. Type 4 attempts to solve it by regarding the grouping-mapping mechanism and rule encoding as a single "atomic" factor that can be learned through a single module--the reasoning module, as shown in Figure 22. The reason for combining them is empirical and pragmatic, because they are interwoven and it is hard to say whether the former determines the latter or the other way around. Since the reasoning module of Type 4 contains no grouping-mapping heuristics, its output does not necessarily indicate various rules among entries, and thus cannot be processed as in Type 3/3+. Thus, supervising signals are directly applied on this output. If we go back to the structure of Type 2, we will find that Type 4 resembles Type 2 in appearance. Nonetheless, Type 4 is much more effective than Type 2 on RPM, and it took many trials and errors to settle on this solution. It has now become a relatively stable solution to visual abstract reasoning, and different core techniques have been used to implement the reasoning module. We summarize the works into four categories using distinct reasoning kernels.
Reasoning Kernel 1: CNN. CNN has been a basic tool to extract features from raw perceptual input, and the extracted features are not only relevant for solving specific downstream tasks, but also represent correlations in the input. Solving visual abstract reasoning tasks is also about processing correlations among raw inputs. Theoretically, CNN would have been an effective solution to RPM-like tasks. However, several early influential works (Barrett et al., 2018; Zhang et al., 2019, 2019) argued that CNN and CNN-based models are not capable of
solving RPM-like tasks (Footnote 12). Since then, the research has been mainly focusing on other solutions. Ironically, after several years of exploration, Spratley et al. (2020) proposed two Type-4 models--Rel-Base and Rel-AIR--which are both CNN-based models and perform well on both PGM and RAVEN. After comparing these two models with the previous CNN models, we found that the difference is whether the conceptual processing and the perceptual processing are separated. Taking Rel-Base as an example, its entry-wise encoder is a CNN module and its reasoning module is also a CNN module; all the entry embeddings are first stacked together and then convolved with convolution kernels in the reasoning module. But the baseline CNN-based models do not have this artificial separation. Therefore, we conjecture that the outstanding performance of many non-CNN models is not because they found better solutions than CNN, but because they separate perceptual and conceptual processing. On the flip side, another implication is that when using a single CNN module for both perceptual and conceptual processing, it is an extremely difficult task to learn this separation from data, i.e., learn the hierarchical structure of the task and how the information at each level is correlated. However, from the perspective of general problem solving, it would be impossible for us to know where the perceptual-conceptual separation lies for every possible task; in this case, we would have to use a single huge monolithic model; and how such a model can be trained effectively would be an important future research question.
Footnote 12: This is also why the CNN+MLP and ResNet+MLP models have been constantly used as baselines.
Reasoning Kernel 2: LSTM. A typical Type-4 model is the CNN+LSTM+MLP model (Barrett et al., 2018). This model takes as input all context entries and one or more answer choices. Each entry embedding is sequentially processed by an LSTM reasoning module, and the final state of the LSTM is fed into an MLP classifier to predict the answer. This model is also used as a common baseline in many later works. LSTM has also been combined with other modules: Double
LSTM (Sekh et al., 2020) uses two LSTM modules, which each specialize in different rule types and are coordinated by an extra module trained to predict the rule type (Footnote 13); ESBN and NTM (Sinha et al., 2020; Webb et al., 2020), combining LSTM with external memory modules, can also be used as the reasoning kernels in Type 4.
Footnote 13: The reliance on ground-truth rule labels slightly deviates from our definition of Type 4.
Reasoning Kernel 3: Self-Attention. Another commonly used reasoning kernel is the self-attention module, which is composed of a multi-head attention and a feed-forward network (with residual connections and normalization). The most typical example of this reasoning kernel is the ARNe model (Hahne et al., 2019). It extends the Type-3 model, WReN, by inserting a self-attention module between the entry-wise encoder and the combinatorial heuristics. Note that although ARNe inherits the combinatorial heuristic of WReN, it is no longer a Type-3 model because the self-attended embeddings no longer represent individual entries. Instead, each self-attended embedding contains information about all the matrix entries, and is better considered a summary of the whole matrix from a particular angle. Therefore, the inherited combinatorial heuristics module and the following modules of WReN can be considered similar to other general classifier heads, simply aggregating the input and predicting the answer. In hindsight, a more reasonable order would have been to first test the self-attention module with a simpler classifier head rather than WReN's.
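A minimal sketch of a self-attention reasoning kernel in a Type-4 arrangement is shown below: precomputed entry embeddings are mixed by a Transformer encoder layer, pooled, and scored. It is an illustrative simplification, not the ARNe architecture, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

EMB = 64

class AttentionScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # self-attention + feed-forward block acting as the reasoning module
        self.reason = nn.TransformerEncoderLayer(
            d_model=EMB, nhead=4, dim_feedforward=128, batch_first=True)
        self.head = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, entries):            # entries: (B, 9, EMB)
        attended = self.reason(entries)    # each output summarizes the whole matrix
        return self.head(attended.mean(dim=1)).squeeze(-1)

model = AttentionScorer()
context = torch.randn(2, 8, EMB)           # batch of 2 items, precomputed embeddings
choices = torch.randn(2, 8, EMB)           # 8 candidate answer embeddings per item
scores = torch.stack(
    [model(torch.cat([context, choices[:, i:i + 1]], dim=1)) for i in range(8)], dim=1)
print(scores.argmax(dim=1))                # predicted answer indices
```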
A similar example is the HTR model (An and Cho, 2020), where an R-CNN entry-wise encoder is used to extract all geometric objects in each entry and two self-attention-based sub-modules are used to move the reasoning from object-level to entry-level and from entry-level to matrix-level. The first sub-module takes as input the object embeddings in a single entry and sums up the self-attended object embeddings as the entry embedding. Unlike ARNe and WReN, which solve RPM as binary classification, HTR solves it as multi-class classification. Therefore, the output of the second sub-module contains 8 embeddings
corresponding to the 8 answer choices. These 8 embeddings are fed into a contrastive classifier head (Zhang et al., 2020) to predict the answer label.
A more general example is the Neural Interpreter model (Rahaman et al., 2021), which implements its most basic building block "function" as a self-attention module associated with two learnable vectors, which affect the module's computation and its access to input data, respectively. The self-attention modules are analogous to functions in a programming language (as the term "interpreter" indicates), with one vector defining the function body and the other defining the function signature (type-matching particularly). A neural interpreter is composed of multiple iterations of a finite set of functions. Like the original self-attention in the Transformer, it converts a set of embeddings into a set of corresponding embeddings decorated with relational information. The Neural Interpreter was tested on RPM as binary classification. A CNN entry-wise encoder is used to produce entry embeddings. As in BERT (Devlin et al., 2018), a classification token is included in the input embeddings, whose corresponding output embedding is fed into a linear classifier head.
Reasoning Kernel 4: Multi-Head Rule Detector. The last reasoning kernel is closely related to the rule encoder of Type 3. Recall that the combinatorial heuristics module in Type 3 groups the entry embeddings into multiple groups, and each group is separately processed by the rule encoder to obtain a rule embedding for this group. Although this rule encoder has 1-in and 1-out, it is responsible for recognizing and encoding all the possible rules that might occur in the input. Recall that, by moving from Type 3 to Type 4, we intended to eliminate the reliance on combinatorial heuristics. A natural alternative solution could be an "all-in-all-out" rule encoder (rather than 1-in-1-out), which takes as input all the entry embeddings of a matrix (no grouping) and outputs all the possible rules. The relationship between "1-in-1-out" and "all-in-all-out" is analogous to the relationship between image classification and object detection, where multiple objects exist in the image. Particularly, the new rule encoder can have multiple output heads, where later supervising pressure
can be applied to force each head to represent a specific rule or specific rules. Therefore, we refer to this reasoning kernel as the multi-head rule detector. This kernel is underrepresented because we found only one model using it--the SCL model (Wu et al., 2021)--but it is very efficient for visual abstract reasoning.
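A minimal sketch of such an "all-in-all-out" multi-head rule detector is given below: all entry embeddings are consumed at once and each head emits logits for one family of rules. It is an illustrative simplification under assumed sizes, not the SCL architecture.

```python
import torch
import torch.nn as nn

EMB, N_HEADS, N_RULE_TYPES = 64, 4, 5

class MultiHeadRuleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # shared trunk over all nine entry embeddings (no grouping heuristics)
        self.trunk = nn.Sequential(nn.Linear(9 * EMB, 256), nn.ReLU())
        # one output head per family of rules; supervision (if available) or the
        # downstream answer loss shapes what each head comes to represent
        self.heads = nn.ModuleList(
            [nn.Linear(256, N_RULE_TYPES) for _ in range(N_HEADS)])

    def forward(self, entries):            # entries: (B, 9, EMB)
        h = self.trunk(entries.flatten(1))
        return torch.stack([head(h) for head in self.heads], dim=1)  # (B, heads, rules)

detector = MultiHeadRuleDetector()
print(detector(torch.randn(2, 9, EMB)).shape)   # torch.Size([2, 4, 5])
```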
### Stage 4: Data Manipulation
The reported performance of some learning models in Stage 3 has already surpassed human performance under certain circumstances. However, the unreported or non-highlighted performance is far from satisfactory. A serious issue is that abstract concepts are not learned by these models because they do not generalize well when the abstract concepts are presented in different perceptual stimuli. This type of generalization is fundamental to visual abstract reasoning and also a hallmark of human intelligence. Therefore, the exploration has never stopped. Since the four types of learning models in the last stage explored many structural possibilities for building learning models, we have observed more and more efforts devoted to studying the problem domain per se and how it is solved by humans. This is perfectly understandable because when one realizes that all the existing tools do not work, she will naturally scrutinize the problem per se and try to understand why it is different from previously solved problems. These efforts result in the works of Stage 4, which utilize features of the visual abstract reasoning task that do not necessarily exist in other tasks. These efforts also resonate with the upward-spiral pattern we mentioned at the beginning of this conceptual chronicle, as these task-specific features are also heavily used in the approaches of Stages 1 and 2, though in different ways. In particular, datasets of RPM-like items are carefully manipulated to present the task to learning models in a way similar to how humans perceive and conceptualize RPM items. This way, the works in Stage 4 could force the models to learn abstract concepts and specific visual stimuli, distinguish between them, generalize the abstract concepts to the entire domain, and, finally, build the ability on the entire problem domain.
#### 4.5.1 Auxiliary Training
For the models of Type 2, 3 and 4, an extra classifier head can be attached exactly where the existing classifier head is attached to predict the meta-target of the input RPM-like item, which is a multi-hot vector indicating the attributes of geometric objects and rules in this item. These meta-targets are usually accessible in algorithmically-generated datasets, such as PGM and RAVEN. The learning models can thus be trained on the answer labels and meta-targets simultaneously. The training on meta-targets is often referred to as auxiliary training in the literature.
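A minimal sketch of auxiliary training is shown below: a shared backbone feeds both an answer classifier and a meta-target head, and the two losses are combined with a weighting factor. The backbone, sizes, and weight are illustrative assumptions rather than the settings of any cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHOICES, N_META, FEAT = 8, 12, 256
backbone = nn.Sequential(nn.Linear(9 * 64, FEAT), nn.ReLU())   # stands in for any encoder
answer_head = nn.Linear(FEAT, N_CHOICES)
meta_head = nn.Linear(FEAT, N_META)
beta = 10.0                                                     # auxiliary-loss weight (assumed)

def joint_loss(item_features, answer_label, meta_target):
    feats = backbone(item_features)
    answer_loss = F.cross_entropy(answer_head(feats), answer_label)
    # the meta-target is multi-hot, so each digit is an independent binary prediction
    aux_loss = F.binary_cross_entropy_with_logits(meta_head(feats), meta_target)
    return answer_loss + beta * aux_loss

features = torch.randn(4, 9 * 64)                 # batch of 4 pre-encoded items
labels = torch.randint(0, N_CHOICES, (4,))
meta = torch.randint(0, 2, (4, N_META)).float()
print(joint_loss(features, labels, meta).item())
```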
Intuitively, this extra supervising signal can boost the accuracy of the answer-label classifier head. Auxiliary training was first tried with the WReN model on the PGM dataset and indeed showed an approximately 10% boost (in the IID generalization regime). The contribution of auxiliary training was verified by a high correlation between the two classifier heads' accuracies (Barrett et al., 2018). Similar observations on PGM were also found in other studies (Pekar et al., 2020; Hahne et al., 2019). In particular, the ARNe model would not even converge without auxiliary training.
However, the effect of auxiliary training is still inconclusive. Benny et al. (2021) showed that auxiliary training on PGM could only increase the accuracy on 1-rule items but decrease the accuracy on multi-rule items. This could cause a decrease in the overall accuracy when the dataset is composed of complex RPM-like items. Besides being affected by rules, the effect also differs between datasets. It has been reported that auxiliary training would generally decrease the performance on the RAVEN dataset (Zhang et al., 2019, 2019; Zheng et al., 2019; Wang et al., 2020), with one exception (Kim et al., 2020), which used a special contrastive loss and will be discussed later. Besides, Malkinski and Mandziuk (2020) also showed contradictory results that when the meta-target is encoded in a sparse manner (the above works all use dense encoding), auxiliary training can increase the performance on RAVEN. Therefore, we can only say that the effect of auxiliary training is jointly determined by model,
loss function, dataset, and meta-target encoding.
#### 4.5.2 Disentangled and Generative Representations
The neuro-symbolic reasoning in Stage 2.5 has been frequently using standard neural networks, such as autoencoders and CNNs, as the perception frontend to construct representations of entry images with explicit symbolic meaning. In contrast, as we mentioned in Type 1 of Stage 3, the symbolic meaning of encoders' output is not guaranteed. In addition to representations with symbolic meaning, disentangled and generative representations are used in Stage 4. For example, the Type-1 model, DeepIQ (Mandziuk and Zychowski, 2019), uses a variational auto-encoder (VAE) as its encoder, which is pretrained on entry images of the Sandia dataset and kept frozen when the rule approximator is trained later.
Several advantages of disentangled and generative representations in RPM have been reported, such as data efficiency (van Steenkiste et al., 2019), robustness to distracting attributes (Zheng et al., 2019) and better OOD generalization (Steenbrugge et al., 2018). Disentangled and generative representations of entry images are usually obtained through VAE or its variants. For example, in Type 3, \(\beta\)-VAE, FactorVAE, \(\beta\)-TCVAE and DIP-VAE were pretrained on entry images and the frozen encoders were combined with WReN (Steenbrugge et al., 2018; van Steenkiste et al., 2019); a reduced version of MRNet was jointly trained with a VAE to simultaneously predict the answer label and generate the answer image (thus we call it generative-MRNet) (Pekar et al., 2020). For Type-4 models, the VAE is usually jointly trained with the reasoning module, for example, the aforementioned ESBN model (Sinha et al., 2020), and the LoGe model (Hersche et al., 2022), which uses VQ-VAE as its encoder and decoder. Another special example of Type 4 is the Rel-AIR model (Spratley et al., 2020), which integrates into its encoder an Attend-Infer-Repeat model (Eslami et al., 2016)--a model that can be thought of as an iterative VAE.
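As a minimal illustration of how such representations are typically pretrained, the sketch below implements a small \(\beta\)-VAE-style loss over single panel images; the architecture, latent size, and \(\beta\) value are illustrative and far simpler than the cited variants.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 16

class EntryVAE(nn.Module):
    """Tiny VAE over 32x32 panel images; the encoder could later be frozen
    and reused as an entry-wise encoder."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, LATENT), nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, 32 * 32))

    def forward(self, x):                                  # x: (B, 1, 32, 32)
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def beta_vae_loss(model, x, beta=4.0):
    recon, mu, logvar = model(x)
    rec = F.mse_loss(recon, x.flatten(1), reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return rec + beta * kl          # beta > 1 pressures the code toward disentanglement

panels = torch.rand(8, 1, 32, 32)   # a batch of entry images
print(beta_vae_loss(EntryVAE(), panels).item())
```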
#### 4.5.3 Contrastive learning and Manipulating Data
In addition to supervised learning, contrastive learning has also been used for solving RPM. We need to point out that the techniques of contrastive learning have been highly adapted to employ the structural and analogical characteristics of RPM and thus might not strictly follow the paradigms of contrastive learning. Particularly, the characteristics of RPM provide more options to manipulate data, such as decomposing matrices into rows and columns and regrouping them, and regrouping answer choices and even RPM problems, and various supervising signals can be applied to contrast the decomposed and regrouped data.
Intra-Item Contrasting: Row/Column Contrasting. The minimum structure that can be contrasted is rows/columns of a matrix. This type of contrasting was first attempted in the MCPT model (Zhuo and Kankanhalli, 2020), where the 8 answer choices are inserted into the 3\(\times\)3 matrix to obtain 10 rows/columns (2 context rows/columns and 8 answer choice rows/columns). The context rows/columns are assigned pseudo-label 1 and answer choice rows/columns are assigned pseudo-label 0; this newly constructed pseudo-dataset of rows/columns is learned by a Type-2 model, assuming that the single "mis-assigned" pseudo-label for the correct choice row/column does not affect the final result of learning. To solve RPM, the answer choice row/column with the highest predicted output (between 0 and 1) is selected.
The intuition behind MCPT is to capture any characteristic that distinguishes between the correct and incorrect choices when they are embedded into the third row/column. In particular, it checks whether the third row/column has a meaningful variation that is similar to any context row/column in the dataset. The PRD model (Kiat et al., 2020) enhanced this type of single-row/column contrasting by including the parallelism heuristic. As in standard contrastive learning, positive and negative pairs are constructed from rows/columns, where the first two rows/columns in an RPM matrix make a positive pair. The negative pair could be constructed in different ways, such as rows/columns from different RPM-like items, randomly shuffled rows/columns of the same RPM, or filling the
third row/column with a random non-choice entry. In PRD, a Type-2 model is used to learn a metric to measure the similarity between the two rows/columns in a pair. To solve an RPM, the choice row/column that is most similar to the first two rows/columns is selected. Compared to single-row/column contrasting, double-row/column contrasting is more common and can be found in many other works. For example, the aforementioned generative-MRNet (Pekar et al., 2020) contrasts the answer choice rows/columns completed by the generated answer with the answer choice rows/columns completed by the given answer choices.
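The following sketch abstracts the row/column contrasting idea: a shared row encoder, a positive pair built from the two context rows, a negative pair built from a shuffled row, and a margin-based loss. It is a simplified stand-in under assumed sizes, not a faithful reimplementation of MCPT or PRD.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB = 64
row_encoder = nn.Sequential(nn.Linear(3 * EMB, 128), nn.ReLU(), nn.Linear(128, 64))

def embed_row(row):                      # row: (3, EMB) entry embeddings
    return row_encoder(row.flatten())

def contrastive_loss(row_a, row_b, row_neg, margin=1.0):
    """Pull the two context rows together, push the corrupted row away."""
    a, b, n = embed_row(row_a), embed_row(row_b), embed_row(row_neg)
    pos = F.pairwise_distance(a.unsqueeze(0), b.unsqueeze(0))
    neg = F.pairwise_distance(a.unsqueeze(0), n.unsqueeze(0))
    return (pos + F.relu(margin - neg)).mean()

row1, row2 = torch.randn(3, EMB), torch.randn(3, EMB)
negative = row1[torch.randperm(3)]       # shuffled entries break the rule structure
print(contrastive_loss(row1, row2, negative).item())

# At test time, the answer choice whose completed third row is most similar to
# the context rows under the learned embedding would be selected.
```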
The rationale of moving from single-row/column to double-row/column contrasting was also exemplified by the LABC training/testing regime (Hill et al., 2019), which makes the contrasting more accurate and complete through the meta-targets used in auxiliary training. Different from the single-row/column and double-row/column contrasting, where the effect of contrasting is applied through extra contrastive loss functions, LABC, as a training/testing regime, requires models to learn adapted datasets, which force the model to contrast the rows/columns. In particular, an RPM-like item is adapted by muting some digits of its meta-target vector and regenerating the incorrect answer choices according to the muted meta-target vector. Since meta-targets represent the rules and geometric objects that are used to generate RPM items, the newly-generated answer choices are partially correct. This way, the model will have to compare such answer choice rows/columns and the context rows/columns to find the correct answer, instead of only seeking meaningful variations in the answer choice row/column as in single-row/column contrasting. LABC makes this idea more systematic by introducing the concepts of semantically and perceptually plausible answer choices, corresponding to muting different subsets of meta-target digits and using distracting objects and rules.
Intra-Item Contrasting: Matrix Contrasting. Instead of contrasting rows/columns, we can also contrast the matrices completed by each answer choice. This is essentially contrasting the answer choices in the context of the context entries. The
Type-2 model, CoPINet (Zhang et al., 2019), is the first model performing such contrasting. The contrasting in CoPINet is two-fold--contrastive representation and contrastive loss. First, for an RPM-like item, the embeddings of the matrices completed by each answer choice are aggregated into a "central" embedding, and their differences to the "central" embedding are used in the following processing. Second, given the interweaving of these matrix embeddings, it naturally leads to a contrastive loss function that incorporates matrices completed by correct and incorrect answer choices and increases the gap between their predicted values. This contrastive loss function could be easily embedded into models of parallel computation streams, for example, the aforementioned HTR model (An and Cho, 2020).
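A minimal sketch of matrix contrasting in this spirit is given below: embeddings of the completed matrices are centered on their mean ("central" embedding), scored, and trained so that the correct choice's score stands apart from the others. It simplifies CoPINet's actual contrast module and loss, and the sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def contrast_scores(completed_embs):            # (8, 128): one per answer choice
    # contrastive representation: differences to the "central" embedding
    central = completed_embs.mean(dim=0, keepdim=True)
    return scorer(completed_embs - central).squeeze(-1)   # (8,)

def contrast_loss(scores, correct_idx):
    # a softmax cross-entropy over choices widens the gap between the correct
    # choice's score and the incorrect ones'
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([correct_idx]))

embs = torch.randn(8, 128)                       # embeddings of 8 completed matrices
scores = contrast_scores(embs)
print(scores.argmax().item(), contrast_loss(scores, correct_idx=3).item())
```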
We need to point out that row/column contrasting and matrix contrasting are not exclusive. For example, the DCNet model (Zhuo and Kankanhalli, 2021) first uses row/column contrasting to compute the matrix embeddings and then uses matrix contrasting to predict the answer.
Inter-Item Contrasting: Single-Label Contrasting. The above contrasting has been restricted within a single RPM-like item. The contrasting can also be between multiple items. The ACL and Meta-ACL models (Kim et al., 2020) are the first two inter-item contrasting models. The relation between ACL and Meta-ACL is similar to that between single-row/column and double-row/column contrasting. Given an RPM, let \(X\) be its incomplete context matrix (regarding the missing entry as an empty image), \(X_{i}\) an incomplete matrix obtained by replacing the \(i\)-th entry with a white-noise image, and \(X^{\prime}\) an incomplete matrix obtained by randomly reordering the entries of \(X\). The ACL model contrasts the positive pair \((X,X_{i})\) with the negative pair \((X,X^{\prime})\). Meta-ACL resorts to meta-targets to compose positive and negative pairs. In particular, two incomplete matrices of two items with the same meta-target form a positive pair \((X_{S},X_{T})\), and the corresponding negative pair is \((X_{S},X^{\prime}_{S})\). In both ACL and Meta-ACL, the contrasting effect is applied through an extra standard contrastive loss function.
The MLCL model (Malkinski and Mandziuk, 2020) formalizes the idea of
Meta-ACL in a multi-label setting by regarding multi-hot meta-targets as multi-labels. Therefore, instead of requiring positive pairs to have exactly the same meta-targets, MLCL regards pairs of intersecting meta-targets as positive pairs. Different from Meta-ACL, the completed matrices are used. In particular, the correctly completed matrices are used for inter-item contrasting, and the intra-item contrasting between the correctly completed matrix and its incorrectly completed matrices is performed as in CoPINet. These two types of contrasting losses are jointly optimized.
#### 4.5.4 Other Dimensions of Manipulating Data
Besides contrasting, there are also other dimensions of manipulating data. For example, the FRAR model (Zheng et al., 2019) utilizes a reinforcement learning teacher model to select items from an RPM-like item bank to train a student model. The items in the bank are characterized by their meta-targets and the reward is the increase in accuracy of the student model. The models solving RPM-like datasets have also been examined in the setting of continual learning. For example, the RAVEN dataset can be divided into 7 batches according to its spatial configurations and the models are trained with different methods to mitigate forgetting when sequentially learning the 7 batches in different orders (Hayes and Kanan, 2021).
### Summary
The food for thought to share with the readers is that the study of the problem domain and the exploration of general solutions are both important for the overall advance in this field, as indicated by the upward-spiral pattern in the conceptual chronicle of computational models reviewed above. On the one hand, technical development always explores new methods; on the other hand, it inevitably revisits the old ideas again and again until the problem is perfectly solved. Therefore, the most recent models are not necessarily superior to the traditional ones in nature, and the early approaches, like the imagery-based approach, might trigger the next cycle of technical development in future
research.
## 5 Discussion
After a historical overview of RPM and the problem domain represented by RPM in Sections 2 and 3 and a conceptual chronicle of computational models for solving this problem domain in Section 4, we will zoom out in this section to discuss more general topics related to intelligence testing and AI systems. A good introduction to these topics is through a fundamental cognitive process--analogy making. In particular, we list the following analogies about intelligence tests and AI systems:
* Analogy A--Intelligence Test : Human :: Intelligence Test : AI System
* Analogy B--Intelligence Test : Human :: AI Test : Human
* Analogy C--Intelligence Test : Human :: AI Test : AI System
* Analogy D--Intelligence Test : AI System :: AI Test : Human
* Analogy E--Intelligence Test : AI System :: AI Test : AI System
* Analogy F--AI Test : Human :: AI Test : AI System
The AI tests in the analogies above specifically mean the tests that are inspired by human intelligence tests and specially designed for evaluating AI systems, for example, the PGM and RAVEN datasets. These AI tests represent the motivation of testing AI systems in a way similar to human intelligence testing. To be rigorous, we enumerate all the possibilities of permuting tests and test-takers in the above analogies. These analogies represent research questions in different fields. For example, cognitive scientists might be interested in A; test developers might be interested in B and E; AI researchers might be interested in A, C, E, and F; and some people might be interested in D simply for exploration purposes. Many of the works reviewed above allude to one or more of these analogies. But most of them did not take one more step to examine whether these analogies hold or under what conditions they hold. In this case, the results of these works should be interpreted with caution. When it comes to AI testing, we are particularly interested in Analogy C. It describes a situation where human intelligence testing and AI testing are similar and common test theories could possibly apply to both cases. This analogy further gives rise to two general dual topics that are important for building and testing AI systems, respectively:
* How tests measure subjects: the validity of measuring AI in a similar way human intelligence is measured;
* How subjects solve tests: the implication of human intelligence for building AI systems.
### The Validity of AI Testing
Analogy C--Intelligence Test : Human :: AI Test : AI System--calls attention to the connection between human intelligence testing and AI testing. It describes a situation where AI tests based on human intelligence tests are used to evaluate AI systems, as human intelligence tests are used to measure human intelligence. However, whether this analogy holds remains largely unknown to us. If it does, conclusions about human intelligence can be translated to AI systems. For example, one can claim that an AI system has the ability of visual abstract reasoning if the system passes the tests of the algorithmically-generated datasets mentioned above. Analogy C is best represented by the learning models in Stage 3 because the learning models are mainly evaluated through specially designed AI tests, such as PGM, RAVEN, and Sandia. Most of the works discuss their AI systems and contributions in the background of human cognitive abilities, and attempt to draw conclusions that are comparable to human intelligence when the AI systems perform well. Unfortunately, while we are enjoying the acclamation, an elephant in the room remains--the analogy simply does not hold and there is no validity in building and evaluating these models in the way they are currently built and evaluated. Note that this lack of validity is two-fold: on one hand, the testing is invalid in the psychometric sense; on the other, it is practically meaningless. We will now elaborate on this using learning models as an example.
To prove the idea that the AI testing in the reviewed works is psychometrically invalid, we check if the determinants of validity of human intelligence testing hold for AI testing.
* The first determinant is that human intelligence tests, like other psychological tests, are meant to measure individual differences on some tasks. Statistical evidence shows that the performances on many tasks are correlated, and experts use the word "intelligence" to denote the latent factor or factors that cause the correlation. In other words, it is humans' behavior that comes first; then the word "intelligence" is abductively defined to explain humans' behavior. When an AI system shows behavior on these tasks that is comparable to human performance, it is not necessarily the same factor(s), i.e., humans' intelligence factor(s), that is behind the behavior of the AI system. To satisfy the first determinant in AI testing, we need to show that the underlying mechanisms are the same or equivalent in all cases. Otherwise, we need to be more cautious when we are describing the AI system's ability and explicitly distinguish it from human cognitive abilities.
* The second determinant is the set of requirements for designing human intelligence tests: human intelligence tests are usually short to prevent the participant from being exhausted; the stimuli in intelligence tests are diverse and there is usually no repeating stimulus in a single test; meanwhile, the stimuli in intelligence tests are also concise so that they do not introduce confounding factors; the items need to be evenly spread on the spectrum of difficulty so that people at different ability levels can be measured; and so on. All these requirements contribute to the validity of intelligence tests and are not easily satisfied in AI tests. An exception is the Cognitive Design System Approach by Embretson (2004), but this approach has not been used to develop any test for AI systems.
The determinants listed here are by no means complete given the complex nature of human intelligence testing, but are sufficient to break the analogy between human intelligence testing and AI testing.
Given the fundamental distinction between human intelligence testing and AI testing, we might simply abandon the idea of establishing validity by comparing AI testing to human intelligence testing. Instead, as in most works in AI, we could analyze AI systems for solving intelligence tests and intelligence-test-like datasets purely from the perspective of problem solving, and claim that these AI systems are more capable of solving the tests or datasets than human participants. However, this brings us back to an old issue: the AI systems are specially prepared or trained on items that are similar to the ones used for testing, whereas testing items are kept secret from human participants, let alone used for training. For visual abstract reasoning, no AI system has shown performance that is comparable to humans', especially when generalizing an abstract concept to new visual stimuli that were not associated with this concept before.
Nonetheless, we can still argue that these AI systems are useful because they can at least act as automatic tools to free humans from simple repeating tasks in our daily life. However, this is also not true because intelligence tests, especially general intelligence tests, are designed to be distant from our daily activities so that the result is not affected by one's previous experience. Thus, the ability to solve intelligence test items would not be able to assist humans in most cases. Moreover, a cognitive ability or general intelligence does not correspond to a specific clearly defined task that is constantly repeated in a certain scenario. Instead, it is abstracted from various daily activities. That is, it is common but also very sparse across various daily activities, and, more importantly, deeply interwoven with other abilities. There are simply no such simple, clearly-defined repeating tasks where these AI systems can be applied. For other complex ill-defined tasks, these AI systems also need to be integrated with various other AI systems of different abilities. This kind of research, though valuable, is still infeasible at the current stage of AI.
We can try to continue this debate by proposing more contributions and purposes of building AI systems for solving intelligence tests or intelligence-test-like tests. As long as the contribution is stated relative to human intelligence, we can always come up with a reason to refute it (except when the contribution is pure scientific exploration). Unfortunately, comparison to human intelligence is unavoidable on our way to implementing human-level AI. It seems that we have come to a dead end.
The solution lies in the theory of analogy making and the origin of intelligence tests. Let us first check the analogy-making aspect of Analogy C to see if we interpreted the analogy correctly. One of the most important theories of analogy-making is the structure-mapping theory by Gentner (1983). It emphasizes the similarity between the relations in the base and the target domains, rather than the literal similarity between objects in the base and target domains. In particular, the corresponding objects can be starkly different in a literal sense without compromising the strength of the analogy, when the corresponding relationships are similar. This seems trivial to humans who know how to make analogies. But people indeed make mistakes by relying on literal similarity rather than relational similarity when interpreting analogies. In fact, we did exactly that when interpreting Analogy C above. We started by corresponding human intelligence tests with AI tests through literal similarity, i.e., they are both items to solve. We then took a simple relation "human solves intelligence tests" in the base domain and translated it into the target domain. After a thorough analysis, we found everything went wrong. We made the very mistake pointed out by structure-mapping theory. Thus, interpreting an analogy correctly might not be trivial at all in practice.
The correct interpretation starts from studying the relations in the base domain, which can be clarified by a revisit to the origin of intelligence tests. Modern schooling is actually a new manner of education compared to the whole history of education. It did not exist until the 20th century. At the beginning, educators found that some children had a great deal of trouble learning in this manner. In order to select the students who were suitable for modern schooling, the French Education Ministry hired Alfred Binet. The solution Binet provided
was to test children's ability to solve problems that could be commonly solved by children at certain ages, determining the children's mental ages. The ratio of mental age to chronological age was used as an index to select students for school education. This index is the prototype of today's intelligence quotient. Therefore, the origin of intelligence tests tells us that intelligence tests were developed to measure individual differences in learning ability under a certain circumstance (school education) relative to the average of a certain group of people (peers). This definition echoes our discussion of RPM in Section 2.
While this definition of intelligence tests seems complicated, it accurately describes the relations in the base domain of Analogy C. Now, let us check the target domain for a similar relational structure. The general idea of the target domain is undoubtedly to test AI systems. We can try to extract from the target domain the counterparts of the concepts in the definition of intelligence tests. The two most important concepts in that definition are certainly "learning ability" and "individual difference". "Learning ability" of AI systems is a clear concept because it is native to learning models; it has long been considered an integral part of AI systems (though the "learning ability" of AI systems might differ from human learning ability). Thus, "learning ability" does not pose any problem for us. "Individual difference" in the "learning ability" of AI systems is less clearly defined because of the heterogeneous nature of various AI systems. Note that, in contrast to human intelligence testing, the inherent "learning ability" cannot be sufficiently reflected in the final outcome of learning. This problem can be solved if we consider the dual concept of ability--difficulty. Put simply, if we have items at various levels of difficulty, we can use them like a ruler to measure people's ability. On the flip side, if we know people at different levels of ability, we can use these people's responses to the items to determine the difficulty of the items. That is, ability and difficulty are defined relative to each other. We are very familiar with difficulty in AI research because we have experienced so much of it. In particular, when evaluating AI systems' learning ability, the concept of difficulty is reified as learning tasks. We would say that
a learning task is difficult for a specific AI system or for a class of AI systems. In practice, learning tasks can be defined in different ways, such as different datasets, different ways of presenting datasets, and different access to other resources. A good example of learning tasks is the different generalization regimes of the PGM and RAVEN datasets (Barrett et al.; Zhang et al.), which correspond to different conceptual distances between the abstract concepts in training and testing: the more distant, the harder the learning task. Now, we can look back at the ruler used to measure human intelligence, on which the marks are individual test items. Therefore, to interpret Analogy C, we can make the correspondence between human intelligence test items and learning tasks of AI systems. In contrast to the previous interpretation of Analogy C, this correspondence is not based on literal similarity but derived from the relational structures in the base and target domains. This correspondence is extremely important for establishing a general testing theory of AI systems, but it might not be obvious from the literal meaning of Analogy C. We can now interpret Analogy C as: human intelligence tests measure human intelligence as AI tests composed of learning tasks measure AI systems.
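To make the ability--difficulty duality above concrete, here is a minimal sketch (added for illustration, not part of the original argument) of the one-parameter Rasch model from psychometrics, in which the probability of solving an item depends only on the gap between a latent ability and an item difficulty; all numerical values are purely hypothetical.

```python
import numpy as np

def p_correct(ability, difficulty):
    """Rasch (1PL) model: probability that a solver with the given latent
    ability solves an item of the given difficulty (same logit scale)."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# The same response matrix can be read both ways: fixing calibrated items
# measures the solvers' abilities, while fixing solvers of known ability
# calibrates the items' difficulties.
abilities = np.array([-1.0, 0.0, 1.5])       # three hypothetical solvers
difficulties = np.array([-0.5, 0.5, 2.0])    # three hypothetical items
print(np.round(p_correct(abilities[:, None], difficulties[None, :]), 2))
```

The point of the sketch is only that ability and difficulty enter symmetrically, which is the duality used above to map learning tasks onto the role of test items.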
It is important to point out that this interpretation of Analogy C is not just rhetoric or an arbitrary makeshift. It calls attention to two basic questions that one needs to consider to establish a test theory--what is being measured, and what is used to measure it? For the first question, we definitely want to measure the "learning ability" of AI systems. For the second question, we have a great many existing learning tasks for AI systems. The context for answering the second question is subtly different from human testing and more complex. First, when we are evaluating an AI system on a learning task, we are interested in the overall performance rather than the response on a specific instance of this task. For example, for an image classification task, we would compare the overall accuracies of two AI systems to conclude that one is more capable than the other. We would not draw such a conclusion because one system gives a correct prediction for a specific image while the other does not, unless this instance (the image) is fundamentally different from other instances and possibly poses more demands on processing. In that case, this instance would constitute a separate learning task. In both cases, the correspondence between human intelligence test items and learning tasks for AI systems remains unchanged. The context of AI testing is more complex than human intelligence testing because there exist various learning tasks and various AI systems to solve them, but, for now, not every AI system is designed to solve every learning task. And for practical purposes, we need these specialized AI systems in our society rather than only pursuing the ultimate goal of human-level AI. For human intelligence tests, although people might perform extremely well on some subtests but terribly on others, the tests are a valid measure for all human beings. But, currently, one cannot design an AI test that applies to all AI systems. What we can do now is to identify problem domains and fundamentally different learning tasks within each domain, which can be used to compose tests for AI systems. When AI technology enters the era of Artificial General Intelligence (AGI) in the future, we can design AI tests using learning tasks across multiple problem domains.
In general, this interpretation of Analogy C allows us to establish a testing framework for AI systems that is similar to the testing theories in human intelligence testing. This framework requires extra effort to study problem domains and, more importantly, to study cognitive information processing in order to identify the various learning tasks in a problem domain. It is therefore naturally an interdisciplinary research direction. This framework sets a much higher standard than the way AI systems are tested now. Although it requires extra effort to implement, it will ensure that we are making concrete progress.
### The Implication of Human Intelligence for Building AI systems
Although the history of human intelligence testing is much shorter (approximately 100 years) than the time over which intelligence has existed, humans' intelligence test scores have shown a substantial increase (the Flynn effect). Many efforts have been made to find what is responsible for this increase. These efforts are important not only for human development but also for AI systems from the perspective of AI testing. Specific social changes have been used to explain the Flynn
effect, such as television, computer games, changes in school education, and so on. Most of these explanations do not hold up because these social changes are not consistently accompanied by changes in intelligence test scores. Interestingly, the change in test scores does correlate with the changes in humans' height, birth weight, and infant mortality in more than a general sense. Thus, the increase in intelligence test scores might be attributed to the same factors responsible for height, birth weight, and infant mortality--i.e., improved living conditions such as food and medical care (Raven, 2000).
When we are reviewing the development of AI, we are facing the same meta-question--what causes the development--that is not well answered either. We could conclude that the recent improvement of AI is due to the increase in computational power and the massive amount of data generated through the internet. This explanation is not so different from attributing the increase of human intelligence to improved living conditions, and it is not very actionable for theoretical AI research. Apart from computational power and data, most of the knowledge in basic science that is used in cutting-edge AI technologies has been available for decades. Therefore, it is hard to find a theoretical factor that promoted the development of AI.
A hypothesis from social studies that was proposed to explain humans' cognitive development can explain the development of AI better than other explanations. **The Challenge Hypothesis** (Hunt, 2010):
Intelligence is developed by engaging in cognitively challenging activities. Environments vary in the extent to which they support such challenges, and individuals vary in the extent to which they seek them out.
The statement of the hypothesis, though concise, is full of wisdom. In the last decades, the development of AI has definitely been accompanied by tasks that were initially challenging for AI systems, such as facial recognition and spam filtering, and that were later solved. These tasks did not exist before the era of AI. This argument echoes the emphasis in the last subsection on identifying and collating
learning tasks for AI testing.
The second half of the challenge hypothesis--"individuals vary in the extent to which they seek them out"--is even more interesting. In the studies of human cognitive development, there is a somewhat surprising empirical result--eductive ability is more easily influenced by appropriate educational and developmental experience than reproductive ability. In particular, researchers found that educational self-direction, in which students are responsible for deciding what they need to learn, how they learn it, and what their goals are, together with complex educational activities (e.g., challenging yet reasonable learning tasks), gives rise to a cyclical development of cognitive ability (Raven, 2000). These studies shed light on a possible and promising future trend in AI research, in which AI systems take the initiative to seek out learning tasks in a challenging environment that provides the most efficient development. This trend implies a fundamental change to the paradigm of AI systems, shifting from learning specific tasks to interacting with the environment (Laird et al., 2017).
|
2308.11108 | Ultrastrong Light-Matter Coupling in 2D Metal-Chalcogenates | Hybridization of excitons with photons to form hybrid quasiparticles,
exciton-polaritons (EPs), has been widely investigated in a range of
semiconductor material systems coupled to photonic cavities. Self-hybridization
occurs when the semiconductor itself can serve as the photonic cavity medium
resulting in strongly-coupled EPs with Rabi splitting energies > 200 meV at
room temperatures which recently were observed in layered two-dimensional (2D)
excitonic materials. Here, we report an extreme version of this phenomenon, an
ultrastrong EP coupling, in a nascent, 2D excitonic system, the metal organic
chalcogenate (MOCHA) compound named mithrene. The resulting self-hybridized EPs
in mithrene crystals placed on Au substrates show Rabi Splitting in the
ultrastrong coupling range (> 600 meV) due to the strong oscillator strength of
the excitons concurrent with the large refractive indices of mithrene. We
further show bright EP emission at room temperature as well as EP dispersions
at low-temperatures. Importantly, we find lower EP emission linewidth narrowing
to ~1 nm when mithrene crystals are placed in closed Fabry-Perot cavities. Our
results suggest that MOCHA materials are ideal for polaritonics in the deep
green-blue part of the spectrum where strong excitonic materials with large
optical constants are notably scarce. | Surendra B. Anantharaman, Jason Lynch, Mariya Aleksich, Christopher E. Stevens, Christopher Munley, Bongjun Choi, Sridhar Shenoy, Thomas Darlington, Arka Majumdar, P. James Shuck, Joshua Hendrickson, J. Nathan Hohman, Deep Jariwala | 2023-08-22T01:05:37Z | http://arxiv.org/abs/2308.11108v1 | # Ultrastrong Light-Matter Coupling in
###### Abstract
Hybridization of excitons with photons to form hybrid quasiparticles, exciton-polaritons (EPs), has been widely investigated in a range of semiconductor material systems coupled to photonic cavities. Self-hybridization occurs when the semiconductor itself can serve as the photonic cavity medium resulting in strongly-coupled EPs with Rabi splitting energies (h\(\Omega\)) > 200 meV at room temperatures which recently were observed in layered two-dimensional (2D) excitonic materials. Here, we report an extreme version of this phenomenon, an ultrastrong EP coupling, in a nascent, 2D excitonic system, the metal organic chalcogenate (MOCHA) compound named mithrene. The resulting self-hybridized EPs in mithrene crystals placed on Au substrates show Rabi Splitting in the ultrastrong coupling range (h\(\Omega\) > 600 meV) due to the strong oscillator strength of the excitons concurrent with the large refractive indices of mithrene. We further show bright EP emission at room temperature as well as EP dispersions at low-temperatures. Importantly, we find lower EP emission
linewidth narrowing to ~1 nm when mithrene crystals are placed in closed Fabry-Perot cavities. Our results suggest that MOCHA materials are ideal for polaritonics in the deep green-blue part of the spectrum where strong excitonic materials with large optical constants are notably scarce.**
## Introduction
Exciton-polaritons are part-light, part-matter quasiparticles that are the result of energy being exchanged between a photon trapped in a cavity and an exciton (Coulomb bound electron-hole pair) state that fundamentally change the optical dispersion of a system. Since exciton-polaritons are the result of strong light-matter interactions, their properties can be leveraged in optoelectronic devices such as lasers[1, 2], light-emitting diodes (LEDs)[3, 4], and photovoltaics[5, 6]. The rate at which energy is exchanged between the light and matter states is described by the coupling parameter (g). When g is smaller than the loss rates of the unperturbed exciton (\(\Gamma_{\mathrm{x}}\)) and cavity (\(\Gamma_{\mathrm{c}}\)), the energy is dissipated faster than it is exchanged between the states and no exciton-polaritons are formed. In this case, the system is said to be in the weak coupling regime. However, when the coupling parameter is larger than either state's decay rate (g > \(|\Gamma_{\mathrm{x}}-\Gamma_{\mathrm{c}}|\)/4), the system enters the strong coupling (SC) regime where the excited light and matter states hybridize to form upper and lower exciton-polaritons with properties of both light and matter[7, 8]. In the strong coupling regime, only first-order (absorption and emission) and second-order (scattering) effects need to be considered. However, as the coupling parameter further increases, the exciton-polariton enters the ultrastrong coupling (USC) regime where higher-order effects cannot be ignored[9, 10, 11]. The transition between the strong coupling and ultrastrong coupling regimes is continuous unlike the transition between the weak coupling and strong coupling regimes with the sudden hybridization of states, but by convention, the exciton-polariton is said to be in the ultrastrong coupling regime when the coupling parameter is more than 10% of the exciton energy (E\({}_{\mathrm{x}}\)). In the USC regime, a population of virtual photons forms in the ground state of the system shifting its energy. The USC regime enables both exotic quantum mechanical phenomena such as the dynamical Casimir effect[12, 13] and photon pair production[14] as well as practical phenomena such as switching on the scale of 10 fs[15]. To date, USC has only been observed in the visible range using closed cavity geometries[16, 17, 18], but it has not been observed in a self-hybridized system due to the lack of large band-gap semiconductor materials with strong oscillator strengths in the excitonic resonance.
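As a minimal numerical illustration of the regime boundaries just stated (a sketch added here, not part of the original text), the snippet below classifies a coupling according to these criteria. The exciton energy corresponds to the 468 nm mithrene exciton and the coupling value to the one reported later in this paper, while the loss rates are assumed placeholder values.

```python
def coupling_regime(g, gamma_x, gamma_c, E_x):
    """Classify exciton-photon coupling (all quantities in eV) using the
    criteria quoted in the text: strong coupling when g > |Gx - Gc|/4,
    ultrastrong (by convention) when g exceeds 10% of the exciton energy."""
    if g <= abs(gamma_x - gamma_c) / 4:
        return "weak coupling"
    if g / E_x > 0.10:
        return "ultrastrong coupling"
    return "strong coupling"

# g = 0.378 eV and E_x ~ 2.65 eV (468 nm) give g/E_x ~ 0.14 -> ultrastrong;
# the loss rates below are illustrative assumptions, not measured values.
print(coupling_regime(g=0.378, gamma_x=0.05, gamma_c=0.10, E_x=2.65))
```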
Self-hybridized exciton-polaritons from transition metal dichalcogenides (TMDCs) have been explored recently for both basic[19] and applied research, such as light-emitting diodes[20, 21]. One of the major limitations of self-hybridized TMDCs is the absence of polariton emission from multilayer TMDCs owing to their indirect band gap nature. Alternative strategies, such as the preparation of TMDC superlattices that decouple electronic interactions in multilayers, are yet to demonstrate polariton emission[22]. The largest band gap available in the conventional TMDC family (WS\({}_{2}\)) is around 2 eV, which illustrates the lack of materials with blue emission (~2.5 eV). So far, 3D inorganic materials such as ZnO[23] and GaN[24] are the only materials with polariton emission in the blue region. Further, 3D materials are challenging for device integration due to lattice strain and the complexity involved in device fabrication. In this work, we report polariton emission in the blue region from a new 2D material - mithrene, which is part of a class of layered, bulk metal-organic chalcogenate materials[25]. Mithrene consists of two-dimensional, inorganic AgSe
layers separated by organic insulator layers (phenyl groups) to form a multi-quantum-well system[26]. Its excitonic properties make it a strong candidate for optoelectronic devices since it has a direct band gap and supports excitons with binding energies up to 400 meV[27]. The high exciton binding energy, along with the advantageous direct band gap of mithrene in the blue region, can be exploited in light-emitting devices and photodetectors. In addition, the solution-processability of mithrene at low temperatures (< 200 \({}^{\circ}\)C) and its van der Waals nature enable easier integration on Si-based platforms compared to counterparts such as III-V and oxide-based semiconductors, where lattice strain is detrimental to device performance. The large exciton binding energy is the result of the quantum-confinement effects of the multi-quantum-well geometry, and it enables the excitons to have large oscillator strengths (f) at room temperature. Not only do large oscillator strengths make the excitons highly absorptive, they also increase the coupling parameter of exciton-polaritons, as \(g\propto\sqrt{\frac{f}{V_{m}}}\) where V\({}_{m}\) is the mode volume of the cavity. Strong excitons in other quantum-confined semiconductors have been shown to be excellent for optoelectronic applications[28, 29]. However, mithrene is still a relatively new material, so most research has focused on its growth and characterization[26, 30, 31, 32] or basic optical properties[27].
In this paper, we present the first study of light-matter interactions in mithrene and observe the formation of self-hybridized, ultrastrongly coupled exciton-polaritons in mithrene with the largest observed normalized coupling parameter (g/E\({}_{x}\) = 0.14) in the visible range. We also find that the light-matter states are emissive, allowing for geometrically tunable emission from the exciton wavelength to the lower exciton-polariton wavelength (468 nm to 515 nm). When the system is encapsulated by Ag to form a closed-cavity system, we observe a longer exciton-polariton lifetime compared to the open cavity, as well as multiple states with ~1 nm emission linewidths at low temperature (80 K). Our findings demonstrate that mithrene is a new material that can be used to probe ultrastrong light-matter interactions. The emissive properties of the exciton-polaritons also demonstrate that mithrene can be used as a monochromatic light source in the blue to green region of the visible spectrum.
## Results and Discussion
Our cavity mode is a lossy Fabry-Perot cavity formed by the highly reflective substrate and the air-mithrene interface, which is reflective due to the large refractive index of mithrene[33, 34]. Since the Fabry-Perot cavity is formed by the top and bottom surfaces of the mithrene crystal, the cavity energy (E\({}_{c}\)) can be tuned by varying its thickness (t = \(\lambda_{c}\)/4n, where t is the thickness of mithrene, \(\lambda_{c}\) is the cavity wavelength, and n is the real part of the refractive index of mithrene). The lossy Fabry-Perot cavity then hybridizes with mithrene's exciton to form a higher-energy upper exciton-polariton (UEP) and a lower-energy lower exciton-polariton (LEP), where the lower polariton state is found to emit blue light (Figure 1a). The mithrene-on-gold sample was prepared by drop-casting mithrene in a propylamine-H\({}_{2}\)O solution onto the Au substrate[26]. Atomic force microscopy (AFM) was used to show that this process yielded mithrene flakes of ~30 \(\mathrm{\mu m}\) x 30 \(\mathrm{\mu m}\) in lateral area with a thickness of 554 nm (Figure 1b). The refractive index of mithrene is measured using spectroscopic ellipsometry (Supporting Information Figure S1). The reflectance of the system is then simulated using the complex refractive index and the transfer matrix method (TMM)[35]. The TMM is found to accurately predict the energies of the UEP and LEP, and simulations clearly show the anti-crossing behavior for mithrene on Au, which is a clear signature of the SC
regime (Figure 1c). The coupling parameter is then calculated to be 378 meV by extracting the UEP and LEP energies for various cavity energies and fitting them to the quantum Rabi model[36, 37, 38] (Supporting Information Section 1). Here, g is 14% of the exciton energy, placing the exciton-polaritons in the USC regime. Most other low-dimensional semiconductors only host exciton-polaritons in the SC regime (Figure 1d). The USC regime is typically achieved either by using low-energy transitions in the mid-infrared to microwave ranges, which lowers the required coupling parameter and allows for smaller mode volumes relative to the wavelength of light[39], or by coupling a high-Q cavity to organic molecules whose Frenkel excitons have extremely large oscillator strengths[40]. However, mithrene differs from these because its band gap is in the blue region of light (468 nm), and the estimated value of its exciton binding energy (400 meV), along with the semiconductor layers being inorganic, suggests that it hosts Wannier-Mott excitons[41]. Despite mithrene not using either of these strategies for USC, it still supports exciton-polaritons in the USC regime because its excitons have extraordinarily large oscillator strengths while the crystal (the exciton medium) concurrently possesses a large refractive index (due to its part-inorganic nature) that enables small mode volumes.
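For readers unfamiliar with the transfer matrix method referenced above, the following is a minimal normal-incidence TMM sketch for an air/mithrene/Au stack (a simplified illustration only, not the authors' simulation code, which uses the measured, anisotropic optical constants). The refractive indices below are assumed placeholder values.

```python
import numpy as np

def tmm_reflectance(n_list, d_list, wavelength):
    """Normal-incidence reflectance of a layer stack via the characteristic
    (transfer) matrix method. n_list: complex indices [ambient, layers..., substrate];
    d_list: thicknesses (nm) of the interior layers; wavelength: vacuum wavelength (nm)."""
    M = np.eye(2, dtype=complex)
    for j in range(1, len(n_list) - 1):
        delta = 2 * np.pi * n_list[j] * d_list[j - 1] / wavelength
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n_list[j]],
                          [1j * n_list[j] * np.sin(delta), np.cos(delta)]])
    n0, ns = n_list[0], n_list[-1]
    B, C = M[0, 0] + M[0, 1] * ns, M[1, 0] + M[1, 1] * ns
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Illustrative call with assumed optical constants (not the measured data):
n_mithrene = 2.5 + 0.3j   # placeholder index near the exciton
n_gold = 0.97 + 1.87j     # nominal Au index around 500 nm
print(tmm_reflectance([1.0, n_mithrene, n_gold], [486.0], 500.0))
```

Sweeping the layer thickness in such a calculation reproduces the kind of thickness-dependent reflectance maps from which the UEP and LEP branches are extracted.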
To confirm the hybrid nature of the observed absorption modes, thin films of mithrene on an Au substrate are studied, since exciton-polaritons cannot form in films much thinner than the wavelength of light (t < \(\lambda\)/4n). Therefore, the uncoupled exciton states can be seen in a thin film of mithrene (t = 50 nm) using its reflectance and photoluminescence (PL) spectra (Figure 2). In the thin film of mithrene, the simulated and experimental reflectance spectra both show an absorption peak at the exciton wavelength, confirming the unhybridized nature of the system, and the PL is observed at
the same wavelength, demonstrating the direct band gap nature of mithrene. However, when the mithrene thickness is increased to 486 nm, new absorptive modes are observed both above and below the exciton, which are the UEP and LEP, respectively. Sub-band-gap emission was also observed in the PL of the thicker flakes because the LEP can emit due to its part-exciton characteristics. In addition to the UEP and LEP, the thicker mithrene flakes also hosted a higher-order (HO) mode similar to what we previously observed in perovskites of comparable thicknesses[45]. The HO mode emerges because the mithrene is thick enough to support multiple orders of the lossy Fabry-Perot cavity. Therefore, there exists a higher-order mode of the cavity at shorter wavelengths that also couples to the exciton. Since the HO mode-exciton coupling is strong enough for anticrossing to occur, and the two states have a large degree of detuning (the energy difference between the unperturbed exciton and cavity modes), the HO mode is only slightly redshifted from the exciton peak. The HO mode is also mostly excitonic, with a small amount of hybridization with light, which is why it emits more than the LEP. The detuning between the HO mode and the exciton can be decreased by increasing the mithrene thickness (increasing the cavity wavelength), which increases the fraction of the cavity state in the HO mode. Eventually, the state would be approximately half exciton and half cavity, making it a LEP. This continuous transition of the HO mode into an LEP can be clearly seen in simulations, where the HO mode continuously redshifts with increasing mithrene thickness until it becomes an LEP (Supporting Information Figure S2).
The emissive properties of exciton-polaritons in mithrene are further investigated by comparing mithrene in open and closed cavity systems. From top to bottom, the open cavity system is mithrene (943 nm)/Al\({}_{2}\)O\({}_{3}\) (10 nm)/Ag (100 nm) (Figure 3a), and the closed cavity is PMMA (~250 nm)/Ag (15 nm)/Al\({}_{2}\)O\({}_{3}\) (10 nm)/mithrene (943 nm)/Al\({}_{2}\)O\({}_{3}\) (10 nm)/Ag (100 nm) (Figure 3b). Ag is used as the metal in both systems since it is less absorptive than Au in the visible range (Supporting Information Figure S3), and the Ag substrate has 10 nm of Al\({}_{2}\)O\({}_{3}\) on top that is
Figure 2: **Room-temperature exciton-polaritons in mithrene.** (a) Transfer-matrix calculation from the mithrene on the Au substrate showing the reflectance dips corresponding to the exciton and exciton-polariton (UEP, HO, and LEP) states emerging in 50 nm and 486 nm thick mithrene, respectively. (b) experimental observation of the exciton and exciton-polaritons in the reflectance spectrum matches closely with the simulation data. The shift in the polariton branches between experiment and simulation is attributed to the thickness-estimation error in the AFM measurement. (c) Photoluminescence spectroscopy showing exciton emission at 468 nm, while exciton-polariton shows emission at 468 nm (UEP), 474 nm (HO mode), and 527 nm (LEP).
deposited using atomic layer deposition (ALD) to prevent oxidation. For the closed cavity system, 10 nm of ALD Al\({}_{2}\)O\({}_{3}\) is deposited on top of mithrene to protect it during the Ag sputtering process, and PMMA is spin-coated on top of the second Ag layer to prevent oxidation. By depositing Ag on top to make the closed cavity system, the Q-factor of the largest PL peak is increased by a factor of 2.26 compared to the open cavity system, since the top interface of the lossy Fabry-Perot cavity is more reflective than in the open cavity system (Supporting Information Figure S4). The increased Q-factor is seen as a narrowing of the peaks in both the reflectance (Figure 3c) and the room-temperature PL (Figure 3d). Additionally, the coupling parameter increases to 385 meV in the closed cavity system due to the decrease in cavity mode volume and cavity loss (Supporting Information Figure S5), which causes the multiple LEP modes in the closed cavity system to redshift from their open cavity wavelengths. The closed cavity system was also cooled down to 80 K, and its linewidths further decrease to sub-nanometer values due to decreased phonon scattering (Figure 3e). The linewidth of an LEP is inversely related to its lifetime, and the lifetime of the LEP is a weighted average of the cavity and exciton lifetimes[51]. At room temperature, we hypothesize that the nonradiative lifetime of the exciton plays a significant role in the LEP lifetime. Therefore, as the temperature is reduced, the nonradiative lifetime of the exciton is prolonged, and the radiative lifetime of the cavity dominates the lifetime of the LEP. This is further confirmed by the PL at 80 K of mithrene on a quartz substrate. The transmissive substrate prevents cavity modes from forming, so its emission is purely excitonic. The exciton emission shows a larger linewidth than the LEP emission, indicating that the high-order Fabry-Perot cavity modes reduce the linewidth of the LEP.
Upon further cooling of the closed cavity system to < 50 K, the anti-crossing of exciton-polaritons is observed in the angle-dependent PL (Figure 4a, b)[52]. The emission at temperatures of 40 K and below is found to extend over a substantial portion of the visible range, out to 725 nm, which is significantly longer in wavelength than the emission at 80 K and room temperature, where the longest-wavelength emission is at 520 nm. The longest-wavelength states are found to be the most emissive (excluding the unhybridized exciton emission) at 40 K and 20 K. Since the pump photon energy is above the band gap of mithrene, it creates a population of excitons that are then hybridized into exciton-polaritons. The excitons then relax into the LEP branches, but this process requires the presence of phonons. Therefore, this process occurs more quickly at 40 K than at 20 K, which is why the longest-wavelength emission at 40 K is more intense than that at 20 K.
The lifetimes of multiple LEP branches are also studied in both the open and closed cavity systems using time-resolved photoluminescence (TRPL) (Figure 4c and d). The lifetime in both systems is found to increase as the LEP redshifts and becomes less excitonic. This is because, in both the open and closed cavity systems,
Figure 3: **Exciton-Polaritons in Open and Closed Cavity Mithrene.** Here, open cavity refers to mithrene/10 nm Al\({}_{2}\)O\({}_{3}\)/100 nm Ag (a) and closed cavity refers to PMMA/15 nm Ag/10 nm Al\({}_{2}\)O\({}_{3}\)/mithrene/10 nm Al\({}_{2}\)O\({}_{3}\)/100 nm Ag (b). Reflectance (c) and PL (d) from the open and closed cavities. (e) The PL recorded at 80 K shows a linewidth narrowing of the E-P emission from the closed cavity compared to the exciton emission.
the cavity mode has a longer lifetime than the exciton lifetime (0.27 ns). The closed cavity system shows an enhancement in lifetime over the open cavity by a factor of 1.86 (Supporting Information Figure S4). The biggest enhancement is seen at 454 nm compared to other emission wavelengths of the exciton-polaritons.
Figure 4: **Exciton-Polariton Dispersion and Lifetimes.** Temperature-dependent E-k studies of mithrene in the open cavity show a clear dispersion of the exciton-polaritons at temperatures of (a) 20 K and (b) 40 K. Time-resolved photoluminescence studies of the (c) open and (d) closed cavities show that the exciton-polariton lifetime is stretched by more than 2-fold in the closed cavity system.
## Conclusion
In conclusion, the strength of light-matter interactions in mithrene is studied in both open and closed cavity systems. Mithrene is found to host self-hybridized exciton-polaritons in the USC regime, with g/E\({}_{\text{x}}\) = 0.14, which is a record value for non-organic semiconductors in the visible spectrum. The formation of exciton-polariton states not only alters the optical dispersion of mithrene, but also prolongs the lifetime of the states, enabling ~1 nm PL linewidths. Additionally, at temperatures below 40 K, emissive exciton-polaritons from 452 nm to 752 nm are observed, suggesting the potential for broadband polaritonic LEDs and lasers. Our results show that mithrene, and perhaps even other MOCHA compounds, has excellent potential as an excitonic material for dispersion engineering in devices as well as for fundamental studies of strongly coupled quantum photonic phenomena in the visible range.
## Methods
**Sample preparation.** Mithrene was prepared by biphasic interfacial synthesis as described previously[26]. Briefly, 3 mM solutions of silver nitrate in water and diphenyl diselenide are prepared separately, and then layered in a vial or vessel. Crystals evolve at the liquid-liquid interface and can be isolated by removing the water from the vessel by pipette, swirling the vial to cause the thin film to become adherent to the glass, and then decanting the remaining toluene. Small crystals grown from an aqueous layer yield typical final thicknesses of approximately 50 nm. Larger mithrene crystals were prepared by addition of ethylamine to the aqueous layer, and these crystals can be grown to hundreds of nanometers in thickness[53, 54]. In a departure from Paritmongkol's method, the synthesis was performed at room temperature.
Mithrene is easily suspended and stable in a variety of organic alcohol solvents and can be drop-cast onto desired substrates for imaging or optical characterization[26]. To understand the excitonic properties (absorption and PL) of the mithrene system, we drop-cast 50 \(\upmu\)l of solvent with suspended mithrene crystals onto quartz substrates and allowed the solvent to evaporate at room temperature. For open cavity samples, the larger mithrene crystals from the biphasic-ethylamine route were drop-cast onto the 100-nm-thick Au substrate and onto the 10 nm Al\({}_{2}\)O\({}_{3}\)/100 nm Ag substrate, separately. In the latter case, the 10 nm Al\({}_{2}\)O\({}_{3}\) inhibited any adverse reaction between the synthesized mithrene crystal and the Ag substrate. Both the Ag and Au substrates were prepared by a template-stripping process[55]. Further, 10 nm Al\({}_{2}\)O\({}_{3}\)/10 nm Ag followed by a PMMA layer was deposited on the open cavity samples to realize a closed cavity system. An atomic layer deposition process was used to coat Al\({}_{2}\)O\({}_{3}\) on the Ag substrate.
**Low-temperature PL and reflectance.**
The reflectance and PL were recorded in reflection mode using a Horiba LabRam HR Evolution confocal microscope. The white-light intensity reflected by a polished silver mirror was used to normalize the data recorded from the mithrene sample for the reflectance measurement. Photoluminescence spectra were recorded using a continuous-wave excitation source at 405 nm and passing the emission through a 600 grooves/mm grating before reaching the detector. For low-temperature measurements (down to 80 K), the samples were placed in a Linkam cryostat and cooled to the desired temperature by controlling the flow of liquid nitrogen.
**TRPL measurement.** TRPL measurements were performed using an 80 MHz, 140 fs Ti:sapphire laser. The 800 nm fundamental emission was doubled via SHG to provide fs pulses at 400 nm, which were guided onto the mithrene samples. The corresponding emission was first collected and sent into an Acton SpectraPro SP-2750 grating spectrometer with a 300 grooves/mm grating and dispersed onto a Teledyne PyLoN CCD array. To measure the lifetime, the dispersed signal was sent from the spectrometer to a Hamamatsu Universal Streak Camera, which provided the time-resolved information of the emission spectra.
**Optical simulations.** Optical simulations were performed using Python scripts implementing the transfer matrix method as described in the literature[35]. The complex refractive indices used in these simulations were measured using a J.A. Woollam M2000 spectroscopic ellipsometer and fitting the experimental results to a series of Lorentz oscillators.
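As a rough sketch of the oscillator model mentioned in the ellipsometry fitting (illustrative only; the fitted oscillator parameters for mithrene are not reproduced here), a sum of Lorentz oscillators can be converted into a complex refractive index as follows.

```python
import numpy as np

def lorentz_nk(E, eps_inf, oscillators):
    """Complex refractive index n + ik from a Lorentz-oscillator dielectric
    function. E: photon energy (eV); oscillators: list of tuples
    (amplitude in eV^2, center energy E0 in eV, broadening G in eV)."""
    eps = eps_inf + sum(f / (E0**2 - E**2 - 1j * G * E) for f, E0, G in oscillators)
    return np.sqrt(eps)

# Hypothetical parameters placing a single oscillator near the ~2.65 eV exciton:
E = np.linspace(1.5, 3.5, 5)
print(lorentz_nk(E, eps_inf=2.0, oscillators=[(3.0, 2.65, 0.1)]))
```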
**E-K measurement.** To measure the E-K spectrum of mithrene, the sample was mounted in a Montana Instruments S200 cryostation with a 100X, 0.75 NA in-situ objective. The sample was excited with a 140 fs, 80 MHz pulsed laser centered at 400 nm. The angle-resolved emission was collected by imaging the back focal plane of the objective onto the entrance slit of a Princeton Instruments Isoplane SCT320 spectrometer using a 4f lens relay. The laser was filtered out by a 420 nm long-pass filter placed within the lens relay. The signal was then dispersed using a 300 grooves/mm grating onto a PyLoN 400-BRX CCD array. Once collected, the data were processed by the method described in the Supporting Information. The raw and processed data can be seen in Figure S8.
**Acknowledgement.** D. J., S.B., J. L. and B.C. acknowledge partial support from Asian Office of Aerospace Research and Development of the Air Force Office of Scientific Research (AFOSR) (FA2386-21-1-4063), and the Office of Naval Research (N00014-23-1-203). J.N.H. and M. A. were supported by the US Department of Energy Integrated Computational and Data Infrastructure for Scientific Discovery grant DE-SC0022215. The research performed by C.E.S. at the Air Force Research Laboratory was supported by contract award FA807518D0015. J.R.H. acknowledges support from the Air Force Office of Scientific Research (Program Manager Dr. Gernot Pomrenke) under award number FA9550-20RYCOR059. A.M. and C.M. are supported by NSF-CAREER grant and NSF Intern program. T.P.D. and P.J.S. gratefully acknowledge support by Programmable Quantum Materials, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Basic Energy Sciences, under award DE-SC0019443. The authors thank Christopher Chen for ellipsometry support. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
## References
* [1] Imamoglu, A., Ram, R. J., Pau, S. & Yamamoto, Y. Nonequilibrium condensates and lasers without inversion: Exciton-polariton lasers. _Phys Rev A (Coll Park)_**53**, 4250 (1996).
* [2] Byrnes, T., Kim, N. Y. & Yamamoto, Y. Exciton-polariton condensates. _Nature Physics 2014 10:11_**10**, 803-813 (2014).
* [3] Tischler, J. R., Bradley, M. S., Bulovic, V., Song, J. H. & Nurmikko, A. Strong coupling in a microcavity LED. _Phys Rev Lett_**95**, 036401 (2005).
* [4] Tsintzos, S. I., Pelekanos, N. T., Konstantinidis, G., Hatzopoulos, Z. & Savvidis, P. G. A GaAs polariton light-emitting diode operating near room temperature. _Nature 2008 453:7193 **453**, 372-375 (2008).
* [5] Winkler, K. _et al._ Photocurrent readout and electro-optical tuning of resonantly excited exciton polaritons in a trap. _Phys Rev B Condens Matter Mater Phys_**91**, 045127 (2015).
* [6] Myers, D. M. _et al._ Superlinear increase of photocurrent due to stimulated scattering into a polariton condensate. _Phys Rev B_**98**, 045301 (2018).
* [7] Stenzel, O. Light-Matter Interaction. (2022) doi:10.1007/978-3-030-87144-4.
* [8] Reithmaier, J. P. _et al._ Strong coupling in a single quantum dot-semiconductor microcavity system. _Nature 2004 432:7014 **432**, 197-200 (2004).
* [9] Anappara, A. A. _et al._ Signatures of the ultrastrong light-matter coupling regime. _Phys Rev B Condens Matter Mater Phys_**79**, 201303 (2009).
* [10] Ciuti, C., Bastard, G. & Carusotto, I. Quantum vacuum properties of the intersubband cavity polariton field. _Phys Rev B Condens Matter Mater Phys_**72**, 115303 (2005).
* [11] Frisk Kockum, A., Miranowicz, A., De Liberato, S., Savasta, S. & Nori, F. Ultrastrong coupling between light and matter. _Nature Reviews Physics 2019 1:1_**1**, 19-40 (2019).
* [12] Benenti, G., D'Arrigo, A., Siccardi, S. & Strini, G. Dynamical Casimir effect in quantum-information processing. _Phys Rev A_**90**, 052313 (2014).
* [13] Koghee, S. & Wouters, M. Dynamical casimir emission from polariton condensates. _Phys Rev Lett_**112**, 036406 (2014).
* [14] De Liberato, S., Ciuti, C. & Carusotto, I. Quantum vacuum radiation spectra from a semiconductor microcavity with a time-modulated vacuum rabi frequency. _Phys Rev Lett_**98**, 103602 (2007).
* [15] Gunter, G. _et al._ Sub-cycle switch-on of ultrastrong light-matter interaction. _Nature 2009 458:7235 **458**, 178-181 (2009).
* [16] Kena-Cohen, S., Maier, S. A., C Bradley S Kena-Cohen, D. D., Maier, S. A. & C Bradley, D. D. Ultrastrongly Coupled Exciton-Polaritons in Metal-Clad Organic Semiconductor Microcavities. _Adv Opt Mater_**1**, 827-833 (2013).
* [17] Gambino, S. _et al._ Exploring Light-Matter Interaction Phenomena under Ultrastrong Coupling Regime. _ACS Photonics_**1**, 1042-1048 (2014).
* [18] Bujalance, C. _et al._ Ultrastrong Exciton-Photon Coupling in Broadband Solar Absorbers. _Journal of Physical Chemistry Letters_**12**, 10706-10712 (2021).
* [19] Kavokin, A. _et al._ Polariton condensates for classical and quantum computing. _Nature Reviews Physics__2022__4:7_**4**, 435-451 (2022).
* [20] Gu, J., Chakraborty, B., Khatoniar, M. & Menon, V. M. A room-temperature polariton light-emitting diode based on monolayer WS2. _Nature Nanotechnology__2019__14:11_**14**, 1024-1028 (2019).
* [21] Gonzalez Marin, J. F. _et al._ Room-temperature electrical control of polarization and emission angle in a cavity-integrated 2D pulsed LED. _Nature Communications__2022__13:1_**13**, 1-9 (2022).
* [22] Kumar, P. _et al._ Light-matter coupling in large-area van der Waals superlattices. _Nature Nanotechnology__202__1__17:2_**17**, 182-189 (2021).
* [23] Kang, J. W. _et al._ Room temperature polariton lasing in quantum heterostructure nanocavities. _Sci Adv_**5**, (2019).
* [24] Chen, H. _et al._ Room-temperature polariton lasing in GaN microrods with large Rabi splitting. _Optics Express, Vol. 30, Issue 10, pp. 16794-16801_**30**, 16794-16801 (2022).
* [25] Wang, G., Luo, S., Di, T., Fu, Z. & Xu, G. Layered Organic Metal Chalcogenides (OMCs): From Bulk to Two-Dimensional Materials. _Angewandte Chemie_**134**, (2022).
* [26] Schriber, E. A. _et al._ Mithrene is a self-assembling robustly blue luminescent metal-organic chalcogenolate assembly for 2d optoelectronic applications. _ACS Appl Nano Mater_**1**, 3498-3508 (2018).
* [27] Yao, K. _et al._ Strongly Quantum-Confined Blue-Emitting Excitons in Chemically Configurable Multiquantum Wells. _ACS Nano_**15**, 4085-4092 (2021).
* [28] Mueller, T. & Malic, E. Exciton physics and device application of two-dimensional transition metal dichalcogenide semiconductors. _npj 2D Materials and Applications__2018__2:1_**2**, 1-12 (2018).
* [29] Chen, Y. _et al._ 2D Ruddlesden-Popper Perovskites for Optoelectronics. _Advanced Materials_**30**, 1703487 (2018).
* [30] Schriber, E. A., Rosenberg, D. J., Kelly, R. P., Ghodsi, A. & Hohman, J. N. Investigation of Nucleation and Growth at a Liquid-Liquid Interface by Solvent Exchange and Synchrotron Small-Angle X-Ray Scattering. _Front Chem_**9**, 429 (2021).
* [31] Trang, B. _et al._ Tarnishing Silver Metal into Mithrene. _J Am Chem Soc_**140**, 13892-13903 (2018).
* [32] Schriber, E. A. _et al._ Chemical crystallography by serial femtosecond X-ray diffraction. _Nature__2022__601:7893_**601**, 360-365 (2022).
* [33] Kats, M. A., Blanchard, R., Genevet, P. & Capasso, F. Nanometre optical coatings based on strong interference effects in highly absorbing media. _Nature Materials 2012 12:1_**12**, 20-24 (2012).
* [34] Jariwala, D. _et al._ Near-Unity Absorption in van der Waals Semiconductors for Ultrathin Optoelectronics. _Nano Lett_**16**, 5482-5487 (2016).
* [35] Pettersson, L. A. A., Roman, L. S. & Inganas, O. Modeling photocurrent action spectra of photovoltaic devices based on organic thin films. _J Appl Phys_**86**, 487 (1999).
* [36] Rabi, I. I. Space Quantization in a Gyrating Magnetic Field. _Physical Review_**51**, 652 (1937).
* [37] Forn-Diaz, P., Lamata, L., Rico, E., Kono, J. & Solano, E. Ultrastrong coupling regimes of light-matter interaction. _Rev Mod Phys_**91**, 025005 (2019).
* [38] Baranov, D. G. _et al._ Ultrastrong coupling between nanoparticle plasmons and cavity photons at ambient conditions. _Nature Communications 2020 11:1_**11**, 1-9 (2020).
* [39] Muravev, V. M., Andreev, I. V., Kukushkin, I. V., Schmult, S. & Dietsche, W. Observation of hybrid plasmon-photon modes in microwave transmission of coplanar microresonators. _Phys Rev B Condens Matter Mater Phys_**83**, 075309 (2011).
* [40] Kena-Cohen, S., Maier, S. A., C Bradley S Kena-Cohen, D. D., Maier, S. A. & C Bradley, D. D. Ultrastrongly Coupled Exciton-Polaritons in Metal-Clad Organic Semiconductor Microcavities. _Adv Opt Mater_**1**, 827-833 (2013).
* [41] La Rocca, G. C. Wannier-Mott Excitons in Semiconductors. **31**, 97-128 (2003).
* [42] Chen, H. _et al._ Room-temperature polariton lasing in GaN microrods with large Rabi splitting. _Optics Express, Vol. 30, Issue 10, pp. 16794-16801_**30**, 16794-16801 (2022).
* [43] Van Vugt, L. K. _et al._ Exciton polaritons confined in a ZnO nanowire cavity. _Phys Rev Lett_**97**, 147401 (2006).
* [44] Kang, J. W. _et al._ Room temperature polariton lasing in quantum heterostructure nanocavities. _Sci Adv_**5**, (2019).
* [45] Anantharaman, S. B. _et al._ Self-Hybridized Polaritonic Emission from Layered Perovskites. _Nano Lett_**21**, 6245-6252 (2021).
* [46] Kumar, P. _et al._ Light-matter coupling in large-area van der Waals superlattices. _Nature Nanotechnology 2021 17:2_**17**, 182-189 (2021).
* [47] Chikkaraddy, R. _et al._ Single-molecule strong coupling at room temperature in plasmonic nanocavities. _Nature 2016 535:7610_**535**, 127-130 (2016).
* [48] Wersall, M., Cuadra, J., Antosiewicz, T. J., Balci, S. & Shegai, T. Observation of mode splitting in photoluminescence of individual plasmonic nanoparticles strongly coupled to molecular excitons. _Nano Lett_**17**, 551-558 (2017).
* [49] Pandya, R. _et al._ Microcavity-like exciton-polaritons can be the primary photoexcitation in bare organic semiconductors. _Nature Communications 2021 12:1_**12**, 1-11 (2021).
* [50] Graf, A., Tropf, L., Zakharko, Y., Zaumseil, J. & Gather, M. C. Near-infrared exciton-polaritons in strongly coupled single-walled carbon nanotube microcavities. _Nature Communications 2016 7:1_**7**, 1-7 (2016).
* [51] Mony, J., Hertzog, M., Kushwaha, K. & Borjesson, K. Angle-Independent Polariton Emission Lifetime Shown by Perylene Hybridized to the Vacuum Field Inside a Fabry-Perot Cavity. _Journal of Physical Chemistry C_**122**, 24917-24923 (2018).
* [52] Houdre, R. _et al._ Measurement of Cavity-Polariton Dispersion Curve from Angle-Resolved Photoluminescence Experiments. _Phys Rev Lett_**73**, 2043 (1994).
* [53] Paritmongkol, W. _et al._ Size and Quality Enhancement of 2D Semiconducting Metal-Organic Chalcogenolates by Amine Addition. _J Am Chem Soc_**143**, 20256-20263 (2021).
* [54] Paritmongkol, W. _et al._ Morphological Control of 2D Hybrid Organic-Inorganic Semiconductor AgSePh. _ACS Nano_**16**, 2054-2065 (2022).
* [55] Jariwala, D. _et al._ Near-Unity Absorption in van der Waals Semiconductors for Ultrathin Optoelectronics. _Nano Lett_**16**, 5482-5487 (2016).
Supporting Information
**Ultrastrong Light-Matter Coupling in**
**2D Metal-Organic Chalcogenolates**
Surendra B. Anantharaman\({}^{1,\ast,\dagger}\), Jason Lynch\({}^{1,\dagger}\), Mariya Aleksich\({}^{2,3}\), Christopher E. Stevens\({}^{4,5}\), Christopher Munley\({}^{6}\), Bongjun Choi\({}^{\dagger}\), Sridhar Shenoy\({}^{1}\), Thomas Darlington\({}^{7}\), Arka Majumdar\({}^{6,8}\), P. James Shuck\({}^{7}\), Joshua Hendrickson\({}^{5}\), J. Nathan Hohman\({}^{2,3}\), Deep Jariwala\({}^{1,\ast}\)
\({}^{1}\) Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104, United States
\({}^{2}\) Institute of Materials Science, University of Connecticut, Storrs CT 06269
\({}^{3}\) Department of Chemistry, University of Connecticut, Storrs CT 06269
\({}^{4}\) KBR Inc., Beavercreek, Ohio 45431, United States
\({}^{5}\) Air Force Research Laboratory, Sensors Directorate, Wright-Patterson Air Force Base, Ohio 45433, United States
\({}^{6}\) Department of Physics, University of Washington, Seattle, Washington 98195, United States
\({}^{7}\) Department of Mechanical Engineering, Columbia University, New York, New York 10027, United States
\({}^{8}\) Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington 98195, United States
\({}^{\ast}\) Corresponding authors: [email protected], [email protected]
\({}^{\dagger}\) These authors contributed equally to this work.
**Section 1. Weak-, Strong-, and Ultrastrong Light-Matter Coupling in Exciton-Polaritons**
The Hamiltonians of exciton-polaritons can be described as the sum of the Hamiltonian of the unperturbed exciton (\(\mathrm{H_{exciton}}=\frac{1}{2}\mathrm{E_{x}}\mathrm{\sigma_{z}}\) where \(\mathrm{E_{x}}\) is the exciton energy and \(\mathrm{\sigma_{z}}\) is the exciton transition operator), the unperturbed cavity (\(\mathrm{H_{cavity}}=\mathrm{E_{c}}\mathrm{a^{\dagger}}\mathrm{a}\) where \(\mathrm{E_{c}}\) is the cavity energy and \(\mathrm{a^{\dagger}}\) (\(\mathrm{a}\)) is the creation (destruction) operator for a photon in the cavity), and the interaction Hamiltonian (\(\mathrm{H_{int}}\))\({}^{\dagger}\). In the cases of weak coupling (WC) and strong coupling (SC), only the first-order effects of an exciton absorbing and emitting a photon have to be considered for \(\mathrm{H_{int}}\). Therefore, the interaction Hamiltonian for WC and SC can be expressed as\({}^{\dagger}\):
\[\mathrm{H_{int}}=\mathrm{g}(\mathrm{\sigma_{+}}\mathrm{a}+\mathrm{a^{\dagger}} \mathrm{\sigma_{-}}) \tag{1}\]
Where \(\mathrm{g}\) is the coupling parameter and \(\mathrm{\sigma_{+}}\) (\(\mathrm{\sigma_{-}}\)) is the absorption (emission) operator of the exciton. It is clear then that the coupling parameter can be interpreted as the rate at which energy oscillates between the exciton and cavity modes. When solving this Hamiltonian, called the Jaynes-Cummings Hamiltonian, under the condition of zero detuning (\(\Delta=\mathrm{E_{x}}-\mathrm{E_{c}}\)), the complex eigenenergies are found to be\({}^{2}\):
\[E_{\pm}=E_{x}-i\frac{\Gamma_{x}+\Gamma_{c}}{4}\pm\sqrt{g^{2}-\frac{(\Gamma_{x}- \Gamma_{c})^{2}}{16}} \tag{2}\]
Where \(\Gamma_{x}\) (\(\Gamma_{c}\)) is the loss rate of the exciton (cavity). Through inspection of Eq. 2, the WC regime is defined as the region where \(g^{2}<\frac{(\Gamma_{x}-\Gamma_{c})^{2}}{16}\). In this regime, the coupling parameter only affects the imaginary part of the eigenenergy, and therefore, it only affects the decay rate of the states and does not affect their energies. However, when \(g^{2}>\frac{(\Gamma_{x}-\Gamma_{c})^{2}}{16}\), the system enters the SC regime, and the states become hybridized as the coupling parameter shifts the energies of the two eigenstates.
Weakly and strongly coupled systems can both be accurately modelled while considering only the first-order effects. However, as the coupling parameter increases, and energy oscillates between the exciton and cavity more rapidly, higher-order effects such as the fast-rotating components of the exciton-cavity interaction must be considered to accurately model the system. Although these effects are always present in exciton-polaritons, their contribution is not significant until the coupling parameter reaches about 10% of the exciton energy. Therefore, by convention, the ultrastrong coupling (USC) regime is when \(\frac{g}{E_{x}}>0.1\). In the USC regime, the polariton energies are the positive solutions to the bi-quadratic[3]:
\[\left(E_{\pm}^{2}-E_{c}^{2}\right)\left(E_{\pm}^{2}-E_{x}^{2}\right)-\frac{4g^ {2}E_{\pm}^{2}E_{c}}{E_{x}}=0 \tag{3}\]
The coupling parameter for both the SC and USC regimes can then be extracted by using the transfer matrix method (TMM) to calculate the thickness dependence of the UEP and LEP, since the cavity wavelength depends linearly on the mithrene thickness. The dispersions of the UEP and LEP are then fitted to Eq. 2 for the SC regime and Eq. 3 for the USC regime using a least-squares method to determine the coupling parameter of the exciton-polaritons.
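A minimal numerical sketch of Eqs. (2) and (3) follows (added for illustration; the least-squares fitting of g to the TMM-computed branch dispersions is not reproduced). The example call uses the mithrene exciton energy (~2.65 eV for the 468 nm exciton) and the coupling g = 0.378 eV quoted in the main text, and returns upper and lower polariton energies whose splitting is consistent with the reported Rabi splitting.

```python
import numpy as np

def sc_branches(E_x, g, gamma_x=0.0, gamma_c=0.0):
    """Complex polariton energies at zero detuning, Eq. (2)."""
    root = np.sqrt(g**2 - (gamma_x - gamma_c)**2 / 16 + 0j)
    offset = -1j * (gamma_x + gamma_c) / 4
    return E_x + offset + root, E_x + offset - root

def usc_branches(E_x, E_c, g):
    """Positive solutions of the bi-quadratic Eq. (3)."""
    b = E_c**2 + E_x**2 + 4 * g**2 * E_c / E_x   # coefficient of the quadratic in E^2
    c = (E_c * E_x)**2
    disc = np.sqrt(b**2 - 4 * c)
    return np.sqrt((b + disc) / 2), np.sqrt((b - disc) / 2)

# Zero-detuning example with the values quoted in the main text:
uep, lep = usc_branches(E_x=2.65, E_c=2.65, g=0.378)
print(uep, lep, uep - lep)   # splitting ~ 2g ~ 0.76 eV
```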
**Figure S2. Simulated higher order (HO) and lower exciton-polariton (LEP) modes in mithrene on Au.** Mithrene thickness-dependent reflectance spectra for a mithrene flake on an Au substrate. The HO mode can be seen to convert into an LEP mode as the mithrene thickness increases, since the detuning between the exciton and cavity modes decreases.
Figure S3: **Simulation studies on layer-dependent absorption for open-cavity polaritons.** (a) Using Au substrate for open cavity polaritons results in almost 40% light absorption around the mithrene exciton peak (475 nm). (b) By using Ag with a 10 nm Al\({}_{2}\)O\({}_{3}\), the contribution from metal absorption can be drastically reduced from 40% to \(\sim\)5%. Also, the 10 nm Al\({}_{2}\)O\({}_{3}\) layer protects the silver from oxidation and avoids unwanted reaction between Ag and mithrene.
Figure S5: **Dispersion of exciton-polaritons in mithrene using silver reflectors.** The mithrene thickness dependence of the reflectance spectra for an (a) open and (b) closed cavity system. From top to bottom, the open cavity system is mithrene/Al\({}_{2}\)O\({}_{3}\) (10 nm)/Ag, and the closed cavity system is Ag (15 nm)/Al\({}_{2}\)O\({}_{3}\) (10 nm)/mithrene/Al\({}_{2}\)O\({}_{3}\) (10 nm)/Ag. The coupling parameters were extracted using the quantum Rabi model as discussed in section S1.
Figure S6. **Temperature-dependent optical properties of the exciton and exciton-polaritons in mithrene.** (a) Reflectance and (b) PL spectra from mithrene on a quartz substrate show a blue shift of the exciton peak upon cooling. (c) Reflectance and (d) PL from mithrene on an Au substrate show multiple polariton branches below the exciton peak due to the contribution from higher-order cavity modes. The line spectra at 300 K (e) and 80 K (f) show the linewidth narrowing of the E-P peak and multiple polariton branches upon cooling to 80 K.
### Slope correction to E-K measurements.
To correct for the slope in the E-K measurements caused by the slight tilt of the CCD array, a linear transform was performed on the raw data. By analyzing the spectral tilt, a slope was determined from the spectra and used as a correction value. The slope was measured on several spectra and an average value of 0.0176 sin(\(\theta\))/pixel was determined. This corresponds to the spectra being tilted by one vertical pixel approximately every 29 horizontal pixels. Once this was determined, a linear transform was performed on the dataset, correcting the data only in the background region, outside of the E-K measurement. The data before and after correction can be seen in Figure S8, Supporting Information.
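A rough sketch of the kind of linear (shear) transform described above is given below (a hypothetical reimplementation, not the actual processing script). Note that the quoted slope is in units of sin(\(\theta\)) per pixel and must first be converted into vertical pixels per horizontal pixel, a conversion that is not shown here.

```python
import numpy as np

def shear_correct(image, slope):
    """Undo a linear tilt of a CCD frame by shifting each column vertically
    by -slope * column_index pixels (nearest-pixel shift, as a rough sketch)."""
    corrected = np.empty_like(image)
    for col in range(image.shape[1]):
        corrected[:, col] = np.roll(image[:, col], int(round(-slope * col)))
    return corrected

# Illustrative call: ~one vertical pixel of tilt every 29 horizontal pixels.
raw = np.random.rand(400, 1340)            # placeholder CCD frame
fixed = shear_correct(raw, slope=1 / 29)
```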
Figure S8: **E-k data correction.** The raw data without any treatment (a) and the data after correcting the drift by the slope value (b) are shown here.
**Table S1. Anisotropic, complex refractive index of mithrene**
[Table S1 data omitted: wavelength (nm) versus the ordinary and extraordinary complex refractive index components \(n_{\text{ord}}\), \(k_{\text{ord}}\), \(n_{\text{ext}}\), and \(k_{\text{ext}}\) of mithrene.]
|
2303.04111 | Smart patterning for topological pumping of elastic surface waves | Topological pumping supplies a robust mechanism to steer waves across a
sample without being affected by disorders and defects. For the first time, we
demonstrate the pumping of elastic surface waves, achieved by a smart
patterning of a surface that creates a synthetic dimension, which is explored
by the wave as it is launched perpendicularly to the steering direction.
Specifically, we design and fabricate an elastic medium decorated with arrays
of pillar-type resonators whose eigenmodes are located below the sound cone,
together with coupling bridges edged according to a specific algorithm. We
establish a connection between the collective dynamics of the pillars and that
of electrons in a magnetic field by deriving an accurate tight-binding model
and developing a WKB-type analysis suitable for such discrete aperiodic systems
with spatially slow-varying couplings. This enable us to predict topological
pumping pattern, which is numerically and experimentally demonstrated by
steering waves from one edge of the system to the other. Finally, the immune
character of the topologically pumped surface waves against disorder and
defects is evidenced. The principle of surface patterning together with the
WKB-analysis could provide a powerful new platform for surface wave control and
exploration of topological matter in higher dimensions. | Shaoyun Wang, Zhou Hu, Qian Wu, Hui Chen, Emil Prodan, Rui Zhu, Guoliang Huang | 2023-03-07T18:21:50Z | http://arxiv.org/abs/2303.04111v2 | # Smart patterning for topological pumping of elastic surface waves
###### Abstract
Topological pumping supplies a robust mechanism to steer waves across a sample without being affected by disorders and defects. For the first time, we demonstrate the pumping of elastic surface waves, achieved by a smart patterning of a surface that creates a synthetic dimension, which is explored by the wave as it is launched perpendicularly to the steering direction. Specifically, we design and fabricate an elastic medium decorated with arrays of pillar-type resonators whose eigenmodes are located below the sound cone, together with coupling bridges edged according to a specific algorithm. We establish a connection between the collective dynamics of the pillars and that of electrons in a magnetic field by deriving an accurate tight-binding model and developing a WKB-type analysis suitable for such discrete aperiodic systems with spatially slow-varying couplings. This enables us to predict the topological pumping pattern, which is numerically and experimentally demonstrated by steering waves from one edge of the system to the other. Finally, the immune character of the topologically pumped surface waves against disorder and defects is evidenced. The principle of surface patterning together with the WKB analysis could provide a powerful new platform for surface wave control and exploration of topological matter in higher dimensions.
+
Footnote †: These three authors contributed equally
## I Introduction
Topological matter is a rapidly growing field in which topological concepts are exploited to discover and classify new phases of matter [1; 2; 3; 4]. In this context, a hallmark achievement was the discovery of the integer quantum Hall effect [citation]. In the past decade, topological phases analogous to quantum Hall insulators have been engineered across a wide range of time-modulated platforms, including electronics [5; 6; 7], photonics [8; 9; 10; 11; 12; 13], acoustics [14; 15; 16; 17], and mechanics [18; 19; 20; 21; 22; 23]. The existence of the conventional gapless edge states and surface states is guaranteed by the bulk-boundary correspondence. These time-dependent systems can provide outstanding opportunities not possible with passive materials, enabled by the high controllability and flexibility of these platforms. However, a physical realization of a dynamically controlled topological pumping that produces topological transport is extremely challenging because external or active physical fields are typically needed [24].
To overcome the challenges associated with time-modulated systems, rendering synthetic dimensions via space modulations was recently suggested because it does not require any active materials or other external mechanisms to break the time-reversal symmetry [19; 20]. The phases of the space-modulations can be used as adiabatic parameters that augment the physical space [22]. It is intriguing to see these phases as additional global degrees of freedom, usually called phasons, living on a torus. The central idea of synthetic dimensions is to exploit and harness such degrees of freedom with atoms, photons or phonons to mimic the dynamic motion along extra spatial directions. The key advantage of synthetic dimensions is that pumping parameters can be engineered very naturally in the strength of the couplings along the extra dimension. Synthetic dimensions have led to new discoveries of the 2D and 4D quantum Hall systems in ultracold atomic gases [25; 26; 27], photonics [28; 29; 30; 31] and acoustics and mechanics [32; 33; 34] due to their flexibility. Rendering of the synthetic space is growing into one of the most appealing approaches to control and steer topological wave transport in different systems.
Surface elastic waves are a class of polarized waves that propagate on the surface of a semi-infinite elastic medium. They are confined within a superficial region whose thickness is comparable with their wavelength [35]. Manipulating surface waves has been of considerable interest with widespread applications in earthquake mitigation, nondestructive evaluation, wave filtering and sensing [36; 37; 38]. Based on the Bragg scattering and local resonance mechanisms, manipulation and control of surface waves has been recently investigated in the phononic and metamaterial community for various applications such as exotic wave transmission and reflection, wave focusing and cloaking [39; 40; 41]. Among existing approaches, the metamaterial with pillar-type resonators is regarded as one of the most promising microstructure designs because of their simple structure and process-friendly fabrication. However, it is not trivial to apply pillar-type metamaterials for the topological surface wave transport. Indeed,
it is of fundamental and practical significance to physically realize space-modulated pillar-type metamaterials for topological surface wave transport along desired orbits [42; 43].
In this study, we present theoretical, numerical and experimental investigations of Rayleigh wave topological pumping by leveraging a pillar-based platform with space modulations. The proposed structures can be described as aperiodic mechanical wave-channels carrying different phason values that are stacked and coupled with each other. By slowly varying the phason along the stacking direction, we demonstrate here that, with such an approach, we can explore any continuous orbit inside the phason space, and even control the speed along the path to shape the surface pumped pattern. As a result, we can render these abstract trajectories, occurring in the synthetic dimensions, on the physical dimension along the stackings. In turn, this enables us to control the propagation of the surface waves in space as well as the temporal phases of the signals.
With the control over the phason, we experimentally demonstrate an edge-to-edge topological wave pattern on the space-modulated mechanical metasurface, which is robust against random fluctuations in the couplings. The analytical study of the pumping process under the adiabatic condition is formulated using the WKB approximation, and the modulation functions of the parameters with a non-trivial topological phase are also obtained analytically. Based on that, we further explore various ways in which we can control these pumping processes and validate topological mode steering in time-domain simulations. It is believed that our work breaks ground for engineering applications, where the couplings in a space-modulated mechanical metasurface can be programmed for selective and robust point-to-point transport of surface wave signals.
## II Results
**Physical rendering of synthetic spaces.** We start by explaining the principles of physical rendering of synthetic spaces in the context of surface wave transport. Figure 1a, b show our surface wave platform featuring a planar array of elastic pillar-type resonators coupled horizontally and vertically through thin plates (see Methods sample preparation for fabrication details). Each resonator is assigned an address \((i,j)\in\mathbb{Z}^{2}\) in the \(x\)-\(z\) plane. The heights of the connecting plates in the \(x\)-direction are modulated according to the protocol \(h_{ij}=h_{0}[1+\Delta_{0}\cos(2\pi i/3+\phi_{j})]\), while the geometry of the connecting thin plate along \(z\) direction is uniform across the sample. Any such modulation has a phase that can take any value in the abstract interval \([0,2\pi]\), representing here the synthetic space. In a time-modulated setting, one will dynamically drive the phase \(\phi\) by rapid reconfigurations of the systems [44]. Instead, by setting the phason value of the \(j\)-th row as \(\phi_{j}(z)=\phi_{s}+(\phi_{f}-\phi_{s})\frac{j}{N}\) with \(N\) being the total number of rows, we effectively render the synthetic space along the \(z\)-axis. The parameters will be fixed as \(\phi_{s}=0.6\pi\) and \(\phi_{f}=1.4\pi\).
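For concreteness, the modulation protocol can be evaluated with a few lines of Python; this is only a minimal sketch, with \(h_{0}=7\) mm and \(\Delta_{0}=0.15\) taken from the Methods section and the array sizes matching the fabricated sample (9 pillars per row, \(N=20\) rows).

```python
import numpy as np

# Minimal sketch of the surface-patterning protocol: bridge heights
# h_ij = h0 * (1 + Delta0 * cos(2*pi*i/3 + phi_j)) with a row-wise phason
# phi_j = phi_s + (phi_f - phi_s) * j / N rendered along the z-direction.
h0, Delta0 = 7.0, 0.15                        # mm (values from the Methods)
phi_s, phi_f, N = 0.6 * np.pi, 1.4 * np.pi, 20
n_x = 9                                       # pillars per row along x

phi = phi_s + (phi_f - phi_s) * np.arange(N) / N   # phason of each row j
i = np.arange(n_x)[:, None]                        # pillar index i
h = h0 * (1.0 + Delta0 * np.cos(2 * np.pi * i / 3 + phi[None, :]))

print(h.shape)            # (9, 20) table of bridge heights in mm
print(h[:, 0].round(2))   # heights in the first row, phi = 0.6*pi
```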
As shown in Fig. 1a, each \(x\)-directional row displays a unit cell containing three pillars. The dispersion curves of the unit cell obtained with COMSOL Multiphysics is shown in Fig. 1c. The computation was carried out by imposing Floquet boundary conditions in both \(x\)- and \(z\)-directions. Since the modulation amplitude \(\Delta_{0}\) is small, \(\phi_{j}\) is irrelevant for the dispersion curves and can be assumed as 0. Below the sound cone (white region), one can see three surface wave branches, whose eigenmodes are localized on the surface and decay quickly into the bulk; see also Supplementary Information for detailed illustrations of the corresponding mode shapes. The region above the sound cone, shown in gray, is referred to as the "bulk modes" region.
To facilitate the physical interpretation of the surface wave pumping and illuminate the function of the phason, we develop a discrete mass-spring model for the surface wave eigenmodes using the mode-coupling theory. This model takes the form of the following difference equation for the amplitudes \(\psi_{i,j}\) of the local resonances carried by the individual pillars (see Supplementary Information for derivation details)
\[\begin{split}\kappa^{0}&\psi_{i,j}+\kappa^{h}_{i-1,j}\psi_{i-1,j}+\kappa^{h}_{i,j}\psi_{i+1,j}\\ &+\kappa^{v}[\psi_{i,j-1}-2\psi_{i,j}+\psi_{i,j+1}]=-M\omega^{2} \psi_{i,j}.\end{split} \tag{1}\]
Here, \(M\), \(\kappa^{0}\), \(\kappa^{v}\) and \(\kappa^{h}\) are interpreted as the effective mass and grounded, vertical and horizontal spring stiffnesses of the model, respectively. The values of these effective parameters are determined by fitting the dispersion curves of the continuous model (blue dots in Fig. 1c). Specifically, we obtain \(M=1\) kg, \(\kappa^{0}=49.6\) GN/m, \(\kappa^{v}=1.9\) GN/m, and \(\kappa^{h}_{i,j}=\kappa^{h}_{0}\left[1+\Delta\cos(2i\pi/3+\phi_{j})\right]\), where the modulation coefficients read \(\kappa^{h}_{0}=5.5\) GN/m and \(\Delta=0.67\). As shown in Fig. 1c, the continuous and discrete dispersion curves exhibit satisfactory agreement, thereby demonstrating the reliability of the discrete model.
**WKB-type analysis.** By replacing the index \(j\) with the coordinate \(z=ja\), we rewrite \(\psi_{i,j}=\psi_{i}(z)\) and \(\phi_{j}=\phi(z)\), as well as
\[\kappa^{h}_{i,j}=\kappa^{h}_{i}(z)=\kappa^{h}_{0}[1+\Delta\cos(2\pi i/3+\phi(z))]. \tag{2}\]
We also introduce the second-order central difference operator
\[\delta^{2}f(z)=\frac{f\left(z+a\right)-2f(z)+f\left(z-a\right)}{a^{2}}. \tag{3}\]
In addition, a vector \(\mathbf{\psi}(z)=[\psi_{0}(z),\psi_{1}(z),...,\psi_{3M}(z)]^{\mathrm{T}}\) is defined including all the mode coefficients. By doing so, the dispersion equation (1) can be written very compactly as
\[a^{2}\kappa^{v}\delta^{2}\mathbf{\psi}(z)+[\mathbf{K}(z)+\omega^{2}]\mathbf{\psi}(z)=0, \tag{4}\]
in which \(\mathbf{K}(z)\) is the matrix with the entries
\[\mathrm{K}_{ik}(z)=\kappa^{0}\delta_{ik}+\kappa_{i}^{h}(z)\delta_{i,k+1}+\kappa_{ i}^{h}(z)\delta_{i+1,k}, \tag{5}\]
where \(\delta_{ik}\) is the Kronecker delta. Equation (4) is very close in spirit to the Schroedinger equation appearing in the setting of WKB approximation theory [45; 20]. The difference is that, instead of dealing with a potential, we are dealing with the non-diagonal matrix \(\mathbf{K}(z)\) which, nevertheless, is slowly varying with \(z\). In this regime, the following WKB-type expansion is justified
\[\mathbf{\psi}(z)=e^{i\theta(z)/a}[\mathbf{\psi}^{(0)}(z)+a\mathbf{\psi}^{(1)}(z)+\cdots], \tag{6}\]
and, by keeping track of the powers of \(a\), we can derive the exact equations satisfied by each \(\mathbf{\psi}^{(\alpha)}\). In particular, we find for the leading term that this equation is (see Supplementary Information)
\[\big{(}\mathbf{K}(z)+\omega^{2}\big{)}\mathbf{\psi}^{(0)}(z)=4\kappa^{v}\sin^{2}\left(\frac{\delta\theta(z)}{2}\right)\mathbf{\psi}^{(0)}(z), \tag{7}\]
where \(\delta\theta(z)=[\theta(z+a/2)-\theta(z-a/2)]/a\). This equation has solutions of the form
\[\mathbf{\psi}_{n}(z)=A_{n}(z)e^{\mathrm{i}\sum_{\xi=0}^{\xi=z}q_{n}(\xi)}\mathbf{\varphi}_{n}(z)+o(a), \tag{8}\]
where \(\mathbf{\varphi}_{n}(z)\) is the \(n\)-th eigenmode of the \(\mathbf{K}(z)\) matrix
\[\mathbf{K}(z)\mathbf{\varphi}_{n}(z)=-\mu_{n}(z)\mathbf{\varphi}_{n}(z), \tag{9}\]
Figure 1: **Design principle and dispersion analysis.****a** Schematic illustration of the topological surface wave transport system. Each row in \(x\) corresponds to a supercell that includes three unit cells (inset). **b** Photograph of the experimental sample fabricated out of aluminum by a milling machine. **c** Numerically obtained dispersion curves (blue dots) for the unit cell with \(\phi_{j}=\pi\). The orange curves represent the dispersion curves of the discrete mass-spring model obtained by numerical fitting. The gray regions are filled with bulk modes. Their interfaces with the surface wave region define the sound cone. **d** Dispersion diagram for the supercell terminated by free boundary conditions in the \(x\)-direction and Floquet boundary conditions in the \(z\)-direction. The edge-bulk-edge (EBE) mode is represented by the magenta surface, whereas the bulk bands are indicated by gray surfaces. The orange cut plane corresponds to the excitation frequency \(f_{c}=41.88\) kHz. The intersection curve between the excitation frequency plane and the EBE surface gives the instantaneous wave number \(q(z)\), on which the circle marks the right edge mode, the triangle the bulk mode and the square the left edge mode. **e** The top, middle and bottom panels are the corresponding eigenmodes of the supercell at \(\phi=0.6\pi\), \(\pi\), and \(1.4\pi\) with \(q=\pi/a\) in **d**.
at row \(z\) and \(q_{n}(z)\) satisfies the equation
\[4\kappa^{v}\sin^{2}\frac{q_{n}(z)}{2}+\mu_{n}(z)=\omega^{2}. \tag{10}\]
As in the standard WKB-theory [45], an analysis at the order-one level of the asymptotic expansion enables us to pinpoint the \(z\)-dependent amplitude \(A_{n}(z)\) (see Supplementary Information), and to finally present the complete set of solutions for the dispersion equation (4)
\[\mathbf{\psi}_{n}(z)=\frac{c_{n}}{\sqrt{\omega^{2}-\mu_{n}(z)}}e^{\mathrm{i}\sum_{\xi=0}^{\xi=z}q_{n}(\xi)}\mathbf{\varphi}_{n}(z)+o(a). \tag{11}\]
We recall that the derivation of these solutions relies only on the adiabatic evolution of the phason with \(z\) and no long-wavelength or paraxial approximations were made. Thus, our results cover the short-wavelength and nonparaxial regions. Lastly, since our samples are finite, we need to impose free boundary conditions on the top and bottom boundaries in the \(z\)-direction. In this case, the mode shape of the \(n\)th eigenmode is of the form
\[\mathbf{\psi}_{n}(z)=\frac{c_{n}\sin Q_{n}(z)+d_{n}\cos Q_{n}(z)}{\sqrt{\omega^{2} -\mu_{n}(z)}}\mathbf{\varphi}_{n}(z), \tag{12}\]
where \(c_{n}\) and \(d_{n}\) are coefficients of superposition and \(Q_{n}(z)=\sum_{\xi=0}^{\xi=z}q_{n}(\xi)\) is the dynamical phase produced by our derivation.
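To spell out how Eqs. (10)–(12) are used in practice, the following minimal Python sketch inverts Eq. (10) for the local wavenumber \(q_{n}(z)\), accumulates the dynamical phase \(Q_{n}(z)\) and evaluates the WKB amplitude factor; the slowly varying profile \(\mu_{n}(z)\) used here is a hypothetical placeholder, since in the actual system \(\mu_{n}(z)\) comes from the row-by-row eigenvalue problem (9).

```python
import numpy as np

# WKB bookkeeping following Eqs. (10)-(12): given omega^2 and a slowly
# varying band mu_n(z), recover q_n(z), the accumulated dynamical phase
# Q_n(z) and the amplitude factor 1/sqrt(omega^2 - mu_n(z)).
a = 1.0                                  # row spacing along z
kappa_v = 1.9                            # vertical coupling (fitted value)
z = np.arange(20) * a

# Hypothetical slowly varying band mu_n(z) (placeholder; in practice it is
# obtained by diagonalising K(z) row by row, Eq. (9)).
mu_n = 10.0 + 0.5 * np.cos(np.pi * z / z[-1])
omega2 = mu_n.mean() + 2.0 * kappa_v     # an omega^2 lying above the band

# Eq. (10): 4*kappa_v*sin^2(q/2) + mu_n = omega^2
s = np.clip((omega2 - mu_n) / (4.0 * kappa_v), 0.0, 1.0)
q_n = 2.0 * np.arcsin(np.sqrt(s))

Q_n = np.cumsum(q_n)                     # dynamical phase entering Eq. (12)
amp = 1.0 / np.sqrt(omega2 - mu_n)       # WKB amplitude, Eqs. (11)-(12)
print(q_n.round(3))
print(Q_n.round(3))
print(amp.round(3))
```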
**Topological pumping.** The complete set of solutions (11) indicates that, when the meta-surface is excited at pulsation \(\omega\) with a source placed at position \(z=0\), it will resonate very strongly with the mode that has its resonant frequency \(\mu_{n}(z=0)\) close to \(\omega^{2}\). Thus, we have a mechanism to selectively load a specific mode
Figure 2: **Topological surface pumping on the elastic surface with space-modulated pillars.****a,b** Experimental **a** and numerical **b** modal profiles of the out-of-plane displacement field at the frequencies \(f=\)42.45 kHz and \(f=\)41.88 kHz, respectively, excited by a piezoelectric patch (gray part at the bottom of the cuboid). **c** Frequency spectrum of the resonator with indices \(i=2\) and \(j=19\) from the experimental measurement. The resonance peak marked by the red dot is the EBE mode. **d** The wavelet transform of the eigenmode from the numerical simulation along the synthetic dimension. The purple curve is the intersection curve from Fig. 1d. **e** The mode decomposition of the displacement field in the right panel of **a** for each chain at different \(z\).
out of a fairly rich set of resonant modes. Furthermore, Eq. (11) indicates that, with such a source turned on, upon the inspection of row \(z\), we will see the eigenmode associated with \(\mu_{n}(z)\) of the one-dimensional tight-binding operator \(\mathbf{K}(z)\) (up to a multiplicative factor). Since \(\mathbf{K}(z)\) depends only on the phason value \(\phi(z)\), _i.e._\(\mathbf{K}(z)=\mathbf{K}_{\phi(z)}\), one can now see explicitly how the dependence of the spectral properties of \(\mathbf{K}_{\phi}\) on the phason has been rendered along the \(z\)-coordinate, for us to experience, measure and use its resonant modes in future applications. Furthermore, by design, the phason is being pumped from \(\phi_{s}\) to \(\phi_{f}\) as the structure is examined from bottom (\(z=0\)) to the top (\(z=Na\)).
We now turn our attention to \(\mathbf{K}_{\phi}\) defined by the equations (2) and (5), and process it as
\[\mathrm{K}_{ik}(\phi)=\kappa_{i}^{h}(\phi)[\delta_{i,k+1}+\delta_{i+1,k}+ \kappa^{0}\tilde{\kappa}_{i}^{h}(\phi)\delta_{ik}], \tag{13}\]
where
\[\tilde{\kappa}_{i}^{h}=1/\kappa_{i}^{h}\approx(1/\kappa^{h}_{0})[1-\Delta\cos(2\pi i/3+\phi(z))]. \tag{14}\]
Except for the multiplicative factor in front, Eq. (13) coincides with the Bloch decomposition along the \(z\)-direction of the Hamiltonian of electrons hopping on a two dimensional lattice in the presence of a uniform magnetic field, if the latter is rendered in the Landau gauge and the phason \(\phi\) is identified with the \(k_{z}\) quasi-momentum. The algorithm (2) used for the height of the coupling bridges sets the value of the virtual magnetic field to \(\frac{1}{3}\)-unit of flux per resonator. This connection enables one to effortlessly understand the topological character of the dynamics and the ensuing bulk-boundary correspondence. Specifically, the three surface dispersion bands seen in Fig. 1c are the three spectral bands seen in the Hofstadter butterfly at \(\frac{1}{3}\)-flux [46] and, as such, the spectral gaps between these bands carry Chern numbers \(\pm 1\). This implies that the \(x\)-terminated sample will display topological edge modes which disperse with the variation of the phason. More formal analysis is supplied in the Supplementary Information.
The spectrum of an entire row of resonators with \(q\)-twisted Floquet boundary conditions imposed in the \(z\)-direction is reported in Fig. 1d as a function of \(\phi\) and \(q\), and the topological edge modes can be seen as the sheet colored in purple. Taking a slice at a fixed \(q\) reveals precisely one chiral edge band per edge, and the slopes of these bands are consistent with the values of the Chern numbers (see Supplementary Information). Furthermore, examination of the eigenfunctions leads to the observation of right edge, bulk and left edge modes for \(\phi=0.6\pi\), \(\pi\), and \(1.4\pi\) in the top, middle and bottom panels of Fig. 1e, respectively.
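The rendered Harper-type physics can also be checked with a back-of-the-envelope computation: the minimal Python sketch below assembles the finite open-chain version of \(\mathbf{K}_{\phi}\) (Eqs. (5) and (13)) with the fitted parameters quoted above and reports, for a few phason values between \(0.6\pi\) and \(1.4\pi\), the most edge-localized eigenstate; the chain length and the localization measure are illustrative choices made here, not taken from the paper.

```python
import numpy as np

# Open chain of pillars described by K_phi (Eqs. (5), (13)) with the fitted
# parameters; sweeping phi shows eigenstates that localize on one edge or
# the other, the mechanism behind the edge-bulk-edge (EBE) pumping.
kappa0, kappa_h0, Delta = 49.6, 5.5, 0.67     # fitted values (GN/m)
n = 30                                        # pillars in the chain (illustrative)

def K_phi(phi):
    i = np.arange(n - 1)
    kh = kappa_h0 * (1.0 + Delta * np.cos(2.0 * np.pi * i / 3.0 + phi))
    return np.diag(np.full(n, kappa0)) + np.diag(kh, 1) + np.diag(kh, -1)

for phi in np.linspace(0.6 * np.pi, 1.4 * np.pi, 5):
    vals, vecs = np.linalg.eigh(K_phi(phi))
    w_left = (vecs[:3, :] ** 2).sum(axis=0)     # weight on the 3 leftmost pillars
    w_right = (vecs[-3:, :] ** 2).sum(axis=0)   # weight on the 3 rightmost pillars
    k = int(np.argmax(w_left + w_right))        # most edge-localized eigenstate
    print(f"phi = {phi / np.pi:.2f} pi: eigenvalue {vals[k]:6.1f}, "
          f"left {w_left[k]:.2f}, right {w_right[k]:.2f}")
```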
**Demonstration of topological surface wave transport.** We now focus on the demonstration of the topological surface wave transport. Experiments are first conducted on the system shown in Fig. 1 (see Methods experimental protocol). Figure 2a, b displays the experimentally measured EBE field profile at 42.45 kHz and the harmonic numerical simulation at 41.88 kHz, respectively. In both cases, excitation is applied using piezoelectric actuators on the bottom supercell. We found the EBE mode experimentally by examining the mode shapes of all measured resonance peaks in the spectrum shown in Fig. 2c. Vertical oscillation of the field profile is also observed, featuring modal nodes and antinodes, owing to the \(z\)-directional dynamical phase. The experimental and numerical results provide satisfactory agreement. To quantitatively compare the retrieved mode profile in Fig. 2a with the analytical solution Eq. (12), we apply the wavelet transform and mode decomposition to the numerical mode profile. In detail, we first divide the cuboid into 9 columns. Then, the wavelet transform technique is applied to the wave component of each column to determine the corresponding coefficients (see Supplementary Information). Last, we take the average of the absolute values of these coefficients. The outcome after linear interpolation is illustrated as a heat map in Fig. 2d. As a reference, a purple curve is given to provide the \(k_{z}\)-\(\phi\) relation at 41.88 kHz on the cut plane of the dispersion diagram (Fig. 1d). Satisfactory agreement is found between the eigenmode analysis of the finite lattice and the dispersion diagram. Next, we adopt mode decomposition on each of the 20 supercells along the \(z\)-direction to determine the relative strengths of all modes. Figure 2e illustrates the corresponding modal coefficients, which are normalized by the maximum coefficient at the respective values of \(z\). The bases for the mode decomposition are taken from the corresponding mass-spring model whose parameters are extracted from Fig. 1c. Since only 20 supercells are involved in the synthetic dimension, the stiffness matrix does not evolve strictly adiabatically. As a result, other bulk modes always coexist. However, the EBE mode, labeled as the 7th mode in Fig. 2e, is always dominant in
Figure 3: **Time response of the topological surface wave transport.** **a-c** Magnitudes of the displacement field at 0.5 ms, 2.5 ms and 4 ms, respectively. A 50-cycle tone burst excitation centered at 41.88 kHz is applied on the bottom supercell.
terms of modal coefficients of all the supercells, meaning that the length of the synthetic dimension is sufficiently long to approach the adiabatic limit. The consistency of the results from the wavelet transform and mode decomposition analyses validates the correctness of the WKB solution.
We conduct a transient analysis to better showcase the pumping process. In particular, the right edge mode \(\mathbf{\varphi}_{n}(0)\) at the bottom supercell is excited by using a series of piezoelectric patches, each attached on one side of each resonator (see Methods experimental protocol). The polarization directions of these piezoelectric patches are identical, while the applied voltages are distributed as \(V_{0}\mathbf{\varphi}_{n}(0)f_{z}(t)\), where \(V_{0}=1\) V denotes the voltage amplitude, and \(f_{z}(t)\) is a 50-cycle tone burst signal \(f_{z}(t)=H(50/f_{c}-t)[1-\cos(2\pi f_{c}t/50)]\sin(2\pi f_{c}t)\) (top panel of Fig. 3), with \(H(t)\) being the Heaviside function and \(f_{c}=41.88\) kHz. Figure 3a-c displays the snapshots of surface wave propagation at representative time instants. Initially, the right edge mode is excited on the bottom at \(t=0.5\) ms (Fig. 3a). As time progresses, the wave packet propagates in the synthetic dimension \(z\) and transitions into the bulk mode at \(t=2.5\) ms (see Fig. 3b). Eventually, the left edge mode is well formed on the top of the cuboid at \(t=4\) ms (Fig. 3c). The wave packet will follow the same evolution path transitioning from the left edge mode back to the right one if the transient simulation continues. A more detailed demonstration can be found in Supplementary Movies.
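For completeness, the excitation signal used above can be reproduced directly from its formula; in this minimal sketch only the sampling rate and time window are assumptions made for illustration.

```python
import numpy as np

# 50-cycle tone burst f_z(t) = H(50/f_c - t) * [1 - cos(2*pi*f_c*t/50)] * sin(2*pi*f_c*t)
f_c = 41.88e3                         # centre frequency, 41.88 kHz
fs = 50 * f_c                         # sampling rate (assumed)
t = np.arange(0.0, 5e-3, 1.0 / fs)    # 5 ms window (assumed), cf. Fig. 3

window = np.where(t <= 50.0 / f_c, 1.0, 0.0)            # Heaviside H(50/f_c - t)
f_z = window * (1.0 - np.cos(2 * np.pi * f_c * t / 50)) * np.sin(2 * np.pi * f_c * t)

print(f"burst duration {50.0 / f_c * 1e3:.3f} ms, {t.size} samples")
```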
**Robustness of topological surface wave transport.** Geometric imperfections in sample fabrication are inevitable due to the machining errors of the CNC milling process, as the minor discrepancies between simulations and measurements visible in Fig. 2a, b show. Nevertheless, the topological surface wave transport is evidently observed, thanks to some intriguing wave transport characteristics, such as robustness against geometrical impurities or defects. To illustrate this, Fig. 4a shows a lattice defect constructed by removing \(3\times 3\) pillars in the middle of the structure. The corresponding eigenfrequency of the EBE mode of the defective cuboid (41.75 kHz) is quite close to that of a perfect cuboid (41.88 kHz). The resulting spatial profile shows that the topological pumping behavior survives and the edge modes can be smoothly pumped from one side to the other despite the large-scale geometric defect. Moreover, we also consider the influence of geometrical disorder. In the sample fabrication, the machining error is about 0.02 mm for our sample. Therefore, we introduce errors that satisfy a normal distribution \(\mathcal{N}(0\ \mathrm{mm},0.02\ \mathrm{mm})\) to the dimensions of all resonators, including their lengths, heights and widths. The spatial profile of the EBE mode with disorder is shown in Fig. 4b. The eigenfrequency of the EBE mode shifts slightly, and the spatial profile agrees with that of the perfect lattice in the \(x\)-direction, indicating that the topological pumping is robust against disorder. However, in the \(z\)-direction, we see that the amplitudes of the resonators in the top part are larger than those in the bottom part, which is actually a sign of Anderson localization. Since the displacement field is harmonic along the \(z\)-direction, the disorder localizes the eigenmode towards the top, and this localization at the top is indeed observed in the experiment (see Fig. 2a).
**Application of topological wave transport as wave splitter.** The surface wave topological pumping is promising for practical applications. To show that, we design a topological split-flow device which performs robust surface wave splitting. As shown in Fig. 5a, the splitter is an assembly of two domains with opposite \(\phi\)-\(z\) distributions, separated by a domain wall (yellow in Fig.
Figure 4: **Robust topological surface wave pumping.****a** Eigenmode of the defective structure, where \(3\times 3\) unit cells of pillars are removed in the center of the structure, at 41.75 kHz. The defect is constructed by removing resonators and pillars in the dotted-line box. **b** Eigenmode of the disordered structure, where errors drawn from a random normal distribution are added to all geometric parameters, at 41.88 kHz.
Figure 5: **Topologically protected surface wave splitter.****a** Schematic of the surface wave pumping system and the corresponding phase modulation functions. **b** The displacement field distribution of the surface wave splitter. The surface wave is injected at the center of the bottom edge at the frequency \(f=\)41.88 kHz.
5a). Specifically, the upper section with 20 supercells of the left domain is designed with a linear \(\phi\) transition from \(0.6\pi\) to \(1.4\pi\), whereas that of the right domain is assigned an opposite \(\phi\) transition, i.e. from \(1.4\pi\) to \(0.6\pi\). As for the lower section with 3 supercells, \(\phi\) is kept constant at \(0.6\pi\) and \(1.4\pi\) for the left and right domains, respectively. The excitation is located in the middle of the bottom. Within the lower section of the surface wave splitter, there exists a localized interface mode. As the incident wave reaches the upper half, due to the opposite gradients of \(\phi\), the interface mode is split into two components, each following the typical EBE evolution but tracing opposite paths. Thanks to topological protection, the propagation is immune against back reflection from the discontinuity of the upper and lower sections. As such, our design, based on phason engineering and topological pumping, provides an avenue for the application of elastic surface wave beam splitters and smart patterning. In addition, our design covers the short-wavelength range such that we have the opportunity to engineer the dispersion with respect to the \(k_{z}\) quasi-momentum. This involves modulations along the vertical direction and opens up a new dimension in the design space for surface waves, which is yet to be explored.
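The phason profiles driving the splitter can be written down explicitly; the short sketch below encodes the constant lower section (3 supercells) and the two opposite linear ramps of the upper section (20 supercells) described above, with the exact discretisation of the ramp being an assumption made here.

```python
import numpy as np

# Phason profiles of the two domains of the surface wave splitter:
# 3 constant supercells followed by 20 supercells with opposite linear
# ramps between 0.6*pi and 1.4*pi (left domain up, right domain down).
N_low, N_up = 3, 20
ramp = np.linspace(0.6 * np.pi, 1.4 * np.pi, N_up)

phi_left = np.concatenate([np.full(N_low, 0.6 * np.pi), ramp])
phi_right = np.concatenate([np.full(N_low, 1.4 * np.pi), ramp[::-1]])

print(np.round(phi_left / np.pi, 2))    # in units of pi
print(np.round(phi_right / np.pi, 2))
```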
## Discussion
In conclusion, we have evidenced topological surface wave transport in modulated phononic crystals through edge-to-edge topological pumping associated with the 2D quantum Hall effect by the physical rendering of synthetic spaces. These observations imply that the system is characterized by a non-zero Chern number and therefore the topological pumping is immune to bulk scattering and exhibits strong protection against design imperfections. The modulated phononic crystals with synthetic spaces offer a platform and route for efficient surface wave topological mode transport by engineering desired patterns on a phason torus in the finite structure. The phason space augments the physical space and opens a door to higher-dimensional physics in acoustics and mechanics. Although we focused on the elastic implementation using synthetic spaces, our approach can be generalized to other degrees of freedom; for example, additional frequency dimensions can also be harnessed through frequency modulation. Going forward, it will be important to develop and explore such broader connections, as the idea of topological matter in synthetic dimensions is very general and the extension of this approach to other complex orbits is much awaited. Lastly, we emphasize that, in order to achieve a reasonable adiabatic regime, the number of chains in our experimental setup is appreciable and, whereas this is perfectly fine for demonstration purposes, it could be an obstacle for practical applications. It will be interesting to explore whether this strategy can be deployed for our phononic crystals in order to reduce the number of chains needed for the topological pumping of surface waves.
## Methods
### Sample preparation
The experimental sample, made of aluminum with Young's modulus \(E=69\) GPa, Poisson's ratio \(\nu=0.33\), and density \(\rho=2700\) kg/m\({}^{3}\), is fabricated using a computer numerical control (CNC) milling machine with a manufacturing precision of 0.02 mm. It consists of an array of resonators (6 mm \(\times\) 3.5 mm \(\times\) 10 mm), with 9 along the \(x\)-direction and 20 along the \(z\)-direction, which are integrated with a cuboid (\(150\text{ mm}\times 50\text{ mm}\times 200\text{ mm}\)). For convenience, each resonator has an address \((i,j)\). In the \(x\)-direction, resonators \((i,j)\) and \((i+1,j)\) are connected by height-modulated pillars (\(4\text{ mm}\times 1.5\text{ mm}\times h_{ij}\)) which satisfy the protocol \(h_{ij}=h_{0}[1+\Delta_{0}\cos(2\pi i/3+\phi_{j})]\), where \(h_{0}=7\text{ mm}\) is the average thickness of the horizontal channels and \(\Delta_{0}=0.15\) is the modulation amplitude. Besides, \(\phi_{j}=\phi_{s}+(\phi_{f}-\phi_{s})j/N\), where \(\phi_{s}=0.6\pi,\phi_{f}=1.4\pi,N=20\). In the \(z\)-direction, all resonators are connected by pillars of the same size (\(2\text{ mm}\times 6.5\text{ mm}\times 3.8\text{ mm}\)).
### Experimental protocol
In the experiment, the sample is supported by four points to mimic the free boundary condition. A piezoelectric ceramic patch (PZT) is attached to the right side of the cuboid to excite the target eigenmode state. A wide-spectrum pseudo-random excitation within the probing range of \(20-50\) kHz is generated by a Tektronix AFG3051C arbitrary waveform generator and amplified by a Krohn-Hite high-voltage power amplifier, which is finally applied across the PZT source. A 1D scanning laser Doppler vibrometer (SLDV, Polytec PSV-500) is used to measure the vibration velocity of resonators in the \(y\)-direction, where high-gain reflective tape is stuck on the surface of each resonator to enhance the reflection of the laser. The velocity signal from the vibrometer is further recorded by the PSV-500 data acquisition system. Note that the experiment is repeated and averaged 5 times on each resonator of the system to filter out part of the noise. The normalized amplitude spectrum obtained by applying the Fourier transform to the time-domain signals collected at the resonator \((2,18)\) is shown in Fig. 2c. A series of resonant peaks are observed in the frequency range. By checking the mode shape of each resonance peak in the frequency spectrum, the EBE state corresponding to the frequency of 42.45 kHz is identified. Moreover, a full field measurement at 42.45 kHz is conducted by exciting the system with a 200-cycle sine burst, and the same EBE state is measured.
### Numerical simulations
The full-wave finite-element method simulations in this work are all performed using the commercial software COMSOL Multiphysics. The material of the 3D structure is Aluminum [solid] from the COMSOL Material Library. Eigenfrequency analysis within the "Solid Mechanics" module is carried out to calculate the eigenfrequencies and eigenmodes of the unit cell, supercell, and cuboid. The boundary conditions for all the cases are set as free boundary conditions, except for Floquet periodicity boundary conditions of the unit cell along the \(x\)- and \(z\)-directions, and of the supercell along the \(z\)-direction. For the transient analysis in Fig. 3, a time-dependent analysis in the "Solid Mechanics" module is used. Piezoelectric patches (PZT-5H in the COMSOL Material Library) are attached on one side of each resonator of the bottom supercell. The polarization directions of these piezoelectric patches are identical, while the applied voltages are distributed as \(V\mathbf{\varphi}_{n}(0)f_{z}(t)\).
###### Acknowledgements.
This work is supported by the Air Force Office of Scientific Research under Grant No. AF 9550-18-1-0342 and AF 9550-20-1-0279 with Program Manager Dr. Byung-Lip (Les) Lee, the Army Research Office under Grant No. W911NF-18-1-0031 with Program Manager Dr. Daniel P Cole, and the NSF CMMI under Award No. 1930873. Rui Zhu acknowledges support from the National Natural Science Foundation of China (NSFC) under Grant No. 11991033. Emil Prodan acknowledges support from the U.S. National Science Foundation through the grants DMR-1823800 and CMMI-2131760.
|
2310.04029 | Splitting unramified Brauer classes by abelian torsors and the
period-index problem | We use twisted relative Picard varieties to split Brauer classes on
projective varieties over algebraically closed fields by torsors for a fixed
abelian scheme independent of the Brauer class. The construction is also used
to prove that the index of an unramified Brauer class divides a fixed power of
its period. | Daniel Huybrechts, Dominique Mattei | 2023-10-06T05:45:49Z | http://arxiv.org/abs/2310.04029v2 | # Splitting unramified Brauer classes by abelian torsors and the period-index problem
###### Abstract.
We use twisted relative Picard varieties to split Brauer classes on projective varieties over algebraically closed fields by torsors for a fixed abelian scheme independent of the Brauer class. The construction is also used to prove that the index of an unramified Brauer class divides a fixed power of its period.
The authors are supported by the ERC Synergy Grant HyperK (ID 854361).
## 1. Introduction
We begin by stating the two main results of the paper. For motivation and further comments on the history we refer to Sections 1.3 & 1.4.
Our first theorem shows that Brauer classes on a projective variety can be split after pull-back to torsors for abelian varieties, its proof will be explained in Section 3 and concluded in Section 3.5.
**Theorem 1.1**.: _Let \(X\) be an integral projective variety over an algebraically closed field \(k\) and let \(K(X)\) be its function field. Then there exists an abelian variety \(A\) over \(K(X)\) such that for every Brauer class \(\alpha\in\operatorname{Br}(X)\) one finds an \(A\)-torsor \(B_{\alpha}\) that splits \(\alpha\)._
In other words, for every \(\alpha\in\operatorname{Br}(X)\) the composition \(\operatorname{Br}(X)\rTo\operatorname{Br}(K(X))\rTo\operatorname{Br}(B_{\alpha})\)
maps \(\alpha\) to the trivial Brauer class on \(B_{\alpha}\). Thus, the theorem shows how to split unramified Brauer classes over the function field \(K(X)\) by passing to torsors for a fixed abelian variety.
We emphasise that our construction produces an abelian variety \(A\) that is independent of the Brauer class \(\alpha\), but the torsor \(B_{\alpha}\) does depend on \(\alpha\). However, as all the \(B_{\alpha}\) are torsors for the same abelian variety \(A\), their dimensions are independent of \(\alpha\). Let us also point out that typically \(A\) and \(B_{\alpha}\) are not unique and may even come in families.
The second main result concerns the period-index problem. Recall that the period of a central simple algebra \(A\) over a field \(K\) is by definition the order \(\operatorname{per}(A)=\operatorname{per}(\alpha)=|\alpha|\) of its class \(\alpha=[A]\in\operatorname{Br}(K)\) in the Brauer group of \(K\). Its index is defined as \(\operatorname{ind}(A)=\operatorname{ind}(\alpha)=\dim(D)^{1/2}\), where \(D\) is the unique division algebra such that \(A\simeq M_{n}(D)\) for some \(n\). It is
that Theorem 1.1 is the best result in this direction one can hope for in general. Note however that Antieau and Auel [2, Thm. C] show that every cyclic Brauer class over a field containing an algebraically closed field is split by a torsor for an elliptic curve.
Theorem 1.1 can be seen as an improvement of a result by Ho and Lieblich [20] (it essentially answers their Question 6.0.1 for unramified classes) and of one by Antieau and Auel [2, Thm. 3.18]. More concretely, in [20] it is shown that for a fixed field \(K\) there exists for every class \(\alpha\in\operatorname{Br}(K)\) of index \(\not\equiv 2\left(4\right)\) a smooth projective curve \(C_{\alpha}\) such that \(\alpha\) is split by its Albanese \(\operatorname{Alb}(C_{\alpha})=\operatorname{Pic}^{1}(C_{\alpha})\), a torsor for the abelian variety \(\operatorname{Pic}^{0}(C_{\alpha})\). If the index is \(\equiv 2(4)\), then \(\alpha\) is split by \(C^{\prime}\times\operatorname{Pic}^{1}(C_{\alpha})\), where \(C^{\prime}\) is an additional curve of genus one. In both cases, the curve \(C_{\alpha}\) itself is obtained by taking complete intersections in a Brauer-Severi variety representing \(\alpha\) as described above. In particular, its genus and hence the dimension of \(\operatorname{Pic}^{1}(C_{\alpha})\) depend on the index of \(\alpha\) and cannot be bounded. As was explained to us by Auel, splitting by torsors for abelian varieties can also be achieved by using the Merkurjev-Suslin theorem [29], cf. [19, Ch. 8], which shows that \(\operatorname{Br}(K(X))\) is generated by cyclic classes. Combined with a result by Matzri [28] bounding the symbol length of Brauer classes over function fields, the dimension of the abelian variety can again be bounded in terms of the period of the Brauer class. The argument in [2, SS3] uses instead a result of Raynaud about realizing finite abelian group schemes as subschemes of abelian schemes.
In [20] one finds further questions about splitting Brauer classes. Torsors for elliptic curves and abelian varieties seem geometrically the most interesting cases, but one could try to use K3 surfaces or Calabi-Yau varieties just as well. We have nothing to say about those.
### Period-index problem
The period-index problem, asking for an exponent \(e_{\alpha}\) that is independent of \(\alpha\), has a long history. For number fields, it has been proved by Albert, Brauer, Hasse, and Noether [1, 6] that \(\operatorname{per}(\alpha)=\operatorname{ind}(\alpha)\), i.e. \(e_{\alpha}=1\), and the same holds for function fields of transcendence degree one over finite fields.
In the geometric situation, where \(K\) is the function field of a curve over an algebraically closed field, the question is vacuous by Tsen's theorem. For surfaces, de Jong [15] proved \(\operatorname{per}(\alpha)=\operatorname{ind}(\alpha)\), so again \(e_{\alpha}=1\), for Brauer classes \(\alpha\) with \(\operatorname{per}(\alpha)\) prime to the characteristic of the ground field, so in particular for all classes in characteristic zero. Again de Jong [17] and Lieblich [26, 27] later extended the result to all classes. Already in [12], Colliot-Thelene proposed the bound \(e_{\alpha}=\dim(X)-1\) in arbitrary dimensions, see also [13, SS2.4].
In [18, Conj. 1.2], de Jong and Perry formulated a weaker conjecture that asks for a bound \(e=e_{\alpha}\) that only depends on the variety \(X\) and not on the (unramified) Brauer class \(\alpha\). Moreover, they proved the existence of such a uniform bound under the assumption that the Lefschetz standard conjecture holds true in degree two. Theorem 1.2 proves their conjecture unconditionally.
It is known that techniques developed by de Jong and Starr [17, 33] allow one to deduce the conjecture \(e_{\alpha}=\dim(X)-1\) for all Brauer classes from the case of unramified ones. Unfortunately, as our exponent depends on the geometry of \(X\) and not only on its dimension, this argument does not work, which limits our approach to the case of unramified Brauer classes.
The proof of Theorem 1.2 relies on the construction of a module \(V_{\alpha}\) of bounded dimension over the Azumaya algebra \(A\) representing \(\alpha\). This module \(V_{\alpha}\) is eventually constructed as a space of theta functions on the twisted Picard variety \(B_{\alpha}\), see (4.1) in Section 4.1.
After the first version of our paper appeared on the arXiv, an alternative argument for Theorem 1.2 was communicated to us by M. Lieblich and also by B. Antieau (coming out of an independent discussion with A. Auel). Colliot-Thelene informed us that D. Krashen has also independently found a proof of the existence of a uniform bound.
The geometric setup is the same. The tensor product \(E\mapsto E^{\otimes\operatorname{per}(\alpha)}\) is used to define a morphism from \(\operatorname{Pic}^{d}_{\alpha}(C)\) to an untwisted Picard variety. The universal sheaf \(\mathcal{P}\) is then a module over
the pull-back \(\mathcal{A}^{\prime}\) of \(\mathcal{A}\) under the projection to \(X\). By definition of the twisted Picard variety as a moduli space of certain sheaves, the rank of \(\mathcal{P}\) is \(d(\mathcal{A})\coloneqq\sqrt{\operatorname{rk}(\mathcal{A})}\). This implies that the pull-back of \(\alpha\) is trivial, for \(\mathcal{A}^{\prime}\simeq\mathcal{E}\mathsf{nd}(\mathcal{P})\) on a dense Zariski open subset.
There are various ways of dealing with moduli spaces in a twisted situation. Each of them involves an additional choice, e.g. of a Cech cocycle, an Azumaya algebra, a Brauer-Severi variety, or a \(\mu_{n}\)-gerbe, representing the fixed Brauer class. The moduli space is then constructed as a space of sheaves twisted with respect to the Cech cocycle, of modules over the Azumaya algebra, of sheaves on the Brauer-Severi variety, or of invertible sheaves of weight one on a \(\mu_{n}\)-gerbe. We decided to represent the Brauer class by an Azumaya algebra but every other choice would have worked just as well.
In the end the triviality of the pull-back \(\alpha^{\prime}\) of \(\alpha\) follows from the existence of a locally free \(\alpha^{\prime}\)-twisted sheaf of rank one or, in terms of Azumaya algebras, from the existence of a locally free \(\mathcal{A}^{\prime}\)-module of rank \(d(\mathcal{A})\), where \(\mathcal{A}\) represents \(\alpha\) and \(\mathcal{A}^{\prime}\) is its pull-back.
**Acknowledgements:** We wish to thank Asher Auel for inspiration and for help with the literature, as well as for comments on the first version of the paper. We are also grateful to Evgeny Shinder for critical comments on the first draft and to Gebhard Martin for interesting questions. The first author gratefully acknowledges the hospitality of the ITS-ETH Zurich during his stay in the spring of 2023.
## 2. Warmup: Elliptic curves on K3 surfaces
Before presenting the general proof of Theorem 1.1, we discuss the special case of elliptic K3 surfaces. It is the first instance where universal sheaves are seen to split Brauer classes and it is instructive to study this case first before dealing with the general situation in the next section. As it seems more customary, we here work with twisted sheaves instead of modules over Azumaya algebras, but this does not affect the essence of the argument.
We will find that for elliptic surfaces with a section all Brauer classes are split by a genus one fibration (over a fixed elliptic curve) and so Clark's original question has an affirmative answer in this case, see Proposition 2.1. For elliptic K3 surfaces without a section, only Brauer classes of an order coprime to the multi-section index of the elliptic fibration can be split by the same method, see Proposition 2.3.
We will conclude this section by discussing families of (singular) K3 surfaces. Something can still be said but the result is less compelling, see Proposition 2.4.
In fact, the discussion in this section applies to elliptic surfaces, with and without a section, that are not necessarily K3 surfaces. However, for simplicity and as the main purpose of this section is to illustrate the proof in the general setting, we stick to K3 surfaces.
### Elliptic K3 surfaces with a section
Consider an elliptic K3 surface \(S_{0}\rTo\mathbb{P}^{1}\) with a section and a class \(\alpha\in\Sha(S_{0}/\mathbb{P}^{1})\). By definition of the Tate-Safarevic group \(\Sha(S_{0}/\mathbb{P}^{1})\), the class \(\alpha\) corresponds to an elliptic K3 surface \(S\rTo\mathbb{P}^{1}\) (without a section) together with an isomorphism \(\Pic^{0}(S/\mathbb{P}^{1})\simeq S_{0}\) relative over \(\mathbb{P}^{1}\). Here, \(\Pic^{0}(S/\mathbb{P}^{1})\) denotes the minimal smooth compactification of the Jacobian fibration of the family of smooth fibres of \(S\rTo\mathbb{P}^{1}\). Such a compactification is provided by the moduli space \(M(0,f,0)\) of stable sheaves on \(S\) with Mukai vector \((0,f,0)\), where \(f\) is the class of a fibre, cf. [23, Ch. 11]. However, such a moduli space is not fine, i.e. there is no universal family on \(S_{0}\times S\).
The obstruction for a universal family to exist is a class in \(\Br(S_{0})\). In fact, this class is nothing but \(\alpha\) itself under the well-known isomorphism \(\Sha(S_{0}/\mathbb{P}^{1})\simeq\Br(S_{0})\). To be more precise, the obstruction class in \(\Br(S_{0})\) is the class of the Cech cocycle that comes up naturally when we try to glue the local (etale or analytic) universal families, which always exist. This point of view also shows that a universal family \(\mathcal{P}\) exists as a twisted sheaf \(\mathcal{P}\) on \(S\times S_{0}\), where the twist is with respect to the pull-back of the cocycle on \(S_{0}=\Pic^{0}(S/\mathbb{P}^{1})\) representing the obstruction class \(\alpha\). The construction was systematically studied by Caldararu [9].
By definition of the moduli space, the \((1\times\alpha)\)-twisted sheaf \(\mathcal{P}\) on \(S\times S_{0}\) has support on the closed subscheme \(S\times_{\mathbb{P}^{1}}S_{0}\subset S\times S_{0}\). Furthermore, there it is of rank one. Hence, on a non-empty Zariski open subset \(V\subset S\times_{\mathbb{P}^{1}}S_{0}\), the twisted sheaf \(\mathcal{P}\) is locally free of rank one which implies that \((1\times\alpha)|_{V}\) is a trivial Brauer class.
The generic fibre of the second projection is a smooth curve of genus one. More precisely, it is the generic fibre \(S_{\zeta}\) of \(S\rTo\mathbb{P}^{1}\) base changed to the generic point \(\eta\in S_{0}\) which, of course, maps to \(\zeta\) under the projection. We have proved the following result.
**Proposition 2.1**.: _Let \(S_{0}\rTo\mathbb{P}^{1}\) be an elliptic K3 surface with a section and let \(\alpha\in\Br(S_{0})\). Then there exists a genus one curve \(C_{\alpha}\) over its function field \(K(S_{0})\) such that the image of \(\alpha\) under the composition \(\Br(S_{0})\rTo\Br(K(S_{0}))\rTo\Br(C_{\alpha})\) is trivial. _
We can be more precise about the curve \(C_{\alpha}\). By construction, it is the generic fibre of the second projection \(S\times_{\mathbb{P}^{1}}S_{0}\rTo S_{0}\), which is the base change to \(K(S_{0})\) of the moduli space of line bundles on the generic fibre of \(S_{0}\rTo\mathbb{P}^{1}\), twisted with respect to the restriction of \(\alpha\). But this moduli space is naturally a torsor for \(\Pic^{0}\) of the generic fibre of \(S_{0}\rTo\mathbb{P}^{1}\), where the action is given by tensor product.
Since \(S_{0}\rTo\mathbb{P}^{1}\) has a section, its relative \(\Pic^{0}\) is just \(S_{0}\rTo\mathbb{P}^{1}\) itself. Hence, all the genus one curves \(C_{\alpha}\) in Proposition 2.1 are torsors for the same elliptic curve, namely the base change to \(K(S_{0})\) of the generic fibre of \(S_{0}\rTo\mathbb{P}^{1}\).
### Elliptic K3 surfaces without a section
Since in the above discussion, the K3 surface \(S\) could be seen as a certain moduli space \(\Pic^{d}_{\alpha}(S_{0}/\mathbb{P}^{1})\) of twisted sheaves on the fibres of \(S_{0}\rTo\mathbb{P}^{1}\), cf. [24], one could try to extend the argument to elliptic K3 surfaces without a section
by simply considering \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\). In [24] we explain why the notation \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\) only makes sense when a lift of \(\alpha\) to an element in the special Brauer group \(\operatorname{SBr}(S_{0})\) is chosen: Only then the degree \(d\) is well defined. This subtlety is of no importance in the discussion here and we will just ignore it, but see Section 3.2.
So the idea would be to use a universal family \(\mathcal{P}\) on \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\times_{\mathbb{P}^{1}}S_{0}\) to argue that the pull-back of \(\alpha\) under the second projection has to be trivial. And indeed, by definition of \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\) as a moduli space of \(\alpha\)-twisted sheaves, \(\mathcal{P}\) would be twisted with respect to \(\alpha\) on the second factor.
However, there is no guarantee that such a universal family \(\mathcal{P}\) exists as a \((1\times\alpha)\)-twisted sheaf, i.e. that the moduli space \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\) is a fine moduli space of twisted sheaves on \(S_{0}\). In other words, \(\mathcal{P}\) might only exist as a sheaf that is not only twisted by \(\alpha\) on the second factor but also by some obstruction class on the first. If this is the case, the argument breaks down and we cannot conclude the triviality of the pull-back of \(\alpha\).
But something can be salvaged from this approach by exploiting a certain flexibility in the choice of \(d\) (after choosing a lift of \(\alpha\) to a class in the special Brauer group). For this we make use of a well-known criterion for \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})=M_{\alpha}(0,f,d)\) to be a fine moduli space: It suffices to find an \(\alpha\)-twisted locally free sheaf \(E\) such that \(\chi(F\otimes E^{*})=1\) for \(F\in\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\). See [22, Ch. 4.6] for the untwisted case, the proof in the twisted case is identical.
**Lemma 2.2**.: _Assume the order of \(\alpha\in\operatorname{Br}(S_{0})\) is coprime to the generator \(m\) of \((\operatorname{NS}(S_{0}).f)\). Then there exists a (twisted) degree \(d\in\mathbb{Q}\) with \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\) non-empty and an \(\alpha\)-twisted locally free sheaf \(E\) on \(S\) such that \(\chi(F\otimes E^{*})=1\) for every \(F\) parametrised by \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\). In particular, the moduli space \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\) of \(\alpha\)-twisted sheaves on \(S_{0}\) is fine._
Proof.: We start with any \(\alpha\)-twisted line bundle \(F\) supported on a smooth fibre \(f\) of \(S_{0}\rTo\mathbb{P}^{1}\) that defines a point in some non-empty moduli space \(\operatorname{Pic}_{\alpha}^{d}(S_{0}/\mathbb{P}^{1})\). Let us also fix some \(\alpha\)-twisted locally free sheaf \(E\) on \(S_{0}\) of rank \(|\alpha|\). The existence of the latter is guaranteed by de Jong's solution [15] of the period-index problem for algebraic surfaces.
We will now modify \(E\), keeping it \(\alpha\)-twisted and locally free of rank \(|\alpha|\), and \(F\), possibly changing \(d\) in the process, such that eventually \(\chi(F\otimes E^{*})=1\).
Observe first that the kernel \(F^{\prime}\) of any surjection \(F\twoheadrightarrow k(x)\) onto the residue field of a closed point \(x\in f\) defines again an \(\alpha\)-twisted line bundle on \(f\), which satisfies \(\chi(F^{\prime}\otimes E^{*})=\chi(F\otimes E^{*})-|\alpha|\). Second, we pick a curve \(C\subset S_{0}\) with \((C.f)=m\) and consider an elementary transformation \(E^{\prime}\) of \(E\) along \(C\), i.e. \(E^{\prime}\) is the \(\alpha\)-twisted locally free sheaf given as the kernel of some surjection \(E\twoheadrightarrow\mathcal{L}\) onto an \(\alpha\)-twisted line bundle \(\mathcal{L}\) on \(C\). Then \(\chi(F\otimes E^{\prime*})=\chi(F\otimes E^{*})-m\).
Since \(|\alpha|\) and \(m\) are coprime, applying the procedure multiple times eventually provides us with \(E\) and \(F\) as claimed.
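For instance, purely numerically: if \(|\alpha|=5\), \(m=3\) and the initial twisted line bundle satisfies \(\chi(F\otimes E^{*})=12\) (hypothetical values chosen only for illustration), then one modification of \(F\) and two elementary transformations of \(E\) along \(C\) give

\[12-1\cdot|\alpha|-2\cdot m=12-5-6=1,\]

and coprimality of \(|\alpha|\) and \(m\) guarantees that, possibly after changing the degree \(d\), such a combination reaching \(\chi=1\) can always be found.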
Note that in particular we reprove the assertion in the case that \(S_{0}\rTo\mathbb{P}^{1}\) has a section, for then \((\operatorname{NS}(S_{0}).f)=\mathbb{Z}\) and hence \(m=1\). If there is no section, then the discussion eventually leads to the following generalisation of Proposition 2.1.
**Proposition 2.3**.: _Let \(S_{0}\rTo\mathbb{P}^{1}\) be an elliptic K3 surface and let \(m\in\mathbb{Z}\) be such that \(m\cdot\mathbb{Z}=(\operatorname{NS}(S_{0}).f)\) for the fibre class \(f\). Then one finds an elliptic curve \(E\) over \(K(S_{0})\) such that for every \(\alpha\in\operatorname{Br}(S_{0})\) of order \(|\alpha|\) coprime to \(m\) there exists a torsor \(C_{\alpha}\) for \(E\) over \(K(S_{0})\) such that the image of \(\alpha\) under \(\operatorname{Br}(S_{0})\rTo\operatorname{Br}(K(S_{0}))\rTo\operatorname{Br}(C_{\alpha})\) is trivial. _
### Families of elliptic curves on arbitrary K3 surfaces
Every K3 surface admits a covering family of elliptic curves. Does this mean that Brauer classes on arbitrary K3 surfaces are split by curves of genus one? Unfortunately, our approach fails in this generality, but something can still be said.
For any K3 surface \(S\) there exists a one-dimensional smooth family of curves of genus one \(\mathcal{C}\rTo T\) with a dominant map \(\mathcal{C}\rTo S\). Following the same strategy as in the last two sections, one considers \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/T)\times_{T}\mathcal{C}\rTo\mathcal{C}\rTo S\). As soon as \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/T)\) is a fine moduli space, which can again be phrased as a numerical condition on \(|\alpha|\) being coprime to a fixed integer \(m\), there exists a universal family of \(\alpha\)-twisted sheaves on \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/T)\times_{T}\mathcal{C}\). This, as before, implies that the pull-back of \(\alpha\) is trivial. Now,
\(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/T)\times_{T}\mathcal{C}\rTo\mathcal{C}\) is a family of curves of genus one, generically a torsor for \(\operatorname{Pic}^{0}(\mathcal{C}/T)\times_{T}\mathcal{C}\rTo\mathcal{C}\).
The difference to the case of elliptic K3 surfaces, with or without a section, is that the projection \(\mathcal{C}\rTo S\) is typically not birational. In other words, there often exists more than one elliptic curve passing through a generic point in \(S\). Thus, by this method, one only proves the following.
**Proposition 2.4**.: _Let \(S\) be a complex projective K3 surface. Then there exists an integer \(m>0\), a finite extension \(K^{\prime}/K(S)\), and an elliptic curve \(E\) over \(K^{\prime}\) such that for every Brauer class \(\alpha\in\operatorname{Br}(S)\) of order \(|\alpha|\) coprime to \(m\), one finds a torsor \(C_{\alpha}\) for \(E\) over \(K^{\prime}\) such that under \(\operatorname{Br}(S)\rTo\operatorname{Br}(K^{\prime})\rTo\operatorname{Br}(C_{\alpha})\) the image of \(\alpha\) is trivial. _
Note that any Brauer class splits under some finite extension of \(K(S)\). So the interest of the last proposition stems from the fact that \(K^{\prime}/K(S)\) is a fixed finite extension that works for all Brauer classes (of order coprime to \(m\)).
**Remark 2.5**.: Apart from elliptic surfaces, we do not know of any other types of surfaces for which covering families of (singular) elliptic curves have been studied, but the techniques here would certainly also apply to those and prove splitting of Brauer classes by genus one curves over a fixed finite extension of the function field. See also Remark 3.3.
### Period-index problem for elliptic K3 surfaces with a section
We now outline the proof of Theorem 1.2 in the case of elliptic K3 surfaces with a section, which might again serve as an illustration of the argument in the general situation. Here, we use the language of twisted sheaves while in the actual proof in Section 4 we employ sheaves over Azumaya algebras.
As in Section 2.1, we assume that \(\pi\colon S_{0}\rTo\mathbb{P}^{1}\) is an elliptic K3 surface with a section and that \(S\rTo\mathbb{P}^{1}\) is the elliptic K3 surface corresponding to a fixed class \(\alpha\in\operatorname{Br}(S_{0})\simeq\Sha(S_{0}/\mathbb{P}^{1})\).
After introducing the setup, Section 3.2 proves a version of Theorem 1.1, namely the existence of \(A\) and \(B_{\alpha}\) not over the function field of \(X\), but over some universal extension of it obtained by the family of complete intersection curves. In fact, the argument is explained first under the additional assumption that the moduli spaces used in the proof are fine. In Section 3.4, we then show how to turn this assumption into a numerical condition on the period of the Brauer class and in Section 3.5 how to avoid it altogether. Section 3.3 explains how the universal family of all complete intersection curves is cut down to a family that dominates \(X\) birationally.
### Setting: Complete intersection curves
We now set the stage for the proof of Theorem 1.1. So we let \(X\) be an integral projective variety of dimension \(n\) over an algebraically closed field \(k\). Passing to its normalisation, we may assume from the start that \(X\) itself is normal. In characteristic zero we can even assume that \(X\) is smooth, but we will not need this.
By replacing \(X\) by its blow-up in a smooth closed point \(x\in X\) and letting \(Y\) be the exceptional divisor, we can assume that there exists one (smooth) hypersurface \(Y\subset X_{\mathrm{sm}}\) to which all Azumaya algebras \(\mathcal{A}\) on \(X\) restrict trivially or, even stronger, such that the Brauer group of \(Y\) is trivial.
Next, we fix a very ample linear system \(|h|\simeq\mathbb{P}_{k}^{N}\) on \(X\) and produce a family of complete intersection curves

\[\mathcal{C}\to U\]

of smooth curves \(\mathcal{C}_{t}\subset X\) parametrised by an open dense subset \(U\) of the Grassmannian \(\mathbb{G}(n-2,|h|)=\mathrm{Gr}(n-1,H^{0}(X,\mathcal{O}(h)))\). Later it will be convenient to assume that all curves parametrised by \(U\) are contained in the smooth part of \(X\), for which we will need the normality of \(X\). Also note that we may assume \(\dim(X)\geq 2\), so that we really can talk about a family of curves contained in \(X\). Indeed, the Brauer group of a smooth projective curve over an algebraically closed field is trivial.
The relative Picard scheme \(\operatorname{Pic}^{0}(\mathcal{C}/U)\to U\) of the family will be central for our discussion and eventually leads to the abelian variety \(A\).
### Twisted relative Picard variety
For any class \(\alpha\in\mathrm{SBr}(X)\) in the special Brauer group and any \(d\in\mathbb{Q}\), we can consider the relative twisted Picard variety, cf. [24],
\[\mathrm{Pic}^{d}_{\alpha}(\mathcal{C}/U)\to U,\]
which, if not empty, is a torsor for the Picard scheme \(\operatorname{Pic}^{0}(\mathcal{C}/U)\to U\).
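For orientation, the torsor structure is the standard one (recalled here only for convenience): a degree zero line bundle acts on a twisted line bundle by tensor product,

\[\operatorname{Pic}^{0}(\mathcal{C}/U)\times_{U}\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}/U)\to\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}/U),\qquad(L,F)\mapsto L\otimes F,\]

which fibrewise is simply tensoring an \(\alpha|_{C}\)-twisted line bundle of twisted degree \(d\) by an honest line bundle of degree zero.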
Assume that \(d\) can be chosen such that \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\) is a fine moduli space, i.e. that there exists a universal sheaf \(\mathcal{P}\) on \(\mathcal{C}\times_{U}\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\). By definition, \(\mathcal{P}\) is a module over the pull-back \(\mathcal{A}^{\prime}\) of \(\mathcal{A}\) via the projection
\[\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U}\mathcal{C}\to\mathcal{C}\to X. \tag{3.2}\]
The class \(\alpha^{\prime}\in\operatorname{SBr}(\mathcal{C}\times_{U}\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U))\) of \(\mathcal{A}^{\prime}\) is then the pull-back of \(\alpha\), i.e. its image under

\[\operatorname{SBr}(X)\to\operatorname{SBr}(\mathcal{C})\to\operatorname{SBr}(\mathcal{C}\times_{U}\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)).\]
We will use the same notation for the corresponding classes in the Brauer groups \(\operatorname{Br}(X)\), \(\operatorname{Br}(\mathcal{C})\), and \(\operatorname{Br}(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U} \mathcal{C})\).
Note that the restriction of \(\mathcal{P}\) to \([F]\times C\subset\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U}\mathcal{C}\) is the sheaf \(F\) on \(C\). Thus, \(\mathcal{P}\) is a locally free \(\mathcal{A}^{\prime}\)-module of rank \(d(\mathcal{A}^{\prime})\) and, therefore, the natural map \(\mathcal{A}^{\prime}\to\operatorname{\mathcal{E}nd}(\mathcal{P})\) is an isomorphism. Hence, the image \(\alpha^{\prime}=[\mathcal{A}^{\prime}]\in\operatorname{Br}(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U}\mathcal{C})\) of \(\alpha\in\operatorname{Br}(X)\) is trivial.
In other words, the pull-back \(\alpha_{\mathcal{C}}\in\operatorname{Br}(\mathcal{C})\) is split by \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U}\mathcal{C}\to\mathcal{C}\).

### A family dominating \(X\) birationally

To cut the universal family down to a family of curves that dominates \(X\) birationally, choose a generic linear subspace \(P\subset\mathbb{P}^{N}\) of dimension \(N-n\). Linear projection from \(P\) defines a morphism

\[\mathrm{Bl}_{X\cap P}(X)\to\mathbb{P}^{n-1} \tag{3.3}\]
from the blow-up of \(X\) in the points of intersection \(\{x_{1},\dots,x_{d}\}=X\cap P\), where \(d=\deg(X)\). The fibre of (3.3) over a point \(t\in\mathbb{P}^{n-1}\) is the complete intersection curve \(X\cap\overline{Pt}\). Restricting to those points with smooth fibres leads to

\[\mathcal{C}_{0}\to U_{0} \tag{3.4}\]

with \(U_{0}\subset\mathbb{P}^{n-1}\) open.
In more invariant terms, the situation can be described by \(P=\mathbb{P}(W)\subset\mathbb{P}^{N}=\mathbb{P}(V)\) and \(\mathbb{P}^{n-1}=\mathbb{P}(W_{0})\), with \(V=H^{0}(X,\mathcal{O}(h))^{*}\) of dimension \(N+1\), a generic linear subspace \(W\subset V\) of dimension \(N-n+1\), and a subspace \(W_{0}\subset V\) of dimension \(n\) for which the projection \(W_{0}\to V\to V/W\) is an isomorphism. Mapping a point \(t=[v]\in\mathbb{P}(W_{0})\) to the subspace of all sections \(s\in V^{*}=H^{0}(X,\mathcal{O}(h))\) vanishing on the subspace spanned by \(W\) and \(v\in V\) defines an embedding
\[\mathbb{P}^{n-1}=\mathbb{P}(W_{0})\hookrightarrow\mathbb{G}(n-1,|h|).\]
We can think of the family (3.4) as the pull-back of the universal family \(\mathcal{C}\to U\) under this embedding.
What has been achieved by this construction is that the family (3.4) of curves in \(X\) dominates \(X\) birationally, i.e. there exists exactly one curve \(\mathcal{C}_{t}\) passing through the generic point of \(X\). In particular, the function fields of \(\mathcal{C}_{0}\) and \(X\) can be identified: \(K(\mathcal{C}_{0})=k(\eta_{\mathcal{C}_{0}})=k(\eta_{X})=K(X)\). Another feature of the construction is that the projection \(\mathcal{C}_{0}\to U_{0}\) admits a section. Indeed, each of the exceptional divisors \(E_{i}\subset\operatorname{Bl}_{\{x_{i}\}}(X)\) over the point \(x_{i}\in X\) defines one.
**Remark 3.2**.: The idea to use families of complete intersection curves to study Brauer classes is not new. For example, Colliot-Thélène [11, Lem. 1] proves the existence of (3.4) and Lieblich [26, Lem. 4.2.1.1] shows how to relax the assumption on the ground field.
The restriction of (3.2) to \(\mathcal{C}_{0}\), which is nothing but
\[\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\times_{U_{0}}\mathcal{C}_{0}\to\mathcal{C}_{0}\to X, \tag{3.5}\]
can now be viewed as a torsor for the abelian scheme
\[\operatorname{Pic}^{0}(\mathcal{C}_{0}/U_{0})\times_{U_{0}}\mathcal{C}_{0}\to\mathcal{C}_{0}. \tag{3.6}\]
Since the projection is a birational morphism, the generic fibres of (3.5) and of (3.6), which we denote by \(B_{\alpha}\) resp. \(A\), can both be viewed as varieties over \(k(\eta_{X})=K(X)\).
The rest of the argument is as before. The restriction of the universal sheaf \(\mathcal{P}\) on \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\times_{U_{0}}\mathcal{C}_{0}\) to the generic fibre \(B_{\alpha}\) is locally free of rank \(d(\mathcal{A})\) and a module over the pull-back of the Azumaya algebra representing the Brauer class \(\alpha\) under (3.5).
### Reducing to fine moduli spaces
The next step is to ensure that for a given class \(\alpha\) we can choose a rational number \(d\in\mathbb{Q}\) such that \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\) is non-empty and fine. It is easy to find a \(d\) for which the twisted Picard variety is not empty, but the existence of a universal family will be possible only under the additional assumption that \(|\alpha|\) and the top self-intersection number \(m\coloneqq(h^{n-1}.Y)\) are coprime (an assumption that we will get rid of eventually in the next section). The idea is the same as in Section 2.2 for elliptic K3 surfaces without a section.
For the non-emptiness consider any smooth curve \(C\) parametrised by \(U\) and the restriction \(\mathcal{A}|_{C}\) of the Azumaya algebra \(\mathcal{A}\). Since \(\operatorname{Br}(C)\) is trivial, there exists a locally free sheaf \(F\) on \(C\) with an isomorphism of algebras \(\mathcal{A}|_{C}\simeq\operatorname{\mathcal{E}nd}(F)\). In particular, \(F\) can be viewed as an \(\mathcal{A}|_{C}\)-module with \(\operatorname{ch}(F)=r\cdot[C]+\deg(F)\), where \(r^{2}=\operatorname{rk}(\mathcal{A})\), and, hence, \(\operatorname{ch}_{\mathcal{A}}(F)=[C]+d\), where \(d=\deg(F)/r\). In other words, \(F\) defines a point in \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\) over the point \([C]\in U\) and so \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\) is not empty.
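To spell out the arithmetic with purely illustrative numbers (not tied to any particular \(X\)): if \(\operatorname{rk}(\mathcal{A})=9\), so that \(r=3\), and \(\deg(F)=5\), then

\[\operatorname{ch}(F)=3\cdot[C]+5,\qquad\operatorname{ch}_{\mathcal{A}}(F)=[C]+\tfrac{5}{3},\]

so \(F\) defines a point of \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\) with \(d=5/3\); this is why \(d\) is allowed to be a rational number.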
Let us now turn to the existence of the universal family. As in Section 2.2, it is more convenient to use the language of twisted sheaves. We first remark that there is a more flexible version of the criterion for the existence of a universal family than the one already used in Section 2.2: A universal sheaf \(\mathcal{P}\) on \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\times_{U}\mathcal{C}\), twisted with respect to the pull-back of \(\alpha\), exists if one finds a formal linear combination \(u=\sum a_{i}[G^{i}]\) of \(\alpha\)-twisted sheaves on \(X_{\operatorname{sm}}\) (or a class in the corresponding Grothendieck group) with \(\chi(F\otimes u)=\sum a_{i}\cdot\chi(F\otimes G^{i})=1\) for some \(F\in\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}/U)\). Here we are using that we can assume that all our curves \(C\) parametrised by \(U\) are contained in the smooth part of \(X\).
Start with any \(u\). Then, switching from a twisted line bundle \(F\) on a curve \(C\) in \(U\) to the kernel \(F^{\prime}\) of some surjection of \(F\) onto a skyscraper sheaf, and exploiting the distinguished hypersurface \(Y\) (to which every Brauer class restricts trivially and which meets the curves \(C\) in \(m=(h^{n-1}.Y)\) points), one can modify the value \(\chi(F\otimes u)\). Under the assumption that \(|\alpha|\) is coprime to \(m\), these modifications allow one to arrange \(\chi(F\otimes u)=1\), so that a universal sheaf on \(\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}/U)\times_{U}\mathcal{C}\) indeed exists for an appropriate choice of \(d\).
### Removing the coprimality assumption
So far we have proved Theorem 1.1 for all Brauer classes \(\alpha\in\operatorname{Br}(X)\) such that the order \(|\alpha|\) is coprime to the top intersection number \(m=(h^{n-1}.Y)\), where \(|h|\) is a very ample linear system on \(X\). We now show how to avoid this numerical assumption by passing to a blow-up of \(X\) and thus conclude the proof of Theorem 1.1.
Fix a smooth point \(x\in Y\subset X\) in the distinguished hypersurface \(Y\). Then consider the blow-up \(\sigma\colon X^{\prime}\coloneqq\operatorname{Bl}_{x}(X)\to X\) together with the strict transform \(Y^{\prime}\subset X^{\prime}\) of \(Y\), which is the blow-up \(Y^{\prime}=\operatorname{Bl}_{x}(Y)\to Y\) and which has the property that \(\operatorname{Br}(X^{\prime})\to\operatorname{Br}(Y^{\prime})\) is trivial.
If we denote by \(E\) the exceptional divisor of \(\sigma\), then \(|h^{\prime}\coloneqq a\cdot\sigma^{*}(h)-E|\) is a very ample linear system on \(X^{\prime}\) for \(a\gg 0\). The degree of \(Y^{\prime}\) with respect to this very ample linear system is
\[m^{\prime}\coloneqq(h^{\prime n-1}.Y^{\prime})=a^{n-1}(h^{n-1}.Y)+(-1)^{n-1}( E|_{Y^{\prime}})^{n-1}=a^{n-1}\cdot m\pm 1,\]
for \(E|_{Y^{\prime}}\) is the exceptional divisor of \(\sigma|_{Y^{\prime}}\colon Y^{\prime}\to Y\). Hence, \(m=(h^{n-1}.Y)\) and \(m^{\prime}=(h^{\prime n-1}.Y^{\prime})\) are coprime.
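As a purely numerical illustration (with made-up values of the invariants): if \(n=3\), \(m=(h^{2}.Y)=6\), and \(a=2\), then

\[m^{\prime}=a^{2}\cdot m\pm 1=24\pm 1\in\{23,25\},\]

and in either case \(\gcd(m,m^{\prime})=\gcd(6,23)=\gcd(6,25)=1\), as claimed.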
According to our previous discussion, there exist two abelian varieties \(A\) and \(A^{\prime}\) over \(K(X)=K(X^{\prime})\), such that every \(\alpha\in\operatorname{Br}(X)\) with \((|\alpha|,m)=1\) is split by an \(A\)-torsor \(B_{\alpha}\) and every \(\alpha^{\prime}\in\operatorname{Br}(X^{\prime})\) with \((|\alpha^{\prime}|,m^{\prime})=1\) is split by an \(A^{\prime}\)-torsor \(B^{\prime}_{\alpha^{\prime}}\).
Since \(\operatorname{Br}(X)\simeq\operatorname{Br}(X^{\prime})\), this can be applied as follows. We write any given class \(\gamma\in\operatorname{Br}(X)\simeq\operatorname{Br}(X^{\prime})\) as a product
\[\gamma=\alpha\cdot\alpha^{\prime}\]
with \((|\alpha|,m)=1\) and \((|\alpha^{\prime}|,m^{\prime})=1\). Then \(\gamma\) is split by the product \(B_{\alpha}\times B^{\prime}_{\alpha^{\prime}}\), which is a torsor for the abelian variety \(A\times A^{\prime}\).
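Such a decomposition always exists because \(m\) and \(m^{\prime}\) are coprime. To illustrate with invented orders: suppose \(|\gamma|=12\) and \(m=10\), so that \(2\nmid m^{\prime}\). Since \(4+9\equiv 1\pmod{12}\), one may take

\[\alpha=\gamma^{4},\qquad\alpha^{\prime}=\gamma^{9},\qquad|\alpha|=3,\qquad|\alpha^{\prime}|=4,\]

and indeed \(\gamma=\alpha\cdot\alpha^{\prime}\) with \((3,10)=1\), while \(|\alpha^{\prime}|=4\) is a power of \(2\), which cannot divide \(m^{\prime}\). The general case works the same way, separating the primes of \(|\gamma|\) that divide \(m\) from those that do not.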
This concludes the proof of Theorem 1.1.
### Comments on the assumptions
Our construction is geometric and we need our ground field to be algebraically closed. For example, even in the untwisted version, a Poincaré line bundle may not exist if \(k\) is not algebraically closed. More geometrically, if \(X\) is a smooth projective curve over a field which is not algebraically closed, then \(\operatorname{Br}(X)\) might be non-trivial but the construction of a system of curves does not make sense.
As our arguments use the existence of the (twisted) Picard variety of curves contained in the variety \(X\), it seems unlikely that the ideas could be extended to also cover ramified classes in \(\operatorname{Br}(K(X))\), i.e. those that are not contained in \(\operatorname{Br}(X)\).
**Remark 3.3**.: As a continuation of the discussion in Section 2.3 and especially Remark 2.5, we note that the techniques above can be modified to prove similar results for varieties \(X\) with a dominating family of curves \(\mathcal{C}\to X\), potentially of much lower genus than the complete intersection curves used above. In the vein of Proposition 2.4, one would then prove splitting
of Brauer classes over torsors for abelian varieties \(A\) over some finite extension \(K^{\prime}/K(X)\) of the function field, but again with a certain condition on the period of the Brauer class.
## 4. Period-index problem
This section contains the proof of Theorem 1.2. As in the previous section, we can assume that there exists a smooth hypersurface \(Y\subset X\) such that \(\operatorname{Br}(X)\to\operatorname{Br}(Y)\) is trivial or, in fact, that \(\operatorname{Br}(Y)\) is trivial.
### Bounding the index of central simple algebras
The main argument to prove Theorem 1.2 can be given using twisted sheaves, sheaves on Brauer-Severi varieties, or sheaves on gerbes. We chose to present it in the language of Azumaya algebras, as it makes the argument most transparent. We begin by recalling the following basic fact.
Assume \(A\) is an Azumaya algebra over a field \(K\) and \(V\) is a module over \(A\). Then
\[(\operatorname{ind}(A)\cdot d(A))\mid\dim_{K}(V).\]
Indeed, writing \(A\simeq M_{\ell}(D)\) and using Morita equivalence, we know that \(V\) is of the form \(W^{\oplus\ell}\) for some module \(W\) over the division algebra \(D\). Since any \(D\)-module is free, this proves \(V\simeq D^{\oplus\ell\cdot m}\) for some \(m\) and, therefore, \(\operatorname{ind}(A)\cdot d(A)=\dim_{K}(D)^{1/2}\cdot\dim_{K}(D)^{1/2}\cdot\ell\) divides \(\dim_{K}(V)=\dim_{K}(D)\cdot(\ell\cdot m)\).
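As a concrete (and purely illustrative) instance of this dimension count: take \(D\) a quaternion division algebra over \(K\) and \(A=M_{3}(D)\). Then \(\dim_{K}(D)=4\), \(\operatorname{ind}(A)=2\), \(d(A)=6\), and every \(A\)-module is of the form \(V\simeq D^{\oplus 3m}\), so that

\[\dim_{K}(V)=4\cdot 3m=12m,\qquad\operatorname{ind}(A)\cdot d(A)=2\cdot 6=12\mid 12m.\]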
So, in order to prove Theorem 1.2, it suffices to find for each Azumaya algebra \(\mathcal{A}\) on \(X\) an \(A\)-module \(V_{\mathcal{A}}\) with \(\dim_{K}(V_{\mathcal{A}})=d(A)\cdot\operatorname{per}(A)^{e}\) for a certain fixed \(e\). Here, \(A=\mathcal{A}_{K}\) denotes the Azumaya algebra over the function field \(K=K(X)\) obtained as the generic fibre of \(\mathcal{A}\).
The basic idea is to produce such an \(A\)-module \(V\), cf. (4.3), as the space of global sections
\[H^{0}(\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}_{0}/U_{0})_{K},\mathcal{P} |_{\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}_{0}/U_{0})\times\eta_{\mathcal{ C}_{0}}}\otimes M) \tag{4.1}\]
of the restriction of the universal sheaf \(\mathcal{P}\) twisted by an appropriate line bundle \(M\) and to control its dimension. So, ultimately \(V\) is a space of theta functions on a twisted Picard variety. Since \(\mathcal{P}\) is a sheaf of modules over the pull-back of \(\mathcal{A}\), its space of global sections is indeed an \(A\)-module.
**Remark 4.1**.: The above can also be phrased in more geometric terms. Assume \(E\) is a locally free sheaf on \(X\) and a module over the Azumaya algebra \(\mathcal{A}\). If \(\operatorname{rk}(E)=d(\mathcal{A})\), then the natural injection \(\mathcal{A}\hookrightarrow\operatorname{\mathcal{E}nd}(E)\) is generically an isomorphism. Moreover, since both sheaves have trivial determinant, it is an isomorphism in codimension one and hence, at least when \(X\) is smooth, everywhere. In particular, the class of \(\mathcal{A}\) is trivial.
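For convenience, the rank count implicit in this remark is the following:

\[\operatorname{rk}(\mathcal{A})=d(\mathcal{A})^{2}=\operatorname{rk}(E)^{2}=\operatorname{rk}(\operatorname{\mathcal{E}nd}(E)),\]

so the injection \(\mathcal{A}\hookrightarrow\operatorname{\mathcal{E}nd}(E)\) is a map of locally free sheaves of the same rank, and the determinant argument applies.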
The basic idea to construct modules over Azumaya algebras (or twisted sheaves) of small rank to bound the index has been exploited before by Lieblich [26, Prop. 4.2.1.3]. Instead of producing such modules via rational points of moduli spaces, we propose here to use spaces of global sections of certain twisted sheaves.
### Direct image of the twisted Poincare bundle
With the notation of the last section, we consider a birationally dominating family of very ample complete intersection curves

\[\mathcal{C}_{0}\to U_{0}\]

parametrised by an open subset \(U_{0}\subset\mathbb{P}^{n-1}\).
As in Section 3.4, we shall first assume that for some \(d\) there exists a universal sheaf \(\mathcal{P}\) on \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\times_{U_{0}}\mathcal{C}_{0}\) over \(U_{0}\). If \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\) is viewed as a moduli space of sheaves over the Azumaya algebra \(\mathcal{A}\) picked to represent the class \(\alpha\in\operatorname{SBr}^{\operatorname{o}}(X)\), then the universal sheaf \(\mathcal{P}\) is a module over the pull-back of \(\mathcal{A}\) under the projection \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\times_{U_{0}}\mathcal{C}_{0}\to\mathcal{C}_{0}\to X\). As always, the universal sheaf is only unique up to twist coming from \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\) and the first step consists of twisting \(\mathcal{P}\) appropriately so that it parametrises (the analogue of) degree zero line bundles on \(\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})\).
To simplify notation, let us restrict the situation to the generic point \(\xi\in U_{0}\). We denote the fibre of \(\mathcal{C}_{0}\) by \(C\), a curve over \(K_{0}:=k(\xi)=K(U_{0})\), and set
\[P\coloneqq\operatorname{Pic}_{\alpha}^{d}(\mathcal{C}_{0}/U_{0})_{\xi}= \operatorname{Pic}_{\alpha|_{C}}^{d}(C),\]
which is a torsor for the abelian variety \(\operatorname{Pic}^{0}(C)\) over \(K_{0}\).
Intersecting \(C\) with one of the exceptional divisors \(E_{i}\subset\operatorname{Bl}_{\{x_{i}\}}(X)\) produces a closed point \(x\in C\), which can also be viewed as the generic point of \(E_{i}\). As \(E_{i}\cap\mathcal{C}_{0}\) is a section of \(\mathcal{C}_{0}\to U_{0}\), the residue field of \(x\) is \(k(x)=K_{0}\). Moreover, the pull-back \(A_{x}\) of \(\mathcal{A}\), which is the pull-back of \(\mathcal{A}\otimes k(x_{i})\) under the projection \(E_{i}\to\{x_{i}\}\), is Morita trivial, as \(k(x_{i})\simeq k\) is algebraically closed by assumption. Hence, \(A_{x}\simeq M_{d(A)}(K_{0})\).
The restriction \(\mathcal{P}|_{P\times\{x\}}\) of the universal sheaf can then be seen as a locally free sheaf of modules on \(P\simeq P\times\{x\}\) over the Azumaya algebra \(\mathcal{O}_{P}\boxtimes A_{x}\simeq M_{d(A)}(\mathcal{O}_{P\times\{x\}})\) and is thus of the form \(L^{\oplus d(A)}\) for some invertible sheaf \(L\) on \(P\times\{x\}\). Thus, tensoring the twisted sheaf \(\mathcal{P}\) on \(P\times C\) by the invertible sheaf \(L^{*}\boxtimes\mathcal{O}\) results in a universal sheaf \(\mathcal{P}^{\prime}\) on \(P\times C\) for which the restriction to \(P\times\{x\}\) is \(\mathcal{O}^{\oplus d(A)}\).
If \(M\) is any ample invertible sheaf on \(P\), then for \(i>0\)
\[H^{i}(P\times\{x\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P \times\{x\}})=H^{i}(P\times\{x\},(M\boxtimes\mathcal{O})^{\boxplus d(A)}|_{P \times\{x\}})=0,\]
which by semi-continuity implies the same vanishing for the restriction to \(P\times\{\eta\}\), where \(\eta\) is the generic point of \(C\), whose residue field is the function field \(K=K(X)\).
The next step is to find an appropriate \(M\) for which \(\chi(P\times\{\eta\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P \times\{\eta\}})\) can be bounded. For this we consider the natural morphism
\[P=\operatorname{Pic}^{d}_{\alpha|_{C}}(C)\to\operatorname{Pic}^{D}_{(\alpha|_{C})^{\operatorname{per}(\alpha)}}(C),\qquad F\mapsto F^{\otimes\operatorname{per}(\alpha)}, \tag{4.2}\]

given by tensor product (over \(\mathcal{A}|_{C}\)), where \(D=\operatorname{per}(\alpha)\cdot d\). As \(\alpha^{\operatorname{per}(\alpha)}=1\), the twisted Picard variety on the right hand side is actually untwisted, i.e. isomorphic to \(\operatorname{Pic}^{a}(C)\) for a certain \(a\). Since \(C(K_{0})\neq\emptyset\), the latter is isomorphic to \(\operatorname{Pic}^{0}(C)\) and admits an ample invertible sheaf \(\mathcal{O}(\Theta)\) satisfying \((\Theta)^{g(C)}=g!\), i.e. an invertible sheaf inducing a principal polarisation.
Thus, (4.2) induces a finite surjective morphism
\[\varphi\colon P\to\operatorname{Pic}^{0}(C)\]
of degree \(\operatorname{per}(\alpha)^{2g(C)}\) and the pull-back \(M\coloneqq\varphi^{*}\mathcal{O}(\Theta)\) is ample with top intersection number \((M)^{g(C)}=(\Theta)^{g(C)}\cdot\deg(\varphi)=g!\cdot\operatorname{per}(\alpha) ^{2g(C)}\). For this \(M\) we then have by Riemann-Roch and standard vanishing results for abelian varieties
\[\dim_{K}H^{0}(P\times\{\eta\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P\times\{\eta\}})=h^{0}(P\times\{\eta\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P\times\{\eta\}})\] \[=\chi(P\times\{\eta\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P\times\{\eta\}})=\chi(P\times\{x\},(\mathcal{P}^{\prime}\otimes(M\boxtimes\mathcal{O}))|_{P\times\{x\}})\] \[=\chi(P\times\{x\},(M\boxtimes\mathcal{O})^{\oplus d(A)}|_{P\times\{x\}})=d(A)\cdot\operatorname{per}(\alpha)^{2g(C)}.\]
Thus, the \(A\)-module
\[V\coloneqq H^{0}(P\times\{\eta\},(\mathcal{P}^{\prime}\otimes(M\boxtimes \mathcal{O}))|_{P\times\{\eta\}}) \tag{4.3}\]
is of dimension
\[\dim_{K}(V)=d(A)\cdot\operatorname{per}(\alpha)^{2g(C)},\]
which by the discussion in Section 4.1 implies
\[\operatorname{ind}(A)\mid\operatorname{per}(\alpha)^{2g(C)}. \tag{4.4}\]
Hence, under the assumption of the existence of a universal family this proves Theorem 1.2.
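For a purely numerical illustration of the shape of the bound (4.4), with invented values: if \(\operatorname{per}(\alpha)=3\) and the complete intersection curves have genus \(g(C)=2\), then

\[\operatorname{ind}(\alpha)\mid 3^{2\cdot 2}=81.\]

The exponent \(2g(C)\) depends only on \(X\) and the chosen linear system \(|h|\), not on the class \(\alpha\), which is the point of Theorem 1.2.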
### End of proof of Theorem 1.2
As in Section 3.5, it remains to show how to argue in the case that for a given \(\alpha\), we cannot find a \(d\) for which \(\operatorname{Pic}^{d}_{\alpha}(\mathcal{C}_{0}/U_{0})\) is fine. We use the same notation and again decompose a Brauer class \(\gamma\in\operatorname{Br}(X)\) as \(\gamma=\alpha\cdot\alpha^{\prime}\). However, this time we pick the decomposition such that, in addition, \(\operatorname{per}(\alpha)\) and \(\operatorname{per}(\alpha^{\prime})\) are coprime. Then by the arguments above applied to \(X\) and the blow-up \(X^{\prime}\), there exist integers \(e\) and \(e^{\prime}\) such that \(\operatorname{ind}(\alpha)\mid\operatorname{per}(\alpha)^{e}\) and \(\operatorname{ind}(\alpha^{\prime})\mid\operatorname{per}(\alpha^{\prime})^{e^{\prime}}\). This implies
\[\operatorname{ind}(\gamma)\mid\operatorname{ind}(\alpha)\cdot\operatorname{ ind}(\alpha^{\prime})\mid\operatorname{per}(\alpha)^{e}\cdot\operatorname{per}( \alpha^{\prime})^{e^{\prime}}\mid(\operatorname{per}(\alpha)\cdot\operatorname {per}(\alpha^{\prime}))^{e+e^{\prime}}=\operatorname{per}(\gamma)^{e+e^{ \prime}}.\]
This concludes the proof of Theorem 1.2.
|
2302.13744 | Growth of $p$-parts of ideal class groups and fine Selmer groups in
$\mathbb{Z}_q$-extensions with $p\neq q$ | Fix two distinct odd primes $p$ and $q$. We study "$p\ne q$" Iwasawa theory
in two different settings. Let $K$ be an imaginary quadratic field of class
number 1 such that both $p$ and $q$ split in $K$. We show that under
appropriate hypotheses, the $p$-part of the ideal class groups is bounded over
finite subextensions of an anticyclotomic $\mathbb{Z}_q$-extension of $K$. Let
$F$ be a number field and let $A_{/F}$ be an abelian variety with
$A[p]\subseteq A(F)$. We give sufficient conditions for the $p$-part of the
fine Selmer groups of $A$ over finite subextensions of a
$\mathbb{Z}_q$-extension of $F$ to stabilize. | Debanjana Kundu, Antonio Lei | 2023-02-27T13:18:19Z | http://arxiv.org/abs/2302.13744v1 | Growth of \(p\)-parts of ideal class groups and fine Selmer groups in \(\mathbb{Z}_{q}\)-extensions with \(p\neq q\)
###### Abstract.
Fix two distinct odd primes \(p\) and \(q\). We study "\(p\neq q\)" Iwasawa theory in two different settings.
(1) Let \(K\) be an imaginary quadratic field of class number \(1\) such that both \(p\) and \(q\) split in \(K\). We show that under appropriate hypotheses, the \(p\)-part of the ideal class groups is bounded over finite subextensions of an anticyclotomic \(\mathbb{Z}_{q}\)-extension of \(K\).
(2) Let \(F\) be a number field and let \(A_{/F}\) be an abelian variety with \(A[p]\subseteq A(F)\). We give sufficient conditions for the \(p\)-part of the fine Selmer groups of \(A\) over finite subextensions of a \(\mathbb{Z}_{q}\)-extension of \(F\) to stabilize.
Key words and phrases: Ideal class groups, fine Selmer groups, \(p\neq q\) Iwasawa theory.

2020 Mathematics Subject Classification: Primary 11R23, 11R29; Secondary 11R20, 11J95.
## 1. Introduction
Let \(F/\mathbb{Q}\) be an algebraic number field and \(F_{\infty}/F\) be a Galois extension with Galois group isomorphic to the additive group \(\mathbb{Z}_{q}\) of \(q\)-adic integers. For each integer \(n\geq 0\), there is a unique subfield \(F_{n}/F\) of degree \(q^{n}\). Let \(h(F_{n})\) be the class number of \(F_{n}\). K. Iwasawa showed that if \(q^{e_{n}}\) is the highest power of \(q\) dividing \(h(F_{n})\), then there exist integers \(\lambda,\mu,\nu\) independent of \(n\), such that \(e_{n}=\mu q^{n}+\lambda n+\nu\) for \(n\gg 0\). On the other hand, in [20, 21], L. C. Washington proved that for distinct primes \(p\) and \(q\), the \(p\)-part of the class number stabilizes in the _cyclotomic_ \(\mathbb{Z}_{q}\)-extension of an abelian number field. Washington's results have been extended to other \(\mathbb{Z}_{q}\)-extensions where primes are finitely decomposed. In particular, J. Lamplugh proved the following in [15]: if \(p,q\) are distinct primes \(\geq 5\) that split in an imaginary quadratic field \(K\) of class number \(1\) and \(F/K\) is a prime-to-\(p\) abelian extension which is also unramified at \(p\), then the \(p\)-class group stabilizes in the \(\mathbb{Z}_{q}\)-extension of \(F\) which is unramified outside precisely one of the primes above \(q\). There have also been speculations by J. Coates on the size of the whole class group in a cyclotomic tower; see [14], especially the discussion in §3 and Conjecture D.
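To contrast the two behaviours with a toy instance of Iwasawa's formula (the values of the invariants are purely illustrative): if \(\mu=0\), \(\lambda=1\) and \(\nu=2\), then

\[e_{n}=\mu q^{n}+\lambda n+\nu=n+2\quad\text{for }n\gg 0,\]

so in this toy instance the \(q\)-part of \(h(F_{n})\) grows without bound along the tower, whereas the results discussed here concern the \(p\)-part for \(p\neq q\), which is expected (and in the cases treated below proved) to stabilize.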
Let \(p\) and \(q\) be two distinct odd primes and \(K\) an imaginary quadratic field of class number \(1\) in which both \(p\) and \(q\) split. We write \(p\mathcal{O}_{K}=\mathfrak{p}\overline{\mathfrak{p}}\) and \(q\mathcal{O}_{K}=\mathfrak{q}\overline{\mathfrak{q}}\). Given an ideal \(\mathfrak{h}\) of \(\mathcal{O}_{K}\), we write \(\mathscr{R}(\mathfrak{h})\) for the ray class field of \(K\) of conductor \(\mathfrak{h}\). In the first half of this article, we study the growth of the \(p\)-part of the ideal class group in an anticyclotomic \(\mathbb{Z}_{q}\)-tower. This generalizes [15, Theorem 1.3], where the stability of the \(p\)-part of the class numbers of \(\mathscr{R}(\mathfrak{qq}^{n})\) is studied.
**Theorem A**.: _Let \(K\) be an imaginary quadratic field of class number 1. Let \(p\) and \(q\) be distinct primes (\(\geq 5\)) which split in \(K\). Let \(\mathfrak{r}\) be a fixed ideal of \(\mathcal{O}_{K}\) coprime to \(pq\) such that \(\mathfrak{r}\) is a product of split primes1. Let \(\mathcal{F}=\mathscr{R}(\mathfrak{r}q)\). We assume that \(p\nmid[\mathcal{F}:K]\). Let \(\mathscr{R}(\mathfrak{r}q^{\infty})^{\mathrm{ac}}/\mathcal{F}\) denote the anticyclotomic \(\mathbb{Z}_{q}\)-extension and write \(\mathcal{F}_{n}\) for the unique subextension of \(\mathscr{R}(\mathfrak{r}q^{\infty})^{\mathrm{ac}}/\mathcal{F}\) whose degree is \(q^{n}\). Then there exists an integer \(N\) such that for all \(n\geq N\),_
Footnote 1: In this article, a split prime of \(K\) refers to a prime ideal of \(\mathcal{O}_{K}\) that lies above a rational prime that splits in \(K\).
\[\mathrm{ord}_{p}(h(\mathcal{F}_{n}))=\mathrm{ord}_{p}(h(\mathcal{F}_{N})).\]
The hypothesis on \(\mathfrak{r}\) being a product of split primes is crucial for the use of a theorem of H. Hida, which guarantees the non-vanishing modulo \(p\) of the algebraic \(L\)-values of anticyclotomic characters factoring through \(\mathscr{R}(\mathfrak{r}q^{\infty})^{\mathrm{ac}}\) (see Theorem 3.2). To prove Theorem A, we link this non-vanishing to the stabilization of the \(p\)-class groups via the (\(p\)-adic) Iwasawa main conjecture proved by K. Rubin [16]. Our strategy is inspired by the work of Lamplugh [14], which we outline below.
In §2, we introduce an auxiliary elliptic curve \(E_{/K}\) with CM by \(\mathcal{O}_{K}\) such that the conductor \(\mathfrak{f}\) of its Hecke character is a product of split primes in \(K\) with \(p\nmid[\mathscr{R}(\mathfrak{f}):K]\). Let \(\mathfrak{g}=\mathrm{lcm}(\mathfrak{f},\mathfrak{r})\). By a result of Lamplugh, when the algebraic \(L\)-value of a certain Hecke character is nonzero modulo \(p\), the corresponding modules of local \(p\)-adic units and elliptic units over an extension generated by \(E[\mathfrak{p}^{\infty}]\) coincide after taking appropriate isotypic components (see Theorem 4.2 for the precise statement). Combining this with Hida's theorem, we prove in Theorem 4.3 that the \(p\)-primary Galois modules featured in the Iwasawa main conjecture stabilize in the anticyclotomic \(\mathbb{Z}_{q}\)-extension \(\mathscr{R}(\mathfrak{g}q^{\infty})^{\mathrm{ac}}/\mathscr{R}(\mathfrak{g}q)\). This can be translated into a statement on \(p\)-class groups, proving a special case of Theorem A, where the ideal \(\mathfrak{r}\) is divisible by \(\mathfrak{f}\) (see Theorem 4.4). To complete the proof of Theorem A, we bound the \(p\)-class groups over the tower \(\mathscr{R}(\mathfrak{r}q^{\infty})^{\mathrm{ac}}/\mathscr{R}(\mathfrak{r}q)\) by those over \(\mathscr{R}(\mathfrak{g}q^{\infty})^{\mathrm{ac}}/\mathscr{R}(\mathfrak{g}q)\).
In the second half of the article, we prove a general statement (see Theorem 5.3) which shows that in certain \(\mathbb{Z}_{q}\)-extensions of a number field \(F\), the growth of the \(p\)-part of the class group is closely related to that of the \(p\)-primary _fine_ Selmer group of an abelian variety \(A_{/F}\). This subgroup of the classical \(p\)-primary Selmer group is denoted by \(\mathrm{Sel}_{0}(A/F)\), and is obtained by imposing stronger vanishing conditions at primes above \(p\) (the precise definition is reviewed in §5.1). The following result is an application of the aforementioned theorem to the growth of the \(p\)-part of the fine Selmer group of a fixed abelian variety \(A\) over a \(\mathbb{Z}_{q}\)-tower (which is not necessarily anticyclotomic).
**Theorem B**.: _Let \(p\) and \(q\) be distinct odd primes. Let \(F\) be any number field and \(A_{/F}\) be an abelian variety such that \(A[p]\subseteq A(F)\). Let \(F_{\infty}/F\) be a \(\mathbb{Z}_{q}\)-extension where the primes above \(q\) and the primes of bad reduction of \(A\) are finitely decomposed. If there exists \(N\geq 0\) such that for all \(n\geq N\),_
\[\mathrm{ord}_{p}(h(F_{n}))=\mathrm{ord}_{p}(h(F_{N})),\]
_then there exists an integer \(N^{\prime}\geq N\) such that for all \(n\geq N^{\prime}\), there is an isomorphism_
\[\mathrm{Sel}_{0}(A/F_{n})\simeq\mathrm{Sel}_{0}(A/F_{N^{\prime}}).\]
In particular, Theorem B applies to the setting studied by Washington [17, 18]. Finally, we remark that unlike what we have found for fine Selmer groups in Theorem B, it has been shown by T. Dokchitser and V. Dokchitser that the \(p\)-part of the Tate-Shafarevich group of an abelian variety in a \(\mathbb{Z}_{q}\)-tower can be unbounded; see [1, Example 1.5].
### Acknowledgement
We thank Ming-Lun Hsieh, Filippo A. E. Nuccio Mortarino Majno di Capriglio, and Lawrence Washington for answering our questions during the preparation of this article. We are also indebted to the anonymous referees for their valuable comments and suggestions on earlier versions of the article. DK acknowledges the support of a PIMS Postdoctoral Fellowship. AL is supported by the NSERC Discovery Grants Program RGPIN-2020-04259 and RGPAS-2020-00096.
## 2. Finding auxiliary CM elliptic curves
Let \(K=\mathbb{Q}(\sqrt{-d})\) be an imaginary quadratic field of class number \(1\). As discussed in the introduction, we shall work with an auxiliary CM elliptic curve \(E_{/K}\) in order to prove Theorem A. Recall that the imaginary quadratic fields of class number \(1\) are precisely the following
\[\mathbb{Q}(\sqrt{-1}),\ \mathbb{Q}(\sqrt{-2}),\ \mathbb{Q}(\sqrt{-3}),\ \mathbb{Q}(\sqrt{-7}),\ \mathbb{Q}(\sqrt{-11}),\ \mathbb{Q}(\sqrt{-19}),\ \mathbb{Q}(\sqrt{-43}),\mathbb{Q}(\sqrt{-67}),\ \mathbb{Q}(\sqrt{-163}).\]
For each choice of \(K\), we shall write down an explicit elliptic curve \(E_{/K}\) such that
* (a) \(E\) has CM by \(\mathcal{O}_{K}\);
* (b) If \(\mathfrak{f}\) denotes the conductor of the Hecke character \(\psi\) attached to \(E\), then \(\mathfrak{f}\) is only divisible by split primes of \(K\);
* (c) The rational primes dividing \([\mathscr{R}(\mathfrak{f}):K]\) are either \(2,3\) or primes that are non-split in \(K\).
We remark that condition (c) ensures that the prime \(p\) in the statement of Theorem A does not divide \([\mathscr{R}(\mathfrak{f}):K]\).
If \(E_{/K}\) is an elliptic curve with CM by \(\mathcal{O}_{K}\), then the \(j\)-invariant \(j(E)\) is a rational integer since \(K\) has class number \(1\), so \(E\) must be a twist of the base extension of an elliptic curve \(A_{/\mathbb{Q}}\). For \(d>3\), \(A\) is uniquely determined (up to isomorphism over \(K\)) by the condition that it has CM by \(\mathcal{O}_{K}\) and its base change to \(K\) has bad reduction at the ramified prime \(\mathfrak{P}=(\sqrt{-d})\). For \(d=1,2\), and \(3\) there are several choices for the elliptic curve over \(\mathbb{Q}\) (see [1, Remark 3.1]).
When \(d>3\), it follows from [1, Theorem 3.3] that if we twist \(A_{/K}\) by a character corresponding to \(K(\sqrt{\alpha})\) where \(\alpha=\mathfrak{P}\mathfrak{Q}\) such that \(\mathfrak{Q}\) is a prime of \(K\) distinct from \(\mathfrak{P}\) satisfying \(\mathfrak{Q}\equiv u^{2}\sqrt{-d}\mod 4\mathcal{O}_{K}\) for some \(u\in\mathcal{O}_{K}\), then the twisted elliptic curve (over \(K\)) has good reduction everywhere except at \(\mathfrak{Q}\). Therefore, for our purposes, it is enough to find such \(\mathfrak{Q}\) that is a split prime in \(K\). Indeed, we may choose \(r\in\mathbb{Z}\) such that \((4r+\sqrt{-d})(4r-\sqrt{-d})=16r^{2}+d\) is an odd rational prime. Such \(r\) exists for all possible values of \(d\). For example, we may take \(r\) to be \(1\) when \(d=43,67,163\). Then \(4r+\sqrt{-d}\) is a split prime of \(K\) and \(4r+\sqrt{-d}\equiv 1^{2}\sqrt{-d}\mod 4\mathcal{O}_{K}\). In particular, we may apply [1, Theorem 3.3] with \(\mathfrak{Q}=(4r+\sqrt{-d})\) and \(u=1\), resulting in a CM curve \(E\) satisfying properties (a) and (b) above.
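For instance, for the three largest discriminants the choice \(r=1\) indeed works, since

\[16\cdot 1^{2}+43=59,\qquad 16\cdot 1^{2}+67=83,\qquad 16\cdot 1^{2}+163=179\]

are all odd rational primes, so \(\mathfrak{Q}=(4+\sqrt{-d})\) is a split prime of \(K\) in each of these cases.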
When \(d<43\), we find \(E_{/K}\) by inspection using the data available on [1]. In all our examples below, \(E_{/K}\) has bad reduction at one or two split primes which are coprime to \(6\). In particular, the conductor of \(E_{/K}\) is given by the square of the product of the bad prime(s), whereas the conductor \(\mathfrak{f}\) of the Hecke character \(\psi\) attached to \(E\) is given by the product of the bad prime(s) (see [1, Theorem 12]). The ray class group \(\operatorname{Gal}(\mathscr{R}(\mathfrak{f})/K)\) (and hence \([\mathscr{R}(\mathfrak{f}):K]\)) is computed using MAGMA [1].
## 3. A result of Hida on \(L\)-values of anticyclotomic Hecke characters
Throughout this section and the next, \(K\) is a fixed imaginary quadratic field of class number \(1\). We fix an elliptic curve \(E_{/K}\) with CM by \(\mathcal{O}_{K}\) as given in SS2. Recall that \(\psi\) denotes the Hecke character over \(K\) with conductor \(\mathfrak{f}\) attached to \(E\).
We review a special case of a result of Hida from [1] that will play a crucial role in our proof of Theorem A.
**Definition 3.1**.: Let \(\mathfrak{h}\) be any integral ideal of \(K\) and let \(\epsilon\) be any Hecke character of \(K\). The \(\mathfrak{h}\)-_imprimitive \(L\)-function_ of \(\epsilon\) is defined as follows
\[L_{\mathfrak{h}}(\epsilon,s) =\prod_{\gcd(\nu,\mathfrak{h})=1}\left(1-\frac{\epsilon(\nu)}{(N \nu)^{s}}\right)^{-1}\] \[=\sum_{\gcd(\mathfrak{a},\mathfrak{h})=1}\frac{\epsilon( \mathfrak{a})}{(N\mathfrak{a})^{s}},\]
where the product runs over _prime ideals_\(\nu\) of \(K\) coprime to \(\mathfrak{h}\), and sum is taken over _integral ideals_\(\mathfrak{a}\) coprime to \(\mathfrak{h}\).
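Concretely, with the convention that \(\epsilon(\nu)=0\) when \(\nu\) divides the conductor of \(\epsilon\), the imprimitive \(L\)-function differs from the primitive one only by finitely many Euler factors:

\[L_{\mathfrak{h}}(\epsilon,s)=L(\epsilon,s)\cdot\prod_{\nu\mid\mathfrak{h}}\left(1-\frac{\epsilon(\nu)}{(N\nu)^{s}}\right),\]

which is the relation used repeatedly below when comparing \(L\) and \(L_{\mathfrak{h}}\).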
Fix an integral ideal \(\mathfrak{g}\) of \(K\) which is divisible by \(\mathfrak{f}\), relatively prime to \(pq\), and such that only split primes of \(K\) divide \(\mathfrak{g}\). Let \(F=\mathscr{R}(\mathfrak{g}q)\) be the _ray class field_ of \(K\) of conductor \(\mathfrak{g}q\) and write \(\Delta=\operatorname{Gal}(F/K)\). Set \(F_{\infty}=\bigcup_{n\geq 1}\mathscr{R}(\mathfrak{g}q^{n})\); this is a \(\mathbb{Z}_{q}^{2}\)-extension of \(F\). We fix an isomorphism
\[\operatorname{Gal}(F_{\infty}/K)\simeq\operatorname{Gal}(F/K)\times \operatorname{Gal}(K_{\infty}/K)=\Delta\times\mathbb{Z}_{q}^{2}.\]
Let \(\epsilon\) be a character of \(\operatorname{Gal}(F_{\infty}/K)\). For our purpose, \(\epsilon\) will be of the form \(\overline{\varphi\psi^{k}}\), where \(\varphi\) is a finite-order character and \(k\) is an integer between \(1\) and \(p-1\). Denote by \(L(\epsilon,s)\) the _primitive Hecke \(L\)-function_ of \(\epsilon\). Recall that the imprimitive (or partial) \(L\)-function differs from the primitive (or classical) \(L\)-function by a finite number of Euler factors. Let \(N_{K/\mathbb{Q}}\) denote the norm map. We can further define the _primitive algebraic Hecke \(L\)-value_,
\[L^{\operatorname{alg}}(\overline{\varphi\psi^{k}})=L^{\operatorname{alg}}( \epsilon):=\frac{L\left(\epsilon,k\right)}{\Omega_{\infty}^{k}}=\frac{L\left( \overline{\varphi\psi^{k}}N_{K/\mathbb{Q}}^{-k},0\right)}{\Omega_{\infty}^{k}}.\]
Here, \(\Omega_{\infty}\) denotes a complex period for \(E_{/\mathbb{C}}\). Similarly, given an integral ideal \(\mathfrak{h}\) of \(K\), we define the _\(\mathfrak{h}\)-imprimitive algebraic Hecke \(L\)-value_,
\[L^{\operatorname{alg}}_{\mathfrak{h}}(\overline{\varphi\psi^{k}})=L^{ \operatorname{alg}}_{\mathfrak{h}}(\epsilon):=\frac{L_{\mathfrak{h}}\left( \epsilon,k\right)}{\Omega_{\infty}^{k}}=\frac{L_{\mathfrak{h}}\left( \overline{\varphi\psi^{k}}N_{K/\mathbb{Q}}^{-k},0\right)}{\Omega_{\infty}^{k}}.\]
Note that \(L\) and \(L_{\mathfrak{h}}\) differ by the omission of the Euler factors at primes dividing \(\mathfrak{h}\).
In what follows, we say that a Hecke character \(\epsilon\) of \(K\) is of _infinity type_\((a,b)\) if its infinity component sends \(x\) to \(x^{a}\overline{x}^{b}\). Under this convention, \(\psi\) has infinity type \((-1,0)\), whereas the norm map \(N_{K/\mathbb{Q}}\) is of infinity type \((-1,-1)\). Thus, the Hecke character \(\overline{\psi^{k}}N_{K/\mathbb{Q}}^{-k}\) is of infinity type \((k,0)\).
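Spelled out step by step (assuming the standard convention that conjugation interchanges the two components of the infinity type):

\[\psi^{k}\ \text{has type}\ (-k,0),\qquad\overline{\psi^{k}}\ \text{has type}\ (0,-k),\qquad N_{K/\mathbb{Q}}^{-k}\ \text{has type}\ (k,k),\]

and adding the last two infinity types gives the type \((k,0)\) of \(\overline{\psi^{k}}N_{K/\mathbb{Q}}^{-k}\) stated above.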
Henceforth, we fix a prime \(v\mid\mathfrak{p}\) of \(F\) and an embedding \(\overline{\mathbb{Q}}\subset\overline{\mathbb{Q}_{p}}\) so that \(v\) is sent into the maximal ideal \(\mathfrak{m}_{\overline{\mathbb{Q}_{p}}}\) of \(\mathcal{O}_{\overline{\mathbb{Q}_{p}}}\). This allows us to consider \(L^{\operatorname{alg}}_{\mathfrak{h}}(\overline{\varphi\psi^{k}})\) as elements of \(\overline{\mathbb{Q}_{p}}\). Throughout, \(\pi\) is a fixed uniformizer of \(F_{v}\) and we write \(\operatorname{ord}_{\pi}\) for the valuation on \(\overline{\mathbb{Q}_{p}}\) normalized so that \(\operatorname{ord}_{\pi}(\pi)=1\).
**Theorem 3.2** (Hida).: _For all but finitely many characters \(\varphi\) that factor through \(\mathscr{R}(\mathfrak{g}q^{\infty})^{\operatorname{ac}}\), we have_
\[\operatorname{ord}_{\pi}\left(L^{\operatorname{alg}}_{(q)}(\overline{ \varphi\psi^{k}})\right)=0.\]
Proof.: For each \(\varphi\), we have \(\overline{\varphi}=\phi\eta\), where \(\phi\) is a character of \(\Delta\) and \(\eta\) is a character of the Galois group \(\operatorname{Gal}(\mathscr{R}(\mathfrak{g}q^{\infty})^{\operatorname{ac}}/F)\). We may further decompose \(\phi\) into \(\phi^{\prime}\nu^{-1}\), where \(\nu\) is a character of \(\operatorname{Gal}(F/\mathscr{R}(\mathfrak{g}))\) and \(\phi^{\prime}\) is a character of \(\operatorname{Gal}(\mathscr{R}(\mathfrak{g})/K)\). We have the field diagram:
[Field diagram: \(K\subset\mathscr{R}(\mathfrak{g})\subset\mathscr{R}(\mathfrak{g}q)=F\subset\mathscr{R}(\mathfrak{g}q^{\infty})^{\operatorname{ac}}\), with \(\Delta=\operatorname{Gal}(F/K)\) and \(\operatorname{Gal}(\mathscr{R}(\mathfrak{g}q^{\infty})^{\operatorname{ac}}/F)\cong\mathbb{Z}_{q}\).]
We take the CM field \(M\) in [10] to be the imaginary quadratic field \(K\). We take the CM type \(\Sigma\) there to be the one that corresponds to the infinity type \((1,0)\) and \(\kappa=0\). Then the infinity type of the character \(\lambda\) in _op. cit._ becomes
\[k\Sigma+0(1-c)=k(1,0)+(0,0)-(0,0)=(k,0).\]
The condition (M1) in [10, Theorem 4.3] does not hold since \(K/\mathbb{Q}\) is not unramified everywhere (it ramifies at the primes dividing the discriminant of \(K\), which is nontrivial). Hence, we can apply the aforementioned theorem with \(\lambda\) and \(\chi^{-1}\) taken to be \(\overline{\psi^{k}}N^{-k}\phi^{\prime}\) and \(\eta\), respectively.
_Remark 3.3_ ([14, proof of Theorem 3.1.9]).: Let \(\mathfrak{g}\) be a fixed ideal as before. Fix an ideal \(\mathfrak{h}\) of \(\mathcal{O}_{K}\) which is coprime to \(\mathfrak{p}\) and divisible by \(\mathfrak{g}q\). Recall that the \(\mathfrak{h}\)-imprimitive algebraic \(L\)-value of \(\overline{\varphi\psi^{k}}\) is given by
\[L_{\mathfrak{h}}^{\mathrm{alg}}(\overline{\varphi\psi^{k}})= \frac{L_{\mathfrak{h}}(\overline{\varphi\psi^{k}},k)}{\Omega_{\infty}^{k}}.\]
Then, for almost all characters of \(\mathrm{Gal}\left(\mathscr{R}(\mathfrak{g}q^{\infty})^{\mathrm{ac}}/F\right) \cong\mathbb{Z}_{q}\), we have that
\[\mathrm{ord}_{\pi}\left(L_{(q)}^{\mathrm{alg}}(\overline{\varphi \psi^{k}})\right)=\mathrm{ord}_{\pi}\left(L_{\mathfrak{h}}^{\mathrm{alg}}( \overline{\varphi\psi^{k}})\right).\]
This follows from the observation that for a given prime ideal \(\mathfrak{a}\) of \(K\) that is coprime to \(q\), for almost all characters \(\eta\),
\[\mathrm{ord}_{\pi}\left(1-\frac{\overline{\varphi\psi^{k}}( \mathfrak{a})}{(N\mathfrak{a})^{k}}\right)=0\]
since \(\eta\) sends \(\mathfrak{a}\) to a \(q\)-power root of unity, and the images of \(q\)-power roots of unity under the reduction map on \(\mathcal{O}_{\overline{\mathbb{Q}_{p}}}\) modulo \(\mathfrak{m}_{\overline{\mathbb{Q}_{p}}}\) are distinct.
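The underlying elementary point is the following: if \(\zeta\neq 1\) is a \(q\)-power root of unity, then \(1-\zeta\) divides \(q\) in the ring of integers, and since \(q\) is a unit at \(\pi\) (recall that \(\pi\) lies above \(p\neq q\)), one has

\[\operatorname{ord}_{\pi}(1-\zeta)=0\qquad\text{for every nontrivial }q\text{-power root of unity }\zeta,\]

which is why the reduction map modulo \(\mathfrak{m}_{\overline{\mathbb{Q}_{p}}}\) is injective on \(q\)-power roots of unity.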
## 4. Consequences on class groups
We now use Theorem 3.2 to study the growth of the \(p\)-part of the class group in an anticyclotomic \(\mathbb{Z}_{q}\)-extension. Let us introduce the necessary notation. Throughout, \(p\nmid 6q\) is a fixed prime that is split in \(K\) and \(E_{/K}\) is a fixed CM elliptic curve as in the previous section (with Hecke character \(\psi\) whose conductor is \(\mathfrak{f}\)). Let \(K_{0}\) be any finite abelian extension of \(K\) such that \(p\) is unramified in \(K_{0}\) and \(p\nmid[K_{0}:K]\) (in the next subsection, we will let \(K_{0}\) vary inside the anticyclotomic tower \(\mathscr{R}(\mathfrak{g}q^{\infty})^{\mathrm{ac}}\)). Fix a prime \(\mathfrak{p}\) of \(K\) lying above \(p\). Set \(L=K_{0}(E_{\mathfrak{p}})\) and \(L_{\infty}=L(E_{\mathfrak{p}^{\infty}})\). Let \(\Delta=\mathrm{Gal}(L/K)\) and \(\Gamma=\mathrm{Gal}(L_{\infty}/L)\simeq\mathbb{Z}_{p}\). Let \(\mathcal{G}=\mathrm{Gal}(L_{\infty}/K)\cong\Delta\times\Gamma\) and \(\Lambda=\mathbb{Z}_{p}\llbracket\mathcal{G}\rrbracket\).
Following [14], we write \(\overline{\mathcal{C}}(L_{\infty})\) (resp. \(U(L_{\infty})\)) for the inverse limits over all finite sub-extensions inside \(L_{\infty}\) of the completion of the elliptic units (resp. local principal units) at \(\mathfrak{p}\).
Fix an ideal \(\mathfrak{h}\) of \(\mathcal{O}_{K}\) which is coprime to \(\mathfrak{p}\), is divisible by \(\mathfrak{f}\), and is such that \(K_{0}\subset K(E_{\mathfrak{h}})=\mathscr{R}(\mathfrak{h})\). Let \(\mu_{K}\) be the group of roots of unity of \(K\) and \(\lambda\in\mathcal{O}_{K}\backslash\mu_{K}\) such that \(\lambda\equiv 1\mod\mathfrak{h}\) with \((\lambda,6\mathfrak{hp})=1\). We let \(\sigma_{(\lambda)}\in\mathrm{Gal}(K_{0}/K)\) denote the Artin symbol associated to \(\lambda\).
We further decompose \(\Delta\) as \(H\times I\), where \(H=\mathrm{Gal}(K_{0}/K)\) and \(I=\mathrm{Gal}(K_{0}(E_{\mathfrak{p}})/K_{0})\). Here, \(I\) is the inertia subgroup at \(\mathfrak{p}\) inside \(\Delta\). Let \(\theta_{\mathfrak{p}}\) denote the canonical character given by the Galois action on \(E_{\mathfrak{p}^{\infty}}\) restricted to \(I\). Given a character \(\chi\) of \(\Delta\), we write it as \(\varphi\theta_{\mathfrak{p}}^{k}\), where \(\varphi\) is a character of \(H\) and \(1\leq k\leq p-1\). We have the following diagram:
[Field diagram: \(K\subset K_{0}\subset L=K_{0}(E_{\mathfrak{p}})\subset L_{\infty}=L(E_{\mathfrak{p}^{\infty}})\), with \(H=\mathrm{Gal}(K_{0}/K)\), \(I=\mathrm{Gal}(L/K_{0})\), \(\Delta=\mathrm{Gal}(L/K)=H\times I\), and \(\Gamma=\mathrm{Gal}(L_{\infty}/L)\simeq\mathbb{Z}_{p}\).]
uad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\quad\quad\begin{split}\quad\
**Theorem 4.3**.: _There exists an integer \(N\geq 0\) such that \(X_{n,\infty}^{I}=X_{N,\infty}^{I}\) for all \(n\geq N\), where \(M^{I}\) denotes the subgroup of \(M\) fixed by \(I\)._
Proof.: Let \(n\geq 0\) be an integer and consider a character \(\eta\) of \(\operatorname{Gal}(F_{n}/F)\cong\mathbb{Z}/q^{n}\). Let \(\phi_{0}\) be a character of \(\operatorname{Gal}(F/K)\) and \(k\) an integer that is a multiple of \(p-1\) so that \(\theta_{\mathfrak{p}}^{k}\) is the trivial character. Set \(\varphi=\eta\phi_{0}\). We draw the following field diagram for the convenience of the reader.
Let \(\mathcal{O}\) denote the ring of integers of the unique unramified \(\mathbb{Z}_{q}\)-extension of \(F_{v}\). In other words, \(\mathcal{O}=\mathcal{O}_{F_{v}(\mu_{q^{\infty}})}\). Let \(\lambda\in\mathcal{O}_{K}\setminus\mu_{K}\) such that \(\lambda\equiv 1\mod\mathfrak{h}\) and \((\lambda,6\mathfrak{p})=1\) (where \(\mathfrak{h}=\mathfrak{g}q^{n+1}\)). We have
\[\left(\lambda\overline{\lambda}-\lambda^{k}\varphi(\sigma_{( \lambda)})\right)\equiv 0\pmod{\pi\mathcal{O}} \Leftrightarrow\varphi(\sigma_{(\lambda)})\equiv\overline{ \lambda}\lambda^{1-k}\pmod{\pi\mathcal{O}}\] \[\Leftrightarrow\eta\phi_{0}(\sigma_{(\lambda)})\equiv\overline{ \lambda}\lambda^{1-k}\pmod{\pi\mathcal{O}}\] \[\Leftrightarrow\eta(\sigma_{(\lambda)})\equiv\overline{\lambda} \lambda^{1-k}\phi_{0}^{-1}(\sigma_{(\lambda)})\pmod{\pi\mathcal{O}}.\]
Note that \(\eta\) has exact order \(q^{m}\) for some \(m\geq 1\). Therefore, \(\eta(\sigma_{(\lambda)})\) is a primitive \(q^{m}\)-th root of unity. But in \(\mathcal{O}/\pi\mathcal{O}\), the \(q\)-power roots of unity are distinct. Therefore, by the same argument outlined in Remark 3.3, there exists an integer \(N_{1}\) such that for all characters \(\eta\) of \(\operatorname{Gal}(F_{n}/F)\) which do not factor through \(\Delta_{N_{1}}\) (with \(n\geq N_{1}\)),
\[\operatorname{ord}_{\pi}\left(N_{K/\mathbb{Q}}\left(\lambda\right)-\lambda^{k }\varphi(\sigma_{(\lambda)})\right)=0. \tag{2}\]
By Theorem 3.2 and Remark 3.3, one can choose a sufficiently large \(N_{2}\) such that
\[\operatorname{ord}_{\pi}\left(L_{\mathfrak{h}}^{\operatorname{alg}}(\overline {\varphi\psi^{k}})\right)=0 \tag{3}\]
for all characters \(\varphi\) of \(\Delta_{n}\) which do not factor through \(\Delta_{N_{2}}\) (with \(n\geq N_{2}\)).
Set \(N=\max(N_{1},N_{2})\). If \(\chi=\varphi\) is a character of \(\Delta_{n}\) which does not factor through \(\Delta_{N}\) (with \(n\geq N\)), then (2) and (3) imply that
\[\operatorname{ord}_{\pi}\left(\left(N_{K/\mathbb{Q}}\left(\lambda\right)- \lambda^{k}\varphi(\sigma_{(\lambda)})\right)\cdot L_{\mathfrak{h}}^{ \operatorname{alg}}(\overline{\varphi\psi^{k}})\right)=0.\]
Take \(K_{0}\) in Theorem 4.2 to be \(F_{n}\). Since the restriction of the character \(\varphi\) to \(I\) is trivial, the hypothesis regarding \(E/K_{0}\) when a prime above \(\mathfrak{p}\) is an anomalous prime always holds. Therefore, we deduce that
\[U_{n,\infty}^{\chi}=\overline{\mathcal{C}}_{n,\infty}^{\chi}\]
for all characters \(\chi\) of \(\Delta_{n}\) that do not factor through \(\Delta_{N}\) with \(\chi|_{I}=1\). This implies
\[U_{n,\infty}^{I}/\overline{\mathcal{C}}_{n,\infty}^{I}=U_{N,\infty}^{I}/ \overline{\mathcal{C}}_{N,\infty}^{I}.\]
Next, via the main conjecture of Iwasawa theory for imaginary quadratic fields (see [11, Theorem 4.1(i)]) we can conclude that there exists an integer \(N\geq 0\) such that
\[\operatorname{char}_{\Lambda}(X^{I}_{n,\infty})=\operatorname{char}_{\Lambda}(X^ {I}_{N,\infty})\]
for all \(n\geq N\). Now, consider the restriction map
\[\pi_{n,N}:X^{I}_{n,\infty}\twoheadrightarrow X^{I}_{N,\infty}.\]
Since characteristic ideals are multiplicative in short exact sequences, the kernel of the above surjective map must be finite. However, a theorem of R. Greenberg (see [16, Theorem SS1]) ensures that there are no non-trivial finite submodules inside \(X^{I}_{n,\infty}\). This forces the kernel to be trivial, i.e.,
\[X^{I}_{n,\infty}=X^{I}_{N,\infty}.\]
The proof of the theorem is now complete.
We can now state and prove the auxiliary result that will allow us to conclude Theorem A. Our proof follows the proof of [1, Theorem 7.10] very closely. We repeat the statement below for the convenience of the reader.
**Theorem 4.4**.: _Let \(K\) be an imaginary quadratic field of class number 1. Let \(p\) and \(q\) be distinct primes (\(\geq 5\)) which split in \(K\). Let \(\mathfrak{g}\) be a fixed ideal of \(\mathcal{O}_{K}\) coprime to \(pq\) such that \(\mathfrak{g}\) is a product of split primes and is divisible by the conductor of an elliptic curve over \(K\) with CM by \(\mathcal{O}_{K}\). Let \(F=\mathscr{R}(\mathfrak{g}\mathfrak{q})\). We assume that \(p\nmid[F:K]\). Let \(\mathscr{R}(\mathfrak{g}\mathfrak{q}^{\infty})^{\mathrm{ac}}/F\) denote the anticyclotomic \(\mathbb{Z}_{q}\)-extension and write \(F_{n}\) for the unique subextension of \(\mathscr{R}(\mathfrak{g}\mathfrak{q}^{\infty})^{\mathrm{ac}}/F\) whose degree is \(q^{n}\). Then, there exists an integer \(N\) such that for all \(n\geq N\),_
\[\operatorname{ord}_{p}(h(F_{n}))=\operatorname{ord}_{p}(h(F_{N})).\]
Proof.: Let the \(p\)-class group of \(F_{n}\) (resp. \(F_{N}\)) be denoted by \(A(F_{n})\) (resp. \(A(F_{N})\)). Since \(p\) does not divide \([F_{n}:F_{N}]\), we have an injection
\[A(F_{N})\hookrightarrow A(F_{n}). \tag{4}\]
It follows from global class field theory that for all \(n\geq 0\), we have the identification
\[A_{n,\infty}\simeq\operatorname{Gal}(M_{n,\infty}/L_{n,\infty}),\]
where \(M_{n,\infty}\) is the maximal abelian unramified \(p\)-extension of \(L_{n,\infty}\). Consider the following diagram
where the vertical maps are given by restriction and are surjective because the extensions \(L_{n,\infty}/F_{n}\) and \(L_{N,\infty}/F_{N}\) are totally ramified at primes above \(\mathfrak{p}\). Furthermore, the top horizontal map is surjective by Theorem 4.3 and the exact sequence (1). Therefore, the bottom horizontal map is surjective as well. When combined with (4), we see that the bottom map is in fact an isomorphism. This completes the proof of the theorem.
The following lemma allows us to complete the proof of Theorem A via Theorem 4.4.
**Lemma 4.5**.: _Let \(\mathfrak{a}\) and \(\mathfrak{b}\) be ideals of \(\mathcal{O}_{K}\). If \(p\nmid[\mathscr{R}(\mathfrak{a}):K]\cdot[\mathscr{R}(\mathfrak{b}):K]\), then \(p\nmid[\mathscr{R}(\operatorname{lcm}(\mathfrak{a},\mathfrak{b})):K]\)._
Proof.: Let us write \(\mathfrak{a}=\prod\mathfrak{p}_{i}^{m_{i}},\,\mathfrak{b}=\prod\mathfrak{p}_{i }^{n_{i}}\), where \(\mathfrak{p}_{i}\) are distinct prime ideals of \(\mathcal{O}_{K}\). Recall that \(K\) is of class number 1. By the theory of complex multiplication, if \(I\) is an ideal of \(\mathcal{O}_{K}\), we have
\[\operatorname{Gal}(\mathscr{R}(I)/K)=\operatorname{Gal}(K(E[I])/K)\cong( \mathcal{O}_{K}/I)^{\times}.\]
Thus, by the Chinese remainder theorem,
\[p\nmid\big{|}(\mathcal{O}_{K}/\mathfrak{p}_{i}^{m_{i}})^{\times}\big{|}\,, \quad p\nmid\big{|}(\mathcal{O}_{K}/\mathfrak{p}_{i}^{n_{i}})^{\times}\big{|}\]
for all \(i\). As \(\operatorname{lcm}(\mathfrak{a},\mathfrak{b})=\prod\mathfrak{p}_{i}^{\max(m_{i},n_{i })}\), we deduce that
\[p\nmid\bigl{|}\bigl{(}\mathcal{O}_{K}/\operatorname{lcm}(\mathfrak{a},\mathfrak{b})\bigr{)}^{\times}\bigr{|}=[\mathscr{R}(\operatorname{lcm}(\mathfrak{a},\mathfrak{b})):K].\qed\]
We can now prove Theorem A from the introduction.
**Theorem**.: _Let \(K\) be an imaginary quadratic field of class number 1. Let \(p\) and \(q\) be distinct primes (\(\geq 5\)) which split in \(K\). Let \(\mathfrak{r}\) be a fixed ideal of \(\mathcal{O}_{K}\) coprime to \(pq\) such that \(\mathfrak{r}\) is a product of split primes. Let \(\mathcal{F}=\mathscr{R}(\mathfrak{r}q)\) and write \(\mathscr{R}(\mathfrak{rq}^{\infty})^{\mathrm{ac}}/\mathcal{F}\) for the anticyclotomic \(\mathbb{Z}_{q}\)-extension. Assume that \(p\nmid[\mathcal{F}:K]\). Then, there exists an integer \(N\) such that for all \(n\geq N\),_
\[\operatorname{ord}_{p}(h(\mathcal{F}_{n}))=\operatorname{ord}_{p}(h(\mathcal{ F}_{N})).\]
Proof.: Let \(E_{/K}\) be a CM elliptic curve of conductor \(\mathfrak{f}\) such that all the prime divisors of \(\mathfrak{f}\) are split in \(K\) but the prime divisors (which are \(\geq 5\)) of \([\mathscr{R}(\mathfrak{f}):K]\) are not split in \(K\). Such elliptic curves exist as we have seen in §2.
Let \(\mathfrak{r}\) be any ideal of \(\mathcal{O}_{K}\) and \(p,q\) be two distinct primes satisfying the hypotheses in the statement of the theorem. Set \(\mathfrak{g}=\operatorname{lcm}(\mathfrak{f},\mathfrak{r})\) and define \(F=\mathscr{R}(\mathfrak{g}q)\). By assumption, \(p\nmid[\mathscr{R}(\mathfrak{rq}):K]\) and we have chosen our auxiliary CM elliptic curve so that \(p\nmid[\mathscr{R}(\mathfrak{f}):K]\). Thus, it follows from Lemma 4.5 that \(p\nmid[\mathscr{R}(\mathfrak{g}q):K]\). Furthermore, both \(\mathfrak{f}\) and \(\mathfrak{r}\) are only divisible by split primes. Therefore, Theorem 4.4 holds for the ideal \(\mathfrak{g}\).
Since \(p\nmid[F_{n}:\mathcal{F}_{n}]\) and \(p\nmid[\mathcal{F}_{n+1}:\mathcal{F}_{n}]\) for all \(n\geq 0\), we have
\[A(\mathcal{F}_{n})\hookrightarrow A(F_{n}),\quad A(\mathcal{F}_{n}) \hookrightarrow A(\mathcal{F}_{n+1}).\]
Theorem 4.4 asserts that \(\operatorname{ord}_{p}(h(F_{n}))\) stabilizes as \(n\to\infty\). Hence, the same is true for \(\operatorname{ord}_{p}(h(\mathcal{F}_{n}))\).
## 5. Asymptotic growth of fine Selmer groups of abelian varieties
### Definition of fine Selmer groups
Suppose \(F\) is a number field. Throughout, \(A_{/F}\) is a fixed abelian variety. We fix a finite set \(S\) of primes of \(F\) containing \(p\), the primes dividing the conductor of \(A\), as well as the Archimedean primes. We write \(S_{f}\) to denote the set of finite primes in \(S\). Denote by \(F_{S}\) the maximal algebraic extension of \(F\) unramified outside \(S\). For every (possibly infinite) extension \(L\) of \(F\) contained in \(F_{S}\), write \(G_{S}\left(L\right)=\operatorname{Gal}\left(F_{S}/L\right)\). Write \(S\left(L\right)\) for the set of primes of \(L\) above \(S\). If \(L\) is a finite extension of \(F\) and \(w\) is a place of \(L\), we write \(L_{w}\) for its completion at \(w\); when \(L/F\) is infinite, \(L_{w}\) denotes the union of the completions of all finite sub-extensions of \(L\).
**Definition 5.1**.: Let \(L/F\) be an algebraic extension. The _\(p\)-primary fine Selmer group of \(A\)_ over \(L\) is defined as
\[\operatorname{Sel}_{0}(A/L)=\ker\left(H^{1}\left(G_{S}\left(L\right),A[p^{ \infty}]\right)\to\bigoplus_{v\in S}H^{1}\left(L_{v},A[p^{\infty}]\right) \right).\]
Similarly, the _\(p\)-fine Selmer group of \(A\)_ over \(L\) is defined as
\[\operatorname{Sel}_{0}(A[p]/L)=\ker\left(H^{1}\left(G_{S}\left(L\right),A[p] \right)\to\bigoplus_{v\in S}H^{1}\left(L_{v},A[p]\right)\right).\]
Note that \(\operatorname{Sel}_{0}(A/L)\) is independent of the choice of \(S\), whereas the definition of \(\operatorname{Sel}_{0}(A[p]/L)\) depends on \(S\); see for example [16, Lemma 4.1 and p. 86]. Since our main result concerns \(\operatorname{Sel}_{0}(A/L)\), we suppress \(S\) from the notation of \(\operatorname{Sel}_{0}(A[p]/L)\) for simplicity.
It is easy to observe that if \(F_{\infty}/F\) is an infinite extension,
\[\operatorname{Sel}_{0}\left(A/F_{\infty}\right)=\varinjlim_{L}\operatorname{ Sel}_{0}\left(A/L\right),\quad\operatorname{Sel}_{0}\left(A[p]/F_{\infty} \right)=\varinjlim_{L}\operatorname{Sel}_{0}\left(A[p]/L\right),\]
where the inductive limits are taken with respect to the restriction maps and \(L\) runs over all finite extensions of \(F\) contained in \(F_{\infty}\). Next, we define the notion of \(p\)-rank of an abelian group \(G\).
**Definition 5.2**.: Let \(G\) be an abelian group. Define the \(p\)_-rank_ of \(G\) as
\[r_{p}(G)=r_{p}(G[p]):=\dim_{\mathbb{F}_{p}}\left(G[p]\right).\]
### Growth of fine Selmer groups in \(\mathbb{Z}_{q}\)-extensions
In this section, we prove the following theorem which essentially says that the \(p\)-part of the class group and the \(p\)-primary fine Selmer group have similar growth behaviour in \(\mathbb{Z}_{q}\)-extensions. Our result is motivated by [16, Section 5].
**Theorem 5.3**.: _Let \(A\) be a \(d\)-dimensional abelian variety defined over a number field \(F\). Let \(S(F)\) be a finite set of primes in \(F\) consisting precisely of the primes above \(q\), the primes of bad reduction of \(A\), and the Archimedean primes. Let \(F_{\infty}/F\) be a fixed \(\mathbb{Z}_{q}\) extension such that primes in \(S_{f}(F)\) are finitely decomposed in \(F_{\infty}/F\) and suppose \([F_{n}:F]=q^{n}\). Further suppose that \(A[p]\subseteq A(F)\). Then as \(n\to\infty\),_
\[\left|r_{p}\left(\operatorname{Sel}_{0}\left(A/F_{n}\right)\right)-2dr_{p} \left(\operatorname{Cl}(F_{n})\right)\right|=O(1).\]
If \(A[p]\subseteq A(F)\), then the action of \(G_{F}\) on \(A[p]\) is trivial. Let \(A^{\vee}\) be the dual abelian variety. Since the Weil pairing forces \(\mu_{p}\subseteq F\), the action of \(G_{F}\) on \(A^{\vee}[p]\simeq\operatorname{Hom}(A[p],\mu_{p})\) is also trivial. This tells us that \(A^{\vee}[p]\subseteq A^{\vee}(F)\). Therefore, Theorem 5.3 allows us to deduce the following result.
**Corollary 5.4**.: _With the same hypothesis as in Theorem 5.3_
\[\left|r_{p}\left(\operatorname{Sel}_{0}\left(A/F_{n}\right)\right)-r_{p}\left( \operatorname{Sel}_{0}\left(A^{\vee}/F_{n}\right)\right)\right|=O(1).\]
To prove Theorem 5.3, we need a few lemmas.
**Lemma 5.5**.: _Consider the following exact sequence of co-finitely generated abelian groups_
\[P\to Q\to R\to S.\]
_Then,_
\[\left|r_{p}\left(Q\right)-r_{p}\left(R\right)\right|\leq 2r_{p}\left(P\right)+r_{ p}\left(S\right).\]
Proof.: See [16, Lemma 3.2].
**Lemma 5.6**.: _Let \(F_{\infty}\) be any \(\mathbb{Z}_{q}\)-extension of \(F\) such that all the primes in \(S_{f}(F)\) are finitely decomposed. Let \(F_{n}\) be the subfield of \(F_{\infty}\) such that \([F_{n}:F]=q^{n}\). Then_
\[\left|r_{p}\left(\operatorname{Cl}(F_{n})\right)-r_{p}\left(\operatorname{Cl} _{S}(F_{n})\right)\right|=O(1).\]
Proof.: For each \(F_{n}\), we write \(S_{f}(F_{n})\) for the set of finite primes of \(F_{n}\) above \(S_{f}\). For each \(n\), we have the following exact sequence
\[\mathbb{Z}^{\left|S_{f}(F_{n})\right|}\longrightarrow\operatorname{Cl}(F_{n}) \xrightarrow{\alpha_{n}}\operatorname{Cl}_{S}(F_{n})\longrightarrow 0\]
(see [14, Lemma 10.3.12]). Since the class group is always finite, it follows that \(\ker(\alpha_{n})\) is finite. Also, \(r_{p}\left(\ker(\alpha_{n})\right)\leq\left|S_{f}(F_{n})\right|\) and \(r_{p}\left(\ker(\alpha_{n})/p\right)\leq\left|S_{f}(F_{n})\right|\). By Lemma 5.5,
\[\left|r_{p}\left(\operatorname{Cl}(F_{n})\right)-r_{p}\left(\operatorname{Cl} _{S}(F_{n})\right)\right|\leq 2\left|S_{f}(F_{n})\right|=O(1).\qed\]
**Lemma 5.7**.: _Let \(F_{\infty}/F\) be a \(\mathbb{Z}_{q}\)-extension and let \(F_{n}\) be the subfield of \(F_{\infty}\) such that \([F_{n}:F]=q^{n}\). Let \(A\) be an abelian variety defined over \(F\). Suppose that all primes of \(S_{f}(F)\) are finitely decomposed in \(F_{\infty}/F\). Then_
\[\left|r_{p}\left(\operatorname{Sel}_{0}(A[p]/F_{n})\right)-r_{p}\left( \operatorname{Sel}_{0}(A/F_{n})\right)\right|=O(1).\]
Proof.: Consider the commutative diagram
\[\begin{array}{ccccccc}0&\to&\operatorname{Sel}_{0}(A[p]/F_{n})&\to&H^{1}(G_{S}(F_{n}),A[p])&\to&\displaystyle\bigoplus_{v_{n}\in S(F_{n})}H^{1}(F_{n,v_{n}},A[p])\\ &&\Big\downarrow{\scriptstyle s_{n}}&&\Big\downarrow{\scriptstyle f_{n}}&&\Big\downarrow{\scriptstyle\gamma_{n}}\\ 0&\to&\operatorname{Sel}_{0}(A/F_{n})[p]&\to&H^{1}(G_{S}(F_{n}),A[p^{\infty}])[p]&\to&\displaystyle\bigoplus_{v_{n}\in S(F_{n})}H^{1}(F_{n,v_{n}},A[p^{\infty}])[p]\end{array}\]
Both \(f_{n}\) and \(\gamma_{n}\) are surjective. Since \(A[p]\subset A(F_{n})\), the kernels of these maps are given by
\[\ker(f_{n}) =A(F_{n})[p^{\infty}]\big{/}p\simeq\left(\mathbb{Z}/p\mathbb{Z} \right)^{2d},\] \[\ker(\gamma_{n}) =\bigoplus_{v_{n}\in S(F_{n})}A(F_{n,v_{n}})[p^{\infty}]\big{/}p \simeq\bigoplus_{v_{n}\in S_{f}(F_{n})}\left(\mathbb{Z}/p\mathbb{Z}\right)^{2 d},\]
where the last isomorphism follows from our assumption that \(p\) is odd.
Observe that \(r_{p}\left(\ker\left(s_{n}\right)\right)\leq r_{p}\left(\ker\left(f_{n}\right) \right)=2d\) and that \(r_{p}\left(\ker\left(\gamma_{n}\right)\right)=2d\big{|}S_{f}(F_{n})\big{|}\). By hypothesis, \(S_{f}(F_{n})\) is bounded as \(n\) varies. It follows from the snake lemma that both \(r_{p}\left(\ker\left(s_{n}\right)\right)\) and \(r_{p}\left(\operatorname{coker}\left(s_{n}\right)\right)\) are finite and bounded. Applying Lemma 5.5 to the following exact sequence
\[0\to\ker(s_{n})\to\operatorname{Sel}_{0}(A[p]/F_{n})\to\operatorname{Sel}_{0}( A/F_{n})[p]\to\operatorname{coker}(s_{n})\to 0\]
completes the proof.
Proof of Theorem 5.3.: By hypothesis, \(A[p]\subseteq A(F)\). Therefore, \(A[p]\simeq(\mathbb{Z}/p)^{2d}\). We have
\[H^{1}\left(G_{S}(F_{n}),\ A[p]\right)=\operatorname{Hom}\left(G_{S}(F_{n}),\ A [p]\right).\]
There are similar identifications for the local cohomology groups. Thus,
\[\operatorname{Sel}_{0}\left(A[p]/F_{n}\right)\ \simeq\operatorname{Hom}\left( \operatorname{Cl}_{S}(F_{n}),A[p]\right)\simeq\operatorname{Cl}_{S}(F_{n})[p] ^{2d}\]
as abelian groups. Therefore,
\[r_{p}\left(\operatorname{Sel}_{0}\left(A[p]/F_{n}\right)\right)=2dr_{p}\left( \operatorname{Cl}_{S}(F_{n})\right).\]
The theorem now follows from Lemmas 5.6 and 5.7.
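For the reader's convenience, the following is a brief sketch (standard, and not specific to the present paper) of the class-field-theoretic identification \(\operatorname{Sel}_{0}\left(A[p]/F_{n}\right)\simeq\operatorname{Hom}\left(\operatorname{Cl}_{S}(F_{n}),A[p]\right)\) invoked in the proof above; the field \(H_{S}(F_{n})\) introduced below is our own auxiliary notation.

```latex
% A homomorphism \varphi\colon G_{S}(F_{n})\to A[p] lies in the fine Selmer group
% precisely when it vanishes on the decomposition group at every v\in S(F_{n}).
% Such a \varphi is unramified outside S (its source is G_{S}(F_{n})), unramified
% at S, and kills Frobenius at S; hence it factors through
% \operatorname{Gal}(H_{S}(F_{n})/F_{n}), where H_{S}(F_{n}) denotes the maximal
% abelian unramified p-extension of F_{n} in which every prime of S(F_{n}) splits
% completely. Class field theory identifies this Galois group with the p-part of
% the S-class group, which yields
\[
  \operatorname{Sel}_{0}\left(A[p]/F_{n}\right)
  \simeq \operatorname{Hom}\left(\operatorname{Cl}_{S}(F_{n}),A[p]\right)
  \simeq \left(\operatorname{Cl}_{S}(F_{n})[p]\right)^{2d},
\]
% whose \mathbb{F}_{p}-dimension is 2d\, r_{p}\left(\operatorname{Cl}_{S}(F_{n})\right).
```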
Let \(p^{e_{n}}\) be the largest power of \(p\) that divides the class number of \(F_{n}\). If \(e_{n}\) is bounded then it follows (trivially) that the \(p\)-rank is bounded. Thus, the following corollary is immediate.
**Corollary 5.8**.: _Let \(p\neq q\). Let \(F/\mathbb{Q}\) be any finite extension of \(\mathbb{Q}\) and \(F_{\infty}/F\) be any \(\mathbb{Z}_{q}\)-extension of \(F\). Let \(p^{e_{n}}\) be the exact power of \(p\) dividing the class number of the \(n\)-th intermediate field \(F_{n}\). Let \(A_{/F}\) be an abelian variety such that \(A[p]\subseteq A(F)\). If \(e_{n}\) is bounded as \(n\to\infty\), then \(r_{p}\left(\operatorname{Sel}_{0}\left(A/F_{n}\right)\right)\) is bounded independently of \(n\)._
In addition to Theorem A, there are some other results in the literature where it is known that the \(p\)-part of the class group stabilizes in a \(\mathbb{Z}_{q}\)-extension (when \(p,q\) are distinct primes). These were discussed briefly in the introduction and are recorded here more precisely.
1. ([11, Theorem]) Let \(F/\mathbb{Q}\) be an abelian extension of \(\mathbb{Q}\) and \(F_{\infty}/F\) be the cyclotomic \(\mathbb{Z}_{q}\)-extension of \(F\). If \(p^{e_{n}}\) is the exact power of \(p\) dividing the class number of the \(n\)-th intermediate field \(F_{n}\), then \(e_{n}\) is bounded as \(n\to\infty\).
2. ([1, Theorem 7.10]) Let \(p,q\) be fixed distinct odd primes, both \(\geq 5\), \(K\) be an imaginary quadratic field of class number \(1\) where \(p\) and \(q\) split, and \(E_{/K}\) be an elliptic curve with CM by \(\mathcal{O}_{K}\) and good reduction at \(p,q\). Let \(K_{\infty}\) be the \(\mathbb{Z}_{q}\)-extension of \(K\) which is unramified outside \(\mathfrak{q}\) (resp. \(\overline{\mathfrak{q}}\)). Let \(\mathfrak{g}\) be a fixed ideal of \(\mathcal{O}_{K}\) such that it is coprime to \(pq\) and \(F=\mathscr{R}(\mathfrak{g}\mathfrak{q})\) is of degree prime-to-\(p\) over \(K\). Then, the \(p\)-part of the class number stabilizes in \(FK_{\infty}=\mathscr{R}(\mathfrak{g}\mathfrak{q}^{\infty})\). However, since \(p\) is assumed to be unramified in \(F\) in _loc. cit._, the hypothesis \(A[p]\subseteq A(F)\) in Theorem 5.3 is unlikely to hold. The same can be said regarding the setting studied in Theorem A.
**Theorem 5.9**.: _With notation as above, suppose that the \(p\)-rank of the fine Selmer group, denoted by \(r_{p}\left(\operatorname{Sel}_{0}(A/F_{n})\right)\) stabilizes in a \(\mathbb{Z}_{q}\)-extension of \(F\). Then there exists \(n\geq 0\), such that for all \(m\geq n\), the restriction map induces an isomorphism_
\[\operatorname{Sel}_{0}(A/F_{n})\simeq\operatorname{Sel}_{0}(A/F_{m}).\]
Proof.: The following argument is similar to the one presented in [1, p. 15], where instead of classical Selmer groups, we consider fine Selmer groups. Consider the extension \(F_{m}/F_{n}\). Then \([F_{m}:F_{n}]=q^{m-n}=t\) (say). The restriction map
\[\operatorname{Gal}\left(\overline{F}/F_{n}\right)\longrightarrow\operatorname {Gal}\left(\overline{F}/F_{m}\right)\]
induces the restriction homomorphism
\[\operatorname{res}:\operatorname{Sel}_{0}(A/F_{n})\longrightarrow\operatorname {Sel}_{0}(A/F_{m}).\]
Since \(\gcd(q,p)=1\), this map is an injection. Moreover, we have
\[\operatorname{Sel}_{0}(A/F_{n})\xrightarrow{\operatorname{res}}\operatorname {Sel}_{0}(A/F_{m})\xrightarrow{\operatorname{cores}}\operatorname{Sel}_{0}(A /F_{n})\xrightarrow{t^{-1}}\operatorname{Sel}_{0}(A/F_{n})\]
where \(\operatorname{cores}\circ\operatorname{res}\) is multiplication by \(t\). Since \(\gcd(t,p)=1\), the composition \(t^{-1}\circ\operatorname{cores}\circ\operatorname{res}\) is the identity map, so \(\operatorname{res}\) admits a retraction; thus, the exact sequence
\[0\longrightarrow\operatorname{Sel}_{0}(A/F_{n})\longrightarrow\operatorname {Sel}_{0}(A/F_{m})\longrightarrow\operatorname{Sel}_{0}(A/F_{m})\big{/} \operatorname{Sel}_{0}(A/F_{n})\longrightarrow 0\]
is split exact.
Let us write \(\operatorname{Sel}_{0}(A/F_{n})=(\mathbb{Q}_{p}/\mathbb{Z}_{p})^{s_{n}}\oplus T_ {n}\), where \(s_{n}\geq 0\) and \(T_{n}\) is a finite \(p\)-group. Then,
\[r_{p}\left(\operatorname{Sel}_{0}(A/F_{n})\right)=s_{n}+r_{p}(T_{n}).\]
The injection \(\operatorname{Sel}_{0}(A/F_{n})\hookrightarrow\operatorname{Sel}_{0}(A/F_{m})\) tells us that \(s_{m}\geq s_{n}\). If the \(p\)-rank \(r_{p}\left(\operatorname{Sel}_{0}(A/F_{n})\right)\) eventually stabilizes it follows that \(s_{n}\) also stabilizes. Denote the cokernel of the injection by \(C_{m,n}\). By duality, we have the short exact sequence
\[0\to C_{m,n}^{\vee}\to\mathbb{Z}_{p}^{s_{m}}\oplus T_{m}^{\vee}\to\mathbb{Z} _{p}^{s_{n}}\oplus T_{n}^{\vee}\to 0.\]
When \(s_{m}=s_{n}\), \(C_{m,n}^{\vee}\) must be finite. Consequently, the image of \(C_{m,n}^{\vee}\) in \(\operatorname{Sel}_{0}(A/F_{n})^{\vee}\) is contained inside \(T_{m}^{\vee}\). Furthermore, since the short exact sequence splits, we deduce the isomorphism
\[T_{m}=T_{n}\oplus C_{m,n}.\]
As \(s_{n}\) stabilizes, \(r_{p}(T_{n})\) also stabilizes. Therefore, \(C_{m,n}\) has to be \(0\) eventually.
Theorem B is now an immediate corollary of Theorems 5.3 and 5.9.
**Corollary 5.10**.: _Let \(p,q\) be distinct odd primes. Let \(F\) be any number field and \(A_{/F}\) be an abelian variety such that \(A[p]\subseteq A(F)\). Let \(F_{\infty}/F\) be a \(\mathbb{Z}_{q}\)-extension where the primes above \(q\) and the primes of bad reduction of \(A\) are finitely decomposed. If the \(p\)-part of the class group stabilizes, i.e., there exists \(N\geq 0\) such that for all \(n\geq N\),_
\[\operatorname{ord}_{p}(h(F_{n}))=\operatorname{ord}_{p}(h(F_{N})),\]
_then the growth of the \(p\)-primary fine Selmer group stabilizes in the \(\mathbb{Z}_{q}\)-extension as well, i.e., there exists an integer \(N^{\prime}\geq N\) such that for all \(n\geq N^{\prime}\), the restriction map induces an isomorphism_
\[\operatorname{Sel}_{0}(A/F_{n})\simeq\operatorname{Sel}_{0}(A/F_{N^{\prime}}).\] |
2306.10962 | Electrolyzer Scheduling for Nordic FCR Services | The cost competitiveness of green hydrogen production via electrolysis
presents a significant challenge for its large-scale adoption. One potential
solution to make electrolyzers profitable is to diversify their products and
participate in various markets, generating additional revenue streams.
Electrolyzers can be utilized as flexible loads and participate in various
frequency-supporting ancillary service markets by adjusting their operating set
points. This paper develops a mixed-integer linear model, deriving an optimal
scheduling strategy for an electrolyzer providing Frequency Containment Reserve
(FCR) services in the Nordic synchronous region. Depending on the hydrogen
price and demand, results show that the provision of various FCR services,
particularly those for critical frequency conditions (FCR-D), could
significantly increase the profit of the electrolyzer. | Marco Saretta, Enrica Raheli, Jalal Kazempour | 2023-06-19T14:25:49Z | http://arxiv.org/abs/2306.10962v2 | # Electrolyzer Scheduling for Nordic FCR Services
###### Abstract
The cost competitiveness of green hydrogen production via electrolysis presents a significant challenge for its large-scale adoption. One potential solution to make electrolyzers profitable is to diversify their products and participate in various markets, generating additional revenue streams. Electrolyzers can be utilized as flexible loads and participate in various frequency-supporting ancillary service markets by adjusting their operating set points. This paper develops a mixed-integer linear model, deriving an optimal scheduling strategy for an electrolyzer providing Frequency Containment Reserve (FCR) services in the Nordic synchronous region. Depending on the hydrogen price and demand, results show that the provision of various FCR services, particularly those for critical frequency conditions (FCR-D), could significantly increase the profit of the electrolyzer.
Electrolyzer, scheduling, frequency-supporting ancillary services, mixed-integer linear optimization
## I Introduction
The production of renewable hydrogen through electrolysis is widely acknowledged as a crucial step in the green transition, enabling decarbonization of hard-to-abate sectors, such as industry and heavy transport. To support the large-scale development of electrolyzers, several countries in Europe and globally have released national hydrogen strategies. For example, in 2021 the Danish government released a strategy for the national development of Power-to-X, with a goal to construct 4 to 6 GW of electrolysis capacity by 2030 [1]. However, there are numerous challenges to scaling up this technology, including the cost competitiveness of the electrolysis-based hydrogen production [2]. This requires the establishment of new business models by diversification of the products [3].
Electrolyzers are flexible assets that can rapidly change their power consumption level within their operating range with ramp rates around 20% of the nominal power per second [4, 5]. This makes them eligible to produce various frequency-supporting ancillary services, providing an additional promising revenue stream [6]. Examples of potential ancillary services that electrolyzers can produce are Frequency Containment Reserve (FCR) as a primary reserve, automatic Frequency Restoration Reserve (aFRR) as a secondary reserve, and manual Frequency Restoration Reserve (mFRR) as a tertiary reserve. The technical feasibility of electrolyzers for providing various services is investigated in [7] and [8]. The economic feasibility of providing grid services is analyzed in [9] and [10] for the French and German context, respectively. In [11], a scheduling model for an electrolyzer in Western Denmark (DK1) participating in the day-ahead, balancing, and reserve markets is proposed, showing that offering FCR and aFRR services significantly increases the profit. In a similar direction but for batteries, [12] develops a business model by selling FCR services in Eastern Denmark (DK2). All these studies show that the extent of increased profit by selling ancillary services depends significantly on the location of the electrolyzer due to different market products, prices, and eligibility requirements.
This paper develops a scheduling model for an electrolyzer located in DK2, which is part of the Nordic synchronous region. Compared to the Continental Europe region including DK1, the power system in the Nordic region is smaller in scale and capacity, with a higher penetration rate of renewables, and thereby lower inertia. For that, there are three sub-categories of FCR services in the Nordic region designed for different ranges of frequency deviation, including FCR-N (for normal operations) and FCR-D Up/Down (for operations under disturbance with critically low/high frequency). The main contributions of this paper are twofold. First, we develop a mixed-integer linear model for scheduling electrolyzers, aiming to maximize their profit by selling hydrogen as well as FCR-N and FCR-D Up/Down services. Second, we provide a quantitative assessment to evaluate to what extent an electrolyzer located in DK2 earns more by providing FCR services, in comparison to a case that solely produces hydrogen.
The remaining of the paper is organized as follows. Section II provides an introduction to the Nordic FCR markets. Section III presents the proposed optimization model. Section IV provides numerical scheduling results and an economic assessment. Finally, Section V concludes the paper.
_Notation:_ By \(\lambda_{t}^{(\cdot)}\), we refer to the forecast of the volume-weighted average price, namely \(\lambda_{t}^{\text{FCR-N}}\) for FCR-N, \(\lambda_{t}^{\text{FCR-D}\uparrow}\) for FCR-D Up, and \(\lambda_{t}^{\text{FCR-D}\downarrow}\) for FCR-D Down, all in hour \(t\). Similarly, let \(r_{t}^{(\cdot)}\) denote the quantity bids \(r_{t}^{\text{FCR-N}}\), \(r_{t}^{\text{FCR-D}\uparrow}\), and \(r_{t}^{\text{FCR-D}\downarrow}\) to be submitted to the corresponding markets.
## II Preliminaries: Nordic FCR markets
### _General overview_
The Transmission System Operator (TSO) is the organization in charge, on a national scale, of the secure operation of the power grid. TSOs within synchronous areas share responsibility for the real-time balance between supply and demand to maintain the grid frequency close to the nominal value, e.g., 50 Hz in Europe. Ancillary services are the measures adopted by TSOs to ensure grid stability. For that, TSOs procure reserves for ancillary services in advance, and activate them in the real-time operation if necessary.
For completeness, Table I provides a nomenclature for frequency-supporting ancillary services in the Nordic and Continental Europe synchronous regions, although the focus of this paper is the FCR services in the Nordic region.
### _Market structure_
The Nordic obligations indicate the reserve requirements that must be collectively secured in every hour among the Nordic TSOs in a proportional share for different services, as reported in Table II. Note that StatNett, FinGrid, Svenska Kraftnat, and Energinet are national TSOs in Norway, Finland, Sweden, and Denmark, respectively. The Danish TSO, Energinet, has a comparatively lower share due to congestion and technical limitations of the DK2-Sweden connection cable. The Nordic obligations for any hour of the day \(\mathrm{D}\) are contracted via two separate auctions, both pay-as-bid structured, on D-2 and D-1 prior to the delivery day, as shown in Figure 1. Approximately 80% of each FCR service is contracted in the D-2 auction, and the remainder in D-1.
During the daily FCR auctions, Energinet and Svenska Kraftnat jointly procure their share of reserves, hence Danish FCR providers can potentially meet the full Swedish demand for FCR-N and FCR-D services. However, the maximum amount of FCR from a single unit is limited to 100 MW [13] to avoid a significant loss of FCR in case of a unit failure.
### _FCR delivery and payment structure_
The provision of FCR services entails two distinct stages, namely reserve contraction and activation.
Reserve contraction occurs during the D-2 or D-1 auction, wherein the availability of the reserve noted in the FCR bid is approved by the TSO. Recall that both auctions are based on a pay-as-bid scheme. Compensation for the FCR service in hour \(t\) is based on the reserve quantity \(r_{t}^{(\cdot)}\) (MW) and the submitted bid price \(\lambda_{t}^{(\cdot)}\) (€/MW), resulting in a revenue, the so-called reserve payment. The Nordic TSOs do not currently disclose information about the last accepted bid in the auctions. The only public information is the hourly volume-weighted average bid price for each service once the auction is closed.
The activation payment is linked to the real-time operation, where the FCR provider must activate the reserve according to the frequency level \(f\) in Hz at any instant within the hour declared in the bid. The real-time reserve activation at any instant in hour \(t\) is equal to the product of the amount of the contracted reserve \(r_{t}^{(\cdot)}\) and the normalized instantaneous response \(y^{(\cdot)}\), defined below for FCR-N, FCR-D Up, and FCR-D Down, respectively:
\[y^{\text{FCR-N}}=\begin{cases}1,&f\leq 49.9\\ \frac{50.0-f}{0.1},&49.9<f<50.1\\ -1,&f\geq 50.1\end{cases}\qquad y^{\text{FCR-D}\uparrow}=\begin{cases}1,&f\leq 49.5\\ \frac{49.9-f}{0.4},&49.5<f<49.9\\ 0,&f\geq 49.9\end{cases}\qquad y^{\text{FCR-D}\downarrow}=\begin{cases}0,&f\leq 50.1\\ \frac{f-50.1}{0.4},&50.1<f<50.5\\ 1,&f\geq 50.5\end{cases} \tag{1}\]
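As an illustration of how these responses could be evaluated against a frequency measurement, a minimal Python sketch is given below. It assumes the piecewise-linear bands written above (reconstructed from the Nordic thresholds of 49.5, 49.9, 50.1, and 50.5 Hz quoted in this paper); the function name and the sign convention (positive values meaning upward regulation, i.e., reduced consumption for a load) are ours.

```python
def fcr_activation(f_hz: float) -> dict:
    """Normalized FCR activation responses y(f) for a measured frequency f in Hz.

    Piecewise-linear sketch based on the Nordic bands quoted in this paper:
    FCR-N acts within 49.9-50.1 Hz, FCR-D Up within 49.5-49.9 Hz, and
    FCR-D Down within 50.1-50.5 Hz. Positive values denote upward regulation,
    i.e., a load reduces its consumption by r_t * y.
    """
    def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
        return max(lo, min(hi, x))

    return {
        "FCR-N": clamp((50.0 - f_hz) / 0.1, -1.0, 1.0),   # two-sided product
        "FCR-D Up": clamp((49.9 - f_hz) / 0.4),           # under-frequency only
        "FCR-D Down": clamp((f_hz - 50.1) / 0.4),         # over-frequency only
    }


if __name__ == "__main__":
    for f in (49.45, 49.7, 49.95, 50.0, 50.05, 50.3, 50.55):
        print(f, fcr_activation(f))
```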
### _When do we solve the optimization problem?_
Recall from Figure 1 that the electrolyzer has the opportunity to participate in two pay-as-bid auctions for FCR services, one being settled in day D-\(2\) and the other one in D-\(1\). The electrolyzer owner can solve our proposed optimization model for scheduling decision making at two distinct points of time:
1. At any time before the first auction closure at hour 15:00 of day D-\(2\). In this case, the electrolyzer owner should forecast prices \(\lambda_{t}^{(\cdot)}\) for the first auction, as well as the hourly spot prices \(\lambda_{t}^{\text{spot}}\) whose true values will be realized in day D-\(1\). All these forecasted prices are treated as input parameters to our optimization model. By solving it, we will determine reserve quantity bids \(r_{t}^{(\cdot)}\) to be submitted to the first FCR auction. The price bids are the same as the forecasted prices. This optimization problem also gives quantity bids to be submitted to the spot market, i.e., \(p_{t}\), but they can be modified until noon of D-\(1\) by a re-optimization with fixed \(r_{t}^{(\cdot)}\) if updated spot price forecasts are available.
2. At any time before noon of day D-\(1\), i.e., the closure of the spot market, in case the electrolyzer could not sell FCR services in the first auction. This time, \(\lambda_{t}^{(\cdot)}\) are forecasted prices for the second auction, which are not necessarily identical to realized prices of the first auction. The optimization outcomes are quantity bids to be submitted to the spot market, i.e., \(p_{t}\), and to the second FCR auction, i.e., \(r_{t}^{(\cdot)}\). Note that we can also solve this optimization problem between hours 14 and 18 of day D-\(1\), but then the hourly power purchases \(p_{t}\) are fixed based on the spot market outcomes, and therefore hourly power consumptions can no longer be changed unless by trading in the intra-day and subsequent markets, which is outside the scope of this paper.
### _Mathematical formulation_
The proposed model is formulated as (2)-(6). Lower-case symbols are used for variables, whereas upper-case or Greek symbols indicate parameters. The objective function maximizes the total profit over the set of hours \(t\in\mathcal{T}\) as
\[\max_{\mathbf{x}}\sum_{t\in\mathcal{T}}\bigg{(}d_{t}\lambda^{\text{H}_{2}}+r_{t}^{\text{FCR-N}}\lambda_{t}^{\text{FCR-N}}+r_{t}^{\text{FCR-D}\uparrow}\lambda_{t}^{\text{FCR-D}\uparrow}+r_{t}^{\text{FCR-D}\downarrow}\lambda_{t}^{\text{FCR-D}\downarrow}-p_{t}\left(\lambda_{t}^{\text{spot}}+\lambda^{\text{TSO}}+\lambda^{\text{DSO}}\right)-z_{t}^{\text{su}}K^{\text{su}}\bigg{)}, \tag{2}\]
where vector \(\mathbf{x}\) includes the set of variables, which will be defined later. The revenue streams are based on the hydrogen sale \(d_{t}\) at a constant price \(\lambda^{\text{H}_{2}}\) and service sales \(r_{t}^{(\cdot)}\) at price \(\lambda_{t}^{(\cdot)}\). The cost is incurred by purchasing hourly power \(p_{t}\) at the spot market price \(\lambda_{t}^{\text{spot}}\), marked up by the TSO tariff \(\lambda^{\text{TSO}}\) as well as the tariff of the distribution system operator \(\lambda^{\text{DSO}}\), if the electrolyzer is comparatively small and connected to a distribution grid. In addition, (2) accounts for the cold start-up cost of the electrolyzer, where the binary variable \(z_{t}^{\text{su}}\) indicates a start-up at hour \(t\), associated with the cost \(K^{\text{su}}\) per start-up. Note that the activation payment is excluded1.
Footnote 1: This is a mild assumption because (_i_) FCR-N is a service activated in both directions, and historically the FCR-N activation was almost symmetric over every week in 2022, and (_ii_) the activation rate of FCR-D Up/Down services in the Nordic area was less than 1% in 2022 [12]. The reader interested in FCR activation data in DK2 is referred to [14].
The following set of constraints (3) models the physics and limitations of the electrolyzer and auxiliary assets including compressor and hydrogen storage. The power purchased from the spot market, i.e., \(p_{t}\), supplies the electrolyzer's consumption \(p_{t}^{\text{\tiny{e}}}\) and the compressor's consumption \(p_{t}^{\text{\tiny{e}}}\):
\[p_{t}=p_{t}^{\text{e}}+p_{t}^{\text{c}}\qquad\forall\ t\in\mathcal{T}. \tag{3a}\] The electrolyzer is either on, or standby, or off, i.e., \[z_{t}^{\text{on}}+z_{t}^{\text{sb}}\leq 1\qquad\forall\ t\in\mathcal{T}, \tag{3b}\]
including binary variables \(z_{t}^{\text{on}}\) (if 1, the electrolyzer is on) and \(z_{t}^{\text{sb}}\) (if 1, the electrolyzer is on the standby state). If on, the electrolyzer consumes power and produces hydrogen. If standby, the electrolyzer does not produce hydrogen but consumes 1-5% of the nominal power needed to keep the system warm and pressurized for quick activation [4]. If both binary variables are zero, then the electrolyzer is off, neither consuming power nor producing hydrogen2.
Footnote 2: In this formulation, we model three states (on, standby, off) with two binary variables only, instead of three, as it is prevalent in the literature. We hypothesize, depending on the solver used, this may reduce computational time, but a further investigation is required.
The power consumption \(p_{t}^{\text{\tiny{e}}}\) of the electrolyzer defines the operational baseline, constrained by
\[P^{\text{min}}z_{t}^{\text{on}}+P^{\text{sb}}z_{t}^{\text{\tiny{sb}}}\leq p_ {t}^{\text{\tiny{e}}}\leq P^{\text{max}}z_{t}^{\text{on}}+P^{\text{sb}}z_{t}^{ \text{\tiny{b}}}\quad\forall\ t\in\mathcal{T}, \tag{3c}\]
where the lower bound is \(P^{\text{min}}\) and the upper bound is the capacity \(P^{\text{max}}\) when the electrolyzer is on (\(z_{t}^{\text{on}}=1\)). If standby (\(z_{t}^{\text{sb}}=1\)), \(p_{t}^{\text{e}}\) is set to be equal to the standby power \(P^{\text{sb}}\).
Transition from off state in hour \(t-1\) to on state in \(t\) incurs the start-up cost due to the need to reach the desired pressure and temperature levels. For that, (3d) sets the binary variable \(z_{t}^{\text{su}}\) to be 1 during such a transition, otherwise it is 0:
\[z_{t}^{\text{su}}\geq(z_{t}^{\text{on}}-z_{t-1}^{\text{on}})+(z_{t}^{\text{sb} }-z_{t-1}^{\text{sb}})\qquad\forall\ t\in\mathcal{T}. \tag{3d}\]
The power-to-hydrogen conversion efficiency of an alkaline electrolyzer is not constant over the operating range. To model the non-linear dependency between power consumption and hydrogen production, a piece-wise linearization is introduced as proposed in [15]. For each linearization segment \(s\in\mathcal{S}\), the hydrogen production \(h_{t}^{\text{p}}\) is formulated as a linear function of the power consumption \(\widehat{p}_{t,s}^{\text{e}}\):
\[h_{t}^{\text{p}}=\sum_{s\in\mathcal{S}}\big{(}A_{s}\widehat{p}_{t,s}^{\text{ \tiny{e}}}+B_{s}\widehat{z}_{t,s}\big{)}\qquad\forall\ t\in\mathcal{T}, \tag{3e}\]
Fig. 2: Power system design of an electrolyzer system and its auxiliary assets, providing hydrogen and FCR services. The power source is the grid.
where the coefficients \(A_{s}\) and \(B_{s}\) represent the slope and intercept for each linear segment, whereas the binary variable \(\widehat{z}_{t,s}\), if one, indicates segment \(s\) is active in hour \(t\).
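As a side note, a minimal sketch of how the segment coefficients \(A_{s}\) and \(B_{s}\) could be pre-computed from a given production curve is shown below; the curve and breakpoints used here are placeholders for illustration only and are not the curve of [15].

```python
import numpy as np

def segment_coefficients(h_curve, breakpoints):
    """Slope A_s and intercept B_s of each chord interpolating h_curve.

    h_curve: callable mapping electrolyzer power (MW) to hydrogen output (kg/h).
    breakpoints: increasing power levels (MW) delimiting the segments.
    Returns (A, B, P_low, P_high) such that, on segment s,
    h ~= A[s] * p + B[s] for P_low[s] <= p <= P_high[s].
    """
    A, B, P_low, P_high = [], [], [], []
    for p_lo, p_hi in zip(breakpoints[:-1], breakpoints[1:]):
        h_lo, h_hi = h_curve(p_lo), h_curve(p_hi)
        a = (h_hi - h_lo) / (p_hi - p_lo)   # slope of the chord
        b = h_lo - a * p_lo                 # intercept of the chord
        A.append(a)
        B.append(b)
        P_low.append(p_lo)
        P_high.append(p_hi)
    return A, B, P_low, P_high

# Placeholder concave production curve for a 10-MW unit (illustrative only).
example_curve = lambda p: 20.0 * p - 0.35 * p ** 2
A, B, P_low, P_high = segment_coefficients(example_curve, np.linspace(1.6, 10.0, 6))
```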
The electrolyzer produces hydrogen in the on state with one segment active only in hour \(t\), as enforced by
\[\sum_{s\in\mathcal{S}}\widehat{z}_{t,s}=z_{t}^{\text{on}}\qquad\qquad\qquad \forall\ t\in\mathcal{T}. \tag{3f}\]
The power consumption \(\widehat{p}_{t,s}^{\text{e}}\) for each segment \(s\) is constrained by
\[\underline{P}_{s}\widehat{z}_{t,s}\leq\widehat{p}_{t,s}^{\text{e}}\leq \overline{P}_{s}\widehat{z}_{t,s}\qquad\forall\ t\in\mathcal{T},\ \forall\ s\in\mathcal{S}, \tag{3g}\]
where \(\underline{P}_{s}\) and \(\overline{P}_{s}\) represent lower and upper bounds. The power consumption \(p_{t}^{\text{e}}\) is then calculated as
\[p_{t}^{\text{e}}=P^{\text{sb}}z_{t}^{\text{sb}}+\sum_{s\in\mathcal{S}} \widehat{p}_{t,s}^{\text{e}}\qquad\qquad\forall\ t\in\mathcal{T}. \tag{3h}\]
The hydrogen production of the electrolyzer goes to the compressor to be further pressurized, and then is either stored or is directly injected to tube trailers, representing the demand. The compressor power consumption \(p_{t}^{\text{c}}\) is a function of the hydrogen production \(h_{t}^{\text{p}}\) of the electrolyzer as
\[p_{t}^{\text{c}}=K^{\text{c}}h_{t}^{\text{p}}\qquad\qquad\qquad\forall\ t\in \mathcal{T}, \tag{3i}\]
where \(K^{\text{c}}\) gives the energy required to compress 1 kg of hydrogen from the electrolyzer output pressure to the pressure level of the storage or tube trailers. The hourly hydrogen demand is bounded by the capacity of tube trailers, as
\[d_{t}\leq D^{\text{max}}\qquad\qquad\qquad\forall\ t\in\mathcal{T}. \tag{3j}\]
In case the hydrogen production \(h_{t}^{\text{p}}\) of the electrolyzer in hour \(t\) is larger than the demand \(d_{t}\) in that hour, the excess is stored, while in the case of a deficit, the storage is discharged. By this, the state of charge of the storage \(h_{t}^{\text{s}}\) is defined as
\[h_{t}^{\text{s}} =h_{t}^{\text{p}}-d_{t} t=1, \tag{3k}\] \[h_{t}^{\text{s}} =h_{t}^{\text{p}}-d_{t}+h_{t-1}^{\text{s}}\qquad\qquad\forall\ t \in\mathcal{T}\backslash 1, \tag{3l}\]
which is upper-bounded by the capacity of the storage, i.e.,
\[h_{t}^{\text{s}}\leq H^{\text{max}}\qquad\qquad\qquad\forall\ t\in\mathcal{T}. \tag{3m}\]
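A small sketch of the storage balance (3k)-(3m) is given below; it simply rolls the state of charge forward for given hourly production and demand profiles and flags a violation of the bounds. All numbers are illustrative.

```python
def storage_trajectory(h_prod, demand, h_max):
    """State of charge implied by (3k)-(3l); raises if (3m) or non-negativity fails."""
    soc, trajectory = 0.0, []
    for t, (hp, d) in enumerate(zip(h_prod, demand)):
        soc += hp - d                      # hourly balance: production minus delivery
        if soc < 0.0 or soc > h_max:
            raise ValueError(f"storage bound violated at hour {t}: {soc:.1f} kg")
        trajectory.append(soc)
    return trajectory

# Illustrative 4-hour profiles in kg of hydrogen per hour.
print(storage_trajectory([180, 180, 60, 0], [100, 100, 100, 100], h_max=500))
```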
The following set of constraints (4) enforces FCR reserve allocation constraints. To clarify the need for these constraints, Figure 3 provides an example, where the electrolyzer consumes \(p_{t}^{\text{e}}\) in hour \(t\), which is the baseline for reserve activation. Recall from (1) that FCR-N is a market with a two-side product, meaning that the electrolyzer might be activated to consume less power (if frequency is below 50 Hz) or more power (if frequency is above 50 Hz). On the contrary, FCR-D up/Down are markets with one-side products, meaning that if FCR-D Up is activated (i.e., frequency is below 49.9 Hz), the electrolyzer must consume less power, whereas if FCR-D Down is activated (i.e., frequency is above 50.1 Hz), the electrolyzer must consume more power. To operate fully reliably under the worst case wherein frequency drops to 49.5 Hz (threshold defined by the Nordic TSOs), the electrolyzer should be able to respond by the full activation of both FCR-N and FCR-D Up, i.e.,
\[p_{t}^{\text{e}}-r_{t}^{\text{FCR-N}}-r_{t}^{\text{FCR-D}\uparrow}\geq P^{\text{min}}z_{t}^{\text{on}}+P^{\text{sb}}z_{t}^{\text{sb}}\quad\forall\ t\in\mathcal{T}, \tag{4a}\] indicating that, if fully activated, the electrolyzer's consumption should still not be lower than \(P^{\text{min}}\) (if it is on) or \(P^{\text{sb}}\) (if it is in the standby state). Similarly, for the over-frequency worst case (i.e., 50.5 Hz defined by the Nordic TSOs), we enforce \[p_{t}^{\text{e}}+r_{t}^{\text{FCR-N}}+r_{t}^{\text{FCR-D}\downarrow}\leq P^{\text{max}}z_{t}^{\text{on}}+P^{\text{sb}}z_{t}^{\text{sb}}\quad\forall\ t\in\mathcal{T}, \tag{4b}\]
stating that by the full activation of both FCR-N and FCR-D Down, the electrolyzer's consumption should not go beyond its capacity \(P^{\text{max}}\) (if on) or \(P^{\text{sb}}\) (if standby)3.
Footnote 3: An extension to this work is to make (4a) and (4b) probabilistic, e.g., via chance constraints, making less conservative reserve allocation decisions, which is outside the scope of this paper. We refer the interested reader to [16].
In addition, we enforce the minimum bid requirement \(Q^{\text{FCR}}\), an identical value for all FCR services set by the Nordic TSOs:
\[r_{t}^{\text{FCR-D}\uparrow}\geq z_{t}^{\text{FCR-D}\uparrow}Q^{\text{FCR}}\qquad\forall\ t\in\mathcal{T}, \tag{4c}\] \[r_{t}^{\text{FCR-D}\uparrow}\leq z_{t}^{\text{FCR-D}\uparrow}\left(P^{\text{max}}-P^{\text{min}}\right)\qquad\forall\ t\in\mathcal{T}, \tag{4d}\] \[r_{t}^{\text{FCR-D}\downarrow}\geq z_{t}^{\text{FCR-D}\downarrow}Q^{\text{FCR}}\qquad\forall\ t\in\mathcal{T}, \tag{4e}\] \[r_{t}^{\text{FCR-D}\downarrow}\leq z_{t}^{\text{FCR-D}\downarrow}\left(P^{\text{max}}-P^{\text{min}}\right)\qquad\forall\ t\in\mathcal{T}, \tag{4f}\] \[r_{t}^{\text{FCR-N}}\geq z_{t}^{\text{FCR-N}}Q^{\text{FCR}}\qquad\forall\ t\in\mathcal{T}, \tag{4g}\] \[r_{t}^{\text{FCR-N}}\leq z_{t}^{\text{FCR-N}}\left(\frac{P^{\text{max}}-P^{\text{min}}}{2}\right)\qquad\forall\ t\in\mathcal{T}, \tag{4h}\]
where the binary variables \(z_{t}^{\text{FCR-N}}\), \(z_{t}^{\text{FCR-D}\uparrow}\), and \(z_{t}^{\text{FCR-D}\downarrow}\) ensure that the reserve quantity bid \(r_{t}^{(\cdot)}\), if it takes a non-zero value, is lower bounded by \(Q^{\text{FCR}}\). If a binary variable \(z_{t}^{(\cdot)}\) takes a zero value, the combination of the corresponding lower and upper bounds forces \(r_{t}^{(\cdot)}\) to be zero.
Within a Hydrogen Purchase Agreement (HPA), the electrolyzer owner might be obliged to supply at least a minimum demand \(\mathrm{HPA}^{\text{min}}\) over a time period, e.g., a day, or a week. For example, let \(\mathcal{T}\) indicate the time horizon, then \(\mathcal{H}_{w}\subset\mathcal{T}\) represents the time period \(w\in\mathcal{W}\) where the hydrogen demand must be met. The minimum hydrogen demand is enforced by
\[\sum_{t\in\mathcal{H}_{w}}d_{t}\geq\mathrm{HPA}^{\text{min}}\qquad\qquad\forall\ w\in\mathcal{W}. \tag{5}\]
The non-negativity of variables is enforced by
\[d_{t},h_{t}^{\text{s}},h_{t}^{\text{p}},p_{t}^{\text{e}},\widehat{p}_{t,s}^{ \text{e}},p_{t},p_{t}^{\text{c}},r_{t}^{\text{FCR-D}\uparrow},r_{t}^{\text{FCR-D} \downarrow},r_{t}^{\text{FCR-N}}\in\mathbb{R}^{+}. \tag{6a}\]
whereas binary variables are
\[z_{t}^{\text{on}},z_{t}^{\text{sb}},z_{t}^{\text{su}},\widehat{z}_{t,s},z_{t}^{ \text{FCR-D}\uparrow},z_{t}^{\text{FCR-D}\downarrow},z_{t}^{\text{FCR-N}}\in \{0,1\}. \tag{6b}\]
The vector \(\mathbf{x}\) contains all variables listed in (6).
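To make the structure of (2)-(6) concrete, a much-simplified sketch using the open-source PuLP modeling library is given below. It keeps only the on/off state (no standby), a single linear efficiency segment, and versions of (4a)-(4b) without the FCR-N term; it omits tariffs, start-up costs, storage, FCR-N, and the HPA constraint, and all parameter values are placeholders rather than the case-study data of Section IV.

```python
import pulp

T = range(24)
lam_spot = [60.0] * 24            # €/MWh, placeholder day-ahead prices
lam_up, lam_dn = 25.0, 5.0        # €/MW, placeholder FCR-D Up / Down prices
lam_h2, eff = 2.0, 20.0           # €/kg and kg of H2 per MWh (one linear segment)
p_min, p_max = 1.6, 10.0          # MW, minimum load and capacity

m = pulp.LpProblem("electrolyzer_fcr", pulp.LpMaximize)
p = pulp.LpVariable.dicts("p", T, lowBound=0, upBound=p_max)   # baseline consumption
r_up = pulp.LpVariable.dicts("r_up", T, lowBound=0)            # FCR-D Up bid (MW)
r_dn = pulp.LpVariable.dicts("r_dn", T, lowBound=0)            # FCR-D Down bid (MW)
on = pulp.LpVariable.dicts("on", T, cat="Binary")              # on/off state

# Objective: hydrogen revenue plus reserve payments minus energy cost.
m += pulp.lpSum(eff * p[t] * lam_h2 + r_up[t] * lam_up + r_dn[t] * lam_dn
                - p[t] * lam_spot[t] for t in T)

for t in T:
    m += p[t] >= p_min * on[t]             # simplified (3c), no standby state
    m += p[t] <= p_max * on[t]
    m += p[t] - r_up[t] >= p_min * on[t]   # simplified (4a): headroom to reduce load
    m += p[t] + r_dn[t] <= p_max * on[t]   # simplified (4b): headroom to increase load

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```

The full model of this section additionally includes the standby state, the piecewise segments (3e)-(3h), the compressor and storage (3i)-(3m), the FCR-N reserve, the minimum bid requirements (4c)-(4h), and the HPA constraint (5).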
Fig. 3: An example FCR reserve allocation for an alkaline electrolyzer, consuming power \(p_{t}^{\text{e}}\) in hour \(t\) which is the operational baseline.
## IV Numerical results and analysis
We consider a 10-MW alkaline electrolyzer located in DK2, operating at a pressure of 30 bars, increased up to 350 bars by the compressor for storage purposes. For the hydrogen production curve of the electrolyzer, we use five linearization segments. The minimum weekly hydrogen demand is 9,072 kg, which can be met if the electrolyzer operates at 30% of its capacity throughout the week. All parameters are given in Table III. Currently, the minimum bid quantity \(Q^{\text{FCR}}\) in the Nordic area is 0.1 MW, which is not a limiting factor for a 10-MW electrolyzer, but it could be for small-scale electrolyzers.
We solve the proposed optimization problem based on real prices in year 2022. Since we use realized (and not forecasted) prices, this study provides an economic assessment assuming perfect foresight for year 2022. All source code and input data are publicly shared4.
Footnote 4: Github repository: [https://github.com/marco-strt/electrolytezer_nordic_FCR](https://github.com/marco-strt/electrolytezer_nordic_FCR)
### _Optimal electrolyzer scheduling_
Figure 4 illustrates the electrolyzer scheduling during a sample 90-hour horizon within 2022. We make four observations.
First, in hours with comparatively low spot prices, e.g., hours 30-40, the electrolyzer operates in its full capacity of 10 MW to maximize hydrogen production. Among FCR services, the electrolyzer sells FCR-D Up reserve only in these hours. The FCR-D Up bid quantity is maximum, which is 8.4 MW, i.e., the capacity of 10 MW minus the minimum operating level of 1.6 MW.
Second, in hours with comparatively high spot prices, e.g., hours 16-19, the electrolyzer operates in its minimum operating level of 1.6 MW. Among FCR services, the electrolyzer sells FCR-D Down reserve only in these hours. Again, the electrolyzer submits its maximum FCR-D Down bid quantity, which is 8.4 MW (i.e., 10 MW minus 1.6 MW).
Third, in hours with extremely high spot prices, e.g., hours 75-90, the electrolyzer switches off, producing neither hydrogen nor any FCR services. Indeed, this might be affected if the minimum weekly hydrogen demand is higher, eventually reducing the profit.
Fourth, in hours with intermediate spot prices, the electrolyzer operates at partial loading between its minimum level of 1.6 MW and the capacity of 10 MW, and produces also FCR-N services. For example, in hours 22-24, among FCR services, it only produces FCR-N. For that, the electrolyzer consumes 5.8 MW to be able to sell 4.2 MW reserve in the FCR-N auction, such that in extreme cases of full activation, the consumption level either drops to the minimum or increases to the maximum level. There are also hours that the electrolyzer sells multiple FCR services, e.g., FCR-N and FCR-D Up in hours 9-13, or FCR-N and FCR-D Down in hours 25-30.
Note that the minimum weekly demand of 9,072 kg hydrogen is met in the reserve contraction stage. In the activation stage it might be violated, although it is unlikely as already explained in footnote 1. However, one may develop a real-time policy to track meeting the weekly demand, which is outside the scope of this paper.
### _Economic assessment_
Figure 5 shows the yearly profit of the electrolyzer in 2022, which is €0.73 million, as well as the distribution of yearly revenues and expenses. The activation payments are excluded, but as mentioned earlier, FCR services are not energy-intensive overall, and therefore the activation payments are expected to be negligible [12].
The total annual revenue is €3.43 million, for which the contributions of selling hydrogen, FCR-N, FCR-D Down, and FCR-D Up are 28%, 2%, 30%, and 40%, respectively. This implies that the electrolyzer earns 72% of its total revenue from FCR auctions, which is significant. Indeed, these results could be sensitive to the hydrogen price of €2/kg and
Fig. 4: Optimal scheduling of the electrolyzer in FCR-N, FCR-D Up/Down, and spot markets, in an example 90-hour horizon, starting from 22/02/2022.
Fig. 5: Cash flow for a 10-MW alkaline electrolyzer, participating in the Nordic FCR markets in 2022. Minimum weekly hydrogen demand is 9,072 kg, equivalent to 30% of the electrolyzer’s capacity. The hydrogen price is EUR 2/kg.
the minimum weekly hydrogen demand of 30%. Therefore, we conduct a sensitivity analysis in the next section. The total expenses over the year 2022 are 2.69 million EUR, 76% of which correspond to the power consumption of the electrolyzer (baseline). Tariffs cause 20% of the total expenses, while the remaining 4% is incurred by the consumption of the compressor and the start-up cost of the electrolyzer (44 start-ups over 2022, each costing 1,000 EUR).
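The reported cash flow can be cross-checked directly from the quoted shares. The snippet below only re-derives the approximate component values shown in Figure 5 from the totals given in the text; small mismatches are due to rounding.

```python
revenue_meur = 3.43
expense_meur = 2.69
revenue_shares = {"hydrogen": 0.28, "FCR-N": 0.02, "FCR-D Down": 0.30, "FCR-D Up": 0.40}
expense_shares = {"electrolyzer power": 0.76, "tariffs": 0.20, "compressor + start-ups": 0.04}

revenues = {k: round(v * revenue_meur, 2) for k, v in revenue_shares.items()}
expenses = {k: round(v * expense_meur, 2) for k, v in expense_shares.items()}
profit = revenue_meur - expense_meur   # ~0.74 MEUR, i.e., the reported 0.73 MEUR after rounding

print(revenues)
print(expenses)
print(f"annual profit ~ {profit:.2f} million EUR")
```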
### _Sensitivity analysis_
Recall that we have assumed the minimum weekly hydrogen demand \(\mathrm{HPA}^{\min}\) is met if the electrolyzer operates at 30% of its capacity throughout the week, whereas the hydrogen price \(\lambda^{\mathrm{H_{2}}}\) is EUR 2/kg. We conduct a sensitivity analysis for the annual profit of the electrolyzer with respect to these two parameters. We vary \(\mathrm{HPA}^{\min}\) from 0% to 50%, and \(\lambda^{\mathrm{H_{2}}}\) from EUR 1/kg to EUR 5/kg. We conduct this analysis for two cases: (_i_) the electrolyzer offers FCR services along with hydrogen production, and (_ii_) the electrolyzer produces hydrogen only.
The results are depicted in Figure 6. As expected, the profit declines as \(\mathrm{HPA}^{\min}\) increases, since the electrolyzer is obliged to operate during non-profitable hours. In extreme cases, the annual profit is negative. The economic value of FCR services becomes even more remarkable when \(\mathrm{HPA}^{\min}\) increases. Finally, higher hydrogen prices increase the profit.
## V Conclusion
This paper develops a mixed-integer linear model for the optimal scheduling of an electrolyzer purchasing power from the spot market and selling hydrogen as well as FCR-N and FCR-D Up/Down services in the Nordic synchronous region. For a case study based on realized spot and FCR prices in 2022, we found that FCR services can significantly increase the annual profit of the electrolyzer. For a case with a fixed hydrogen price of EUR 2/kg and a minimum weekly hydrogen demand of 30%, the electrolyzer earns an annual profit of 0.73 million EUR with a significant contribution from FCR markets (72%), particularly from the FCR-D Up/Down markets. However, this is an analysis with perfect foresight of prices, so the true contribution of FCR markets under imperfect foresight might be different. The capital cost of an alkaline electrolyzer alone (without auxiliary assets) varies with its scale and depends on the manufacturer, but overall it is approximately 1 million EUR per MW [20]. It appears that the annual profit of 0.73 million EUR, earned mostly from FCR auctions, is still insufficient to recover the investment cost, but this requires an in-depth analysis, which is left for future work. This may call for additional regulatory support actions to make green hydrogen cost competitive.
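To make the remark on cost recovery concrete, a naive payback calculation based on the figures quoted above is shown below. It ignores discounting, stack degradation, O&M, and the auxiliary assets, so it should be read as an order-of-magnitude illustration rather than an investment analysis.

```python
capacity_mw = 10.0
capex_per_mw_meur = 1.0     # approximate electrolyzer-only capital cost quoted from [20]
annual_profit_meur = 0.73   # 2022 result, obtained with perfect price foresight

capex_meur = capacity_mw * capex_per_mw_meur
simple_payback_years = capex_meur / annual_profit_meur
print(f"undiscounted simple payback: {simple_payback_years:.1f} years")   # ~13.7 years
```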
## Acknowledgment
The authors would like to thank Thomas Dalgas Fecht-enburg (Energinet) for our discussions on the setup and requirements for an electrolyzer to provide FCR services, Roar Hestebek Nicolaisen and Andreas Thingvad (Hybrid Greentech) for the support in the conceptualization and formulation of the model, and finally, Edoardo Simioni (Orsted) for examining the MSc thesis project and providing constructive feedback.
|
2302.08402 | Continuum reverberation mapping of MCG 08-11-011 | We report the results from a photometric reverberation mapping campaign
carried out with the C18 telescope at the Wise Observatory from 2019 to 2020,
targeting the active galactic nucleus (AGN) MCG 08-11-011. The monitoring was
conducted on a daily basis with specially designed narrow-band filters,
spanning from optical to near-infrared wavelengths ($\sim4000$ to $8000${\AA})
and avoiding prominent broad emission lines. We aim to measure inter-band
continuum time lags, determine the size-wavelength relation, and estimate the
host-subtracted AGN luminosity for this system. We used the point-spread
function photometry to extract the continuum light curves and measure the
inter-band time lags using several methods, including the interpolated
cross-correlation function, the z-transformed discrete correlation function, a
von Neumann estimator, JAVELIN (in spectroscopic and photometric mode), MICA,
and a multivariate correlation function. We find wavelength-dependent lags,
$\tau(\lambda)$, up to $\sim$7 days between the multiband light curves of MCG
08-11-011. The observed lags are larger than predictions based on standard
thin-disk theory by a factor of $\sim3-7$. We discern a significantly steeper
($\tau \propto \lambda^{4.74}$) size-wavelength relation than the $\tau \propto
\lambda^{4/3}$ expected for a geometrically thin and optically thick accretion
disk, which may result from the contribution of diffuse continuum emission to
the flux. These results are similar to those found by previous continuum
reverberation mapping campaigns. | C. Fian, D. Chelouche, S. Kaspi, C. Sobrino Figaredo, T. Lewis, S. Catalan | 2023-02-16T16:26:57Z | http://arxiv.org/abs/2302.08402v1 | # Continuum reverberation mapping of MCG 08-11-011+
###### Abstract
Context:
Aims:We report the results from a photometric reverberation mapping campaign carried out with the C18 telescope at the Wise Observatory from 2019 to 2020, targeting the active galactic nucleus (AGN) MCG 08-11-011. The monitoring was conducted on a daily basis with specially designed narrow-band filters, spanning from optical to near-infrared wavelengths (\(\sim 4000\) to 8000A) and avoiding prominent broad emission lines. We aim to measure inter-band continuum time lags, determine the size-wavelength relation, and estimate the host-subtracted AGN luminosity for this system.
Methods:We used the point-spread function photometry to extract the continuum light curves and measure the inter-band time lags using several methods, including the interpolated cross-correlation function, the z-transformed discrete correlation function, a von Neumann estimator, JAVELIN (in spectroscopic and photometric mode), MICA, and a multivariate correlation function.
Results:We find wavelength-dependent lags, \(\tau(\lambda)\), up to \(\sim\)7 days between the multiband light curves of MCG 08-11-011. The observed lags are larger than predictions based on standard thin-disk theory by a factor of \(\sim 3-7\). We discern a significantly steeper (\(\tau\propto\lambda^{4.74}\)) size-wavelength relation than the \(\tau\propto\lambda^{4/3}\) expected for a geometrically thin and optically thick accretion disk, which may result from the contribution of diffuse continuum emission to the flux. These results are similar to those found by previous continuum reverberation mapping campaigns.
Conclusions:
## 1 Introduction
Active galactic nuclei (AGNs) are among the most luminous sources of radiation in the Universe, and understanding their interior structure has been one of the major goals of extragalactic astrophysics. The current picture of the schematic sub-parsec-scale structure of an AGN includes three main components: a hot, X-ray emitting corona; an accretion disk around a supermassive black hole (SMBH); and a broad-line region (BLR) consisting of fast-orbiting photoionized gas and clouds. On scales of parsec to hundreds of parsecs, the AGN consists of an obscuring dusty torus and a narrow-line region (NLR) comprised of small, low-density gas clouds moving at lower velocities. Gravitational potential energy is converted into heat and radiation, through viscous heating, by the accretion of matter onto the central SMBH (e.g., Page & Thorne 1974; Rees 1984; Balbus & Hawley 1998). The accretion disk thereby reaches temperatures of \(10^{5}-10^{6}\)K at its inner edge, with a gradient to cooler temperatures at larger radii, leading to a continuum emission spectrum spanning from the extreme ultraviolet (UV) to the infrared (IR). The hottest parts of the accretion disk provide the ionizing photons that cause Doppler-broadened emission lines in the BLRs and NLRs (Davidson & Netzer 1979; Veilleux & Osterbrock 1987). Although this basic picture can explain most of the observational properties of AGNs (Burbidge 1967; Weedman 1977; Shields 1978; Elvis et al. 1994; Telfer et al. 2002), the detailed geometry and kinematics of the interior structure remain poorly understood. Since the sub-parsec-scale structures are unresolved in even the closest AGN, information must be obtained by indirect means.
Reverberation mapping (RM; Bahcall et al. 1972; Blandford & McKee 1982; Peterson 1993, 2014) is a powerful tool to probe compact structures in the central parts of AGNs. The basic principle of RM is to search for temporal correlations between the time-variable flux signals (intrinsic variability) and their light echoes at different wavelengths. Combined with the speed of light, the lag between those light echoes determines the characteristic size of the reverberation structure in the AGN. For example, gas in the outer part of the accretion disk reprocesses (as variable optical flux) variations emitted in the far and extreme UV by the inner parts of the accretion disk. Measuring the lag between the primary UV signal and light echoes at longer wavelengths provides an estimate of the accretion disk's spatial extent. Recent findings suggest that the disk sizes are larger than the predictions from standard models (e.g., Fian et al. 2022; Fausnaugh et al. 2017, 2018; Edelson et al. 2015, 2019). Accretion disk sizes considerably larger than predicted by theory have
also been found in microlensing studies of gravitationally lensed quasars (e.g., Morgan et al. 2010; Blackburne et al. 2011; Fian et al. 2016, 2018, 2021; Cornachione et al. 2020a,b). In addition, continuum time lags across the accretion disk provide information about the disk's temperature gradient, and it appears that they are flatter than expected (Motta et al. 2017; Cornachione & Morgan 2020; Jimenez-Vicente et al. 2014).
Measuring inter-band continuum lags is extremely challenging, because the predicted size of accretion disks is only about one light day and monitoring campaigns require comparable or better cadence (i.e., on the order of one day or less) on timescales of weeks to months to resolve such short lags. In this work, we analyzed six months of densely sampled (daily cadence) photometric monitoring data of the Seyfert 1 galaxy MCG 08-11-011, and we present detections of optical and near-IR inter-band continuum time lags. In Section 2, we discuss our observations, the data reduction, and the light curves taken in multiple photometry bands. In Section 3, we describe our time series analysis and compare several tools to measure the inter-band continuum time lags. The results are presented in Section 4, including the time delays, the lag spectrum, the host-subtracted AGN luminosity, and a comparison with the theoretical disk sizes. Finally, we discuss and conclude our findings in Section 5.
## 2 Observations and data reduction
The ground-based photometric monitoring was conducted between October 2019 and April 2020 with the robotic C18 telescope (Brosch et al. 2008) of the Wise Observatory located in the Negev desert in southern Israel. We used the QSI 683 CCD (image sensor KAF-8300), which has \(3326\times 2504\) pixels of 5.4 \(\mu\)m in size. The pixel scale is 0.882 arcsec pix\({}^{-1}\), which gives a field of view of \(48.9\times 36.8\) arcmin (0.815 \(\times\) 0.613 degrees, corresponding to an area of 0.5 deg\({}^{2}\)). The observations were carried out on a daily basis (\(\sim\)4 exposures per night in each filter) for a duration of almost six months. To trace the AGN continuum variations free of emission lines at the object's rest frame (MCG 08-11-011 is at a redshift of \(z\sim 0.0205\)1), five relatively narrow bands (NBs) at 4250, 5700, 6200, 7320, and 8025A were carefully chosen. In Table 1 we list the object's characteristics. In Table 2 we summarize the filter and observation information, and Figure 1 shows the position of the NB filters together with the quasar composite spectrum of Glikman et al. (2006). The images were reduced following standard procedures performed with IRAF (including bias subtraction, dark current correction, and flat fielding for each filter), and we used traditional point-spread function (PSF) fitting photometry to obtain the light curves.
Footnote 1: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
We used the DAOPHOT (Stetson 1987) package as implemented in IRAF and DAOSTAT (Netzer et al. 1996) to measure the magnitude of the objects in the images and to compute the light curves of the Seyfert 1 galaxy (see Fian et al. 2022 for a detailed description). To obtain accurate measures for the magnitude at a given epoch, we discarded problematic exposures (due to low S/N, condensation rings on the CCD, and/or elongated stars caused by telescope tracking or auto-guider issues) from each night. After comparing consecutive points and removing points above a certain threshold, we are left with a set of good measurements per night (only one night has been discarded). Finally, we averaged the outlier-free exposures for each night, resulting in high S/N light curves consisting of \(\sim\)90 data points. In Figure 2 we show the normalized-to-mean and unit standard deviation light curves for the different bands, and in Table 3 we present the variability measures for each light curve.
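The per-night cleaning and averaging step described above can be sketched in a few lines. The snippet below is a generic stand-in (simple clipping of exposures that deviate from the nightly median by more than an illustrative threshold), not the exact DAOPHOT/DAOSTAT procedure used for the published light curves.

```python
import numpy as np

def nightly_average(mags, max_dev=0.1):
    """Average the exposures of one night after removing outliers.
    mags    : magnitudes of the (typically ~4) exposures taken that night.
    max_dev : maximum allowed deviation from the nightly median [mag]; illustrative value."""
    mags = np.asarray(mags, dtype=float)
    good = np.abs(mags - np.median(mags)) <= max_dev
    return mags[good].mean(), int(good.sum())

# Example night with one discrepant exposure (hypothetical values)
print(nightly_average([14.02, 14.05, 14.55, 14.03]))   # -> (~14.03, 3 exposures kept)
```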
## 3 Time series analysis
The primary objective of this paper is to estimate the time delays between the NB passbands located at 4250, 5700, 6200, 7320, and 8025A, which to a large extent trace the AGN continuum variations relatively free of contamination from the broad emission lines. We used several methods to robustly determine the reverberation lags between the multiwavelength bands, as outlined below. All time lags are measured relative to the NB4250 light curve. A more detailed description of methods (a)-(e) can be found in Fian et al. (2022).
(a) _ICCF_. A well-known method to estimate reverberation lags is the traditional interpolated cross-correlation function (ICCF) of Gaskell and Sparke (1986) and Gaskell and Peterson (1987), as implemented by White and Peterson (1994); see also review by Gaskell (1994). To properly perform cross-correlation function (CCF) analysis, uneven sampled light curves have to be interpolated. The time lag is determined by measuring the centroid of the points around the ICCF peak (above a certain threshold), and to estimate the errors of the inferred time lags we used the flux randomization and random subset selection (FR/RSS) method of Peterson et al. (1998, 2004).
(b) _ZDCF_. One way to bypass interpolation is the use of a discrete correlation function (DCF; Edelson and Krolik 1988), which evaluates the correlation function in bins of time delay. In this work, we used the z-transformed discrete correlation function (ZDCF) of Alexander (1997), which applies Fisher's z transformation to the correlation coefficients. To measure the time delays between the multiband light curves, we took the centroid of the correlation function above 60% or 80% of the peak, and we estimated the errors using a maximum likelihood method that takes into account the uncertainty in the ZDCF points.
(c) _Von Neumann Estimator_. The von Neumann estimator (von Neumann 1941; Chelouche et al. 2017) does not depend on interpolation or binning of the light curves but is based on the regularity (randomness) of the data. In this work, we used von Neumann's statistical estimator to find the relative time shift between two light curves that minimizes the level of randomness.
(d) _JAVELIN_. JAVELIN stands for "Just Another Vehicle for Estimating Lags in Nuclei", and is a popular (parametric) Bayesian tool to measure reverberation lags (Zu et al. 2011, 2013, 2016). Instead of extracting peaks from empirical cross-correlation functions, it models the continuum variability of the quasar itself by assuming a damped random walk (DRW) process (Kelly et al. 2009; MacLeod et al. 2010, 2012; Kozlowski et al. 2010; Kozlowski 2016). In this work, we used JAVELIN in spectroscopic and photometric RM mode. In the spectroscopic mode, we have two parameters for the continuum DRW model (amplitude and timescale of the quasar's stochastic variability) and three parameters for each lagging light curve (time delay, width of the smoothing function, and scaling factor). In photometric mode, JAVELIN models light curves in two different bands and estimates the contamination of the leading light curve to the longer wavelength band (additional parameter) and the corresponding time delay simultaneously.
(e) _MICA_. MICA is a nonparametric method that determines the so-called transfer functions (see Blandford and McKee 1982) for RM data, which reflect the structure of AGNs since the temporal behavior of spatially extended regions (outer parts of the
Figure 2: PSF photometry light curves of the AGN continuum at 4250, 5700, 6200, 7320, and 8025Å (from top to bottom) for the period between October 2019 and April 2020. The light curves are normalized to zero mean and unit standard deviation, and the fluxes are in arbitrary units.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Filter & \(\overline{m}\) & \(rms\) & \(\delta\) & \(\chi_{\nu}^{2}\) & \(\sigma_{N}\) & \(F_{var}\) & \(Err\,F_{var}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline NB4250 & 8.48 & 2.25 & 0.21 & 139 & 26.4 & 0.264 & 0.020 \\ NB5700 & 7.41 & 1.91 & 0.22 & 84 & 25.6 & 0.255 & 0.019 \\ NB6200 & 7.70 & 1.96 & 0.20 & 104 & 25.3 & 0.253 & 0.019 \\ NB7320 & 7.88 & 2.15 & 0.23 & 100 & 27.1 & 0.271 & 0.020 \\ NB8025 & 8.14 & 2.32 & 0.34 & 66 & 28.2 & 0.282 & 0.022 \\ \hline \end{tabular} NOTES. – Col. (1): NB filter. Cols. (2)–(4): Mean (\(\overline{m}\)), rms, and mean uncertainty (\(\delta\)) of all data points in the light curves in units of mJy. Col. (5): \(\chi_{\nu}^{2}\) obtained by fitting a constant to the light curve. Col. (6): Intrinsic normalized variability measure, \(\sigma_{N}=100\sqrt{(rms^{2}-\delta^{2})/\overline{m}}\). Cols. (7)–(8): Fractional variability amplitude and its uncertainty (Rodríguez-Pascual et al. 1997; Edelson et al. 2002).
\end{table}
Table 3: Variability measures for the host-subtracted PSF photometry light curves.
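For reference, the variability statistics listed in Table 3 can be reproduced from a light curve with a few lines of code. The sketch below follows the definitions given in the table notes, with the mean flux normalizing outside the square root so that the tabulated NB4250 values (rms = 2.25 mJy, δ = 0.21 mJy, σ_N = 26.4, F_var = 0.264) are recovered; the input light curve shown is only a placeholder consistency test.

```python
import numpy as np

def variability_measures(flux, flux_err):
    """Variability statistics in the spirit of Table 3 (fluxes in mJy)."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    mean = flux.mean()
    rms = flux.std(ddof=0)
    delta = flux_err.mean()                          # mean measurement uncertainty
    excess = max(rms**2 - delta**2, 0.0)             # excess variance above the noise
    sigma_n = 100.0 * np.sqrt(excess) / mean         # intrinsic normalized variability [%]
    f_var = np.sqrt(excess) / mean                   # fractional variability amplitude
    return {"mean": mean, "rms": rms, "sigma_N": sigma_n, "F_var": f_var}

# Two-point consistency check built from the tabulated NB4250 mean and rms
print(variability_measures([8.48 - 2.25, 8.48 + 2.25], [0.21, 0.21]))
```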
accretion disk, BLR) is assumed to involve blurred echoes of the central ionizing continuum variations. The time lags are given by the first moment of the transfer functions, and the associated uncertainties are estimated as the standard deviation of the generated Markov chains.
(f) _PRM_. The photometric RM (PRM) developed by Chelouche & Zucker (2013) is a generalized approach to RM and is based on multivariate correlation analysis techniques. It is able to identify reverberation signals across the accretion disk and, simultaneously, the relative contribution of an additional, slowly varying component (associated with the BLR) to the continuum signal. Observationally, neither the time lag nor the contribution of the BLR to the lagging continuum light curve is known. Both values are constrained by the requirement of a maximal Pearson correlation coefficient within the computational domain. A more detailed explanation of the performance of this method is given in Chelouche & Zucker (2013).
We used 1000 Monte Carlo runs to obtain the lag distributions for methods (a)--(c) and (f). In the case of JAVELIN and MICA, we ran 10,000 Markov chain Monte Carlo simulations. We applied a common time-lag search interval of \([\tau_{min},\tau_{max}]=[-15,10]\) days, and, for methods that required interpolation, we adopted a time step of 0.15 days.
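As a concrete illustration of the cross-correlation approach (method (a) with FR/RSS errors), the sketch below implements a bare-bones ICCF centroid and the flux randomization / random subset selection loop. It is a simplified stand-in for the cited implementations (no detrending or two-way interpolation) and assumes the light curves are NumPy arrays sorted in time; the lag grid matches the search interval and step quoted above.

```python
import numpy as np

def iccf_centroid(t1, f1, t2, f2, lags, frac=0.8):
    """Centroid of the interpolated cross-correlation function above frac * peak.
    A positive lag means the second light curve lags the reference one."""
    ccf = np.array([np.corrcoef(f1, np.interp(t1 + lag, t2, f2))[0, 1] for lag in lags])
    keep = ccf >= frac * ccf.max()
    return np.sum(lags[keep] * ccf[keep]) / np.sum(ccf[keep])

def frrss_lag_distribution(t1, f1, e1, t2, f2, e2, lags, n_mc=1000, seed=0):
    """Flux randomization / random subset selection (FR/RSS) centroid distribution."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_mc):
        i1 = np.unique(rng.integers(0, len(t1), len(t1)))   # RSS: resample epochs
        i2 = np.unique(rng.integers(0, len(t2), len(t2)))
        g1 = f1[i1] + rng.normal(0.0, e1[i1])               # FR: perturb fluxes by their errors
        g2 = f2[i2] + rng.normal(0.0, e2[i2])
        out.append(iccf_centroid(t1[i1], g1, t2[i2], g2, lags))
    return np.array(out)

# Lag grid used in the text: [-15, 10] days sampled every 0.15 days
lags = np.arange(-15.0, 10.0 + 1e-9, 0.15)
```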
## 4 Results and discussion
In this section, we report the results of the multi-band photometric study of the AGN MCG 08-11-011, including the derived continuum time lags between the different NBs, the corresponding lag spectrum, an estimate of the host-subtracted AGN luminosity, and the theoretical disk size as a function of luminosity.
### Continuum time lags
We calculated the time lags (and their uncertainties) between the varying AGN continuum light curves at five different wavelengths using the various methods discussed in Section 3. All lags are measured with respect to the bluest (NB4250) light curve, and, for validation purposes, we also include the lag estimations of the 4250A NB relative to itself. In Figure 3, we show the lag distributions or transfer functions for each light curve and method, and Table 4 lists the corresponding lags and uncertainties. The last four rows give the overall mean time delay of all time-lag determination techniques, and the final uncertainties were obtained by estimating the standard deviation from the mean of all methods. From Figure 3, we see that the lag distributions of the reference light curve relative to itself are symmetric and concentrated around zero as expected, while for the rest of the NB light curves the distributions are clearly shifted away from zero, and RM lags can be detected at high significance.
For the ICCF, ZDCF, and FR/RSS, we computed the centroid time lags from all points above 60% and 80% of the peak value, leading to similar results. The lags derived from the ICCF, and ZDCF methods are consistent, indicating that the interpolation done in the ICCF does not introduce any artificial correlation. Also, the light-curve modeling techniques are able to capture reverberation lags, as can be seen for the JAVELIN and PRM posterior distributions, as well as for the MICA transfer functions. The lag distributions obtained from the von Neumann method after Monte Carlo simulation of FR/RSS as done for the ICCF analysis, yield similar results to those derived from the cross-correlation and light-curve modeling approaches. Thus, we find general agreements within uncertainties among the results of all methods used in this work. Combining all the lag estimates listed in Table 4, we obtain the mean time delays in the observer's frame relative to the 4250A NB, \(\tau=1.0\pm 0.5\) days for NB5700,
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Filter\({}^{a}\) & \(\tau\) (days) & \(r_{max}^{b}\) \\ \hline \hline ICCF & NB5700 & \(1.0^{+1.1}_{-1.0}\) & 0.998 \\ & NB6200 & \(2.1^{+1.0}_{-1.1}\) & 0.997 \\ & NB7320 & \(5.1^{+1.3}_{-0.8}\) & 0.994 \\ & NB8025 & \(7.4^{+1.4}_{-1.1}\) & 0.988 \\ \hline ZDCF & NB5700 & \(0.6^{+1.0}_{-1.1}\) & 0.987 \\ & NB6200 & \(1.4^{+1.6}_{-0.6}\) & 0.988 \\ & NB7320 & \(4.4^{+1.1}_{-1.4}\) & 0.984 \\ & NB8025 & \(8.0^{+1.1}_{-3.5}\) & 0.974 \\ \hline FR/RSS & NB5700 & \(1.0^{+1.1}_{-1.1}\) & — \\ & NB6200 & \(2.0^{+1.1}_{-1.1}\) & — \\ & NB7320 & \(5.0^{+1.1}_{-1.1}\) & — \\ & NB8025 & \(7.3^{+1.3}_{-1.3}\) & — \\ \hline Von Neumann & NB5700 & \(1.0^{+0.5}_{-0.9}\) & — \\ Estimator & NB6200 & \(1.8^{+0.6}_{-0.3}\) & — \\ & NB7320 & \(3.7^{+0.2}_{-0.2}\) & — \\ & NB8025 & \(5.2^{+2.4}_{-1.5}\) & — \\ \hline JAVELIN (spectroscopic) & NB5700 & \(1.3^{+0.3}_{-0.3}\) & — \\ & NB6200 & \(2.7^{+0.3}_{-0.3}\) & — \\ & NB7320 & \(3.8^{+0.4}_{-0.4}\) & — \\ & NB8025 & \(5.3^{+0.6}_{-0.6}\) & — \\ \hline JAVELIN (photometric) & NB5700 & \(2.2^{+1.6}_{-0.8}\) & — \\ & NB6200 & \(2.8^{+0.8}_{-0.7}\) & — \\ & NB7320 & \(4.1^{+0.7}_{-0.6}\) & — \\ & NB8025 & \(7.3^{+1.5}_{-1.2}\) & — \\ \hline MICA & NB5700 & \(0.6^{+0.5}_{-0.5}\) & — \\ & NB6200 & \(1.4^{+0.5}_{-0.5}\) & — \\ & NB7320 & \(4.6^{+0.6}_{-0.6}\) & — \\ & NB8025 & \(6.1^{+1.6}_{-1.6}\) & — \\ \hline PRM & NB5700 & \(0.8^{+0.6}_{-0.6}\) & — \\ & NB6200 & \(2.4^{+0.5}_{-0.5}\) & — \\ & NB7320 & \(4.4^{+1.0}_{-1.0}\) & — \\ & NB8025 & \(8.5^{+2.5}_{-2.5}\) & — \\ \hline \hline Mean All & NB5700 & \(1.0\pm 0.5\) & — \\ Methods & NB6200 & \(2.0\pm 0.6\) & — \\ & NB7320 & \(4.5\pm 0.6\) & — \\ & NB8025 & \(7.1\pm 1.1\) & — \\ \hline \end{tabular} \({}^{a}\) relative to NB4250.
\({}^{b}\) maximum correlation coefficient.
\end{table}
Table 4: Summary of the time lags expressed in light days in the observer’s frame between the five continuum light curves of MCG 08-11-011.
\(\tau=2.0\pm 0.6\) days for NB6200, \(\tau=4.5\pm 0.6\) days for NB7320, and \(\tau=7.1\pm 1.1\) days for NB8025. We note that in the case of the PRM, the contribution of the additional component is close to unity (\(\sim\)0.85) for all wavelength bands, thereby justifying the inclusion of the inferred lag estimates in the mean time delay calculation. Applying a weighted mean, we obtain slightly larger values for the time delays of the bluer wavelength bands (\(\tau=1.1\) days for NB5700 and \(\tau=2.2\) days for NB6200) and somewhat smaller values for the two reddest wavelength bands used in our RM campaign (\(\tau=3.9\) days for NB7320 and \(\tau=6.2\) days for NB8025); however, these are consistent (within errors) with the time lags obtained when using the ordinary mean.
Figure 3: From top to bottom: Partially interpolated CCFs, z-transformed DCFs, FR/RSS centroid distributions (for centroid \(\geq 0.8\)\(r_{\rm max}\)), von Neumann estimator peak distributions, JAVELIN posterior distributions of lags (spectroscopic mode), MICA transfer functions, and PRM lag distributions for each NB relative to the 4250Å band. In the bottom panel, all time delays are plotted (in the same order as described before) on a vertical axis for illustration. The solid lines show the mean time delay of all methods together, and the shaded regions represent the corresponding standard deviation. These values are presented in Table 4.
### Lag spectrum
Figure 4 displays the inter-band continuum time lags (relative to the 4250A light curve) as a function of wavelength for each of the methods discussed in Section 3. For a disk reprocessing model, we can translate the observed time-lag-wavelength relation to a wavelength-dependent emissivity profile, which in turn depends on the temperature profile of the accretion disk. To quantify this, we fit the observed continuum lags in the five different photometric NBs with a disk model of the following form:
\[\tau=\tau_{0}\left[\left(\frac{\lambda}{\lambda_{0}}\right)^{\beta}-y_{0} \right], \tag{1}\]
where \(\lambda\) is the observed wavelength, \(\lambda_{0}\) is the reference band wavelength (here 4250A), and \(\tau_{0}\), \(\beta\), and \(y_{0}\) are free parameters. The normalization \(\tau_{0}=R_{\lambda_{0}}/c\) measures the light crossing time across an accretion disk emitting at a reference wavelength, \(\lambda_{0}\); the power-law index, \(\beta\), quantifies the temperature profile of the disk, \(T\propto R^{-1/\beta}\); and \(y_{0}\) allows the model lag at \(\lambda_{0}\) to differ from 0. A list of best-fitting parameters is shown in Table 5. From Figure 5 and Table 5, we infer that the observed time lags, as well as the physical models, clearly favor a steeper slope than predicted by the standard disk temperature profile. In Figure 5, we show the average time lag spectrum, and fit models with both \(\tau_{0}\), \(\beta\), and \(y_{0}\) free to vary, as well as with \(\beta\) fixed to \(4/3\), corresponding to a standard thin accretion disk. The best fit yields \(\beta=4.74\) (dashed blue line in Figure 5), resulting in a very steep lag-wavelength relation. The dotted red fit indicates that a disk reprocessing model with \(\beta=4/3\) cannot reproduce our data very well, contradicting the prediction for a geometrically thin disk with temperature profile of \(T\propto R^{-3/4}\). In previous RM studies several authors observed similar trends in the lag spectra of AGNs (see, e.g., Gaskell, 2007; Chelouche et al., 2019; Fian et al., 2022) and attributed this to possible contamination by light being reprocessed from further away.
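In practice, Eq. (1) can be fit with a standard non-linear least-squares routine. The sketch below uses the mean lags from Table 4 (with the reference band assigned a zero lag and a nominal uncertainty) purely to illustrate the fitting setup; the best-fit values quoted in Table 5 come from the authors' own analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

lam0 = 4250.0                                      # reference wavelength [Angstrom]
lam = np.array([4250.0, 5700.0, 6200.0, 7320.0, 8025.0])
tau = np.array([0.0, 1.0, 2.0, 4.5, 7.1])          # mean lags from Table 4 [days]
tau_err = np.array([0.5, 0.5, 0.6, 0.6, 1.1])      # reference-band error is a nominal placeholder

def lag_model(lam, tau0, beta, y0):
    """Eq. (1): tau = tau0 * [(lambda / lambda0)^beta - y0]."""
    return tau0 * ((lam / lam0) ** beta - y0)

popt, _ = curve_fit(lag_model, lam, tau, p0=[0.5, 4.0, 1.0], sigma=tau_err, absolute_sigma=True)
print("tau0 = {:.2f} d, beta = {:.2f}, y0 = {:.2f}".format(*popt))
```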
### Host-subtracted AGN luminosity
To determine the AGN's luminosity at a wavelength of 5100A, the contribution of the host galaxy to the nuclear flux has to be subtracted. To achieve this, we disentangled the constant host from the variable AGN flux inside our aperture by using the flux variation gradient (FVG) method originally proposed by
Figure 4: Time lags (black circles) between multiband continuum light curves as a function of wavelength for the various methods described in Section 3. All lags are measured relative to variations at 4250Å. The dashed blue lines show the best fit to the observed relation \(\tau=\tau_{0}/\left[(\lambda/\lambda_{0})^{\beta}-y_{0}\right]\) with \(\tau_{0}\), \(\beta\), and \(y_{0}\) as free parameters (these values are presented in Table 5).
Figure 5: Mean time lags (black circles) between multiband continuum light curves as a function of wavelength. All lags are measured relative to variations at 4250Å. The dashed blue line shows the best fit to the observed relation \(\tau=\tau_{0}/\left[(\lambda/\lambda_{0})^{\beta}-y_{0}\right]\), with \(\tau_{0}\), \(\beta\), and \(y_{0}\) as free parameters (these values are presented in Table 5). The red dotted line is a fit with fixed theoretical power-law index \(\beta=4/3\), as expected for an optically thick and geometrically thin disk.
Choloniewski (1981) and further established by Winkler et al. (1992) and Sakata et al. (2010). We plot data points for different filter pairs collected throughout the monitoring program in flux-flux diagrams in units of mJy (see Figure 6). As the observed source varies in luminosity, the fluxes in the FVG diagram will follow a linear relation with a slope (denoted by the symbol \(\Gamma\); representing the AGN color) given by the host-free AGN continuum. The host, however, will show no variation. While the host slope passes through the origin, a linear least-squares fit to the data points yields the AGN slope. The intersection of the two slopes then allows us to determine the host flux contribution and to calculate the host-subtracted AGN luminosity at the time of the monitoring campaign - even without the need for high spatial resolution images (Haas et al. 2011). We note that the FVG diagrams were calculated taking into account the previously estimated time delays (in Section 4.1) between the different wavelength bands. The absolute flux calibration was carried out on the reference images (built from the individual NB frames) by comparison with the Pan-STARRS1 Catalog2 (within a 20' distance of the target). Since the field is crowded, we obtained up to \(\sim\)150 comparison stars in the red filters. For each calibration star, we fitted a black-body curve to the known \(griz\) values and interpolated the flux to obtain the flux values for the central wavelengths of our NB filters. Finally, we calibrated and estimated the flux of MCG 08-11-011 in each NB and corrected the values for the Galactic foreground extinction (Schlafly & Finkbeiner 2011).
Footnote 2: [https://catalogs.mast.stsci.edu/pantars/](https://catalogs.mast.stsci.edu/pantars/)
Figure 6 shows the NB4250 versus NB5700, NB4250 versus NB6200, NB4250 versus NB7320, and NB4250 versus NB8025 fluxes of MCG 08-11-011. Linear least-squares fits to the flux variations in each NB filter pair yield \(\Gamma_{AGN}=1.18\pm 0.02\) for NB4250 versus NB5700, \(\Gamma_{AGN}=1.15\pm 0.02\) for NB4250 versus NB6200, \(\Gamma_{AGN}=1.10\pm 0.02\) for NB4250 versus NB7320, and \(\Gamma_{AGN}=1.04\pm 0.03\) for NB4250 versus NB8025. The host slope was determined by applying multi-aperture photometry on the stacked reference images as proposed by Winkler et al. (1992). Fluxes measured at different apertures are used to infer the host galaxy color, and since the host galaxy contribution increases with the aperture, a linear fit between the fluxes approximates the host slope. We list the total (AGN + host) fluxes for each filter in Table 6 together with the mean host galaxy fluxes (obtained by averaging over the intersection area between the AGN and the host galaxy slopes) and the nuclear flux (calculated by subtracting the constant host galaxy component from the total flux). The listed uncertainties include the median errors of the calibration stars and errors caused by the black-body interpolation. The host contributes \(\sim\) 5% in NB4250, \(\sim\) 30% in NB5700, \(\sim\) 36% in NB6200, \(\sim\) 47% in NB7320, and \(\sim\) 54% in NB8025 to the total (AGN + host) observed fluxes.
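Geometrically, the FVG host determination reduces to intersecting two straight lines in the flux-flux plane: the fitted AGN slope and the host-color line through the origin. The helper below illustrates only that final step; the intercept and host-slope values are hypothetical placeholders chosen for the example, not the measured quantities.

```python
def fvg_intersection(gamma_agn, intercept_agn, gamma_host):
    """Intersection of the AGN variation line y = intercept_agn + gamma_agn * x
    with the host line y = gamma_host * x (both in flux-flux space, mJy)."""
    x_host = intercept_agn / (gamma_host - gamma_agn)   # host flux in the x-axis band
    y_host = gamma_host * x_host                        # host flux in the y-axis band
    return x_host, y_host

# Hypothetical example: AGN slope ~1.2 with a 2.5 mJy intercept, host color ~6.5
print(fvg_intersection(gamma_agn=1.2, intercept_agn=2.5, gamma_host=6.5))
```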
To obtain the host-subtracted AGN flux of MCG 08-11-011 at a rest-frame wavelength of 5100A, we interpolated between the filters NB4250 and NB5700, assuming for the interpolation that the AGN has a power-law spectral shape (\(F_{\nu}\propto\nu^{\alpha}\)). At a distance of \(D_{L}=93.10\) Mpc (Yoshii et al. 2014), this yields a host-subtracted AGN luminosity of \(\lambda L_{\lambda}(5100\AA)=(4.21\pm 0.65)\times 10^{43}\) erg s\({}^{-1}\). The \(\sim\) 15% uncertainty includes the measurement errors, the uncertainty of the AGN and host slopes, and the AGN variations. In Figure 7, we show the total (AGN + host) fluxes, the host-subtracted AGN fluxes, and the host fluxes as a function of wavelength. The power-law fit to the pure AGN fluxes yields \(F_{\nu}\sim\lambda^{-\alpha}\), with \(\alpha=0.02\pm 0.12\), which is shallower than (but consistent within uncertainties with) the spectral index predicted by a standard Shakura-Sunyaev disk (\(\alpha=1/3\)). The host-subtracted RMS spectrum (values listed in Table 3) shows no spectral variation, which is consistent with the use of the Choloniewski diagrams. Thus, all fractional variability amplitude values are consistent with each other within their uncertainties.
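The conversion from a host-subtracted monochromatic flux density to λL_λ is a one-line calculation. The sketch below uses an illustrative ~8 mJy flux near the observed-frame counterpart of rest-frame 5100 Å and neglects extinction and interpolation details, so it only approximately recovers the quoted luminosity.

```python
import numpy as np

MPC_CM = 3.0857e24        # centimetres per megaparsec
C_ANG_S = 2.9979e18       # speed of light [Angstrom / s]

def lambda_L_lambda(f_nu_mjy, lam_obs_ang, d_l_mpc):
    """lambda * L_lambda [erg/s] from a monochromatic flux density F_nu in mJy."""
    f_nu = f_nu_mjy * 1.0e-26                 # mJy -> erg s^-1 cm^-2 Hz^-1
    nu = C_ANG_S / lam_obs_ang                # observed frequency [Hz]
    d_l = d_l_mpc * MPC_CM                    # luminosity distance [cm]
    return 4.0 * np.pi * d_l**2 * nu * f_nu   # nu*L_nu = lambda*L_lambda

# Illustrative input: ~8 mJy AGN flux at 5100 A * (1 + 0.0205), D_L = 93.10 Mpc
print(f"{lambda_L_lambda(8.0, 5100.0 * 1.0205, 93.10):.2e} erg/s")   # ~5e43, close to the quoted value
```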
It is worth mentioning that MCG 08-11-011 was previously monitored over four months in 2014 by Fausnaugh et al. (2017, 2018), with the light curves spanning the broad-band \(ugriz\) filters. Unlike our work, they observe a lag-spectrum consistent with geometrically thin accretion-disk models that predict a lag-wavelength relation of \(\tau\propto\lambda^{4/3}\). They report significantly smaller lags (up to \(\sim\)2.6 days) than the ones inferred in the present paper using NB light curves, and they find that the disk is a factor of 3.3 larger than predictions based on standard thin-disk theory. However, it is interesting to notice that Fausnaugh
\begin{table}
\begin{tabular}{c c c c} \hline \hline Filter & Total (mJy) & Host (mJy) & AGN (mJy) \\ \hline NB4250 & \(9.3\pm 1.2\) & \(0.5\pm 0.2\) & \(8.8\pm 4.7\) \\ NB5700 & \(11.0\pm 1.0\) & \(3.3\pm 0.2\) & \(7.7\pm 1.2\) \\ NB6200 & \(12.8\pm 1.2\) & \(4.6\pm 0.3\) & \(8.2\pm 1.3\) \\ NB7320 & \(16.2\pm 1.6\) & \(7.7\pm 0.5\) & \(8.5\pm 1.3\) \\ NB8025 & \(18.7\pm 2.1\) & \(10.1\pm 0.7\) & \(8.6\pm 1.6\) \\ \hline \end{tabular}
\end{table}
Table 6: Total (AGN + host), host galaxy, and AGN continuum fluxes.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & \(\tau_{0}\) (days) & \(\beta\) & \(y_{0}\) & \(\chi^{2}\) \\ (1) & (2) & (3) & (4) & (5) \\ \hline ICCF & 0.48 & 4.44 & 1.12 & 0.13 \\ ZDCF & 0.14 & 6.38 & 1.10 & 0.03 \\ FR/RSS & 0.82 & 3.62 & 1.18 & 0.20 \\ Von Neumann Estimator & 0.46 & 4.02 & 1.08 & 0.48 \\ JAVELIN (spectroscopic) & 1.10 & 2.88 & 1.10 & 5.42 \\ JAVELIN (photometric) & 1.10 & 3.20 & 1.10 & 2.60 \\ MICA & 0.22 & 5.42 & 1.12 & 1.55 \\ PRM & 0.34 & 5.08 & 1.12 & 1.54 \\ \hline All Methods together & 0.38 & 4.74 & 1.08 & 0.17 \\ \hline \end{tabular}
\end{table}
Table 5: Best-fitting parameters to the inter-band lag spectra presented in Figure 4.
et al. (2017) estimated a (host-subtracted) AGN luminosity of \(\lambda L_{\lambda}(5100\rm\AA)\sim 1.99\times 10^{43}\) erg s\({}^{-1}\), which is \(\sim\)2.1 times lower than our optical luminosity estimate. These differences in luminosity and in the measured reverberation lags indicate that the reprocessing may undergo changes on timescales of years. Up to now, only very few AGNs have high-cadence continuum RM data spanning timescales long enough to search for temporal changes in lags (one example is Mrk 110, which has shown evidence for a time-varying BLR contribution; Vincentelli et al. 2021, 2022).
### Theoretical disk size
A standard, geometrically thin, optically thick accretion disk radiates thermally and has a temperature profile of \(T\sim R^{-3/4}\)(Shakura & Sunyaev, 1973). The hot inner parts of the accretion disk emit in the UV (\(\sim 100-3000\rm\AA\)), while the cooler outer annuli emit in the optical and near-IR (\(\sim 3000-10000\rm\AA\)). As the short-wavelength emission from the X-ray-emitting corona and the inner edge of the disk varies, it irradiates the outer annuli and drives variations at longer wavelengths delayed by the light travel time across the disk (e.g., Krolik et al. 1991). Therefore, this model predicts (based on the object's SMBH mass and mass-accretion rate) theoretical time delays between short-wavelength and long-wavelength variations according to a given temperature-radius relation.
We compared the observed inter-band continuum lags with model predictions for thermal reprocessing following the method described by Fausnaugh et al. (2016) and Edelson et al. (2017). Since the SMBH mass is highly uncertain, we substituted the product of the SMBH mass and mass-accretion rate with the target's optical luminosity, \(L_{opt}\)(see Eq. (7) in Davis & Laor 2011; for a detailed derivation, see Fian et al. 2022). Hence, we used the Shakura-Sunyaev model self-consistently and without the need to assume radiative efficiencies. The predicted light travel time \(\tau\) (in days) relative to a reference time delay \(\tau_{0}\) at a wavelength of \(\lambda_{0}=5100\rm\AA\) can then be written as follows:
\[(\tau-\tau_{0})\simeq 2\ \left(\frac{L_{opt}}{10^{45}\rm\ ergs\ s^{-1}} \right)^{1/2}\times\ \left[\left(\frac{\lambda}{5100\rm\AA}\right)^{4/3}-1\right]\ \rm days. \tag{2}\]
We find that the inferred time lags are much larger (by a factor of \(\sim 3-7\)) than the theoretical lag estimates, which has been reported in previous works as well (Jha et al., 2022; Fian et al., 2022; Montano et al., 2022; Edelson et al., 2019; Fausnaugh et al., 2018). However, an accretion disk larger than predictions by a factor of 7.4 for the longest optical wavelength band used in this work is striking since optical continuum RM campaigns typically find that continuum emission region sizes are \(\sim 2-3\) times larger than expected from disk reprocessing models (Cackett et al., 2022). The discrepancy in MCG 08-11-011 is difficult to explain, and it is not clear that host contamination (even for an \(\sim 50\%\) contribution from the extended host galaxy to the observed PSF photometry light curve at that wavelength) and/or intrinsic reddening could fully account for the mismatch between theory and observations. One possible explanation is that AGN accretion disks are larger than model predictions and that their implied physics (e.g., the accretion disk temperature profile) is markedly different from that expected in the thin-disk scheme. Another possible explanation for the longer-than-expected continuum lags and their wavelength dependences is a substantial contribution of diffuse continuum emission from the BLR to the observed continuum signals and reverberation lags (e.g., Cackett et al. 2018; Chelouche et al. 2019; Korista & Goad 2019; Netzer 2022). Since we were not able to constrain higher moments of
Figure 6: FVG diagram of MCG 08-11-011 between NB4250 and NB5700, NB6200, NB7320, and NB8025 (from left to right). Each data point is drawn as a thin cross in which the line length corresponds to the photometric uncertainties in the respective filters. A linear least-squares fit to the data points yields the AGN slope, plotted with the steep blue line. The cyan shaded area denotes the host color range from our multi-aperture photometry. The intersection between the AGN and the host galaxy slope gives the host contribution in the respective band within the aperture.
Figure 7: SED of MCG 08-11-011. Blue points show the host-subtracted AGN continuum with a power-law spectral shape of \(F_{\nu}\propto\lambda^{-0.02\pm 0.12}\) (dashed blue line). The dotted light blue line corresponds to a spectral shape as predicted by a standard Shakura-Sunyaev disk (with a spectral index of \(1/3\)).
the transfer functions than the lags (i.e., the first moment), we could not test the pure accretion disk versus accretion disk-BLR origin for the time delays.
## 5 Conclusions
We carried out photometric RM of the Seyfert 1 galaxy MCG 08-11-011 using specially designed optical NB filters at the C18 telescope of the Wise Observatory, allowing us to trace the emission-line-free continuum at different wavelengths and measure inter-band continuum time lags. According to the disk-reprocessing _lamppost_ model (Martocchia & Matt, 1996; Petrucci & Henri, 1997; Bao et al., 1998; Reynolds et al., 1999; Dabrowski & Lasenby, 2001), photons arising from the innermost regions are reprocessed in the form of emission from the outer regions, resulting in a lag. The reverberation lags represent the light travel time across different regions of the disk and their trend with wavelength contains information about the disk's temperature profile. The high-cadence multi-wavelength observations at the Wise Observatory provide an excellent dataset to constrain inter-band reverberation lags efficiently. Our main results and conclusions are summarized below.
1. All continuum light curves show significant correlated flux variations, which enabled us to carry out time series analysis to estimate the accretion disk size of MCG 08-11-011 using different cross-correlation and light-curve modeling methods.
2. We chose to measure lags relative to the NB4250 band as this is our bluest light curve, and we obtain mean time delays in the observer's frame of \(\tau=1.0\pm 0.5\) days for NB5700, \(\tau=2.0\pm 0.6\) days for NB6200, \(\tau=4.5\pm 0.6\) days for NB7320, and \(\tau=7.1\pm 1.1\) days for NB8025. The inferred disk sizes are larger (by a factor of \(\sim 5\) on average) than predicted by the Shakura-Sunyaev accretion disk model, which is consistent with recent findings (Jha et al., 2022; Fausnaugh et al., 2017, 2018; Fian et al., 2022; Pozo Nunez et al., 2019; Edelson et al., 2019; Cackett et al., 2018).
3. The inter-band lags increase with wavelength, which provides strong evidence of disk reprocessing. However, the trend of lag versus wavelength does not match the \(\tau\propto\lambda^{4/3}\) prediction of a standard geometrically thin disk. Phenomenological modeling shows that the data prefer a steeper lag-wavelength relation instead. This is in agreement with recent findings for Mrk 279, in which a diffuse continuum emission component was detected at the light curve level (Chelouche et al., 2019).
4. A significant contribution of the host galaxy was found in the reddest bands, and we estimated a monochromatic host-corrected AGN luminosity at 5100A of \((4.21\pm 0.65)\times 10^{43}\) erg s\({}^{-1}\).
5. Interestingly, our results corroborate those from gravitational microlensing of strongly lensed quasars, which also find larger disk sizes than expected and a range of temperature profiles (Jimenez-Vicente et al., 2014; Motta et al., 2017; Fian et al., 2016, 2018, 2021; Cornachione et al., 2020a,b; Cornachione & Morgan, 2020; Rojas et al., 2020). While microlensing can only probe the continuum-emitting regions in distant high-luminosity quasars, RM provides a complementary approach to investigating the accretion disk structure in low-luminosity AGNs.
Accretion disk sizes obtained through both gravitational microlensing and continuum RM indicate that the standard Shakura-Sunyaev disk assumption does not hold for the majority of AGNs studied so far, which calls into question the use of the simple standard-disk model for AGN accretion disks. Thus, the discrepancy between theory and observations reinforces the suggestion that additional components (such as a contribution of the diffuse BLR continuum emission) may be needed when modeling the accretion disks in AGNs (Jha et al., 2022; Montano et al., 2022; Vincentelli et al., 2021, 2022; Fian et al., 2022). Evidence of a non-disk component in the optical continuum of Mrk 279 was reported by Chelouche et al. (2019), indicating a possible explanation for the larger-than-expected continuum time lags. Vincentelli et al. (2021, 2022) show, for the first time, that the BLR contribution may even vary in a single object, confirming the importance of considering the effect of emitting components different from the disk when studying the lag phenomenology in AGNs. Further multi-epoch observations over a broader range of wavelengths and a longer time baseline would be particularly valuable to search for evidence of diffuse continuum emission from the BLR and to better understand short-timescale variations in reprocessing behavior. Although mapping the entire accretion disk profile is only possible with intensive multiwavelength campaigns such as the AGN STORM (Edelson et al., 2015; Fausnaugh et al., 2016; Kara et al., 2021), with observations ranging from the X-ray over the UV/optical up to the far IR, the RM campaign at the Wise Observatory provides us with the opportunity to use well-sampled NB light curves free of prominent line emission to study inter-band continuum lags and to reach a more detailed understanding of their physical origin, albeit with a smaller wavelength coverage. This work can be extended to a larger sample of low-luminosity sources that are not accessible through microlensing, allowing us to further investigate the structure of AGN accretion disks and accretion mechanisms.
###### Acknowledgements.
We thank the anonymous referee for the constructive remarks on this manuscript. This work was financially supported by the DFG grant HA3555-14/1 and CHI-34-3 to Tel Aviv University and University of Haifa. This research also has been partly supported by the Israeli Science Foundation grant no. 2398/19. T. L. is supported by an appointment to the NASA Postdoctoral Program at Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA.
|
2303.07418 | FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency
Regularization | Novel view synthesis with sparse inputs is a challenging problem for neural
radiance fields (NeRF). Recent efforts alleviate this challenge by introducing
external supervision, such as pre-trained models and extra depth signals, and
by non-trivial patch-based rendering. In this paper, we present Frequency
regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms
previous methods with minimal modifications to the plain NeRF. We analyze the
key challenges in few-shot neural rendering and find that frequency plays an
important role in NeRF's training. Based on the analysis, we propose two
regularization terms. One is to regularize the frequency range of NeRF's
inputs, while the other is to penalize the near-camera density fields. Both
techniques are ``free lunches'' at no additional computational cost. We
demonstrate that even with one line of code change, the original NeRF can
achieve similar performance as other complicated methods in the few-shot
setting. FreeNeRF achieves state-of-the-art performance across diverse
datasets, including Blender, DTU, and LLFF. We hope this simple baseline will
motivate a rethinking of the fundamental role of frequency in NeRF's training
under the low-data regime and beyond. | Jiawei Yang, Marco Pavone, Yue Wang | 2023-03-13T18:59:03Z | http://arxiv.org/abs/2303.07418v1 | # FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization
###### Abstract
Novel view synthesis with sparse inputs is a challenging problem for neural radiance fields (NeRF). Recent efforts alleviate this challenge by introducing external supervision, such as pre-trained models and extra depth signals, or by using non-trivial patch-based rendering. In this paper, we present **Frequ**ency regularized **NeRF** (FreeNeRF), a surprisingly simple baseline that outperforms previous methods with minimal modifications to plain NeRF. We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training. Based on this analysis, we propose two regularization terms: one to regularize the frequency range of NeRF's inputs, and the other to penalize the near-camera density fields. Both techniques are "free lunches" that come at no additional computational cost. We demonstrate that even with just one line of code change, the original NeRF can achieve similar performance to other complicated methods in the few-shot setting. FreeNeRF achieves state-of-the-art performance across diverse datasets, including Blender, DTU, and LLFF. We hope that this simple baseline will motivate a rethinking of the fundamental role of frequency in NeRF's training, under both the low-data regime and beyond. This project is released at the FreeNeRF project page.
## 1 Introduction
Neural Radiance Field (NeRF) [21] has gained tremendous attention in 3D computer vision and computer graphics due to its ability to render high-fidelity novel views. However, NeRF is prone to overfitting to training views and struggles with novel view synthesis when only a few inputs are available. We refer to this problem of view synthesis from sparse inputs as the few-shot neural rendering problem.
Existing methods address this challenge using different strategies. Transfer learning methods, _e.g._, PixelNerf [37] and MVSNeRF [4], pre-train on large-scale curated multi-view datasets and further incorporate per-scene optimization at test time. Depth-supervised methods [6, 29] introduce estimated depth as an external supervisory signal, leading to a complex training pipeline. Patch-based regularization methods impose regularization from different sources on rendered patches, _e.g._, semantic consistency regularization [11], geometry regularization [22, 8], and appearance regularization [22], all at the cost of computation overhead since an additional, non-trivial number of patches must be rendered during training [11, 22, 8].
In this work, we find that a plain NeRF can work surprisingly well with _none_ of the above strategies in the few-shot setting by adding (approximately) as few as _one_ line of code (see Fig. 1). Concretely, we analyze the common failure modes in training NeRF under a low-data regime. Drawing on this analysis, we propose two regularization terms. One is frequency regularization, which directly regularizes the visible frequency bands of NeRF's inputs to stabilize the learning process and avoid catastrophic overfitting at the start of training. The other is occlusion regularization, which penalizes the near-camera density fields that cause "floaters," another failure mode in the few-shot neural rendering problem. Combined, we call our method **F**requency regularized **NeRF** (FreeNeRF), which is "free" in two ways. First, it is dependency-free because it requires neither costly pre-training [37, 4, 11, 22] nor extra supervisory signals [6, 29]. Second, it is overhead-free as it requires no additional training-time rendering for patch-based regularization [11, 22, 8].
We consider FreeNeRF a simple baseline (with minimal modifications to a plain NeRF) in the few-shot neural rendering problem, although it already outperforms existing state-of-the-art methods on multiple datasets, including Blender, DTU, and LLFF, at almost no additional computation cost. Our contributions can be summarized as follows:
* We reveal the link between the failure of few-shot neural rendering and the frequency of positional encoding, which is further verified by an empirical study and addressed by our proposed method. To our knowledge, our method is the first attempt to address few-shot neural rendering from a frequency perspective.
* We identify another common failure pattern in learning NeRF from sparse inputs and alleviate it with a new occlusion regularizer. This regularizer effectively improves performance and generalizes across datasets.
* Combined, we introduce a simple baseline, FreeNeRF, that can be implemented with a few lines of code modification while outperforming previous state-of-the-art methods. Our method is dependency-free and overhead-free, making it a practical and efficient solution to this problem.
We hope the observations and discussions in this paper will motivate people to rethink the fundamental role of frequency in NeRF's positional encoding.
## 2 Related Work
**Neural fields.** Neural fields [36] use deep neural networks to represent 2D images or 3D scenes as continuous functions. The seminal work, Neural Radiance Fields (NeRF) [21], has been widely studied and advanced in a variety of applications [2, 3, 32, 19, 23, 13, 25], including novel view synthesis [21, 18], 3D generation [25, 10], deformation [23, 26, 28], video [15, 35, 7, 24, 14]. Despite tremendous progress, NeRF still requires hundreds of input images to learn high-quality scene representations; it fails to synthesize novel views with a few input views, _e.g._, 3, 6, and 9 views, limiting its potential applications in the real world.
**Few-shot Neural Rendering.** Many works have attempted to address the challenging few-shot neural rendering problem by leveraging extra information. For instance, external models can be used to acquire normalization-flow regularization [22], perceptual regularization [38], depth supervision [29, 6, 34], and cross-view semantic consistency [11]. Another thread of works [5, 37, 4] attempts to learn transferable models by training on a large, curated dataset instead of using an external model. Recent works argue that geometry is the most important factor in few-shot neural rendering and propose geometry regularization [22, 1, 8] for better performance. However, these methods require expensive pre-training on tailored multi-view datasets [5, 37, 4] or costly training-time patch rendering [11, 22, 1, 8], introducing significant overhead in methodology, engineering implementation, and training budgets. In this work, we show that a plain NeRF can work surprisingly well with minimal modifications (a few lines of code) by incorporating our frequency regularization and occlusion regularization. Unlike most previous methods, our approach maintains the same computational efficiency as the original NeRF.
**Frequency in neural representations.** Positional encoding lies at the heart of NeRF's success [21, 31]. Previous studies [31, 30] have shown that neural networks often struggle to learn high-frequency functions from low-dimensional inputs. Encoding inputs with sinusoidal functions of different frequencies can alleviate this issue. Recent works show the benefits of gradually increasing the input frequency in different applications, such as non-rigid scene deformation [23], bundle adjustment [16], surface reconstruction [33], and fitting functions with a wider frequency band [9]. Our work leverages frequency curriculum to tackle the few-shot neural rendering problem. Notably, our approach not only demonstrates the surprising effectiveness of frequency regularization in learning from sparse inputs, but also reveals
the failure modes behind this problem and why frequency regularization helps.
## 3 Method
### Preliminaries
**Neural radiance fields.** A neural radiance field (NeRF) [21] uses a multi-layer perceptron (MLP) to represent a scene as a volumetric density field \(\sigma\) and associated RGB values \(\mathbf{c}\) at each point in the scene. It takes as input a 3D coordinate \(\mathbf{x}\in\mathbb{R}^{3}\) and a viewing directional unit vector \(\mathbf{d}\in\mathbb{S}^{2}\), and outputs the corresponding density and color. In its most basic form, NeRF learns a continuous function \(f_{\theta}(\mathbf{x},\mathbf{d})=(\sigma,\mathbf{c})\) where \(\theta\) denotes MLP parameters.
**Positional encoding.** Directly optimizing NeRF over raw inputs \((\mathbf{x},\mathbf{d})\) often leads to difficulties in synthesizing high-frequency details [31, 21]. To address this issue, recent work has used sinusoidal functions with different frequencies to map the inputs into a higher-dimensional space [21]:
\[\gamma_{L}(\mathbf{x})=\left[\sin(\mathbf{x}),\cos(\mathbf{x}),...,\sin(2^{L- 1}\mathbf{x}),\cos(2^{L-1}\mathbf{x})\right], \tag{1}\]
where \(L\) is a hyperparameter that controls the maximum encoded frequency and may differ for coordinates \(\mathbf{x}\) and directional vectors \(\mathbf{d}\). A common practice is to concatenate the raw inputs with the frequency-encoded inputs as follows:
\[\mathbf{x}^{\prime}=[\mathbf{x},\gamma_{L}(\mathbf{x})] \tag{2}\]
This concatenation is applied to both coordinate inputs and view direction inputs.
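A minimal NumPy rendition of Eqs. (1) and (2) is given below. It is a generic sketch of the standard frequency encoding rather than the exact code of any particular NeRF implementation.

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Eqs. (1)-(2): concatenate raw inputs with sin/cos features at frequencies 2^0 ... 2^{L-1}."""
    feats = [x]
    for i in range(num_freqs):
        for fn in (np.sin, np.cos):
            feats.append(fn((2.0 ** i) * x))
    return np.concatenate(feats, axis=-1)

# Example: a 3D coordinate with L = 10 frequency bands -> 3 + 3 * 2 * 10 = 63 dimensions
x = np.array([[0.10, -0.40, 0.25]])
print(positional_encoding(x, num_freqs=10).shape)   # (1, 63)
```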
**Rendering.** To render a pixel in NeRF, a ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) is cast from the camera's origin \(\mathbf{o}\) along the direction \(\mathbf{d}\) to pass through the pixel, where \(t\) is the distance to the origin. Within the near and far bounds \([t_{\mathrm{near}},t_{\mathrm{far}}]\) of the cast ray, NeRF computes the color of that ray using the quadrature of \(K\) sampled points \(\mathbf{t}_{K}=\{t_{1},\dots,t_{K}\}\):
\[\hat{\mathbf{c}}(\mathbf{r};\theta,\mathbf{t}_{K})=\sum_{k}T_{k}(1-\exp(-\sigma_{k}(t_{k+1}-t_{k})))\mathbf{c}_{k},\]
\[\text{with}\quad T_{k}=\exp\left(-\sum_{k^{\prime}<k}\sigma_{k^{\prime}}\left(t_{k^{\prime}+1}-t_{k^{\prime}}\right)\right), \tag{3}\]
where \(\hat{\mathbf{c}}(\mathbf{r};\theta,\mathbf{t}_{K})\) is the final integrated color. Note that the sampled points \(\mathbf{t}_{K}\) are in a near-to-far order, _i.e_., a point with a smaller index \(k\) is closer to the camera's origin.
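As a reference, the quadrature in Eq. (3) for a single ray can be sketched as follows; this is a minimal NumPy version assuming `t_vals` also includes the far bound so that all \(K\) intervals are defined.

```python
import numpy as np

def render_ray_color(sigmas, colors, t_vals):
    """Numerical quadrature of Eq. (3) for one ray.

    sigmas: (K,) densities, colors: (K, 3) RGB values,
    t_vals: (K + 1,) sample distances sorted near-to-far, so deltas[k] = t_{k+1} - t_k.
    """
    deltas = t_vals[1:] - t_vals[:-1]                          # t_{k+1} - t_k
    alphas = 1.0 - np.exp(-sigmas * deltas)                    # per-sample opacity
    # T_k = exp(-sum_{k' < k} sigma_{k'} * delta_{k'}): exclusive cumulative sum
    accum = np.concatenate([[0.0], np.cumsum(sigmas * deltas)[:-1]])
    weights = np.exp(-accum) * alphas                          # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)             # integrated ray color
```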
### Frequency Regularization
The most common failure mode of few-shot neural rendering is overfitting. NeRF learns 3D scene representations from a set of 2D images without explicit 3D geometry. 3D geometry is implicitly learned by optimizing appearance in its 2D projected views. However, given only a few input views, NeRF is prone to overfitting to these 2D images with small loss while not explaining 3D geometry in a multi-view consistent way. Synthesizing novel views from such models leads to systematic failure. As shown on the left of Figure 1, no NeRF model can successfully recover the scene geometry when synthesizing novel views.
The overfitting issue in few-shot neural rendering is presumably exacerbated by high-frequency inputs. [31] shows that higher-frequency mappings enable faster convergence for high-frequency components. However, this over-fast convergence on high frequencies impedes NeRF from exploring low-frequency information and significantly biases NeRF towards undesired high-frequency artifacts (horns and room examples in Fig. 1). In the few-shot scenario, NeRF is even more susceptible to such noise, as there are fewer images from which to learn coherent geometry. Thus, we hypothesize that high-frequency components are a major cause of the failure modes observed in few-shot neural rendering. We provide empirical evidence below.
We investigate how a plain NeRF performs when inputs are encoded by different numbers of frequency bands. To achieve this, we train mipNeRF [2] using masked (integrated) positional encoding. Specifically, we set pos_enc[int(L*x%):]=0, where \(L\) denotes the length of the frequency-encoded coordinates after the positional encoding (Eq. (1)), and \(x\) is the visible ratio. We briefly demonstrate our observation here and defer the experiment details to §4.1. Figure 2 shows the results for the DTU dataset under the 3 input-view setting. As anticipated, we observe a significant drop in mipNeRF's performance as higher-frequency inputs are presented to the model. When 10% of the total embedding bits are used, mipNeRF achieves a high PSNR of 17.62, while the plain mipNeRF achieves only 9.01 PSNR on its own (at 100% visible ratio). The _only_ difference between these two models is whether masked positional encodings are used. Although removing a significant portion of high-frequency components avoids
Figure 2: **Masking high-frequency inputs helps few-shot neural rendering.** We investigate how NeRF performs with positional encodings under different masking ratios on the DTU dataset using 3 input views. Despite its over-smoothness, the plain NeRF succeeds in the few-shot setting when only low-frequency inputs are visible.
catastrophic failure at the start of training, it does not result in competitive scene representations, as the rendered images are usually oversmoothed (as seen in the Fig. 2 zoom-in patches). Nonetheless, it is noteworthy that in few-shot scenarios, models using low-frequency inputs may produce significantly better representations than those using high-frequency inputs.
Building on this empirical finding, we propose a frequency regularization method. Given a positional encoding of length \(L+3\) (Eq. (2)), we use a linearly increasing frequency mask \(\boldsymbol{\alpha}\) to regulate the visible frequency spectrum based on the training time steps, as follows:
\[\gamma^{\prime}_{L}(t,T;\mathbf{x})=\gamma_{L}(\mathbf{x})\odot\boldsymbol{ \alpha}(t,T,L), \tag{4}\]
\[\text{with}\ \ \boldsymbol{\alpha}_{i}(t,T,L)=\begin{cases}1&\text{if}\ i \leq\frac{tL}{T}+3\\ \frac{t\cdot L}{T}-\lfloor\frac{t\cdot L}{T}\rfloor&\text{if}\ \frac{tL}{T}+3<i\leq\frac{tL}{T}+6\\ 0&\text{if}\ i>\frac{tL}{T}+6\end{cases} \tag{5}\]
where \(\boldsymbol{\alpha}_{i}(t,T,L)\) denotes the \(i\)-th bit value of \(\boldsymbol{\alpha}(t,T,L)\); \(t\) and \(T\) are the current training iteration and the final iteration of frequency regularization, respectively. Concretely, we start with raw inputs without positional encoding and linearly increase the visible frequency spectrum by 3 bits at a time as training progresses. This schedule can also be written as one line of code, as shown in Figure 1. Our frequency regularization circumvents the unstable, noise-susceptible high-frequency signals at the beginning of training and gradually provides NeRF with high-frequency information to avoid oversmoothness.
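A minimal sketch of the mask in Eq. (5); this follows the equation as written rather than any released code, and the helper name is ours.

```python
import numpy as np

def freq_mask(t, T, L):
    """Frequency mask alpha(t, T, L) of Eq. (5) for a positional encoding of length L + 3."""
    if t >= T:
        return np.ones(L + 3)
    mask = np.zeros(L + 3)
    ptr = t * L / T + 3                          # upper edge of the fully visible spectrum
    int_ptr = int(ptr)
    mask[:int_ptr] = 1.0                         # alpha_i = 1 for i <= tL/T + 3
    mask[int_ptr:int_ptr + 3] = ptr - int_ptr    # linear ramp over the next 3 bits
    return mask                                  # higher-frequency bits stay at 0

# Eq. (4): regularized encoding = gamma_L(x) * freq_mask(t, T, L), element-wise
```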
We note that our frequency regularization shares some similarities with the coarse-to-fine frequency schedules used in other works [23, 16]. Different from theirs, our work focuses on the few-shot neural rendering problem and reveals the catastrophic failure patterns caused by high-frequency inputs and their implication to this problem.
### Occlusion Regularization
Frequency regularization does not solve all problems in few-shot neural rendering. Due to the limited number of training views and the ill-posed nature of the problem, certain characteristic artifacts may still exist in novel views. These failure modes often manifest as "walls" or "floaters" that are located extremely close to the camera, as seen in the bottom of Figure 3. Such artifacts can still be observed even with a sufficient number of training views [3]. To address these issues, [3] proposed a distortion loss. However, our experiments show that this regularization does not help in the few-shot setting and may even exacerbate the issue.
We find most of these failure patterns originate from the least overlapped regions in the training views. Figure 3 shows an example of 3 training views and 2 novel views with "white walls". We manually annotate the least overlapped regions in the training views for demonstration ((a) and (b) in Fig. 3). These regions are difficult to estimate in terms of geometry due to the extremely limited information available (one-shot). Consequently, a NeRF model would interpret these unexplored areas as dense volumetric floaters located near the camera. We suspect that the floaters observed in [3] also come from these least overlapped regions.
As discussed above, the presence of floaters and walls in novel views is caused by the imperfect training views, and thus can be addressed directly at training time without the need for novel-pose sampling [22, 11, 37]. To this end, we propose a simple yet effective "occlusion" regularization that penalizes the dense fields near the camera. We define:
\[\mathcal{L}_{occ}=\frac{\boldsymbol{\sigma}_{K}^{\intercal}\cdot\mathbf{m}_{K}}{K}=\frac{1}{K}\sum_{k}\sigma_{k}\cdot m_{k}, \tag{6}\]
where \(\mathbf{m}_{K}\) is a binary mask vector that determines whether a point will be penalized, and \(\boldsymbol{\sigma}_{K}\) denotes the density values of the \(K\) points sampled along the ray in order of proximity to the origin (near to far). To reduce solid floaters near the camera, we set the values of \(\mathbf{m}_{K}\) up to index \(M\), termed the regularization range, to 1 and the rest to 0. The occlusion regularization loss is easy to implement and compute.
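A sketch of Eq. (6) over a batch of rays (NumPy, with our own variable names):

```python
import numpy as np

def occlusion_loss(sigmas, M):
    """Occlusion regularization of Eq. (6).

    sigmas: (num_rays, K) densities ordered near-to-far along each ray;
    M: regularization range, i.e. the number of near-camera samples to penalize.
    """
    K = sigmas.shape[-1]
    m = np.zeros(K)
    m[:M] = 1.0                                 # binary mask m_K: only the first M samples count
    per_ray = (sigmas * m).sum(axis=-1) / K     # (1/K) * sum_k sigma_k * m_k
    return per_ray.mean()                       # averaged over the ray batch
    # in training, this term is added to the photometric loss with a small weight (0.01 in Sec. 4)
```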
Figure 3: **Illustration of occlusion regularization.** We show 3 training views (solid rectangles) and 2 novel views (dashed rectangles) rendered by a frequency-regularized NeRF. The floaters in the novel views appear to be _near-camera_ dense fields in the _training_ views (dashed circles) so that we can penalize them directly without the need for the costly novel-view rendering in [11, 22].
## 4 Experiments
### Setups
**Datasets & metrics.** We evaluate our method on three datasets under few-shot settings: the NeRF Blender Synthetic dataset (Blender) [21], the DTU dataset [12], and the LLFF dataset [20]. For Blender, we follow DietNeRF [11] to train on 8 views and test on 25 test images. For DTU and LLFF, we adhere to RegNeRF's [22] protocol. On DTU, we use objects' masks to remove the background when computing metrics, as full-image evaluation is biased towards the background, as reported by [37, 22]. We report PSNR, SSIM, and LPIPS scores as quantitative results. We also report the geometric mean of \(\mathrm{MSE}=10^{-\mathrm{PSNR}/10}\), \(\sqrt{1-\mathrm{SSIM}}\), and LPIPS, following [22]. More details on the experimental setup can be found in the appendix.
**Implementations.** Our FreeNeRF can directly improve NeRF [21] and mipNeRF [2]. To demonstrate this, we use DietNeRF's codebase1 for NeRF on the Blender dataset and RegNeRF's codebase2 for mipNeRF on the DTU dataset and the LLFF dataset. We disable the proposed components in those papers and implement our two regularization terms on top of their baselines. We make one modification to mipNeRF [2], which is to concatenate positional encodings with the original Euclidean coordinates (Eq. (2)). This is a default step in NeRF but not in mipNeRF, and it helps unify our experiments' initial visible frequency range. We follow their training schedules for optimization. Please refer to the Appendix for full training recipes.
Footnote 1: [https://github.com/ajayjain/DietNeRF](https://github.com/ajayjain/DietNeRF)
Footnote 2: [https://github.com/google-research/google-research/tree/master/regnerf](https://github.com/google-research/google-research/tree/master/regnerf)
**Hyper-parameters.** We set the end iteration of frequency regularization as \(T=\lfloor 90\%*\mathrm{total\_iters}\rfloor\) for the 3-view setting, \(70\%\) for the 6-view setting, and \(20\%\) for the 9-view setting. We regularize both coordinates \(\mathbf{x}\) and view directions \(\mathbf{d}\). For \(\mathcal{L}_{\mathrm{occ}}\), we use a weight of \(0.01\) in all experiments and set the regularization range to \(M=20\) for LLFF and Blender and \(M=10\) for DTU. For DTU in particular, we find that the "walls" are mostly caused by the white desk and black background, so we use this information to penalize more points in a slightly wider range (\(M=15\)) if their colors are black or white.
**Comparing methods.** Unless otherwise specified, we directly use the results reported in DietNeRF [11] and RegNeRF [22] for comparisons, as our method is implemented using their codebases. We also include our reproduced results for reference.
could be an interesting application, such behavior is undesired in our task and will hamper outputs' fidelity. In contrast, our method does not require semantics regularization while achieving better performance.
**DTU dataset.** Table 2 shows the quantitative results on the DTU dataset. Transfer learning-based methods that require expensive pre-training (SRF [5], PixelNeRF[37], and MVSNeRF [4]) underperform ours in almost all settings, except the full-image PSNR score under 3-view setting. This may be due to the bias introduced by the white table and black background present in many scenes in the DTU dataset, which can be learned as a prior through pre-training. Compared to per-scene optimization methods (mipNeRF [2], DietNeRF [11], and RegNeRF [22]), our approach achieves the best results. Figure 5 shows example novel views rendered by RegNeRF and ours. In the Buddha scene, for instance, piece-wise smoothness imposed by RegNeRF's geometry regularization [22] leads to the loss of fine-grained details, such as eyes, fingers, and wrinkles. In contrast, our frequency regularization, which can be seen as an implicit geometry regularization, forces smooth geometry at the beginning (due to the limited frequency spectrum) and gradually relaxes the constraint to facilitate the details. In the more challenging scenes (_e.g._, buildings and the bronze statue in Fig. 5), FreeNeRF produces higher-quality results.
**LLFF dataset.** Table 3 and Figure 6 show quantitative and
\begin{table}
\begin{tabular}{l|c|c c c c c|c c c c|c c c} & \multirow{2}{*}{Setting} & \multicolumn{3}{c|}{Object PSNR \(\uparrow\)} & \multicolumn{3}{c|}{Object SSIM \(\uparrow\)} & \multicolumn{3}{c|}{Full-image PSNR \(\uparrow\)} & \multicolumn{3}{c}{Full-image SSIM \(\uparrow\)} \\ & & 3-view & 6-view & 9-view & 3-view & 6-view & 9-view & 3-view & 6-view & 9-view & 3-view & 6-view & 9-view \\ \hline SRF [5] & \multirow{3}{*}{Trained on DTU} & 15.32 & 17.54 & 18.35 & 0.671 & 0.730 & 0.752 & 15.84 & 17.77 & 18.56 & 0.532 & 0.616 & 0.652 \\ PixelNeRF [37] & & & 16.82 & 19.11 & 20.40 & 0.695 & 0.745 & 0.768 & 18.74 & 21.02 & 22.23 & 0.618 & 0.684 & 0.714 \\ MVSNeRF [4] & & & & 18.63 & 20.70 & 22.40 & 0.769 & 0.823 & 0.853 & 16.33 & 13.82 & 20.32 & 0.602 & 0.695 & 0.735 \\ \hline SRF \# [5] & \multirow{3}{*}{Trained on DTU} & 15.68 & 18.87 & 20.75 & 0.698 & 0.757 & 0.785 & 16.06 & 18.69 & 19.97 & 0.550 & 0.657 & 0.678 \\ PixelNeRF \# [37] & & and & 18.95 & 20.56 & 21.83 & 0.710 & 0.753 & 0.781 & 17.38 & 21.52 & 21.67 & 0.548 & 0.670 & 0.680 \\ MVSNeRF \# [4] & Optimized per Scene & 18.54 & 20.49 & 22.22 & 0.769 & 0.822 & 0.853 & 16.26 & 18.22 & 20.32 & 0.601 & 0.694 & 0.736 \\ \hline mip-NeRF [2] & \multirow{3}{*}{Optimized per Scene} & 8.68 & 16.54 & 23.58 & 0.571 & 0.741 & 0.879 & 7.64 & 14.33 & 20.71 & 0.227 & 0.568 & 0.799 \\ DietNeRF [11] & & & 11.85 & 20.63 & 23.83 & 0.633 & 0.778 & 0.823 & 10.01 & 18.70 & 22.16 & 0.354 & 0.668 & 0.740 \\ RegNeRF [22] & & & 18.89 & 22.20 & 24.93 & 0.745 & 0.841 & 0.884 & 15.33 & 19.10 & 22.30 & 0.621 & 0.757 & 0.823 \\ \hline mip-NeRF concat. (repro.) & & & 9.10 & 16.84 & 23.56 & 0.578 & 0.754 & 0.877 & 7.94 & 14.15 & 20.97 & 0.235 & 0.560 & 0.794 \\ \hline \multirow{3}{*}{\({}^{\dagger}\)RegNeRF concat. (repro.)} & \multirow{3}{*}{Optimized per Scene} & 18.50 & 22.18 & 24.88 & 0.744 & 0.844 & 0.890 & 15.00 & 19.12 & 22.41 & 0.606 & 0.754 & 0.826 \\ & & & 19.992 & 23.25 & 25.38 & 0.787 & 0.844 & 0.888 & 18.02 & 22.39 & 24.2 & 0.680 & 0.779 & 0.833 \\ \end{tabular}
\end{table}
Table 2: **Quantitative comparison on DTU.** We present the PSNR and SSIM scores of foreground objects and full images. Our FreeNeRF synthesizes better foreground objects and full images than most of the others. Our direct baseline is mipNeRF [2] (marked in gray). Results in the bottom row section are our reproductions, and others come from [22]. “concat.”: inputs concatenation (Eq. (2)). \({}^{\dagger}\)RegNeRF: w/o. appearance regularization. The best, second-best, and third-best entries are marked in red, orange, and yellow, respectively.
Figure 5: **Qualitative comparison on DTU.** We show novel views rendered by RegNeRF and ours in 3 and 6 input-view settings. For the Buddha example, the piece-wise geometry regularization used by RegNeRF [22] hurts the fine-grained geometry, erasing the details of eyes, fingers and wrinkles. RegNeRF’s results are rendered by our reproduced \({}^{\dagger}\)RegNeRF concat. (_c.f._ Tab. 2).
qualitative results, respectively, on the LLFF dataset. We reproduce mipNeRF [2] and obtain better results. Our FreeNeRF is generally the best. Transfer learning-based methods [5, 4, 37] perform much worse than ours on the LLFF dataset due to the non-trivial domain gap between DTU and LLFF. Compared to RegNeRF [22], our approach predicts more precise geometry and exhibits fewer artifacts. For instance, RegNeRF's rendered "horns" example (Fig. 6-a) is perceptually acceptable but has poor depth map quality, indicating its incorrect geometry estimation. FreeNeRF, in contrast, renders a less noisy and smoother occupancy field. Also, our approach suffers less from "floaters" than RegNeRF (Fig. 6-b), further demonstrating the efficacy of our occlusion regularization.
**Training overhead.** In Table 4, we include the training time of different methods under the same setting. Our method only introduces negligible training overhead (\(1.02-1.04\times\)) compared to the other approaches (\(1.62-2.8\times\)). Both DietNeRF [11] and RegNeRF [22] render unobserved patches from novel poses for regularization, which significantly sets back the training efficiency. DietNeRF requires additional forward evaluation of a large model (CLIP ViT B/32, \(224^{2}\), [27]), and RegNeRF also experiences increased computation due to the use of a normalizing flow model (this part is not open-sourced and therefore not available for our experiments). In contrast, FreeNeRF does not require such additional steps, making it a lightweight and efficient solution for addressing few-shot neural rendering problems.
### Ablation Study
In this section, we ablate our design choices on the DTU dataset and the LLFF dataset under the 3-view setting. We use a batch size of 1024 for faster training instead of 4096 for the main experiments in Tables 2 and 3.
**Frequency curriculum.** We investigate the impact of the frequency regularization duration \(T\) in Figure 7.
\begin{table}
\end{table}
Table 3: **Quantitative comparison on LLFF.** PSNR, SSIM, LPIPS, and Average scores under the 3-, 6-, and 9-view settings for SRF [5], PixelNeRF [37], MVSNeRF [4] (with and without per-scene fine-tuning), mip-NeRF [2], DietNeRF [11], RegNeRF [22], and our FreeNeRF.
Our FreeNeRF benefits more from a longer curriculum in terms of PSNR score across the two datasets, with the \(90\%\)-schedule being the best. We thus adopt it as our default schedule. However, we notice a trade-off between PSNR and LPIPS, where a longer frequency regularization duration can result in higher PSNR but worse LPIPS scores. Fine-tuning the trained model can address this issue and yield better LPIPS scores. More details and discussions are provided in the Appendix.
**Occlusion regularization.** Table 5-(a) studies the effect of occlusion regularization. We observe consistent improvements on both datasets when occlusion regularization is included, confirming its efficacy. In contrast, the distortion loss \(L_{distort}\) in [3] worsens the results. Additionally, we find that performance on DTU-3 drops significantly if a large \(M\) is chosen, since a large portion of the real radiance field falls within that range. The hyper-parameter \(M\) can be set empirically per dataset according to the scene statistics. Further, in Table 5-(b), we show that the way our regularization penalizes points near the camera differs from simply adjusting the near bound. The latter changes the absolute location of the ray starting point, while the occlusion effect _remains_ in the starting area regardless of changes to the near bound.
**Limitations.** Our FreeNeRF has two limitations. First, a longer frequency curriculum can make the scene smoother but may worsen LPIPS scores despite achieving competitive PSNR scores. Second, occlusion regularization can cause over-regularization and incomplete representations of near-camera objects in the DTU dataset. Per-scene tuning of the regularization range can alleviate this issue, but we opt not to use it in this paper. Further discussion on these limitations can be found in the Appendix. Addressing these limitations can significantly improve FreeNeRF, and we leave them as future work. Still, we consider FreeNeRF to be a simple yet intriguing _baseline_ approach for few-shot neural rendering that differs from the current trend of constructing more intricate pipelines.
## 5 Conclusion
We have presented FreeNeRF, a streamlined approach to few-shot neural rendering. Our study reveals the deep relation between input frequency and the failure of few-shot neural rendering, and shows that a simple frequency regularizer can largely address this challenge. FreeNeRF outperforms the existing state-of-the-art methods on multiple datasets with minimal overhead. Our results suggest several avenues for future investigation. For example, it is intriguing to apply FreeNeRF to other problems suffering from high-frequency noise, such as NeRF in the wild [18], in the dark [20], and even more challenging images in the wild, such as those from autonomous driving scenes. In addition, in the Appendix, we show that the frequency-regularized NeRF produces smoother normal estimation, which can facilitate applications that deal with glossy surfaces, as in RefNeRF [32]. We hope our work will inspire further research in few-shot neural rendering and the use of frequency regularization in neural rendering more generally.
\begin{table}
\begin{tabular}{c c|c|c c} \multirow{2}{*}{Dataset} & \multirow{2}{*}{\# views} & \multicolumn{3}{c}{Training time multiplier w.r.t. baseline} \\ \cline{3-5} & & NeRF [21] & +Ours & DietNeRF [11] \\ \hline Blender & 8 & \(1.0\times\) & \(1.02\times\) & \(2.8\times\) \\ Dataset & \# views & mipNeRF [2] & +Ours & \({}^{\dagger}\)RegNeRF [22] \\ \hline DTU & 3 & \(1.0\times\) & \(1.04\times\) & \(1.69\times\) \\ LLFF & 3 & \(1.0\times\) & \(1.04\times\) & \(1.98\times\) \\ \end{tabular}
\end{table}
Table 4: **Training time comparison.** We run experiments under a fair setting and report the training time multipliers relative to the baselines. Our FreeNeRF has negligible training overhead compared to baselines (gray), while DietNeRF and RegNeRF do not. \({}^{\dagger}\): w/o. appearance regularization. Note that using appearance regularization will further increase training budgets.
\begin{table}
\end{table}
Table 5: **Effect of occlusion regularization range.** (a) We report PSNR scores on the DTU-3 object and LLFF-3 datasets. Entries except the last row use a batch size of 1024. “B&W” means using the predicted black & white color as additional prior (see “Hyper-parameters” in the “Setup” section). All entries use a \(90\%\)-schedule frequency regularization. (b) In the 3-view DTU ablation setting, we disable/enable \(\mathcal{L}_{occ}\) and vary the near bound to study the impact of our occlusion regularization. Our results show consistent improvement while adjusting the near bound has little impact. Our default settings are marked in gray.
Figure 7: **Effect of frequency regularization duration.** We set the end of frequency regularization as \(T=\lfloor\mathrm{total\_iters}*x\%\rfloor\). FreeNeRF achieves reasonably good performance across a wide range of curriculum choices. All entries use the occlusion regularization, including “w/o. frequency regularization”. |
2301.11403 | Detecting Pump&Dump Stock Market Manipulation from Online Forums | The intersection of social media, low-cost trading platforms, and naive
investors has created an ideal situation for information-based market
manipulations, especially pump&dumps. Manipulators accumulate small-cap stocks,
disseminate false information on social media to inflate their price, and sell
at the peak. We collect a dataset of stocks whose price and volume profiles
have the characteristic shape of a pump&dump, and social media posts for those
same stocks that match the timing of the initial price rises. From these we
build predictive models for pump&dump events based on the language used in the
social media posts.
There are multiple difficulties: not every post will cause the intended
market reaction, some pump&dump events may be triggered by posts in other
forums, and there may be accidental confluences of post timing and market
movements. Nevertheless, our best model achieves a prediction accuracy of 85%
and an F1-score of 62%. Such a tool can provide early warning to investors and
regulators that a pump&dump may be underway. | D. Nam, D. B. Skillicorn | 2023-01-26T20:31:27Z | http://arxiv.org/abs/2301.11403v1 | # Detecting Pump&Dump Stock Market Manipulation from Online Forums
###### Abstract
The intersection of social media, low-cost trading platforms, and naive investors has created an ideal situation for information-based market manipulations, especially pump&dumps. Manipulators accumulate small-cap stocks, disseminate false information on social media to inflate their price, and sell at the peak. We collect a dataset of stocks whose price and volume profiles have the characteristic shape of a pump&dump, and social media posts for those same stocks that match the timing of the initial price rises. From these we build predictive models for pump&dump events based on the language used in the social media posts.
There are multiple difficulties: not every post will cause the intended market reaction, some pump&dump events may be triggered by posts in other forums, and there may be accidental confluences of post timing and market movements. Nevertheless, our best model achieves a prediction accuracy of 85% and an F1-score of 62%. Such a tool can provide early warning to investors and regulators that a pump&dump may be underway.
## 1 Introduction
New financial products and technologies have allowed naive investors to easily enter financial markets. This has increased the risk of manipulation, and detecting and investigating fraudulent activities has become much more difficult. Many go undetected [8].
Social media has created new methods for manipulating markets. A scheme known as _Pump and Dump_ (P&D) is one popular mechanism. Fraudsters buy quantities of a stock, disseminate false information about it to artificially raise its price, and then sell their purchased shares at the higher price. Social media provides a channel for rapid dissemination and a pool of investors with little knowledge or experience who may not detect that the information is false.
Conventional approaches to detecting manipulation look for known patterns, and for anomalous activity such as exceeded thresholds for prices and trading volumes. Suspicious activities can be detected using sets of rules and triggers that cause notifications of potential manipulation. However, those methods struggle in the presence of behaviours that deviate from historical patterns [16]. Previous work has also focused on detecting manipulations so that regulators can penalise those who carry them out. This does little to help investors, either to prevent their being deceived or recovering their investments.
Data-analytic techniques have the potential to detect false information as it is being disseminated [11, 25]. Natural language analytics can detect the posts in social media that are intended to pump particular stocks, providing a real-time warning to potential investors. We investigate how well P&D schemes can be detected in posts on social media, by matching the language patterns in the posts to the pattern of stock price movement that corresponds to a P&D manipulation.
A penny stock is a stock that is traded by a small public company for less than $5 per share [24]. Many of these companies are known for their volatility due to their limited coverage by analysts and interest from institutional buyers. Because of their low price, retail investors can buy large quantities of these stocks without having to invest much money. This, however, makes their prices volatile and so creates the potential for large returns on investments; but also leaves them vulnerable to manipulation by malicious actors. One study found that 50% of manipulated stocks are those with a small market capitalization [1].
It might be supposed that the connection between a social media post and a P&D event is too tenuous to be detected - after all, not every post will have the desired effect, and a P&D might be triggered by some less visible social media activity. We show that, at least for penny stocks, the connection is reasonably detectable, and we achieve prediction accuracies (that a post is intended to cause a P&D event) of 85%, with an F1 score of 67% (\(\pm\) 12 percentage points) from posts alone, and 62% (\(\pm\) 3 percentage points) from posts and comments.
## 2 Tools
Stance detection is a technique to determine the attitude or viewpoint of a text towards a target. It aims to detect whether the author of the text is in support of or against a given entity [21]. Some applications of stance detection have been in political debates, fake news, and social media [15, 26, 30].
Empath is a tool that was developed by Fast et al. [13] for researchers to generate and validate new lexical categories on demand. It uses deep learning to establish connections between words and phrases used in modern fiction. Given a small set of seed words that represents a category, Empath can provide new related terms using its neural embeddings. It also employs the use of crowd-sourcing to validate the terms that it considers are related. Along with the ability to create new categories, Empath comes with 200 built-in, pre-validated categories for common topics (e.g., neglect, government, social media).
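As an illustration, category generation with the open-source empath-client package looks roughly like the following; the seed words here are arbitrary examples, and the call queries the Empath web service.

```python
from empath import Empath

lexicon = Empath()
# expand a handful of seed words into a new custom category
lexicon.create_category("agreement", ["agree", "positive", "good", "bought"])
# score a text against built-in and custom categories, normalized by text length
scores = lexicon.analyze("I agree, bought more this morning",
                         categories=["agreement"], normalize=True)
```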
SHAP (**SH**apley **A**dditive ex**P**lanation) is a tool that was developed by Lundberg and Lee [22] to determine the impact of each attribute on the output of a predictive model. It is based on Shapley values, a concept from game theory that determines a fair way to distribute the payoff for players that have worked in coalition towards an outcome [33].
Extreme Gradient Boosting is a decision-tree based ensemble algorithm that has become known for its speed and performance [5]. Decision trees are built sequentially so that each one reduces the errors of the previous one [35]. Random Forests is a decision-tree based ensemble algorithm with each tree built from a subset of the rows and columns of the dataset [34]. This allows for variation among the trees and results in lower correlation among their predictions [37]. Support Vector Machines are a supervised learning algorithm that finds a hyperplane that best separates the data points from two classes [14].
Artificial Neural Networks are computational networks that are inspired by the biological nervous system [10]. ANNs excel at prediction for data where the amount of information in each
attribute is small and there are non-linear interactions among them. Deep learning models are a class of extensions to ANNs that have solved long-standing prediction problems in image recognition and natural language [20]. Convolutional Neural Networks (CNNs) are a class of deep learning networks that were designed initially to work with images but also work surprisingly well with sequence data such as texts. Long Short-Term Memory (LSTM) deep learning networks are a type of recurrent neural network designed to handle the long-term dependencies present in sequence prediction problems [4]. Understanding text often requires looking ahead (think of verbs in German), so processing text in both directions, using a biLSTM, provides better results for language [6].
## 3 Experiments
Within a typical online forum, there are two different categories of texts. The first is a _post_, which initiates a discussion. The second is a set of _comments_ responding to the post. For example, an individual may post saying that, in their opinion, a stock's price is about to rise, with others responding by sharing their opinions in the same thread. Responders may agree or disagree with the original post.
P&D is an information-based manipulation, artificially raising the price of a stock through the dissemination of false information. As shown in Figure 1, this manipulation strategy involves three different stages [19]. The operators of the scheme first purchase the stock that they are planning to manipulate (Accumulation). Once they have acquired enough shares, they will disseminate false information to make it appear more desirable, driving up the price (Pump). Once the price has risen to the desired level of profit, the operators sell off their shares before anyone uncovers that the information has no basis or the hype dies down (Dump).
To identify P&Ds within the market, patterns associated with the scheme must be established.
Figure 1: Stages of Pump and Dump
While the method of conducting a P&D may vary, two indicators that can identify them are sharp changes in price and volume [19]. A P&D will cause a significant price increase within a short amount of time, larger than the fluctuations that the stock typically experiences; followed by a decrease once the dump phase has begun. The volume also increases as the stock gains interest among investors during and after the dissemination phase. However, the volume will typically not immediately experience as sharp a decline as the price when the operators begin to dump their shares because of the reluctance of investors to believe that the price is illusory.
If the profile of a P&D manipulation can be detected in the market, then the post that putatively caused it can be straightforwardly labelled and its language patterns investigated. (Of course, it is possible that some of the apparent connections are spurious, but it is relatively unlikely that a post touting a particular stock will be disseminated exactly when the stock's price and volume begin a sharp rise).
Labelling comments is more complex, since the comments may agree with the original post, or disagree. Only the language of those that agree can contribute to predicting a P&D event.
### Data Sources
Two different data sources were utilized. The first is the popular online website Reddit, where users discuss the stock market. The second is Yahoo Finance, a financial market website that provides historical data about companies.
Reddit contains forums referred to as subreddits, each dedicated to the discussion of a specific topic. Popular forums for the discussion of stocks are r/pennystocks, r/wallstreetbets, r/stocks, r/RobinHoodPennyStocks, and r/TheWallStreet. We use r/pennystocks and r/RobinHoodPennyStocks. Yahoo Finance is a website provided by Yahoo for investors to access financial news, market data, and basic financial tools. Given a stock symbol or company name, it provides the relevant market data.
Classification techniques such as Extreme Gradient Boosting (XGBoost), Random Forests, Support Vector Machine (SVM), and Artificial Neural Networks (ANNs) were used to learn predictive models, and then to identify which attributes (i.e. words) are most predictive. Figure 2 shows the experimental workflow.
Data from Reddit and Yahoo Finance were collected daily for the period October 1, 2019, to June 28, 2020. A breakdown of the data is shown in Table 1. The majority of the data is retrieved
Figure 2: Experiment workflow
from r/pennystocks, with about a third from r/RobinHoodPennyStocks. The number of comments is much larger than the number of posts, with posts making up only about 5% of the texts.
As shown in Figure 3, there was a sharp increase in the number of submissions over the period of data collection:
* 139,000 Members \(\Rightarrow\) 257,000 Members
* 52,000 Members \(\Rightarrow\) 133,000 Members
This seems to reflect an increase in amateur stock market investing because of the COVID-19 pandemic, and a corresponding increase in manipulation as manipulators look to take advantage of new, naive investors. Alerts and press releases by the SEC and the Canadian Securities Administrators warned new investors to be vigilant about the increasing number of P&D schemes occurring around that time [9, 28, 29].
The median number of words per post or comment was 22, and the total number of distinct words was 4,862.
Replacing stock symbols by the market sector to which each business belongs allows us to see which sectors are discussed the most, and which are the targets of P&D. Figure 4 shows that healthcare stocks are the most mentioned, followed by technology stocks. The pandemic clearly had an effect on both attention to markets and manipulations. Temporal trends in the healthcare
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Subreddit** & **Number of Posts** & **Number of Comments** & **Total** \\ \hline r/pennystocks & 12,049 & 234,149 & 246,198 \\ \hline r/RobinHoodPennyStocks & 6,506 & 78,429 & 84,935 \\ \hline
**Total** & 18,555 & 312,578 & 331,133 \\ \hline \end{tabular}
\end{table}
Table 1: Breakdown of records collected from subreddits
Figure 3: Data Collection Volumes
sector, Figure 5, show an increase in online activity at the beginning of the pandemic, and then a further increase in the middle of 2020. Figure 6 shows that P&D manipulations also increased in 2020.
Table 2 shows the information collected for each post and comment.
Data from Yahoo Finance was scraped using the yfinance tool [2]. Stock symbols were extracted from Reddit posts. This step is non-trivial and required regular expression extraction, and look ups against the publicly traded exchanges. Posts which mentioned more than one stock were discarded, partly because of the complexity of deciding which stock may be being touted, and partly because P&D posts typically focus on one particular stock they are pumping. If a stock symbol was found, yfinance was used to collect the financial information described in Table 3.
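A minimal sketch of this collection step follows; the regular expression, symbol-validation set, and helper names are illustrative placeholders rather than the exact ones used.

```python
import re
import yfinance as yf

TICKER_RE = re.compile(r"\b[A-Z]{1,5}\b")            # candidate ticker pattern (illustrative)

def extract_single_symbol(text, listed_symbols):
    """Return the ticker mentioned in a post, or None if zero or several are found."""
    found = {tok for tok in TICKER_RE.findall(text) if tok in listed_symbols}
    return found.pop() if len(found) == 1 else None

def fetch_ohlcv(symbol, start, end):
    """Daily OHLCV data for the nine-business-day window around a post."""
    return yf.download(symbol, start=start, end=end, interval="1d", progress=False)
```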
As shown in Figure 7, the daily Open, High, Low, Close, and Volume (OHLCV) data was collected over nine business days surrounding an event. Data was collected over five days before each post event to establish a baseline for price and volume. Penny stocks almost always show minor variation in price and volume, so this baseline is typically quite flat. The remaining four days contain the pump event (sharp increase) followed by a decrease in price and a slower decrease in volume.
Figure 4: Histogram of market sectors discussed within subreddits
Sabherwal et al. [27] studied the effects of online message boards on market manipulation and found that dumps typically occur within four days; this is plausible because the manipulators want to sell as soon as the price reaches a peak.
Texts from subreddits were preprocessed using the following steps: remove URLs, expand contractions, remove HTML Tags, remove punctuation, remove extra whitespaces, remove numbers, lemmatization, and remove stopwords.
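A compact version of this cleaning pipeline is sketched below; the stopword list and lemmatizer are passed in (e.g. from NLTK), and contraction expansion is omitted for brevity.

```python
import re

def preprocess(text, stopwords, lemmatize):
    """Apply the cleaning steps listed above to a single post or comment."""
    text = re.sub(r"https?://\S+", " ", text)           # remove URLs
    text = re.sub(r"<[^>]+>", " ", text)                 # remove HTML tags
    text = re.sub(r"[^a-zA-Z\s]", " ", text.lower())     # drop punctuation and numbers
    tokens = [lemmatize(tok) for tok in text.split() if tok not in stopwords]
    return " ".join(tokens)                              # extra whitespace collapses via split/join
```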
Stock symbols within the text were replaced by dummy stock names representing the market sector associated with each business. This is required because the name of the particular stock being pumped and dumped in one case has nothing to do with the name of the stock being used in another case - but there might be correspondences within sectors. Here is an example:
\begin{table}
\begin{tabular}{l l} \hline
**Feature** & **Description** \\ \hline Post Title & Title of the post. \\ \hline Post ID & Unique identification code for post. \\ \hline Post Author & Author of the post. \\ \hline Post Created & Unix Timestamp of when post was submitted. \\ \hline Post Body & Text of the post. \\ \hline Comment ID & Unique identification code for comment. \\ \hline Comment Author & Author of the comment. \\ \hline Comment Created & Unix Timestamp of when comment was submitted. \\ \hline Comment Body & Text of the comment. \\ \hline \end{tabular}
\end{table}
Table 2: Features of collected Reddit data
Figure 5: Trend of posts and comments that discussed healthcare stocks
* "**AYTU** perfect time to buy" \(\Rightarrow\) "**SectorHealthcare** perfect time to buy"
### Data Labelling
To label each post, stock data surrounding the day on which the post was submitted to Reddit were analyzed. If the market data exhibited the pattern associated with a P&D (a notable rise from the time of the post, followed by a sharp drop), then the post was labelled accordingly. A rise was detected by calculating the average price and volume in the five-day window before the post. The
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt}} \hline
**Feature** & **Description** \\ \hline Open & Opening price of the stock for the given period. \\ \hline High & Highest price for the stock within the given period. \\ \hline Low & Lowest price for the stock within the given period. \\ \hline Close & Closing price of the stock for the given period. \\ \hline Volume & Total number of shares traded within the given period. \\ \hline Market Sector & Associated industry that the company is in. \\ \hline Market Capitalization & Total market value of the company’s outstanding shares. \\ \hline \end{tabular}
\end{table}
Table 3: Features of Yahoo! Finance data
Figure 6: Trend of posts that have been labelled as P&D
daily average price (**DAP**) of the values was first calculated for each of the five days.
\[DAP(X_{t})=\frac{1}{4}(X_{t_{open}}+X_{t_{high}}+X_{t_{low}}+X_{t_{close}}) \tag{1}\]
and then the baseline average price (**BAP**) was calculated by
\[BAP(X_{est})=\frac{1}{5}\cdot\sum_{t=T_{0}}^{T_{1}}DAP(X_{t}) \tag{2}\]
The baseline average volume (**BAV**) was calculated by taking the average of the volume values over the estimation window.
\[BAV(X_{est})=\frac{1}{5}\cdot\sum_{t=T_{0}}^{T_{1}}X_{t_{volume}} \tag{3}\]
A threshold was set at two standard deviations above the average price within the five-day estimation window. Price increases above this threshold were considered to be pump events. A similar threshold was used to define a volume anomaly. Events were considered to be the result of P&D if they exceeded the threshold for both price and volume. Figure 8 shows a comparison of the stock behaviours labelled using this approach.
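Putting Eqs. (1)-(3) and the two-standard-deviation rule together, the labelling step can be sketched as follows, assuming a pandas DataFrame with the OHLCV columns of Table 3; this is an illustrative sketch rather than the exact code.

```python
def label_pump_and_dump(ohlcv, est_days=5, num_std=2.0):
    """Flag a post if both price AND volume after the post exceed their baseline thresholds.

    ohlcv: daily Open/High/Low/Close/Volume rows for the nine-day window,
    with the first `est_days` rows forming the estimation window before the post.
    """
    dap = ohlcv[["Open", "High", "Low", "Close"]].mean(axis=1)          # Eq. (1)
    base_price, base_volume = dap.iloc[:est_days], ohlcv["Volume"].iloc[:est_days]

    price_threshold = base_price.mean() + num_std * base_price.std()    # two std above BAP
    volume_threshold = base_volume.mean() + num_std * base_volume.std() # two std above BAV

    post_price, post_volume = dap.iloc[est_days:], ohlcv["Volume"].iloc[est_days:]
    return bool((post_price > price_threshold).any() and (post_volume > volume_threshold).any())
```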
A sudden price rise or volume increase might coincide with a post, but is not necessarily caused by it. The rising region of each stock trend of a potential P&D event was min-max normalised,
Figure 7: Time window used to collect market data.
and its slope calculated. Steep price increases are more likely to arise from genuine information and less likely to have resulted from a single manipulation post, so the median slope across the entire dataset was calculated, and only slopes below the median were considered as potential P&D events. Figure 9 shows the distribution of stock price trend slopes from the entire the dataset. The median value is 0.18.
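The slope filter can be written as a short helper; the least-squares fit below is our assumption about how the slope is computed.

```python
import numpy as np

def rise_slope(prices):
    """Slope of the min-max normalised rising segment of a candidate pump event."""
    p = np.asarray(prices, dtype=float)
    p = (p - p.min()) / (p.max() - p.min() + 1e-9)     # min-max normalisation to [0, 1]
    return np.polyfit(np.arange(len(p)), p, 1)[0]      # least-squares linear slope

# keep only candidates whose rise is shallower than the dataset median slope (0.18)
# candidates = [c for c in candidates if rise_slope(c) < 0.18]
```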
### Agreement Model
The comments associated with the P&D post cannot all be labelled as examples of P&D language, since not all of them will be supportive of the post they are responding to. Manipulators, of course, will post comments in support of the post, either from the same identity or from others.
We developed an agreement model, using ideas from stance detection. This was done using Empath to generate a lexicon of agreement, seeding it with the words: **bought**, **agree**, **positive**, **increasing**, **good**, and **now**. Empath returned the words listed in Table 4. Posts touting stocks also use a specialised vocabulary, shown in these examples.
* "probably go to **shoot** up tomorrow"
Figure 8: Comparison of stock behaviours that have been labelled using anomaly detection
* "this bad boy just **rocket**"
* "i will see you on the **moon**"
An extended lexicon was determined manually by inspecting posts associated with manipulation. Table 5 contains the list of words that were chosen using this approach.
Comments were labelled as associated with pumping if they contained two or more of the
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline only & done & better & true & knew \\ besides & like & maybe & wanted & liked \\ also & important & buying & understand & good \\ understood & needed & work & because & successful \\ knowing & grateful & plus & much & reasonable \\ should & give & happy & course & glad \\ well & considering & anyway & agree & meaning \\ great & probably & sure & thought & guaranteed \\ more & honestly & positive & thankful & actually \\ agreed & special & doubt & guess & though \\ bet & buy & surpass & worth & \\ suppose & although & especially & definitely & \\ certain & figured & given & means & \\ \hline \end{tabular}
\end{table}
Table 4: List of generated agreement words from Empath
Figure 9: Distribution of stock price trend slopes
agreement words, or if they were (visibly) authored by the original poster. The following are some examples of comments that were labelled as not P&D related based on the agreement model:
* "it be the american dream to fall for snake oil salesman and then lose everything it be a story as old as humanity"
* "clearly a pump and dump scheme"
* "do not touch it if the chart look like a hockey stick"
This labelling of comments is limited by the completeness of the agreement lexicon, and also does not account for negations.
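A sketch of this comment-labelling rule (the lexicon shown is only a small subset of the Table 4 and Table 5 terms):

```python
AGREEMENT_LEXICON = {"bought", "agree", "positive", "increasing", "good", "now",
                     "buy", "moon", "rocket", "pump", "soar", "gain"}   # subset of Tables 4-5

def is_pump_related(comment_text, comment_author, post_author):
    """Label a comment as pump-related if it comes from the original poster
    or contains at least two agreement-lexicon words."""
    tokens = set(comment_text.lower().split())
    return comment_author == post_author or len(tokens & AGREEMENT_LEXICON) >= 2
```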
P&D posts and comments are relatively rare and so the dataset is naturally imbalanced. Techniques such as SMOTE [3] and ADASYN [17] were tried but proved ineffective. Instead, where predictors allowed it, class weight parameters were set to penalise mistakes in the minority class.
### Modelling
The following predictors were used:
* Extreme Gradient Boosting (XGBoost)
* Random Forest (RF)
* Support Vector Machine (SVM)
* Artificial Neural Networks
* Multilayer Perceptron (MLP)
* Convolutional Neural Network (CNN)
* Bidirectional Long Short Term Memory (BiLSTM)
In each case the standard performance measures (accuracy, precision, recall, F1-Score, confusion matrix) were calculated, as well as the Shapley values which rank words by their importance to the predictions.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline moon & fast & massive & rich & surprise \\ rocket & profit & top & easy & move \\ pump & rally & peak & early & load \\ soar & climb & worth & shoot & quick \\ jump & rise & sale & money & burst \\ pop & high & gain & breakout & drive \\ hype & spike & run & cash & nice \\ fly & go & up & hit & bank \\ awesome & confident & surpass & more & zoom \\ big & great & potential & advantage & \\ \hline \end{tabular}
\end{table}
Table 5: List of custom words used in the Agreement Model
## 4 Results
Table 6 shows the class distribution for the dataset. Less than 9% of the records are labelled as being P&D. This is typical of datasets where fraud is present; indeed it is striking that the rate of fraud is this high.
The results of each of the predictive model are reported in Table 7 using 5-fold cross validation and upweighting the fraud class when the model permits it.
The neural network models perform well as expected. Models such as XGBoost, Random Forests, and SVM had disappointing performance, and a heterogeneous stacked classifier combining their predictions did not improve on the performance of the individual predictors, suggesting that they make their errors on the same records.
At first glance, the ANN models using posts perform better than those using posts and comments. However, the standard deviations of the performance numbers show that the inclusion of comments provides stability for correctly identifying P&D posts. The best performing model overall is the CNN, especially with comments included. Its precision is relatively low; of all the records that the model predicts to be P&D, only 52.7% are actually correct. A better picture of the model emerges if we look at the rate at which each class is predicted to be positive. Given a positive P&D text, the model has a 76.65% chance of classifying it correctly, whereas, given a negative text, it has a 13.3% chance of classifying it incorrectly as positive. It is perhaps a little surprising that the biLSTM did not perform best, since biLSTMs are typically strong predictors for natural language problems.
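These rates follow directly from the CNN posts-and-comments confusion matrix in Table 7:

```python
tp, fp, tn, fn = 2304, 2068, 13481, 702      # CNN, posts and comments (Table 7)
recall = tp / (tp + fn)                      # ~0.766: a true P&D text is flagged
false_positive_rate = fp / (fp + tn)         # ~0.133: a non-P&D text is wrongly flagged
```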
The SHAP Explainers produce diagrams that rank the attributes by their impact on outcomes. Figure 10 shows the diagram for the CNN predictor for posts and comments and the 30 most impactful words. Although the influence of any single word is inevitably weak, there are visible red dots to the right for many of these words, indicating that higher frequencies of these words are associated with P&D events. The names of the popular sectors are indicators of P&Ds, as are words
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Record Type** & **P\&D** & **Not P\&D** & **Total** \\ \hline Posts & 3,006 & 15,549 & 18,555 \\ \hline Comment & 26,727 & 285,851 & 312,578 \\ \hline
**Total** & 29,733 & 312,142 & 331,133 \\ \hline \end{tabular}
\end{table}
Table 6: Dataset class distribution
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
**Model** & **TP** & **FP** & **TN** & **FN** & **Accuracy** & **Precision** & **Recall** & **F1-Score** \\ \hline \hline
**XGBoost Posts** & 1728 & 6615 & 8934 & 1278 & 57.46 (\(\pm\)3.73) & 20.71 (\(\pm\)0.48) & 57.49 (\(\pm\)0.68) & 30.45 (\(\pm\)2.25) \\ \hline
**XGBoost Posts and Comments** & 2007 & 7646 & 7903 & 999 & 53.41 (\(\pm\)1.42) & 20.79 (\(\pm\)0.85) & 66.77 (\(\pm\)1.58) & 31.71 (\(\pm\)0.96) \\ \hline
**RF Posts** & 271 & 646 & 14903 & 2735 & 81.78 (\(\pm\)0.51) & 29.55 (\(\pm\)1.40) & 9.01 (\(\pm\)0.52) & 13.81 (\(\pm\)0.78) \\ \hline
**RF Posts and Comments** & 414 & 211 & 51538 & 2592 & 84.89 (\(\pm\)0.69) & 66.24 (\(\pm\)1.69) & 13.77 (\(\pm\)0.47) & 22.80 (\(\pm\)0.75) \\ \hline \hline
**SVM Posts** & 1752 & 5263 & 10286 & 1254 & 64.88 (\(\pm\)1.14) & 24.98 (\(\pm\)0.76) & 58.28 (\(\pm\)1.05) & 34.97 (\(\pm\)1.16) \\ \hline
**SVM Posts and Comments** & 2125 & 4559 & 10990 & 881 & 70.6 (\(\pm\)0.49) & 31.79 (\(\pm\)0.43) & 70.69 (\(\pm\)0.56) & 43.86 (\(\pm\)0.57) \\ \hline \hline
**MLP Posts** & 2382 & 1718 & 13831 & 624 & 87.38 (\(\pm\)6.66) & 58.10 (\(\pm\)11.65) & 79.24 (\(\pm\)12.76) & 67.04 (\(\pm\)12.12) \\ \hline
**MLP Posts and Comments** & 2103 & 2602 & 12947 & 903 & 81.11 (\(\pm\)3.71) & 44.70 (\(\pm\)4.28) & 69.96 (\(\pm\)3.80) & 54.55 (\(\pm\)4.36) \\ \hline \hline
**CNN Posts** & 2373 & 1709 & 13840 & 633 & 87.38 (\(\pm\)7.04) & 58.13 (\(\pm\)12.02) & 78.94 (\(\pm\)12.76) & 66.96 (\(\pm\)12.37) \\ \hline
**CNN Posts and Comments** & 2304 & 2068 & 13481 & 702 & 85.07 (\(\pm\)1.25) & 52.70 (\(\pm\)2.33) & 76.65 (\(\pm\)3.45) & 62.46 (\(\pm\)2.64) \\ \hline \hline
**biLSTM Posts** & 2297 & 2495 & 13054 & 709 & 82.73 (\(\pm\)8.11) & 47.93 (\(\pm\)9.92) & 76.41 (\(\pm\)10.94) & 58.91 (\(\pm\)10.82) \\ \hline
**biLSTM Posts and Comments** & 2288 & 2370 & 13179 & 718 & 83.36 (\(\pm\)2.27) & 49.12 (\(\pm\)3.25) & 76.11 (\(\pm\)3.86) & 59.71 (\(\pm\)3.54) \\ \hline \end{tabular}
\end{table}
Table 7: Summary of model performance
from the agreement model such as "buy" and "go". Across the best performing models, the same set of words emerge as the most impactful features (not shown).
Related work
The application of data analytics for detecting market manipulation is relatively new in the field of finance. Most research has focused on detecting trade-based manipulation because it is the most common [32]. Huang and Chang found that of the manipulation cases prosecuted in Taiwan from 1991 to 2010, 96.61% were trade-based, and only 3.39% were information-based [18]. Some examples of detecting trade-based manipulation are: Ogut et al. [38] in the emerging Istanbul Stock Exchange, Wang et al. [32] for prosecuted manipulation cases reported by the China Securities Regulatory Commission, Cao et al. [7] using real trading data from four popular NASDAQ stocks with synthetic cases of manipulation (spoofing and quote stuffing), Cao et al. [36] using data from seven popular NASDAQ and LSE stocks injected with ten simulated stock price manipulations, Diaz et al. [12] using manipulation cases pursued by the U.S. Securities and Exchange Commission (SEC) in 2003, and Golomohammadi et al. [16] trying to detect three groups of manipulation schemes: marking the close, wash trades, and cornering the market.
For information-based manipulation, Victor and Hagemann [31] looked at 149 confirmed P&D schemes coordinated through Telegram chats and pumped via Twitter. Using XGBoost, they built a model that achieved a sensitivity of 85% and specificity of 99%. They concluded that P&Ds were frequent among cryptocurrencies that had a market capitalization of $50 million or below and often involved trading volumes of several hundred thousand dollars within a short time-frame.
Mirtaheri et al. [23] looked specifically at forecasting P&Ds by combining the information from Twitter and Telegram. They manually labelled known P&D operation messages on Telegram, and then used SVMs with a stochastic gradient descent optimizer to label the remaining messages as P&D or not. They used Random Forests to detect whether a manipulation event was going to take place within the market. Their results showed that they were able to detect, with reasonable accuracy, whether there is an unfolding manipulation scheme occurring on Telegram. Their proposed model was able to achieve an accuracy of 87% and an F1-Score of 90%.
Some partially automated tools have also been developed. These flag suspicious activities that can then by investigated by regulators. Delort et al. [11] used Naive Bayes classifiers to examine collected messages from HotCopper, an Australian stock message board. They successfully identified messages of concern, but the number of false positives was too high to use the model in an automated way. Owda et al. [25] compared messages to lexicon templates of known illegal financial activities (e.g. Pump and Dump, Insider Information). They found that, of the 3000 comments that were collected on a daily basis, 0.2% were deemed suspicious.
## 6 Conclusion
The intersection of social media with low-cost trading platforms and naive investors has made market manipulation an attractive strategy. Pump&dump is particularly simple to implement since it requires only the dissemination of fictional information about the future prospects for a stock. This is particularly easy for penny stocks, where validating information is difficult for ordinary investors and where relatively small purchase volumes can cause large price movements.
We investigate protecting investors, and assisting regulators, by building predictive models that label social media posts (and the responses they elicit) as potential drivers of P&D events. We do this by collecting posts and comments, developing a model for a P&D event based on patterns of price and volume changes, using the match between posts and P&D events to label posts, and
extending this labelling to comments using an agreement model. Natural language predictors then learn the language patterns associated with P&D manipulations, so that new manipulations can be detected before they affect the market.
Data is imbalanced, since manipulations are rare, but our best predictive model achieves an F1-score of 62% and an accuracy of 85%. Improvements in performance are limited by potential coincidences between a post and a price and volume change that mimics a P&D, posts that fail to reach a sufficient audience to cause the desired buying behaviour, and natural language issues that arise from informal and short texts, and a specialised vocabulary used in stock discussion forums.
|
2302.07731 | Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews
on Social Media | Recent advances in generative models such as GPT may be used to fabricate
indistinguishable fake customer reviews at a much lower cost, thus posing
challenges for social media platforms to detect these machine-generated fake
reviews. We propose to leverage the high-quality elite restaurant reviews
verified by Yelp to generate fake reviews from the OpenAI GPT review creator
and ultimately fine-tune a GPT output detector to predict fake reviews that
significantly outperform existing solutions. We further apply the model to
predict non-elite reviews and identify the patterns across several dimensions,
such as review, user and restaurant characteristics, and writing style. We show
that social media platforms are continuously challenged by machine-generated
fake reviews, although they may implement detection systems to filter out
suspicious reviews. | Alessandro Gambetti, Qiwei Han | 2023-02-10T19:40:10Z | http://arxiv.org/abs/2302.07731v3 | # Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media
###### Abstract.
Recent advances in generative models such as GPT may be used to fabricate indistinguishable fake customer reviews at a much lower cost, thus posing challenges for social media platforms to detect these machine-generated fake reviews. We propose to leverage the high-quality elite restaurant reviews verified by Yelp to generate fake reviews from the OpenAI GPT review creator and ultimately fine-tune a GPT output detector to predict fake reviews that significantly outperforms existing solutions. We further apply the model to predict non-elite reviews and identify the patterns across several dimensions, such as review, user and restaurant characteristics, and writing style. We show that social media platforms are continuously challenged by machine-generated fake reviews, although they may implement detection systems to filter out suspicious reviews.
AI-generated Content, Natural Language Generation, Fake Review Detection, GPT, Social Media
This paper is organized as follows. Section 2 presents a comprehensive review of the relevant literature regarding the influence and identification of fake reviews on social media platforms. Section 3 outlines the supervised learning approaches used to detect and analyze fake reviews across multiple attributes. In Section 4, the findings are presented, followed by a discussion in Section 5, which links back to the existing literature and highlights any limitations encountered. Lastly, Section 6 concludes the paper.
## 2. Literature Review
### Impact of Fake Reviews in Online Markets
Different economic agents, including retailers and platforms, are known for manipulating online reviews (Krishnan, 2011; Krishnan, 2012; Krishnan, 2013). For example, motivated by financial incentives, online merchants are inclined to distribute fake positive reviews for their own products or fake negative reviews against competitors' products (Krishnan, 2013; Krishnan, 2013; Krishnan, 2013). Also, online platforms have a propensity to circulate fake reviews to augment website traffic and promote customer engagement (Krishnan, 2013). Remarkably, individual users might also post fake content for reward-seeking purposes (Krishnan, 2011; Krishnan, 2013; Krishnan, 2013). Overall, fake reviews weaken informativeness and information quality (Krishnan, 2013), damaging review credibility and helpfulness (Krishnan, 2011; Krishnan, 2013; Krishnan, 2013; Krishnan, 2013), which are the main factors new consumers take into account when browsing reviews before making purchase decisions. Additionally, extant research has demonstrated that the proliferation of fake reviews increases consumer uncertainty (Krishnan, 2013; Krishnan, 2013), induces customer distrust towards online reviews (Krishnan, 2013; Krishnan, 2013; Krishnan, 2013), and undermines consumers' purchase intentions (Krishnan, 2013; Krishnan, 2013; Krishnan, 2013). Specifically, in the context of Yelp, (Krishnan, 2013) executed an interdisciplinary study leveraging both qualitative and quantitative research methods (surveys and linear models, respectively), showing that, based on the output of univariate models, consumers' knowledge of review fraud is statistically significantly connected to increased intentions to use Yelp as a tool before making purchase decisions.
### Fake Reviews Characteristics and Detection
Recent literature examined how fake reviews differ from legitimate ones across several characteristics such as writing style (e.g. readability and sentiment), ratings, restaurant characteristics, and user behavior. For each characteristic, we succinctly discuss the relevant related work.
#### Writing Style
(Krishnan, 2013) posited that reviews' readability serves as a proxy for their helpfulness, as consumers must first read and then comprehend the text to assess their usefulness. Empirical research has further demonstrated that the likelihood of a review being deemed helpful increases when it is presented in an easily comprehensible manner (Krishnan, 2013). Hence, several studies theorized that fraudsters might deliberately disseminate simple fake content to quickly catch readers' attention (Krishnan, 2011; Krishnan, 2013; Krishnan, 2013), conceptualizing that fake reviews were easier to comprehend. Empirically, leveraging readability metrics such as the _Automated Readability Index_(Krishnan, 2013), (Krishnan, 2013) found that fake deceptive reviews exhibited less writing complexity as compared to truthful ones. However, no unanimous academic consensus has been established on this finding, because other studies employing comparable methodologies showed the opposite result (Krishnan, 2013; Krishnan, 2013).
Also, textual review sentiment has been investigated for its effectiveness and helpfulness (Krishnan, 2013; Krishnan, 2013). As for fake reviews, consumers realized that more polarized sentiment tones could be surrogates for suspicious user-generated content (Krishnan, 2013). For example, leveraging statistical tools such as standard t-tests and generalized linear models, prior research discovered that fake reviews were richer in positive cues as compared to authentic ones (Krishnan, 2013; Krishnan, 2013). Additionally, (Krishnan, 2013) adopted machine learning techniques to identify review spam, concluding that mixed or neutral sentiments were associated with truthful reviews. Also, employing ranking classification models, (Krishnan, 2013) described how spammers are not capable of expressing true sentiment when writing fake reviews, leading to more polarized opinions in the end.
#### Ratings and Restaurant Characteristics
Extreme sentiment polarity was also detected when considering review ratings (Krishnan, 2013), which are a robust representative of sentiment as well. In particular, extant literature affirmed that positive fake reviews were more prevalent than negative ones (Krishnan, 2013; Krishnan, 2013). For example, (Krishnan, 2013) found that 56% of fake reviews were positive (4-5 stars) and that 29% were negative (1-2 stars). One hypothesis that may ex-post explain the prevalence of positive fake content could be that a one-star increase in the Yelp restaurant average rating is associated with a 5-9% revenue growth (Krishnan, 2013). As far as restaurant characteristics, (Krishnan, 2013) examined how fake restaurant reviews were present on Yelp. They found that about 16% of the reviews were filtered out as fake or suspicious, and that restaurants with fewer associated reviews were more likely to submit positive fake reviews to enhance their reputation. (Krishnan, 2013) also segmented restaurants into chain (e.g. McDonald's, Burger King, Subway, etc.) and non-chain, finding the former ones less likely to display positive fake content, because their revenue is not significantly affected by their rating (Krishnan, 2013), and because they may incur high reputation costs if caught (Krishnan, 2013).
#### User Behavior
Fake reviews can also be identified by user behavior, i.e. spammers' characteristics. For example, (Krishnan, 2013) describe the concept of _singleton reviews_, which is the phenomenon of users posting only one fake review. Because of that one-to-one relationship, spotting and tracking activities of singleton review spammers is challenging (Krishnan, 2013; Krishnan, 2013). (Krishnan, 2013) defined four subsets of user-centric features to analyze Yelp reviews: _personal profile_ features (e.g. profiles description), _social interaction_ features (e.g., user number of friends), _review activity_ features (e.g. number of previous reviews), and _trust information_ features (e.g. number of photos posted). Here, they leveraged supervised machine learning techniques to classify fake versus authentic reviews, showing that _review activity_ features were the most relevant in terms of classification accuracy. Inherent to our paper, they also described how accounts associated with consistent spamming of fake user-generated content displayed fewer friends, fewer photos posted, and fewer reviews as compared to accounts conducting a genuine activity. Strengthening this finding, (Krish, 2013) also confirmed that fake reviews were more likely to be posted by users with less established reputations, as determined by fewer friends and reviews published.
#### 2.2.4. Detection
Human evaluators were found to systematically fail at distinguishing user-generated fake reviews from genuine ones (Han et al., 2017). For example, (Wang et al., 2018) surveyed members of the general public to detect fake reviews, finding that the best human judge achieved an accuracy of 65%. Similarly, (Wang et al., 2018) found a comparable result for the same task, amounting to 61% accuracy. Also, (Wang et al., 2018) and (Wang et al., 2018) recorded comparable human performance in similar experimental settings, with humans achieving average detection accuracy rates of 57% and 52%, respectively. Such findings demonstrate that humans perform at an accuracy level comparable to random guessing. In contrast, fake review detection, viewed as a binary "_spam_ versus _non-spam_" (Krause et al., 2019) or "_fake_ versus _non-fake_" (Wang et al., 2018) supervised learning problem, showed promising results. Models such as logistic regression (Krause et al., 2019; Wang et al., 2018), naive Bayes (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), random forest (Wang et al., 2018), and XGBoost (Krause et al., 2019) served as valuable benchmarks for more sophisticated models such as deep convolutional neural networks (Wang et al., 2018; Wang et al., 2018) and recurrent neural networks (Wang et al., 2018). Also, large language models (LLMs) such as GPT-2 or RoBERTa, which rely on the attention mechanism (Wang et al., 2018), have been employed in spam detection tasks (Krause et al., 2019; Wang et al., 2018). For example, as early as 2011, (Wang et al., 2018) achieved out-of-sample detection accuracy rates above 85% with models such as naive Bayes. More recently, (Wang et al., 2018) achieved a 97% F1-score with a RoBERTa transformer in a comparable experimental setting. These findings demonstrate how natural language processing and machine learning techniques outperform human capabilities with a high degree of accuracy and efficiency in terms of identifying patterns and trends in the fake review detection domain.
## 3. Methodology
In this section, we describe the methodology used. We explain how we: (1) collected data and generated fake GPT-3 reviews, (2) asked human judges and implemented machine learning algorithms to detect them, and (3) inferred and explained the predictions of fake versus non-fake reviews on a set of unverified reviews. Figure 1 illustrates the GPT-3 pipeline from fake review generation to detection. As of 2022, GPT-3 is a state-of-the-art natural language processing model developed by OpenAI that has gained considerable attention due to its impressive performance in a wide range of language-related tasks (Wang et al., 2018). Its ability to learn the patterns and structures of language at an unprecedented scale has enabled the model to generate coherent and contextually relevant text that is often indistinguishable from that written by humans. The power and versatility of GPT-3 make it a valuable methodology for both text generation and detection. It is necessary to highlight that a successful application of GPT-3 models is the recent introduction of ChatGPT, a chatbot that leverages their architecture to engage in human-like conversations and provide support to users in question-answering tasks. However, the wide accessibility and (current) nil usage costs of ChatGPT may also enhance the capabilities of malicious actors to generate fraudulent content to be disseminated on social media platforms. This raises concerns about the potential misuse of GPT-3-based models and underscores the need for vigilant monitoring and control of its applications.
### Data Collection
We accessed the 2021 to mid-2022 New York City restaurant mobility data from the company SafeGraph ([https://www.safegraph.com](https://www.safegraph.com)) to collect a dataset of restaurants. New York City was selected because it offers a variety of restaurants serving distinct culinary tastes within an international setting, thereby providing sufficient heterogeneity for our study. Next, we scraped all restaurant-related customer reviews from Yelp. In total, we collected 447,295 reviews connected to 5,959 restaurants. Each example includes the review text, the date it was posted, the rating, the poster's Yelp elite status, the poster's number of previous reviews, and the poster's number of uploaded photos. Then, we enriched the data by (1) querying the Yelp official API downloading restaurant-related variables (each review was connected to) such as the average rating and the price level, and (2) including the raw number of visits and the normalized number of visits by the total visits (in the New York state) from the original SafeGraph data. SafeGraph collects visit data by leveraging various data sources, including GPS signals, Wi-Fi signals, and Bluetooth signals from visitors' mobile devices. The company uses a combination of these signals to determine the temporal location of devices in the physical world, and then aggregates this information to create a comprehensive dataset of visit information. Overall, SafeGraph datasets have been extensively used in diverse research domains, including public health (Han et al., 2017), impact of mobility restrictions and compliance (Han et al., 2017), disease transmission patterns and epidemiological simulations (Wang et al., 2018; Wang et al., 2018), and estimation of alcohol sales (Krause et al., 2019), among others.
### Fake Reviews Generation
After collecting the main data, we used the OpenAI publicly available GPT-3 API to build a dataset of fake reviews ([https://openai.com/api](https://openai.com/api)). As of 2022, four different GPT-3 sub-models could be chosen at different price rates: _Ada_ (0.00045 / 1K tokens), _Babbage_ (0.00058 / 1K tokens), _Curie_ (0.00205 / 1K tokens), and _Davinci_ (0.02005 / 1K tokens). Naturally, the higher the price rate, the more accurate the model instruction-following. A higher price rate also translates into larger model sizes, as measured by the number of parameters, which are not officially disclosed by OpenAI. Simply put, a larger number of parameters leads to increased performance.
Figure 1. We leveraged OpenAI’s GPT-3 _Davinci_ and _Curie_ models to create a dataset of fake reviews. Prompts were elite reviews posted by elite users. Therefore, the generated dataset contained equally balanced real elite and fake reviews. To classify them accurately, we fine-tuned a GPT-Neo (GPT-3 equivalent) model, which was used to predict the probability of non-elite reviews being AI-generated.
We then randomly sampled 12,000 reviews from a total of 92,253 elite reviews (out of 447,295 total scraped reviews) representing 4,994 restaurants, and used the elite-sampled texts as prompts to generate related fake reviews. Elite reviews are written by elite users, who Yelp thoroughly verifies. According to Yelp, to apply for an elite membership, a user is expected to have consistently posted thoughtful reviews, uploaded beautiful pictures, and upvoted others' reviews. Therefore, we assume that elite reviews are a reliable proxy of information reflecting real customers' opinions. As a prompt, we utilized the default template that OpenAI provides for restaurant fake review generation:
_"Write a restaurant review based on these notes:_
_Name: [Example restaurant name]_
_[Example elite review text]_"
For each prompt, we randomly selected either _Curie_ or _Davinci_ with equal probability. Also, we randomly sampled the _Temperature_ value, a hyper-parameter controlling the randomness of the generated text, from a uniform distribution \(U(0.3,0.7)\). For general reference, a value of 0 generates deterministic and repetitive text, whereas a value of 1 generates highly random text. All the other hyper-parameters were kept at their defaults. Table 1 provides examples with different sentiments. As a result, the final dataset of 24,000 reviews, equally balanced between elite and fake reviews, was split into 80% training and 20% testing for later use.
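For illustration, the generation step can be reproduced along the following lines. This is a minimal sketch assuming the legacy (pre-v1) `openai` Python client and the public model identifiers `text-davinci-002` and `text-curie-001`; the exact engines, token limits, and other defaults used for the paper are not specified in the text.

```python
import random
import openai  # legacy (<1.0) OpenAI Python client assumed

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical identifiers standing in for the Davinci and Curie engines.
GPT3_MODELS = ["text-davinci-002", "text-curie-001"]

def generate_fake_review(restaurant_name: str, elite_review: str) -> str:
    """Generate one fake review, using a real elite review as the prompt notes."""
    prompt = (
        "Write a restaurant review based on these notes:\n\n"
        f"Name: {restaurant_name}\n\n"
        f"{elite_review}\n"
    )
    model = random.choice(GPT3_MODELS)      # Curie or Davinci with equal probability
    temperature = random.uniform(0.3, 0.7)  # Temperature sampled from U(0.3, 0.7)
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        temperature=temperature,
        max_tokens=256,  # assumed cap, not stated in the paper
    )
    return response["choices"][0]["text"].strip()
```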
### Survey Design for Fake Reviews Detection
We ran a human study in which each respondent was asked to select the AI-generated review from a set of review pairs. The surveying strategy consisted of two steps.
Firstly, we sampled and showed 15 review pairs from the training set to “train” the respondents for the task, with each pair containing one human-generated review and one AI-generated review. Secondly, we sampled 40 review pairs from the test set and used those as survey questions. Train and test set human-written reviews average about 140 words per review (std 75 words). Here, we picked human-written reviews within the range of \(140\pm 40\) words, with reviews exceeding 140 words categorized as _long_ and shorter ones as _short_, and paired each with an AI-generated review of comparable length (max 30 words difference). For the questions, we randomly selected 10 same-restaurant long review pairs (_Same-Long_, 140\(<\)words\(<\)180), 10 same-restaurant short review pairs (_Same-Short_, 100\(<\)words\(<\)140), 10 different-restaurant long review pairs (_Different-Long_), and 10 different-restaurant short review pairs (_Different-Short_). In relation to the response options, a third alternative, “_Cannot decide. I’m unsure_”, was included to enable study participants to indicate their uncertainty instead of having to resort to a random guess. This additional response choice aimed to enhance the accuracy and reliability of the collected survey data by reducing the impact of arbitrary guessing, which may compromise the validity of the study. All the pairs were randomly spread across the survey form, and two questions were converted into attention checks to monitor the respondents' care.
The survey was then sent to 90 participants through the Prolific platform, and they were paid about $8.30 per hour. After removing 10 attempts connected to inattentive answers, we counted 80 valid responses related to 38 questions, totaling 3,040 single-question responses. We reported the overall average accuracy and the average accuracy for all four categories. Finally, we conducted a Tukey's HSD test to validate whether such category averages were statistically different.
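The per-category comparison can be reproduced with a pairwise Tukey HSD test, for example via `statsmodels`; the sketch below assumes one accuracy value per respondent and category, with hypothetical array names.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_category_accuracies(accuracy: np.ndarray, category: np.ndarray):
    """Pairwise Tukey HSD over per-respondent accuracies grouped by question category.

    accuracy: per-respondent, per-category accuracy values (e.g. in percent).
    category: matching labels such as "Same-Long", "Same-Short",
              "Different-Long", "Different-Short".
    """
    result = pairwise_tukeyhsd(endog=accuracy, groups=category, alpha=0.05)
    print(result.summary())  # mean differences, confidence intervals, reject flags
    return result
```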
### Automating Fake Reviews Detection with AI
We deployed machine learning techniques to carry out the same task as above. Practically, we fine-tuned a pre-trained **GPT-Neo** model (GPT-Neo, 2016; 6) to classify fake versus real reviews.
GPT models belong to the family of transformer models, which have become state-of-the-art in natural language processing and computer vision. The reason why transformer models are powerful is that they rely on the attention mechanism (Zhu et al., 2017), allowing the network to focus mainly on the most relevant parts of the input sequence. GPT-Neo is designed using EleutherAI's replication of the GPT-3 architecture, which currently is OpenAI's proprietary software. As such, GPT-Neo is a scale-up of the GPT (Zhu et al., 2017) and GPT-2 (Zhu et al., 2017) models.
Practically, we accessed a 125-million-parameter pre-trained version from Huggingface ([https://huggingface.co/EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M)), and fine-tuned it with our generated fake restaurant reviews dataset. We benchmarked GPT-Neo against other machine learning models such as Bidirectional LSTM (BiLSTM), Logistic Regression, Naive Bayes (Zhu et al., 2017), Random Forest (Brock et al., 2018), XGBoost (Brock et al., 2018), and GPT-2. We also benchmarked it against the current open-source OpenAI RoBERTa model for GPT fake text detection (Zhu et al., 2017). For the machine learning models, we trained with 5-fold cross-validation on the training set and reported the results on the test set. Review texts were represented with a bag-of-words approach, which tokenizes text into individual words and then counts the frequency of those words in each document. For the deep learning models (BiLSTM, GPT-2, and GPT-Neo), we instead extracted another 20% partition from the training set as validation data, since 5-fold cross-validation is computationally expensive. Here, review texts were tokenized using Byte-Pair Encoding (BPE) (Ring et al., 2016), a byte-level data compression algorithm that segments words into subword units by iteratively merging the most frequently occurring pairs of adjacent bytes. We trained using the AdamW optimizer default hyper-parameters (Kingmae et al., 2014), with a learning rate of 1e-4, decayed by a factor of 0.1 every 5 epochs, a batch size of 1, and early stopping at 10 epochs. For GPT-Neo, we computed the optimal classification threshold at each epoch by optimizing Youden's J statistic in the validation set (Zhu et al., 2017), calculated as the difference between the true positive rate and the false positive rate. Finally, the best weights and classification threshold were saved, and evaluation was performed on the test set. For all the models, we reported the accuracy, precision, recall, and F1 scores.
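As a concrete illustration of the threshold-selection step, the sketch below loads a GPT-Neo sequence classifier (an assumed Hugging Face `transformers` setup, not the exact training script) and computes the threshold maximizing Youden's J statistic on validation-set probabilities.

```python
import numpy as np
from sklearn.metrics import roc_curve
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed setup: GPT-Neo 125M with a binary (fake vs. real) classification head.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo defines no padding token
model = AutoModelForSequenceClassification.from_pretrained(
    "EleutherAI/gpt-neo-125M", num_labels=2, pad_token_id=tokenizer.eos_token_id
)

def optimal_youden_threshold(y_true: np.ndarray, p_fake: np.ndarray) -> float:
    """Return the classification threshold maximizing J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, p_fake)
    return float(thresholds[np.argmax(tpr - fpr)])

# Usage on held-out validation predictions (placeholder arrays):
# j_star = optimal_youden_threshold(val_labels, val_fake_probs)
# test_pred = (test_fake_probs >= j_star).astype(int)
```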
### Inference on Non-Elite Reviews
With the best GPT-Neo classifier, we performed inference on the unverified non-elite reviews, determining the probability of each one being fake. Importantly, reviews posted before 2020 (included) were dropped, since GPT-3 models were beta-released in 2020. Here, we hypothesized that GPT-3-based AI crowdturfing campaigns were implemented in the last two years, given the GPT-3 API's easy accessibility and low usage costs. This operation reduced the inference dataset to 131,266 non-elite reviews. Importantly, Yelp implements a proprietary algorithm to flag and filter fake reviews (Yelp, 2019). In this paper, we performed inference on reviews that had already passed the Yelp filtering system.
Each example review incorporates a _review_-based variable, i.e. the review rating (_Rating_), distributed as a 1 to 5 Likert scale; _user_-based variables, i.e. the user's number of friends (_#Friends_), the user's aggregated number of previously posted reviews (_#Reviews_), and the user's overall number of posted photos (_#Photos_); and _restaurant_-based variables, i.e. the restaurant's average rating from all the reviews (_AvgRating_), the price level (_PriceLevel_), i.e. the average price per person as “$”: under $10, “$$”: $10-$30, “$$$”: $31-$60, and “$$$$”: over $60, the overall number of restaurant reviews posted by customers (_#RestReviews_), the chain status (_ChainStatus_), computed by adopting the Zhang & Luo (2022) approach (Zhang & Luo, 2022), which counts the occurrences of each unique restaurant name in the dataset and assigns those appearing more than five times as belonging to a restaurant chain (e.g. McDonald's, Starbucks, Burger King, etc.), the number of customer visits between 2021 and mid-2022 (_#Visits_), and the normalized number of visits (_NormVisits_), multiplied by 1,000 for easier readability. Summary statistics are provided in Table 2.
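As an illustration of the restaurant-level feature construction, the following sketch derives _ChainStatus_ and _PriceLevel_ with pandas; the DataFrame and its column names (`name`, `price`) are hypothetical.

```python
import pandas as pd

def add_restaurant_features(restaurants: pd.DataFrame) -> pd.DataFrame:
    """restaurants: one row per restaurant location, with hypothetical columns
    "name" (restaurant name) and "price" (Yelp price symbol, e.g. "$$")."""
    # ChainStatus: a name shared by more than five locations marks a chain.
    name_counts = restaurants["name"].value_counts()
    restaurants["ChainStatus"] = restaurants["name"].map(name_counts).gt(5).astype(int)

    # PriceLevel: ordinal encoding of Yelp's price symbols.
    price_map = {"$": 1, "$$": 2, "$$$": 3, "$$$$": 4}
    restaurants["PriceLevel"] = restaurants["price"].map(price_map)
    return restaurants
```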
Afterward, classification was performed with a sensitivity-analysis approach over a set of classification thresholds. For each threshold, we separated predicted fake and non-fake reviews and performed ANOVA on each aforementioned variable to inspect differences across the two predicted categories. This methodology was adopted because labels indicating whether non-elite reviews were AI-generated were not available. Thus, different thresholds were tested to examine the sensitivity and robustness of the predictions.
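A minimal sketch of this per-threshold comparison is given below, assuming a DataFrame `reviews` that carries the predicted fake probability in a hypothetical column `p_fake`; the threshold grid is illustrative.

```python
import pandas as pd
from scipy.stats import f_oneway

THRESHOLDS = [0.5708, 0.7, 0.8, 0.9, 0.99]  # illustrative grid including J*
VARIABLES = ["Rating", "#Friends", "#Reviews", "#Photos", "AvgRating", "#RestReviews"]

def threshold_sensitivity(reviews: pd.DataFrame) -> pd.DataFrame:
    """One-way ANOVA of each variable between predicted-fake and predicted-real
    reviews, repeated at every classification threshold."""
    rows = []
    for t in THRESHOLDS:
        fake = reviews[reviews["p_fake"] >= t]
        real = reviews[reviews["p_fake"] < t]
        for var in VARIABLES:
            f_stat, p_val = f_oneway(fake[var].dropna(), real[var].dropna())
            rows.append({"threshold": t, "variable": var,
                         "mean_diff": fake[var].mean() - real[var].mean(),
                         "F": f_stat, "p": p_val})
    return pd.DataFrame(rows)
```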
### Writing Style: Explaining the Predictions
In our context, writing style refers to how a textual review is constructed by the writer, sentence by sentence and word by word. Although hard to distinguish at first glance, we believe that humans and AI have different writing styles, with the latter being more repetitive, more predictable, and less sophisticated than the former.
We considered three classes of metrics to evaluate the writing style of each non-elite review: _perplexity_-based, _readability_-based, and _sentiment_-based metrics.
\begin{table}
\begin{tabular}{p{42.7pt} p{113.8pt} p{113.8pt}} \hline \hline Sentiment & Prompt (Real) & Generated (Fake) \\ \hline Negative & “Unfortunately I found the soup to be very bland. I’m not sure if it was a one off but I was very disappointed.” & “I was really disappointed with Noona Noodles. The soup was very bland and I wasn’t sure if it was just a one-off, but I didn’t enjoy it at all. I wouldn’t recommend it.” \\ Positive & & “Overall, I had a good experience at Spoonfed NYC. The shrimp and grits were delicious, as was the okra and molasses corn bread. The only downside was the long wait for the food. If they could address the hunger issue, I would have given this restaurant a 4-star rating.” \\ \hline \hline \end{tabular}
\end{table}
Table 1. Example prompts (real elite reviews) and the corresponding GPT-3-generated fake reviews for different sentiments.
_Perplexity_-based metrics include _Perplexity_ (_PPL_) and _Textual Coherence_ (_TC_).
\[PPL(X)=\exp\left[-\frac{1}{t}\sum_{i=1}^{t}\log p(x_{i}|x_{<i})\right] \tag{1}\]
As in Equation 1, _PPL_ is defined as the exponentiated average negative log-likelihood of a sequence of words \(w_{i},w_{i+1}\dots w_{i+t}\). In simple terms, it measures the conditional probability that each word follows its preceding context. As of 2022, _PPL_ is one of the most widely adopted metrics to evaluate the accuracy of language models. Generally, a low _PPL_ score implies better grammatical correctness and cohesion.
Then, by breaking a review into a sequence of sentences, we introduce the concept of _Textual Coherence_ (_TC_), defined as the presence of semantic relations among sentences. In simple words, given a corpus containing a set of sentences that, when viewed independently, convey a valid meaning, if no meaning is conveyed when reading them sequentially, then the corpus is not coherent. To measure _TC_, we deployed the _Zero-Shot Shuffle Test_ (Shi et al., 2017; Wang et al., 2018). Namely, we split each review into single sentences and generated all the possible sentence permutations. We scored each permutation with the _PPL_ as in Equation 1 and subtracted the original perplexity score, obtaining a per-review set of perplexity changes, which we averaged to compute _TC_. It is important to highlight that some reviews contained a large number of sentences, leading to high computational costs in generating permutations. Mathematically, given a set of \(n\) sentences (population) and a subset of \(r\) sentences to be chosen from \(n\), it is possible to generate all the permutations without repetition, \(P(n,r)=\frac{n!}{(n-r)!}\). In our problem, \(n=r\), meaning that \(P(n,r)=n!\), which incurs an expensive computational cost \(O(n!)\). To mitigate that, we sampled \(s\) sentences from each review such that \(s=\min(n,5)\). We chose 5 as the maximum threshold because this value represents the median number of sentences per review, and because \(5!\) per-review permutations are still computationally feasible to process. To calculate both _PPL_ and _TC_, we adopted a general-purpose pre-trained 125-million-parameter GPT-Neo.
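The two perplexity-based scores can be computed along the following lines; this is a sketch assuming Hugging Face `transformers` and a simple period-based sentence split, rather than the exact tokenization used for the paper.

```python
import math
from itertools import permutations

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
lm = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """Exponential of the average negative log-likelihood of the tokens (Eq. 1)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss  # mean cross-entropy over the sequence
    return math.exp(loss.item())

def textual_coherence(review: str, max_sentences: int = 5) -> float:
    """Zero-shot shuffle test: average perplexity change over sentence permutations."""
    # First s = min(n, 5) sentences kept for simplicity in this sketch.
    sentences = [s.strip() for s in review.split(".") if s.strip()][:max_sentences]
    if len(sentences) < 2:
        return 0.0
    base = perplexity(". ".join(sentences) + ".")
    deltas = [perplexity(". ".join(p) + ".") - base for p in permutations(sentences)]
    return sum(deltas) / len(deltas)
```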
As for _readability_-based metrics, we considered the following: the _Automated Readability Index_ (_ARI_) (Krishnaman et al., 2017), the _Number of Difficult Words_ (_#DW_), and the _Readability Time_ (_RTime_). _ARI_ is one of the most widely adopted indices to evaluate the readability of a given text. Also, it has already been adopted in other studies evaluating the readability of online reviews (e.g. (Zhou et al., 2018; Krizhevsky et al., 2017)).
\[ARI=4.71\,\frac{\#Chars}{\#Words}+0.5\,\frac{\#Words}{\#Sentences}-21.43 \tag{2}\]
As in Equation 2, _ARI_ decomposes the text into basic structural elements such as the number of characters (_#Chars_), the number of words (_#Words_), and the number of sentences (_#Sentences_). Unlike other readability indices, _ARI_ relies on the number of characters per word rather than the number of syllables per word, which makes it more accurate for a computer to calculate. Also, the interpretation of _ARI_ is straightforward, as its output approximates the US-grade education level needed to understand the text. For example, an _ARI_ of 9.2 indicates that a 9th-grade student can understand the text. Simply put, the higher the _ARI_ score, the higher the difficulty of text comprehension for an average reader. Next, _#DW_ is the count of difficult words present in a text. By looking at the _Dale-Chall Word List_ (Dalek and Schuster, 2015), which contains approximately 3,000 familiar words known by an average 5th-grade student, a word is labeled as difficult if it is not present in the list. Then, _RTime_ was computed by following (Krishnaman et al., 2017), who found that each text character needs an estimated average of 14.69 milliseconds to be digested by the reader. Finally, the only _sentiment_-based metric is the SiEBERT _Sentiment_ score (Krizhevsky et al., 2017). SiEBERT is based on a RoBERTa architecture and fine-tuned on 15 different datasets. Its output ranges from -1 (negative) to +1 (positive).
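These readability scores follow directly from Equation 2 and the timing constant above; a minimal self-contained sketch is shown below, where the familiar-word set stands in for the Dale-Chall list.

```python
import re

def ari(text: str) -> float:
    """Automated Readability Index, as in Equation 2."""
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words, n_sents = max(len(words), 1), max(len(sentences), 1)
    n_chars = sum(len(w) for w in words)
    return 4.71 * n_chars / n_words + 0.5 * n_words / n_sents - 21.43

def difficult_words(text: str, familiar_words: set) -> int:
    """#DW: words not found in a familiar-word list (e.g. the Dale-Chall list)."""
    return sum(1 for w in re.findall(r"[A-Za-z']+", text.lower())
               if w not in familiar_words)

def reading_time_seconds(text: str, ms_per_char: float = 14.69) -> float:
    """RTime: estimated reading time at 14.69 ms per character."""
    return len(text) * ms_per_char / 1000.0
```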
To sum up, we scored each review with the above-mentioned _perplexity_-based, _readability_-based, and _sentiment_-based metrics. Afterward, for each metric, we performed ANOVA to inspect differences across the predicted fake versus real reviews with the same methodology mentioned in Section 3.5.
## 4. Results
### Human Evaluations versus Model Evaluations
Surveyed people from the general public only attained an accuracy score of 57.13% (std 13.57%), meaning that humans are only 7.13% better than random guessing (=50%). In addition, we recorded an abstention rate (i.e. selecting the _“Cannot Decide. I’m unsure”_ option) of 11.15% (std 12.66%). In Table 3, we report the Tukey-HSD results across the different categories. Here, we did not discover significant differences across categories, except for the (1) _Different-Long_ versus _Same-Long_ (10.63, _p_<.05) and (2) _Different-Long_ versus _Same-Short_ (11.17, _p_<.01) pairs. Apparently, for long reviews, humans are more accurate in distinguishing fake content when the two reviews in a pair are associated with the same restaurant. However, with these results, we concluded that humans are not generally capable of distinguishing real from fake content. Conversely, machine learning algorithms attained significantly better performance than human evaluators. In Table 4 we provide the classification report of the classifiers.
We observed that the current OpenAI fake-text-detector benchmark outperformed human evaluators' accuracy by 19.65%, meaning that machines are more suitable for performing the task. This claim
\begin{table}
\begin{tabular}{c c c} \hline \hline Category 1 & Category 2 & MeanDiff \\ \hline Different-Long (50.76) & Different-Short (55.11) & 4.35 \\ Different-Long (50.76) & Same-Long (61.39) & 10.63\({}^{*}\) \\ Different-Long (50.76) & Same-Short (61.93) & 11.17\({}^{**}\) \\ Different-Short (55.11) & Same-Long (61.39) & 6.28 \\ Different-Short (55.11) & Same-Short (61.93) & 6.82 \\ Same-Long (61.39) & Same-Short (61.93) & 0.54 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Tukey-HSD test results across categories. In brackets, the per-category accuracy averages (in percentage), unweighted by the abstention rate. “Different” stands for different restaurants, and “Same” vice versa. “Long” stands for a long review, and “Short” vice versa. *_p_<.05, **_p_<.01, ***_p_<.001.
was strengthened by looking at the performance of standard machine learning algorithms. In particular, LR ranked as the top performer among those, with an accuracy score and F1-score of 85.07% and 84.61%, respectively. As for deep learning models, GPT-Neo models ranked as top performers. Specifically, the GPT-Neo variant that maximizes accuracy by using Youden's J statistic to select the optimal classification threshold in the validation set (GPT-Neo(g)) achieves the best performance. Convergence occurred at the 2nd epoch, with optimal \(J^{*}\)=.5708. Overall, GPT-Neo(g) significantly outperforms human evaluators and OpenAI's benchmark by 38.38% and 18.73%, respectively. With this finding, we applied the optimized GPT-Neo(g) model for inference on the unverified non-elite reviews.
### ANOVA Results
Unless otherwise specified, ANOVA results are discussed at the .05 significance level and at the optimized classification threshold \(J^{*}\) found in Section 4.1. In Table 5, we provide a per-variable summary.
Out of a total of 131,266 non-elite reviews posted from 2021 onwards, 8.48% were predicted as fraudulent. As demonstrated in Figure 2, this percentage decreases as the threshold \(t\) is raised. For instance, at a threshold of \(t\)=.99, only .10% of reviews were detected as AI-generated.
#### 4.2.1. **Review-based and User-based**
For the review _Rating_ and the users' _#Reviews_, _#Friends_, and _#Photos_, all differences across predicted fake AI-generated and predicted real reviews were statistically significant. Here, reviews classified as fake were given a higher average star _Rating_ (+.43, _p_<.001). As for user-based variables, reviews classified as fake were posted by users with a lower average number of _#Friends_ (-5.59, _p_<.01), a lower average number of previously posted _#Reviews_ (-11.37, _p_<.001), and a lower average number of previously posted _#Photos_ (-34.32, _p_<.001). In Figure 3, we show a sensitivity analysis considering other thresholds \(t\) of classification. Remarkably, when the threshold \(t\) is set to .99, both _Rating_ and _#Reviews_ exhibited greater statistically significant differences, with disparities of +.66 (_p_ <.001) and -24.06 (_p_ <.05), respectively. In contrast, the variables _#Friends_ and _#Photos_ no longer demonstrated statistical significance at this threshold.
#### 4.2.2. **Restaurant-based**
We observed that predicted fake reviews were associated with restaurants with a higher _AvgRating_ (+.03, _p_<.001). However, given the minimal difference, we acknowledge the modest practical implications that such a result may bring about. Then, we documented statistical significance concerning _#RestReviews_. Here, predicted fake reviews were connected to restaurants that displayed a greater average number of reviews available (+44.71, _p_<.001). The opposite behavior was observed for the average _#Visits_, in which predicted fake reviews were linked to restaurants that received fewer customer visits from 2021 to mid-2022 (-138, _p_<.05). This result was strengthened by _NormVisits_ (-.01, _p_<.05). Finally, no significant differences were noticed for the _ChainStatus_ and the _PriceLevel_ (_p_>.05). In Figure 4, we show a sensitivity analysis considering other thresholds \(t\) of classification.
The average _Perplexity_ of predicted AI-generated fake reviews also displayed a statistically significant downtrend when increasing the classification threshold \(t\) (_p_\(<\).05), eventually scoring lower than that of predicted human-generated reviews from _t_\(\approx\).7. Instead, when considering _Textual Coherence_, defined as the change in perplexity between sentence-shuffled documents, no statistical significance was observed (_p_\(>\).05). Figure 5 shows the sensitivity analysis.
As for _readability_-based metrics, predicted AI-generated fake reviews were discovered to be more readable and less difficult to comprehend compared to the human-generated ones. Both the average _ARI_ and the average _#DW_ scored lower for AI-generated content, (-.23, _p_\(<\).001) and (-3.94, _p_\(<\).001), meaning that such content can be understood by a wider audience. Also, predicted AI-generated fake reviews were faster to read by 2.34 seconds (_p_\(<\).001). Lastly, for _Sentiment_, we observed that predicted AI-generated reviews have a more positive tone (+.24, _p_\(<\).001). Here, predicted human text averaged a medium positive polarity close to .5 at each \(t\), while predicted AI-generated text displayed a statistically significant positive trend, ending up at .87 at _t_=.99 (_p_\(<\).001).
## 5. Discussion
Aligned with findings from prior studies in fake reviews identification [(58; 65; 74)], we described how human evaluators systematically fail at detecting GPT-3 AI-generated content (AIGC) in the domain of restaurant reviews. On the contrary, machine learning models significantly achieve superior performance (+38.38% accuracy). We have also shown that optimizing Youden's J statistic in the validation set can further improve prediction accuracy (+.3%). Such disparity in performance between humans and machines could be possibly attributed to human cognitive limitations at detecting patterns from large-scale unstructured data [(35)]. Prior studies that leveraged machine learning techniques reported filtering out user-generated fake reviews at estimated rates around 15% [(39; 48; 75)]. In our sample of customer reviews that already passed the filtering system of social media (Yelp), we further documented that up to 8.48% of them were fake reviews generated by AI. We then explored how fake AI-generated reviews differentiate from human-generated reviews across several associated dimensions. In Table 6, we summarize extant literature findings on user-generated fake reviews compared to our findings on AI-generated fake reviews. Firstly, we observed that predicted AI-fake reviews score a higher average _Rating_ compared to the human-generated ones (+.43, _p_\(<\).001). This finding is congruent with conclusions from [(48)], who documented that fake reviews have a bimodal distribution with spikes at 1 and 5 stars, and from [(39)], who singled out 56% of positive reviews out of 15,000 hotel fake reviews on Yelp. Extant literature may provide
Figure 4. Sensitivity analysis of restaurant-based variables. In blue, highlighted the \(J^{*}\)=.5708 optimal threshold value. Error bars are the confidence intervals at the.05 significance level.
Figure 3. Sensitivity analysis of review-based and user-based variables. In blue, highlighted the \(J^{*}\)=.5708 optimal threshold value. Error bars are the confidence intervals at the.05 significance level.
a rationale for our result, as they suggest that economic agents seeking to bolster or restore their reputation may be more likely to engage in the self-promotion of falsely positive reviews (Kumar et al., 2017; Kumar et al., 2018; Kumar et al., 2018), because, for example, a one-star increase in the average Yelp rating has been documented to be associated with 5-9% revenue growth (Kumar et al., 2018). Secondly, consistent with prior observations from (Kumar et al., 2018; Kumar et al., 2018), users that post more predicted fake AI-generated reviews have less established Yelp reputations as compared to those that allegedly post real content: fewer previously posted _#Reviews_ (-11.37, _p_<.001), fewer _#Friends_ (-5.59, _p_<.01), and fewer previously posted _#Photos_ (-34.32, _p_<.001). Such diminished engagement levels exhibited by users that post more predicted fake AI-generated reviews might indicate an inclination toward spamming activities. It might be logical to presume that fraudsters engaging in spamming behavior would demonstrate lower levels of activity on a given platform. This might be due to their employment of rotating accounts to disseminate fabricated content, which might explain their lack of interest in cultivating reliable and trustworthy reputations within the Yelp community. At the extreme of this phenomenon, such a pattern is exacerbated by the presence of _singleton-review_ spammers, who generated a multitude of accounts that end up publishing a single review per account as a consequence of their fraudulent activity (Kumar et al., 2018).
Thirdly, regarding _restaurant_-based variables, our study found no significant impact of fake AI-generated reviews on the overall average restaurant rating, price level, and chain status. Specifically, the very marginal difference of +.03 (_p_<.001) in _AvgRating_ lacks practical relevance since humans are not affected by such a small difference. Moreover, no difference in AI-generated fake reviews from either _PriceLevel_ or _ChainStatus_ showed any statistical significance. However, it is worth pointing out that our findings are in contrast with prior research concerning chain restaurants, as (Kumar et al., 2018) found that they are less likely to display fake content to protect their brand reputation. Next, predicted AI-generated fake reviews tend to be associated with restaurants displaying more reviews on their Yelp web pages (_#RestReviews_, +44.71, _p_<.001). Yet, extant literature posited that restaurants have a stronger incentive to post fake reviews when few reviews are available (Kumar et al., 2018), because the marginal benefit of each additional review is higher since Yelp focuses on the average rating as an indicator of customer satisfaction to be reported in restaurant web pages.
Interestingly, by leveraging the power of the SafeGraph data, which reports the estimated per-restaurant customer visits (_#Visits_), we concluded that restaurants that displayed more AI-generated fake reviews totaled fewer customer visits (-138, _p_<.05). To the best
\begin{table}
\begin{tabular}{l c c c} \hline \hline
Variable & Study & UGC Prior Results & AIGC Results (ours) \\ \hline
Rating & (Kumar et al., 2018) & Bimodal distribution. Spikes at 1 star and 5 stars. & \\
Rating & (Kumar et al., 2018) & Fake more positive. & Fake more positive. \\
\#Friends & (Kumar et al., 2018) & Fake fewer friends. & Fake fewer friends. \\
\#Friends & (Kumar et al., 2018) & Fake fewer friends. & Fake fewer friends. \\
\#Photos & (Kumar et al., 2018) & Fake fewer photos. & Fake fewer photos. \\
\#Reviews & (Kumar et al., 2018) & Fake fewer reviews. & Fake fewer reviews. \\
ChainStatus & (Kumar et al., 2018) & Chain display less fake content. & No statistical difference. \\
\#RestReviews & (Kumar et al., 2018) & Stronger incentives to post fake reviews when few reviews are available. & Fake more reviews. \\
ARI & (Kumar et al., 2018) & Fake less complex. & Fake less complex. \\
ARI & (Kumar et al., 2018) & Fake more complex. & Fake less complex. \\
Sentiment & (Kumar et al., 2018) & Fake more positive. & Fake more positive. \\
Sentiment & (Kumar et al., 2018) & Fake more positive. & Fake more positive. \\
Sentiment & (Kumar et al., 2018) & Fake more polarized. & Fake more positive. \\ \hline \hline
\end{tabular}
\end{table}
Table 6. Summary of findings from extant literature on UGC fake reviews, extended by this research to the realm of AIGC fake reviews.
Figure 5. Sensitivity analysis of writing-style variables. The optimal threshold value \(J^{*}\)=.5708 is highlighted in blue. Error bars are confidence intervals at the .05 significance level.
of our knowledge, this study represents the first analysis leveraging real users' visits to describe how fake reviews correlate with customer visits in the hospitality sector. Also, this finding raises novel research questions that aim to investigate the influence of fabricated reviews on business performance. This research direction is motivated by the need to gain a deeper understanding of the potential effects of fake reviews on consumer behavior, which can inform business strategies and policies to promote fairness and transparency in online marketplaces.
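As a rough illustration of this kind of group comparison (the paper's exact test specification may differ; the dataframe and column names below are hypothetical placeholders), the visit gap can be checked with a simple two-group test on per-restaurant SafeGraph visit counts:

```python
import pandas as pd
from scipy import stats

# Hypothetical per-restaurant aggregates: number of predicted fake reviews and
# SafeGraph visit counts.
df = pd.DataFrame({
    "n_fake_reviews": [0, 2, 0, 1, 0, 3, 0, 1],
    "visits": [1200, 950, 1400, 1010, 1330, 880, 1280, 990],
})

has_fake = df["n_fake_reviews"] > 0
fake_group = df.loc[has_fake, "visits"]
real_group = df.loc[~has_fake, "visits"]

# Welch's t-test (unequal variances) on mean visits between the two groups.
t_stat, p_val = stats.ttest_ind(fake_group, real_group, equal_var=False)
print(f"mean difference = {fake_group.mean() - real_group.mean():.1f}, p = {p_val:.3f}")
```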
Finally, we inspected the writing style of the two predicted review categories. As for _perplexity_-based metrics, our results suggest that perplexity exhibits a downtrend pattern when applied to sequential classification thresholds \(t\). Specifically, our findings indicated that at thresholds below \(t\approx.7\) the mean _Perplexity_ of predicted fake AI-generated reviews was higher than that of human-generated text (\(p<\).01), whereas it was lower at thresholds above \(t\approx.7\) (\(p<\).05). Additionally, _Textual Coherence_ was not found to be statistically significant at any threshold. The pattern of _Perplexity_ may be explained by analyzing how large language models (LLMs) are developed and generate text. LLMs (including ChatGPT) are trained by predicting the next most likely token in a sequence of words, minimizing textual perplexity, and are more likely to output common words instead of rare words (Zhou et al., 2017). Thus, it is reasonable to assume that AI-generated texts have lower perplexity than human-generated ones, meaning that LLMs demonstrate reduced uncertainty in generating text. In other words, perplexity may reflect the likelihood of a text being machine-generated, with lower values indicating a higher probability of machine generation. In connection with this, our study reports a statistically significant downward trend in _Perplexity_ for AI-generated text across all thresholds \(t\) (see Figure 5). Here, higher values of \(t\) can be interpreted as a higher level of confidence in classifying text as AI-generated. Therefore, we hypothesize that as our confidence in the classification increases, the likelihood of misclassifying an AI-generated text decreases, thus leading to lower perplexity. Our findings are consistent with this hypothesis. In practical terms, AI-generated reviews exhibit greater grammatical correctness and predictability, yet they may lack word originality and creativity, and may be repetitive.
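For reference, a per-review perplexity score of the kind discussed above can be sketched with an off-the-shelf causal language model (GPT-2 is used here purely as a stand-in; the paper does not specify this exact scorer):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        # The LM loss is the mean negative log-likelihood of each next token;
        # exponentiating it yields the perplexity of the review.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

print(perplexity("The pasta was delicious and the staff were very friendly."))
```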
Next, as for _readability_-based metrics, we showed that AI-generated reviews are easier to comprehend, necessitating a lower educational grade to be understood, as measured by _ARI_ (\(-\).23, _p_\(<\).001) and _DW_ (\(-\).394, _p_\(<\).001). These findings are congruent with (Kang et al., 2017), but different from (Kang et al., 2017). Specifically, predicted human-generated and AI-generated reviews score _ARI_ values of 7.05 and 6.82, respectively, meaning that they can be understood by average 7th- and 6th-grade US students, respectively. According to (Bang et al., 2017), written content that is easily comprehensible can reduce the cognitive load on readers' information-processing capabilities. As a result, such content may attract a larger readership and positively affect the perceived helpfulness of reviews. Based on this premise and on our results, we hypothesize that AI-generated fake reviews may capture readers' attention more effectively than authentic human-written reviews. Consequently, we warn that the prevalence of fake reviews may bias consumers' perceptions and intentions to visit a restaurant that has published more fake content relative to one that has not.
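For completeness, the Automated Readability Index behind the grade-level comparison above follows the standard formula ARI = 4.71·(characters/words) + 0.5·(words/sentences) − 21.43; a minimal sketch (the paper's exact text preprocessing may differ) is:

```python
import re

def ari(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = text.split()
    chars = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    n_words = max(1, len(words))
    return 4.71 * (chars / n_words) + 0.5 * (n_words / sentences) - 21.43

# Higher ARI means a higher US grade level is needed to understand the text.
print(round(ari("The soup was cold. The waiter never came back to our table."), 2))
```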
As for sentiment, aligned with (Bang et al., 2017; Wang et al., 2017; Wang et al., 2017), AI-generated fake reviews presented a more positive tone (_Sentiment_, \(+\).24, _p_\(<\).001). Here, review spammers may be deliberately employed to alter customers' perceptions by using exaggerated language that translates into more polarized sentiment (Zhou et al., 2017), presumably because spammers are not able to express true sentiment when writing (Wang et al., 2017). Based on our research findings, we conjecture that a more positive tone may be a signal of self-promoting activities. This is congruent with our previous result that AI-generated fake reviews tend to have higher average ratings (\(+\).43, _p_\(<\).001), as both variables are closely intertwined in the effect they measure.
This study is not without limitations. Firstly, we restricted the analysis to the city of New York to avoid sampling biases; however, we cannot conclude whether the results generalize to other areas. Secondly, we relied on the SafeGraph dataset to collect restaurant reviews. Yet, SafeGraph does not disclose the exact data collection methodology for selecting example restaurants, leaving us with uncertainty about the representativeness of the dataset. Thirdly, we only relied on 2021 and 2022 inferential data, because GPT-3 models were released in a proprietary beta in mid-2020 (Bang et al., 2021). However, 2021 and early 2022 were still affected by the COVID-19 pandemic, thus weakening our results compared to ordinary years, as local lockdowns may have been imposed by New York authorities, potentially changing customers' behavior. It is therefore reasonable to expect that the results about the number of customer visits may change during ordinary times. Fourthly, we highlight that when filtering out reviews at \(t\)=.99, only about 130 AI-generated fake reviews were singled out. This data limitation reduced the statistical power of the ANOVA. Lastly and importantly, we do not intend to provide any causal interpretation of the results, which limits us in drawing _cause-effect_ conclusions. This research should be referenced to describe patterns across the variables considered and their relationship with predicted fake AI-generated and genuine reviews.
## 6. Conclusion
Disseminating fake reviews with LLMs such as ChatGPT has become easier and cheaper than ever. Such accessibility may amplify the proliferation of "AI crowdturfing" campaigns aimed at distorting user experiences on social media. This study shows that AI-generated fake reviews are becoming more sophisticated and can easily deceive readers. Therefore, it is imperative for policymakers to develop regulations that require online review platforms to implement tools and processes to detect and remove fake reviews. This research also underscores the need for online review platforms to invest in better detection tools for AI-generated text. As the technology used to generate fake reviews becomes more advanced, review platforms must keep pace with technological advancements to ensure they can detect and remove such content effectively. To combat this issue, we implemented AI-based detection and description of fake AI-generated content across review-based, user-based, restaurant-based, and writing-style variables, showing that fake reviews tend to have a higher rating, that users posting more AI-generated content have less established Yelp reputations, and that such AI-generated content is easier to understand than human-generated content. Notably, without providing causal
claims, we also described how restaurants displaying more fake content are subject to fewer customer visits. To date, no study has investigated how fake reviews correlate with customer visits. Thus, we intend to open novel research questions in this direction.
###### Acknowledgements.
This work was funded by Fundacao para a Ciencia e a Tecnologia (UID/ECO/00124/2019, UIDB/00124/2020 and Social Sciences DataLab, PINIFRA/22209/2016), POR Lisboa and POR Norte (Social Sciences DataLab, PINIFRA/22209/2016), and by Oracle via the Oracle for Research Grant providing the hardware infrastructure.
|
2310.08613 | Individual Variation Affects Outbreak Magnitude and Predictability in an
Extended Multi-Pathogen SIR Model of Pigeons Visiting Dairy Farms | Zoonotic disease transmission between animals and humans is a growing risk
and the agricultural context acts as a likely point of transition, with
individual heterogeneity acting as an important contributor. Thus,
understanding the dynamics of disease spread in the wildlife-livestock
interface is crucial for mitigating these risks of transmission. Specifically,
the interactions between pigeons and in-door cows at dairy farms can lead to
significant disease transmission and economic losses for farmers; putting
livestock, adjacent human populations, and other wildlife species at risk. In
this paper, we propose a novel spatio-temporal multi-pathogen model with
continuous spatial movement. The model expands on the
Susceptible-Exposed-Infected-Recovered-Dead (SEIRD) framework and accounts for
both within-species and cross-species transmission of pathogens, as well as the
exploration-exploitation movement dynamics of pigeons, which play a critical
role in the spread of infection agents. In addition to model formulation, we
also implement it as an agent-based simulation approach and use empirical field
data to investigate different biologically realistic scenarios, evaluating the
effect of various parameters on the epidemic spread. Namely, in agreement with
theoretical expectations, the model predicts that the heterogeneity of the
pigeons' movement dynamics can drastically affect both the magnitude and
stability of outbreaks. In addition, joint infection by multiple pathogens can
have an interactive effect unobservable in single-pathogen SIR models,
reflecting a non-intuitive inhibition of the outbreak. Our findings highlight
the impact of heterogeneity in host behavior on their pathogens and allow
realistic predictions of outbreak dynamics in the multi-pathogen
wildlife-livestock interface with consequences to zoonotic diseases in various
systems. | Teddy Lazebnik, Orr Spiegel | 2023-10-12T06:26:20Z | http://arxiv.org/abs/2310.08613v1 | Individual Variation Affects Outbreak Magnitude and Predictability in an Extended Multi-Pathogen SIR Model of Pigeons Using Dairy Farms
###### Abstract
Zoonotic disease transmission between animals and humans is a growing risk and the agricultural context acts as a likely point of transition, with individual heterogeneity acting as an important contributor. Livestock often occurs at high local densities, facilitating spread within sites (e.g. among cows in a dairy farm), while wildlife is often more mobile, potentially connecting spatially isolated sites. Thus, understanding the dynamics of disease spread in the wildlife-livestock interface is crucial for mitigating these risks of transmission. Specifically, the interactions between pigeons and in-door cows at dairy farms can lead to significant disease transmission and economic losses for farmers; putting livestock, adjacent human populations, and other wildlife species at risk. In this paper, we propose a novel spatio-temporal multi-pathogen model with continuous spatial movement. The model expands on the Susceptible-Exposed-Infected-Recovered-Dead (SEIRD) framework and accounts for both within-species and cross-species transmission of pathogens, as well as the exploration-exploitation movement dynamics of pigeons, which play a critical role in the spread of infection agents. In addition to model formulation, we also implement it as an agent-based simulation approach and use empirical field data to investigate different biologically realistic scenarios, evaluating the effect of various parameters on the epidemic spread. Namely, in agreement with theoretical expectations, the model predicts that the heterogeneity of the pigeons' movement dynamics can drastically affect both the magnitude and stability of outbreaks. In addition, joint infection by multiple pathogens can have an interactive effect unobservable in single-pathogen SIR models, reflecting a non-intuitive inhibition of the outbreak. Our findings highlight the impact of heterogeneity in host behavior on their pathogens and allow realistic predictions of outbreak dynamics in the multi-pathogen wildlife-livestock interface with consequences to zoonotic diseases in various systems.
**Keywords:** Extended SIR model; multi-species epidemic; agent-based simulation; movement ecology; among-individual heterogeneity; movement syndromes; ecological modeling.
## 1 Introduction
Disease ecology and animal movement ecology are inherently linked as animal movement can both determine pathogens' spread and be influenced by their load [1, 2, 3]. These interfaces have direct and indirect links to the pandemic spread across species in general, and between animals and humans, in particular [4]. As humanity spreads, more places are becoming urban which inherently changes the environment and the biodiversity of the area [5]. Specifically, livestock and synanthropic wildlife that live next to humans (e.g., cows and pigeons, respectively) have a complex relationship among themselves and with humans. For example, pigeons commonly occupy urban and agricultural sites worldwide and are a known vector of various human, poultry, and livestock-relevant pathogens [6, 7]. Nonetheless, our understanding of multi-species epidemic spread dynamics, in general, and in the context of mixed urban and agricultural sites with individuals moving between them is limited. Specifically, current models are not designed to capture the unique multi-pathogen infection which can take place in parallel
on the individual level and is likely the common case in most systems. Moreover, wild animals usually follow an exploration-exploitation movement pattern, while captive animals are constrained to a small spatial area (e.g., a farm). The influence of these asymmetric movement dynamics on epidemic spread has not been fully explored.
The investigation of interacting species has gained significant popularity, leading to the continuous unveiling of the biological dynamics that surround us, while also serving as a fundamental basis for various technological advancements [8, 9, 10, 11]. Particularly, there has been a notable focus on studying epidemiology to comprehend the transmission of infectious diseases. The ultimate aim is to devise effective pandemic intervention strategies and, ideally, eliminate these diseases altogether or, more proximately, prevent them on a local scale [12, 13, 14, 15, 16, 17]. In this regard, mathematical models and computer simulations have emerged as potent tools for comprehending the biological and ecological dynamics that underlie the observed patterns associated with the spread of pandemics [18, 19, 20, 21, 22, 23]. Multiple models have been proposed for the spread of epidemics in populations [24, 25, 26, 27, 28, 29]. In particular, the original Susceptible-Infected-Recovered (SIR) models have been improved and extended to models that offer more realistic spatial, social, biological, and other dynamics compared to the original SIR. These models are now widely used due to their balance between representation simplicity and prediction accuracy [30, 31, 32]. Initially, extended SIR models with a single species and a single pathogen were proposed and investigated [33, 34, 35]. For instance, Sah and colleagues (2017) [36] used SIR models to show the effect of social structure and network modularity on outbreak dynamics, demonstrating with empirical social network data from 43 species how group living can actually slow down simulated outbreaks under some conditions. [31] proposed a highly detailed, stochastic, and spatio-temporal extended SIR model for disease progression through animal-to-animal contact. The authors implemented their model as a computer simulation, allowing users to explore a wide range of possible pandemic intervention policies.
Later studies widened these models by taking into consideration multi-strain and even multi-pathogen pandemics [37, 38, 39]. For instance, [40] proposed a multi-strain model that links immunological and epidemiological dynamics across scales: within the host scale, the strains eliminate each other, with the strain having the largest immunological reproduction number persisting, while on the population scale the authors adopted an extended SIS (Susceptible-Infected-Susceptible) model. [41] focused on a two-strain SEIR (E - exposed) model with dynamic infection rates. The authors show that this model is able to capture the dynamics of an emerging second strain during a pandemic. They demonstrated this capability using data from the 2020 COVID-19 pandemic. To this end, because co-occurring pathogens can influence each other through modification of host behavior, physiology, and survival, researchers are interested in extending these models to multiple species and the interactions between them [42, 43, 44]. [45] proposed a multi-strain multi-species extended SIR model where the authors combined the prey-predator model [46, 47] with an extended version of the multi-strain SIR model proposed by [48]. The model allows them to evaluate the extinction of species due to natural pandemics, using only macro data (i.e., averages over the population, and not animal-level data) about the animals' prey-predator dynamics and cross-infection for the Avian influenza pathogen and its strains. Adding realism to the model requires focusing on a multi-location scenario, in addition to the multi-pathogen consideration.
Pigeons visiting dairy farms [49, 50] are a typical and common example of such multi-site, multi-pathogen, multi-host scenarios, with mixed urban and agricultural sites. One cause for interest is their interactions with livestock in general, and with cows in cowsheds in particular [51, 52]. Importantly, like many bird species, pigeons are highly social animals and tend to aggregate in large flocks, which previous studies pointed out as a key factor in the spillover and worldwide spread of Avian Influenza [53]. Moreover, dairy and poultry farming are particularly notorious for attracting a variety of wildlife from peripheral habitats, due to their resource availability including food, shelter, and possible security from natural predators who avoid human habitats [54]. In this context, pigeons are able to cover large areas, flying several kilometers on a daily basis [55], effectively operating as an infection vector across otherwise spatially separated sites. Generally speaking, pigeons travel between an urban location where they nest and roost and resource-rich foraging sites (i.e., cowsheds) during the day [56, 57]. Pigeons' high density and proximity to cows during foraging highlight their potential to infect cows with a wide range of pathogens. These, in turn, may cause negative outcomes for humans, including economic losses, food shortages, and even public health threats [58, 59, 60]. Fig. 1 shows pigeons in a cowshed located in Israel. One can see how their spatial proximity can result in infection between the pigeons and between the pigeons and the cows.
Therefore, in this work, we propose a novel spatio-temporal multi-pathogen epidemic model for studying
the spread of infectious diseases between pigeons and cows in a cowshed setting. To this end, we extended the multi-pathogen multi-species model proposed by [45] for our specific case and developed a detailed agent-based simulation. We used real-world data to obtain several of the model's parameter values. We hypothesize that variation in pigeons' movement (with some being more mobile or more exploratory than others) should affect disease dynamics [61, 62, 63], and we explore this effect with our model and simulation, showing a considerable effect on both the magnitude and the variation of outbreak indices. Furthermore, we find that if pathogen infection reduces pigeons' movement (e.g., through sickness, [3]), then our findings predict a positive, approximately linear, relationship between pigeon mobility and the average reproduction number (ARN) of the disease.
The remaining paper is organized as follows. Section 2 describes the proposed model's mathematical formalization following a spatio-temporal extended SIR-based modeling approach, as well as the implementation of the proposed model as a computer simulator based on the agent-based simulation approach. Section 3 provides a comprehensive evaluation of the proposed model. Finally, Section 4 provides a discussion on the model's outcomes followed by suggestions for future work.
## 2 Model Definition
In order to capture the spatio-temporal epidemiological dynamics, we use a system of partial differential equations (PDEs). Essentially, we combine the parallel multi-pathogen, cross-species infection epidemic dynamics based on the SEIRD model [64] with the exploration-exploitation-based movement dynamics of the pigeons [65, 66, 67].
### Epidemiological dynamics
Formally, let us define a model \(M\) that contains finite populations of pigeons (\(Pg\)) and cows (\(Cw\)) and their change over a finite time interval \([t_{0},t_{f}]\), with \(t_{f}>t_{0}\), and a finite space (see below). In addition, let us assume a set of disease-generating pathogens \(\Delta\) of natural size \(k\in\mathbb{N}\). At each point in time, each individual animal in the model is either susceptible (\(S\)), exposed (\(E\)), infected (\(I\)), recovered (\(R\)), or dead (\(D\)) with respect to each of these pathogens. Thus, the epidemiological state of an animal in the model can be represented by a vector \(\eta\in\{s,e,i,r,d\}^{k}\). For instance, an individual with the state \([\{1\},\{2\},\emptyset,\emptyset,\emptyset]\) is susceptible to the first pathogen and exposed to the second, but not infected with, recovered from, or killed by either of them. Therefore, each animal belongs to a super-position epidemiological state where it is susceptible to, exposed to, infected by, and recovered from sets of pathogens, \(s,e,i,r\subset\Delta\), such that
Figure 1: Two images of pigeons in a cowshed located in Israel, taken on the site of the empirical data collection of [56]. The images show the high proximity of pigeons to cows and their food (facilitating transmission), and the high density and mobility of pigeons (underscoring their between-site transmission potential). Photos by Tovale Solomon.
\(s\cap e\cap i\cap r=\emptyset\wedge s\cup e\cup i\cup r=\Delta\)[45]. In other words, the sets are pairwise disjoint and together must include all pathogens in the model. Note that we ignore the \(d\) state since, if a single pathogen caused the death of the individual, the other states \(s,e,i,\) and \(r\) are meaningless.
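As an illustration of this state representation (a minimal sketch in our own notation, not the authors' implementation), each animal can carry four pathogen sets that must be pairwise disjoint and jointly cover \(\Delta\):

```python
from dataclasses import dataclass, field

PATHOGENS = {1, 2}  # Delta for the k = 2 example of Fig. 2

@dataclass
class EpiState:
    s: set = field(default_factory=set)  # pathogens the animal is susceptible to
    e: set = field(default_factory=set)  # pathogens the animal is exposed to
    i: set = field(default_factory=set)  # pathogens the animal is infectious with
    r: set = field(default_factory=set)  # pathogens the animal has recovered from

    def is_valid(self, delta=PATHOGENS) -> bool:
        groups = [self.s, self.e, self.i, self.r]
        union = set().union(*groups)
        disjoint = sum(len(g) for g in groups) == len(union)  # pairwise disjoint
        return disjoint and union == delta                    # covers all of Delta

# The example from the text: susceptible to pathogen 1, exposed to pathogen 2.
state = EpiState(s={1}, e={2})
assert state.is_valid()
```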
As such, for each state, there are 12 processes that influence the number of animals in each epidemiological state. First, animals are born at some rate \(\alpha\); Second, animals are infected by a pathogen \(j\in\Delta\) by animals from the same species, becoming exposed to it with infection rate \(\beta(x,y)\); Third, animals are infected by a pathogen \(j\) by animals from the other species, becoming exposed to it with infection rate \(\zeta(x,y)\); Fourth, animals that are exposed to a pathogen \(j\) through either of the mechanisms become infectious at a rate \(\phi\); Fifth, animals infected with pathogen \(j\) recover at a rate \(\gamma\); Sixth, animals from the group \((s,e,i,r)\) are infected by a pathogen \(j\in s\) by animals from the same species, becoming exposed to it with an infection rate \(\beta(x,y)\); Seventh, animals from the group \((s,e,i,r)\) are infected by a pathogen \(j\in s\) by animals from the other species, becoming exposed to it with infection rate \(\zeta(x,y)\); Eighth, animals from the group \((s,e,i,r)\) which are exposed to pathogen \(j\in e\) become infectious at a rate \(\phi\); Ninth, animals from the group \((s,e,i,r)\) which are infected by pathogen \(j\in i\) recover from it at a rate \(\gamma\); Tenth, for each \(j\in r\), animals from the group \((s,e,i,r)\) lose their immunity and become susceptible again to the pathogen \(j\) at a rate \(\psi\); Eleventh, animals from the group \((s,e,i,r)\) die due to their diseases at a rate \(\mu\); Finally, animals die naturally at a rate \(\upsilon\), independent of the diseases they carry (e.g., predation, trauma, etc.). Importantly, each parameter is also defined by a superposition of the epidemiological subsets defined by \((s,e,i,r)\). These dynamics take the partial differential equation representation as follows:
\[\forall s,e,i,r:\frac{\partial P_{s,e,i,r}(t,x,y)}{\partial t}=\sum_{a,b,c,d\,|\,a\cap b\cap c\cap d=\emptyset\wedge a\cup b\cup c\cup d=\Delta}\alpha_{a,b,c,d}P_{a,b,c,d}\] \[+\sum_{j\in e}\beta_{s\cup j,e/j,i,r}^{s,e/j,i\cup j,r}(x,y)P_{s,e,i,r}P_{s,e,i,r}+\sum_{j\in e}\beta_{s\cup j,e/j,i,r}^{s,e/j,i\cup j,r}(x,y)P_{s,e,i,r}C_{s,e,i,r}+\sum_{j\in i}\phi_{s,e\cup j,i/j,r}P_{s,e\cup j,i/j,r}\] \[+\sum_{j\in r}\gamma_{s,e,i\cup j,r/j}P_{s,e,i\cup j,r/j}+\sum_{j\in s}\psi_{s/j,e,i,r\cup j}P_{s/j,e,i,r\cup j}-\sum_{j\in s}\beta_{s,e,i,r}^{s,e/j,i\cup j,r}(x,y)P_{s,e,i,r}P_{s,e/j,i\cup j,r}\] \[-\sum_{j\in s}\beta_{s,e,i,r}^{s,e/j,i\cup j,r}(x,y)P_{s,e,i,r}C_{s,e/j,i\cup j,r}-\sum_{j\in e}\phi_{s,e,i,r}P_{s,e,i,r}-\sum_{j\in r}\psi_{s,e,i,r}P_{s,e,i,r}\] \[-\sum_{j\in i}\gamma_{s,e,i,r}P_{s,e,i,r}-\sum_{j\in i}\mu_{s,e,i,r}P_{s,e,i,r}-\upsilon_{s,e,i,r}P_{s,e,i,r} \tag{1}\]
Similarly, the cows' epidemiological dynamics are identical to the pigeons' while having different parameter values. A schematic view of the epidemiological states of the model for the specific case of two pathogens (i.e., \(k=2\)) is shown in Fig. 2, where each box indicates the epidemiological state of the individual, represented by the pathogens belonging to each of the \(s,e,i,r\) sets. For example, a healthy individual starts in the left box and moves to the right as it is exposed to a pathogen (following an orange arrow), becomes infectious (following a black arrow), recovers (following a blue arrow), and finally returns to being susceptible due to immunity decay (following a green arrow). During the infectious state, there is a chance the individual will die due to the pathogen, as indicated by the red arrow, as well as background natural mortality throughout its life (not shown).
### Movement dynamics
The movement dynamics, unlike the epidemiological dynamics, are unique for each species, reflecting their different life histories and wild/captive contexts. First, as cows are living in a relatively small cowshed and interact intensively with each other, it can be approximated that they are well-mixing within each farm [68]. This decision was motivated by the infection range compared to the animals' proximity as indicated in Fig. 1a. Namely, the probability that a cow meets any other cow in the population for any point in time is identical and proportional to the cow population size. As such, we assume the cow population does not have any movement dynamic which is significant for the proposed model. On the other hand, feral pigeons worldwide (and our system included) are known to sleep in buildings while traveling to forage in food sites during the day [69]. However, pigeons explore their surroundings and alternate among different foraging sites, reflecting a tradeoff between the exploitation of known resources and the exploration of new ones. Individuals differ in their tendency to explore [70], with some
visiting sites of interest (cowsheds) more frequently than others [71]. Indeed, previous studies show that one can explain the pigeon flight patterns using the exploration-exploitation model [72; 73; 74]. Moreover, as pigeons are exposed and infected with pathogens, their flight abilities might be reduced [75]. Hence, the pigeons' movement dynamics can be represented as a weighted average of a random walk (representing the exploration dynamics), and a time-based directed walk (which represents the exploitation dynamics). Moreover, both behaviors are influenced by the combination of the pathogens each pigeon is exposed to and infected with. As such, the pigeons' movement dynamics take the form:
\[\forall e,i:\frac{\partial P_{s,e,i,r}(t,x,y)}{\partial t}=\omega_{1}^{e,i}\big(\frac{\partial^{2}P_{s,e,i,r}(t,x,y)}{\partial x^{2}}+\frac{\partial^{2}P_{s,e,i,r}(t,x,y)}{\partial y^{2}}\big)+\omega_{2}^{e,i}b_{s,e,i,r}(t), \tag{2}\]
where \(\omega_{1}^{e,i},\omega_{2}^{e,i}\in\mathbb{R}^{+}\) are the coefficients of movement such that \(\omega_{1}^{e,i}/\omega_{2}^{e,i}\) is the exploration-exploitation rate and \(\omega_{1}^{e,i}+\omega_{2}^{e,i}\) is the total amount of movement for a single step in time. In addition, \(b_{s,e,i,r}(t)\) is the average time-dependent directed walk vector of the pigeon population and satisfies \(\forall s,e,i,r,t:|b_{s,e,i,r}(t)|=1\).
Figure 2: A schematic view of the transitions between disease stages, shown for \(k=2\). The red arrows indicate that from this stage, the animal might die from the disease. In a similar manner, the orange, black, blue, and green arrows indicate exposure, infection, recovery, and immunity decay, respectively. Notably, several states are duplicated for ease of reading.
Fig. 3 presents a schematic view of the movement dynamics.
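A possible discrete-time reading of Eq. (2) for a single pigeon is sketched below (our own simplification: the diffusion term is replaced by an isotropic random step, and the coefficient values are arbitrary examples):

```python
import numpy as np

def move(pos, w1, w2, target, rng):
    """One movement step mixing exploration (w1) and exploitation (w2)."""
    explore = rng.normal(0.0, 1.0, size=2)        # isotropic random step
    direction = target - pos                      # toward the exploited site
    norm = np.linalg.norm(direction)
    exploit = direction / norm if norm > 0 else np.zeros(2)  # unit vector b(t)
    return pos + w1 * explore + w2 * exploit

rng = np.random.default_rng(0)
pos = np.array([0.0, 0.0])        # current pigeon position (km)
cowshed = np.array([2.5, 1.0])    # currently exploited site (km)
pos = move(pos, w1=0.01, w2=0.05, target=cowshed, rng=rng)
print(pos)
```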
### Agent-based Simulation
Since the proposed model (see Eqs. (1) and (2)) captures population dynamics, it in practice takes into consideration only the average behavior of the population [64, 76]. Another shortcoming of such an SEIRD model is that practical applications will be limited by the availability of the required parametrization data. Thus, inspired by [77], we implemented the proposed model using the agent-based simulation (ABS) approach. Generally speaking, ABS is able to provide another layer of realism by endowing each animal in the population with unique attributes, more closely capturing the realistic dynamics (in contrast to the system at equilibrium) observed in nature, where animals within a population differ [70, 78]. Moreover, ABS keeps the infection computation burden relatively small, as one can use a spatial approximation to determine interactions between individuals in the population(s), and it allows exploring the importance of specific parameter values and general scenarios for emerging population-level patterns.
Formally, let \(M:=(Pg,Cw)\) be a tuple of agent sets which move and interact in discrete finite time steps \(t\in[1,\ldots,T]\), where \(T<\infty\). In order to use the ABS approach, one has to define the agents in the dynamics as well as their three types of interactions: agent-agent, agent-environment, and spontaneous (i.e., depending only on the agent's state and time) [79]. To this end, for our model, each agent \(a\) is represented by a finite state machine [80] as follows: \(a:=(\xi,x,y,\eta,\{w_{1}\}^{e,i},\{w_{2}\}^{e,i},\beta_{r})\), where \(\xi\in\{p,c\}\) is the agent's species (i.e., pigeon or cow), \(x,y\) are the agent's spatial coordinates in a Cartesian coordinate system, \(\eta\) is the agent's epidemiological state, \(\{w_{j}\}^{e,i}\) (for \(j\in\{1,2\}\) and every combination of \(e,i\)) are the agent's personal spatial exploration and exploitation coefficients, and \(\beta_{r}\) is a vector representing the infection radius of each pathogen, effectively setting the relevant interaction distance among agents.
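A possible (hypothetical) encoding of the agent tuple is shown below; for brevity the exploration and exploitation coefficients are stored as single values for the agent's current \((e,i)\) combination rather than as full look-up tables, and the epidemiological state reuses the `EpiState` sketch above:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Agent:
    species: str               # xi: "pigeon" or "cow"
    x: float                   # Cartesian coordinates on the map (km)
    y: float
    eta: object                # per-pathogen epidemiological state (e.g., EpiState)
    w1: float                  # personal exploration coefficient for the current (e, i)
    w2: float                  # personal exploitation coefficient for the current (e, i)
    beta_r: Dict[int, float]   # infection radius per pathogen (km)

pigeon = Agent("pigeon", 0.3, 0.8, EpiState(s={1, 2}), w1=0.01, w2=0.05,
               beta_r={1: 0.002, 2: 0.002})
```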
At the first time step (\(t=1\)), the pigeon and cow populations (\(P,C\)) are generated based on some initial conditions and located on a continuous two-dimensional map with dimensions denoted by \(w,h\in\mathbb{R}^{+}\). Then, at each time step \(t\), each individual in the pigeon population follows Eq. (2). The decisions of all individuals are referred to as the pigeon population walk. Following standard convention, we assume that any individual may be located at any point on the map. Given the nature of our particular system, this general simplification is very suitable, as pigeons aggregate in high numbers at high proximity (centimeters) with a movement range of kilometers. For example, Fig. 1 shows this proximity. Between every two consecutive time steps, individuals infect each other based on their proximity, here simplified to a threshold function (note that we simplify these to a single value rather than a pathogen-specific effective distance). Namely, if an agent infected with pathogen \(j\) is located within a range of \(\beta_{r}^{j}\) of another agent susceptible to pathogen \(j\), the susceptible agent becomes exposed with a probability \(\beta\in[0,1]\) associated with the
Figure 3: A schematic view of a pigeon's movement in a 2-dimensional space with the exploitation vector (\(b(t)\)) at a specific time and over time (marked by the gray line), and the exploration vector (\(\frac{\partial^{2}P_{s,e,i,r}(t,x,y)}{\partial x^{2}}+\frac{\partial^{2}P_{s,e,i,r}(t,x,y)}{\partial y^{2}}\)), which represents the search for resources (such as food). For our model, all these alternative explored sites can be condensed into a single one as they do not directly affect transmission dynamics.
specific \(\eta=(s,e,i,r)\) states of the infectious and susceptible individuals if they are from the same species, and with probability \(\zeta\in[0,1]\) otherwise, thus applying the epidemiological dynamics represented in Eq. (1) in a spatially local manner. In addition, the spontaneous epidemiological processes of becoming infectious after exposure, recovery, death, and immunity decay are computed using a rate associated with the number of time steps rather than being computed using the population-level dynamics suggested by Eq. (1), following a common practice of ABS implementation [81]. For instance, an individual exposed to pathogen \(i\) would become infectious with this pathogen after \(\phi_{i}\) time steps. Lastly, we compute the metrics and store them. The simulation ends either after \(T\) time steps or once all individuals in the model are dead.
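Building on the `Agent` and `EpiState` sketches above, the proximity-based infection check between two consecutive time steps can be sketched as follows (a naive pairwise loop; the actual simulator can use a spatial approximation to avoid the quadratic cost):

```python
import math
import random

def infection_step(agents, beta, zeta, beta_r):
    """Expose susceptible agents that are within range of infectious ones."""
    for a in agents:
        for b in agents:
            if a is b:
                continue
            for j in list(a.eta.i):          # pathogens a is infectious with
                if j not in b.eta.s:         # b must be susceptible to j
                    continue
                if math.hypot(a.x - b.x, a.y - b.y) > beta_r[j]:
                    continue
                p = beta if a.species == b.species else zeta
                if random.random() < p:
                    b.eta.s.discard(j)
                    b.eta.e.add(j)           # b becomes exposed to pathogen j
```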
## 3 Experiments
In this section, we perform _in silico_ experiments based on the proposed model. First, we find realistic value ranges for the model's parameters in the literature to obtain biologically relevant instances of the proposed model. Afterwards, using these instances, we explore the central spatial and temporal dynamics occurring in such a system.
### Setup
While the high-resolution and extensive epidemiological data required to obtain a real-world instance of the proposed model are currently unavailable (to the best of our knowledge), partial data in the form of the pigeons' movement dynamics and some biological data about pathogens' spread are available in the literature. Specifically, prior empirical work has shown that pigeons in general, and in our system in particular, carry a diversity of pathogens simultaneously. Hence, we used the data collected by [56]. Namely, [56] captured \(n=328\) pigeons from three cowsheds located in Israel. The authors installed a GPS device on several of those and were able to obtain sufficient tracking data from \(n=33\) individuals, providing each pigeon's location with 3-meter accuracy every 10 minutes. The transmitters were active for \(214\pm 193\) days (range: 14 to 713 days per individual). In total, they collected \(8635\) tracking days regarding the location of the pigeons over time. In addition, all captured pigeons were sampled with oral and cloacal swabs for pathogen identification. A subset of 29 samples was also assessed for microbiome-wide DNA presence using next-generation sequencing [82]. Table 1 summarizes the proposed model's hyper-parameter space values for each configuration based on this study and values from the biological literature [83, 84, 85, 16, 86, 87, 88, 89]. In particular, in agreement with the above GPS sampling rate, we chose to simulate a 10-minute time step, balancing the computational burden and the model's accuracy. Namely, the movement dynamics as well as the infection can be approximated on a several-minute scale [56, 23], while the other dynamics are much slower. Moreover, the population sizes are chosen based on the estimation of three cowshed workers.
In order to investigate the epidemic spread dynamics of various scenarios, one first needs to define the setup of the model. Hence, we uniformly randomly sample the model's parameter values from Table 1. For the hyperparameters, we uniformly randomly sample \(|C(0)|\) to be between 100 and 1000, \(|P(0)|\) to be between 500 and 5000, the number of pathogens \(k\) to be between 2 and 7, and the map's size to be between 1 and 10 kilometers. For simplicity, we positioned the cowshed in the center of the map and placed the urban location in which pigeons nest at a random point on the map that is at least half a kilometer from the cowshed (well within pigeons' daily movement range but two orders of magnitude larger than the direct transmission threshold \(\beta_{r}\)). Then, each pigeon is located in a personal nesting location which is normally distributed around the center location of the urban area with a mean and standard deviation of \(0.25\pm 0.1\) kilometers (effectively implying most pigeons nest inside the urban area). In addition, we set the simulation duration to be \(51,840\) time steps of ten minutes, covering roughly one year (360 days). Fig. 4 shows a schematic view of the synthetic scenario generation process. This setup is also repeated with multiple cowsheds such that the cowsheds are too far apart to allow cows to infect each other between cowsheds but close enough for the same pigeons to visit all cowsheds in the region.
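The scenario generation described above can be sketched roughly as follows (the sampled ranges follow the text; the interpretation of the nest-location distribution as a normally distributed distance from the urban centre at a random bearing is our assumption):

```python
import numpy as np

rng = np.random.default_rng()

def sample_scenario():
    n_cows = int(rng.integers(100, 1001))       # |C(0)|
    n_pigeons = int(rng.integers(500, 5001))    # |P(0)|
    k = int(rng.integers(2, 8))                 # number of pathogens
    map_size = rng.uniform(1.0, 10.0)           # square map side (km)
    cowshed = np.array([map_size / 2, map_size / 2])   # cowshed at the center

    # Urban nesting area: a random point at least 0.5 km from the cowshed.
    while True:
        urban = rng.uniform(0.0, map_size, size=2)
        if np.linalg.norm(urban - cowshed) >= 0.5:
            break

    # Personal nests: distance from the urban centre ~ N(0.25, 0.1) km,
    # at a uniformly random bearing (our reading of the text).
    dist = rng.normal(0.25, 0.1, size=n_pigeons)
    angle = rng.uniform(0.0, 2 * np.pi, size=n_pigeons)
    nests = urban + np.column_stack((dist * np.cos(angle), dist * np.sin(angle)))

    return {"n_cows": n_cows, "n_pigeons": n_pigeons, "k": k,
            "map_size": map_size, "cowshed": cowshed, "nests": nests}

scenario = sample_scenario()
```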
In order to evaluate the epidemic spread, one is required to define an epidemiological metric of interest. Here we consider three of the most popular epidemiological metrics: the ARN (\(E[R_{t}]\)), the maximum number of infections (MI), and the number of dead cows (CD) due to the epidemic [90, 91, 92, 93, 94]. \(R_{t}\) measures the number of secondarily infected individuals given the epidemic state at a given time \(t\)[94]. Intuitively, the ARN
(\(E[R_{0}]\)) computes how many other individuals, on average, a single infected individual infects. The maximum infected (MI) metric at a point in time \(t\) counts the largest number of individuals infected by some pathogen up to that time, divided by the
population size. The cow death (CD) metric computes how many cows are dead up to some point in time, \(t\). Formally, \(R_{t}\) can be approximated using the following formula: \(R_{t}:=\big{(}I(t)-I(t-1)+R(t)-R(t-1)\big{)}/I(t-1)\), the \(MI\) at time \(t\) is defined as \(MI(t):=\max_{i\in[t_{0},t]}I(i)\), and \(CD\) at time \(t\) is defined to be \(\sum_{s,e,i,r\in\Delta}\mu_{s,e,i,r}C_{s,e,i,r}\), such that \(I(t):=\|\{\forall p\in P|\eta[i]_{t}\neq\emptyset\}\|\) and \(R(t):=\|\{\forall p\in P|\eta[r]_{t}\neq\emptyset\}\|\), where \(\eta[x]_{t}\) indicates the set of pathogens for the \(x\in\{s,e,i,r,d\}\) epidemiological state at time \(t\). For our case, we assume that both \(R_{t}\) and \(MI\) are computed only for the pigeon population while \(CD\) naturally considers the cow population.
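The three metrics can be computed per time step directly from the formulas above; the sketch below assumes `infected` and `recovered` are per-step counts of pigeons with non-empty \(i\) and \(r\) sets, and `dead_cows` is the cumulative count of disease-induced cow deaths (names are ours):

```python
def reproduction_number(infected, recovered, t):
    """R_t ~ (dI + dR) / I(t-1), the per-step approximation used in the text."""
    if infected[t - 1] == 0:
        return 0.0
    return (infected[t] - infected[t - 1]
            + recovered[t] - recovered[t - 1]) / infected[t - 1]

def max_infected(infected, n_pigeons, t):
    """MI(t): peak number of infected pigeons up to t, as a population fraction."""
    return max(infected[: t + 1]) / n_pigeons

def cow_deaths(dead_cows, t):
    """CD(t): cumulative disease-induced cow deaths up to time t."""
    return dead_cows[t]

infected = [1, 3, 8, 15, 12]
recovered = [0, 0, 2, 6, 11]
dead_cows = [0, 0, 1, 2, 2]
print(reproduction_number(infected, recovered, t=2))   # (8-3 + 2-0) / 3
print(max_infected(infected, n_pigeons=500, t=4))
print(cow_deaths(dead_cows, t=4))
```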
### Results
Based on this setup, we conducted three main experiments as well as a sensitivity analysis for the model. First, let us consider a scenario in which the number of cowsheds ranges from the basic case of a single cowshed to a more complex one of up to five spatially distinct cowsheds. As the number of parameters and their values can range widely from one instance of the model to another, we computed the model for \(n=1000\) simulation realizations, each time with a random sample of the model's parameter values using a uniform distribution. Fig. 5 presents the epidemic metrics as a function of the pigeon population size and the number of cowsheds with 50 cows at each one. The results are shown as a mean of \(n=1000\) simulation realizations. Intuitively, one can notice that as the pigeon population size increases (lower on the y-axis), all three epidemic metrics also increase, on average. Moreover, a phase transition between a single and multiple cowshed(s) is revealed. This phase transition can be associated with the fact that multiple cowsheds cannot infect each other without pigeons moving between them, which is less likely for a smaller pigeon population size.
In a similar manner, since pigeons operate as infectious vectors, they are commonly sick with one or even many pathogens in parallel (here only two are included in the simulation), which might reduce their movement. Hence, let us consider a simplistic but biologically supported scenario [72, 74] where each pathogen, regardless of its nature, causes a reduction of \(x\in[0,1]\) and \(y\in[0,1]\) in the exploration and exploitation (\(\omega_{1},\omega_{2}\)) parameters of an infected pigeon, respectively. Fig. 6 presents the epidemic metrics as a function of the pigeon movement reduction. Namely, a reduction of \(x=0.5\) implies that a sick pigeon moves with \(\omega_{1}\cdot x=\omega_{1}\cdot 0.5\), and \(y=0.5\) implies \(\omega_{2}\cdot y=\omega_{2}\cdot 0.5\). The results are shown as a mean of \(n=1000\) simulation realizations. Unsurprisingly, as \(x\) and \(y\) increase, all three epidemiological metrics decrease, since effectively the pigeons provide lower connectivity. The decrease is faster than linear in \(x\), while approximately linear in \(y\). In other words,
Figure 5: Epidemic metrics as a function of the pigeon population size and number of cowsheds with 50 cows at each one. The results are shown as a mean of \(n=1000\) simulation realizations. Notably, all three metrics show an increase with a growing number of pigeons, in interaction with the number of cowsheds.
the exploitation tendency has a weaker effect on the pandemic spread compared to the exploration one, which indicates that the number of cowshed visits a pigeon performs, on average, does not have much effect as it will transmit the disease to cows anyway.
To further investigate the influence of heterogeneous movement between pigeons, we used the values of \(\omega_{1}\) and \(\omega_{2}\) from Table 1 but changed the standard deviation of \(\omega_{1}/\omega_{2}\) (denoted by \(std[\omega_{1}/\omega_{2}]\)), using the ratio \(\omega_{1}/\omega_{2}=6.95\cdot 10^{-3}\) as the reference value (this ratio represents the ratio between the two empirically observed values from [56]). Intuitively, a larger standard deviation in the exploration-exploitation rate indicates more diversity in the movement dynamics of the population and therefore more heterogeneity. Fig. 7 summarizes the results of this analysis, showing the values as the mean \(\pm\) standard deviation outcome of \(n=1000\) simulation realizations. Notably, as \(std[\omega_{1}/\omega_{2}]\) increases, all three epidemiological metrics also increase, both in mean value and in their standard deviation, as indicated by the error bars. This result indicates that behavioral variation among hosts can affect system-level stability and result both in more pronounced outbreaks (e.g., the ARN increases from 0.92 to 1 for this particular value of \(\omega_{1}/\omega_{2}\)) and in higher variation among realizations, reflecting higher sensitivity to the specific parameters of each one.
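A simple sketch of this manipulation (our construction) draws one exploration-exploitation ratio per pigeon around the empirical reference value, with an increasing standard deviation, and clips it to remain positive:

```python
import numpy as np

def sample_ratios(n_pigeons, std_ratio, mean_ratio=6.95e-3, seed=0):
    rng = np.random.default_rng(seed)
    ratios = rng.normal(mean_ratio, std_ratio, size=n_pigeons)
    return np.clip(ratios, 1e-6, None)   # ratios must stay strictly positive

for std_ratio in (0.0, 1e-3, 5e-3):
    ratios = sample_ratios(2000, std_ratio)
    print(f"std[w1/w2]={std_ratio:.0e}: mean={ratios.mean():.2e}, std={ratios.std():.2e}")
```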
Figure 6: Epidemic metrics as a function of the pigeon movement reduction. The results are shown as the mean of \(n=1000\) simulation realizations (representing a random sample from the hyper-parameter space). For all three epidemiological metrics, a growing reduction in exploratory mobility (jumps between sites) causes a reduction in the pandemic spread, with a stronger impact compared to the reduction in the pigeons' exploitation tendency (re-visits to a site).

Figure 7: A heterogeneity analysis of the pigeon population's movement. The results are shown as the mean \(\pm\) standard deviation of \(n=1000\) simulation realizations. Notably, as \(std[\omega_{1}/\omega_{2}]\) increases, all three epidemiological metrics also increase, and their standard deviation, as indicated by the error bars, increases as well.

Moreover, we investigated the sensitivity of the epidemic spread dynamics with respect to the main model parameters. To this end, Fig. 8 summarizes the main sensitivity results of the proposed model, where the x-axis indicates a focal parameter of interest and the y-axis indicates the epidemic spread metrics' values. Results are shown as the mean \(\pm\) standard error of \(n=1000\) simulation realizations. Notably, each simulation realization uniformly samples the model's parameters from the ranges presented in Table 1, ensuring each realization is unique. As can be seen from Figs. 8a, 8d, and 8g, all metrics generally increase with the number of pathogens (\(k\)). To be exact, MI increases only at low values of \(k\) and then reaches a plateau. Focusing on the ARN (\(E[R_{0}]\)), the standard error of the results also increases, indicating that the system becomes more chaotic and less predictable, with the realized ARN more sensitive to the specific parameters of each realization. The MI metric seems to reach an asymptote, or even a peak around \(0.44\) at \(k=5\), after which it stops increasing and even slightly decreases. This behavior might reflect that the infected sub-population dies faster than it has an opportunity to further spread the pandemic, on average, emphasizing why considering multiple pathogens in concert can overturn the results of simpler single-pathogen SIR models. Lastly, the CD presents relatively stable behavior, increasing monotonically with \(k\), as indicated by its error bars.
In a similar manner, when the exploration-to-exploitation walk ratio (\(\omega_{1}/\omega_{2}\)) increases, as shown in Figs. 8b, 8e, and 8h, all three epidemic spread metrics also monotonically increase. The ARN shows similar behavior as before, while the MI and CD metrics indicate a polynomial (non-linear) increase. Moreover, the CD was shown to become higher and less stable as the exploration-to-exploitation walk ratio increased. Lastly, as the spatial infection radius (\(\beta_{r}\)) increases, the ARN also increases slightly and roughly linearly, keeping approximately the same level of stochastic behavior. On the other hand, the MI and CD metrics seem to be non-linearly affected by the spatial infection radius, as they slightly increase and decrease for different values of \(\beta_{r}\).
In addition, in order to investigate the influence of pigeons as epidemic carriers, we computed the epidemic spread of the proposed scenario as a function of the initial pigeon population size (\(|P(0)|\)) and their average within-species infection rate (\(\beta\)). The results of this analysis are summarized in Fig. 9, where each heatmap indicates a different epidemic spread metric. First, Fig. 9a reveals that the ARN (\(E[R_{0}]\)) ranged between 0.7 and 1.3, indicating an interactive effect of the two factors: neither a small pigeon population (even one carrying highly infectious pathogens) nor a large pigeon population carrying mildly infectious pathogens would cause an outbreak on its own, while the combination of the two would. Second, Fig. 9b shows the MI metric, for which there is a second-order polynomial relationship between (\(|P(0)|,\beta\)) and MI: \(MI=0.172+0.032|P(0)|+0.019\beta+0.001\beta^{2}-0.001|P(0)|^{2}-0.001|P(0)|\beta\). This fit is obtained using the SciMed symbolic regression tool [95], with a coefficient of determination of \(R^{2}=0.72\). Finally, Fig. 9c presents a linear increase toward \(CD=0.58\) followed by a plateau with noisy results. This outcome indicates that a more aggressive pathogen combined with a larger population causes more spread in the short term but decays faster, resulting in a lower overall mortality rate [96].
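For readers who want to reproduce this kind of response-surface fit without the SciMed tool, the following sketch fits a second-order polynomial in \((|P(0)|,\beta)\) to simulated MI values with ordinary least squares; the synthetic data and variable names are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: (population size, infection rate) -> MI.
P0 = rng.uniform(50, 1000, size=200)      # initial pigeon population size
beta = rng.uniform(0.01, 0.5, size=200)   # within-species infection rate
mi = 0.17 + 3e-4 * P0 + 0.02 * beta + rng.normal(0, 0.02, size=200)

# Design matrix for a full second-order polynomial surface.
X = np.column_stack([np.ones_like(P0), P0, beta, beta**2, P0**2, P0 * beta])
coef, *_ = np.linalg.lstsq(X, mi, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((mi - pred) ** 2) / np.sum((mi - mi.mean()) ** 2)
print("coefficients:", np.round(coef, 5))
print("R^2 =", round(r2, 3))
```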
## 4 Discussion
In this study, we address the spatio-temporal dynamics of multi-pathogen epidemic spread and apply a general epidemiological model to a specific study system where pigeons serve as vectors of pathogens among dairy farms, transmitting avian (pigeon) and mammalian (cow) diseases. The model follows recent extensions of the well-established SIR modeling approach [68] to multi-species dynamics with parallel multi-pathogen circulation, within a Susceptible-Exposed-Infected-Recovered-Dead (SEIRD) framework. We implemented the model as an agent-based simulation using the data collected by [56] as well as relevant epidemiological parameters from the biological literature (e.g., pathogen spread dynamics). This parametrization step allows us to test a realistic setup for the proposed model. The simulations show that considering the unique movement patterns of pigeons and their potential role as vectors can generate different outbreak dynamics than predicted otherwise. Further, we also show that variation among individual pigeons not only affects outbreak indices, in accordance with previously published results on the importance of super-spreaders for disease dynamics [61, 62, 63] - it also affects the stability of the system and the predictability of the results. These two effects, together with evident interactions resulting from co-infections, demonstrate the utility of our model for predicting more realistic outbreak dynamics. This is also supported by previous studies establishing the ability of extended SIR-based models to accurately capture similar epidemiological cases with only partial data [97, 98, 99, 100]. Below, we first discuss the impact of pigeon movement on the predicted disease dynamics; we then turn to the effects of other parameters and the added value of considering multiple pathogens in the same model. We conclude by pointing out some possible limitations of our model, directions for future studies, and the broader impacts of the SIR modeling approach.
### Vector movement and its variation affect outbreak dynamics and predictability
The proposed model shows that pigeons operate as an effective epidemic spread vector, as illustrated by Fig. 5. When the pigeon population size is small, even if an epidemic breaks out in one cowshed, it is not likely to be transmitted to other cowsheds. While this outcome may be considered (almost) trivial, we show that the epidemic spread does not increase linearly with a growing number of cowsheds, as one would obtain from a classical SEIRD model [101], due to the exploitation behavior in the pigeons' spatial dynamics (the tendency to revisit a familiar site), which keeps some level of separation. This separation, on average, causes a less aggressive epidemic spread compared to a more naive model. This paints an interesting picture: namely, pigeons during non-outbreak periods do not cause much epidemiological harm if the cowshed is far enough from other cowsheds. In a complementary manner, the epidemic caused by pigeons, especially in multi-cowshed scenarios, is self-controlled, as the reduction in the pigeons' movement reduces the pathogen spread over time, as shown in Fig. 8. That said, this result should be taken with caution, as it was obtained for a simplified case of the pathogen-movement relationship which is not biologically validated yet. Namely, it is known that pathogens indeed often reduce movement to a varying extent, but in some scenarios they may in fact enhance movement through host manipulation [102].

Figure 8: A sensitivity analysis of the average reproduction number \((E[R_{t}])\), max infection (MI), and cow death (CD) portion. The results are shown as the mean \(\pm\) standard error of \(n=1000\) simulation realizations. The y-axes are identical for all panels of a given row. In general, the exploration level (middle column) shows a more pronounced effect on all three indices compared to the number of pathogens (left column) and the spatial infectious radius (right column).
We also find that as the exploration-to-exploitation walk ratio increases, the maximum portion of infected animals increases polynomially fast (Fig. 8). This phenomenon happens because the random walk takes a larger portion of the pigeons' movement, bringing them closer to a well-mixed scenario, at least during the periods of the day when they either nest together or visit the cowshed [103, 104]. From a modeling point of view, when (\(\omega_{1}/\omega_{2}\)) is relatively small, one can approximate the spatial dynamics using a graph-based model rather than a continuous one, as the pigeons would visit a finite set of locations over time while the transitions between them do not have much effect on the infection dynamics [105, 106, 77]. Moreover, the spatial infection radius seems to have only a minor influence on the overall epidemic spread, as revealed in Figs. 8c, 8f, and 8i. This outcome can be associated with the fact that at a high enough density, such as the one present in the cowsheds, the infection radius does not have much effect, as shown by previous models [107, 108, 109]. In other words, this small-scale variation in spatial scale is only minor compared to the local density of cows and the movement of the pigeons (vector) [3].
Perhaps our most striking results are the consequences of individual variation among pigeons in their movement. Such behavioral variation is receiving growing attention in the ecological literature, with accumulating evidence for the generality of the pattern across animals [110], and for its potential impact on various system-level outcomes, including contact networks and disease dynamics [70, 111]. In our model, as presented in Fig. 7, increasing variation among individuals in their exploration-exploitation tendencies resulted, first, in a higher ARN and MI and eventually more cow mortality. This result concurs with existing literature (both models and empirical case studies) highlighting the role of super-spreaders in facilitating disease transmission [61, 62, 112]. Second, it resulted in increasing variation among realizations, implying lower stability of the results and enhanced unpredictability of the dynamics. With increasing variation, an outlier individual is more likely to connect otherwise isolated sites or contribute to outbreak dynamics in an extraordinary manner. Thus, the strength of mean-field estimates becomes less certain, with higher sensitivity to stochastic conditions and various parameters (note that we randomly selected parameter values for each iteration). Despite this intuitive interpretation, we are unaware of empirical examples demonstrating this effect, highlighting the novelty of this prediction and the overall value of our modeling approach.
### Other factors affecting outbreak dynamics and the benefit of multi-pathogen modeling
In addition to the _in silico_ experiments on pigeon movement, we explore the influence of different biological properties of the system on the epidemic spread. Specifically, Fig. 8 shows the sensitivity of the average reproduction number (ARN), max infected (MI), and cow death portion (CD) metrics as a function of the number of pathogens (\(k\)) and the spatial infection radius (\(\beta_{r}\)). Interestingly, as the number of pathogens increases up to five, the number of individuals infected at the same time increases to around half of the population, as one can see from Figs. 8a, 8d, and 8g. Nonetheless, beyond this point, as the number of pathogens increases further, the MI metric remains constant. This outcome is associated with the competition dynamics one pathogen has over the others: as too many pathogens cause a higher mortality rate, they actually reduce the overall infection rate, since individuals die quicker and do not have an opportunity, on average, to infect other individuals in the population [48, 113, 114].

Figure 9: A sensitivity analysis of different epidemic spread metrics as a function of the initial pigeon population size (\(|P(0)|\)) and the average within-species infection rate (\(\beta\)). The results are shown as the mean value of \(n=1000\) simulation realizations. The results show an interactive and non-monotonic effect of the two predictors (population size and infection rate) on disease outbreak indices.
From the proposed analysis, it seems that when the pigeon population size is controlled, pigeons do not cause epidemic outbreaks in their own population or in the cow population in cowsheds. However, if the pigeon population size increases, or if a set of multiple, highly contagious pathogens is introduced, an epidemic outbreak is only a question of time. Importantly, if no epidemic intervention policies are quickly and efficiently applied, then once just several cows have been infected, most of the cow population within that site would be infected within a short period, due to the small space they share [115, 116]. In contrast, in multi-cowshed scenarios, it seems that even without an intervention, the system will mitigate the outbreak, and the epidemic will spread over a (relatively) long period of time. This conclusion highlights the complexity of the biological dynamics in such agricultural settings, as they have self-stabilizing properties on the one hand but are extremely sensitive to outside influence on the other. Therefore, lacking more epidemiological data, it is challenging to draw any definitive conclusions.
When focusing on the influence of the pigeons as an external species in the context of a cowshed, the epidemic spread can be reduced to two main factors: the pigeons' population size and the average within-species (intra-specific) infection rate. Following this simplification, the results (Fig. 9) show the influence of these two parameters on the ARN, MI, and CD. Fig. 9c shows that even a relatively small pigeon population with a relatively non-aggressive infection rate can cause cow deaths if not treated, but these would be minor, on average. On the other hand, only a larger pigeon population combined with a more aggressive infection rate results in a global outbreak, as indicated by Fig. 9a. Taken jointly, the results show that pigeons can cause sporadic infection and death in cows even at small population sizes, but a global outbreak requires both a larger pigeon population and aggressively infectious pathogens, as well as pathogens that do not strongly reduce the pigeons' movement (Fig. 6). That said, it is important to note that even for extreme cases, such as 1000 pigeons carrying highly infectious pathogens, the average reproduction number reaches only 1.3, which is comparatively small relative to other scenarios such as the Zika and COVID-19 epidemics [117, 118, 119].
### Future direction and concluding remarks
While our model presents a comprehensive approach to studying epidemic spread in the pigeons-in-the-cowshed setting, there are several limitations to the proposed model and analysis. First, the accuracy of the model heavily relies on the quality and representativeness of the data used for calibration and validation. Namely, we only partially establish the model's effectiveness: whereas we have solid spatial data for parameterization, we still lack the epidemiological data to fully validate it. Gathering reliable data on animal interactions and disease prevalence can be challenging (particularly in field settings), and future work may focus on collecting such data and re-evaluating the performance of the proposed model. Second, our model simplifies several biological attributes of the system. For instance, we assume a continuous birth of pigeons over time, ignoring the breeding biology of simultaneous clutches that can alter the dynamics. Similarly, we assume random encounters within the population, while social structures and the dependency between an individual's movement and the subset of individuals it interacts with can affect the effective \(\beta\) and the ARN [36, 110, 1]. Third, the proposed model does not consider the potential genetic variability of pathogens circulating within the pigeon and cow populations. Previous studies show that, over time, taking into account the genomic mutation process of pathogens plays a critical role in understanding epidemic spread [120, 121, 122, 123]. As such, incorporating better parameters, more biological realism, and genetic data into the proposed model could provide a more nuanced understanding of pathogen interactions.
Finally, despite these above-mentioned limitations, our model highlights the promise of combining extended SIR models (here, a spatio-temporal multi-pathogen SEIRD) and simulations for addressing current challenges related to disease transmission in agricultural and urban settings. Such models, when coupled with empirical data, can refine predictions regarding systems and pathogens of interest. Simulation and model analysis can facilitate the evaluation and prioritization of effective interventions before their practical (and costly) implementation (e.g., should one reduce the pigeon population size through culling, or their mobility through fencing?). These models and their predictions can directly serve the "One Health" approach [124, 125, 126], highlighting the urgent need to link wildlife behavior, agricultural practices, and human health. In a world suffering from an ever-accelerating rate of zoonotic disease emergence, such models are essential for capturing the complex dynamics of these interdependent systems.
## Declarations
### Funding
OS acknowledges financial support by Grant 891-0232-21 from the Israel Dairy Board Research, grant ISF396/20 from the Israeli Science Foundation, by the Data Science Center at Tel Aviv University, and by the Koret-UC Berkeley-Tel Aviv University Initiative in Computational Biology and Bioinformatics.
### Conflicts of interest/Competing interests
None.
### Code and Data Availability
The code and data that have been used in this study are available upon reasonable request from the authors.
### Acknowledgment
We gratefully thank Miranda Crafton for her guidance, Shay Cahani for his biological consulting, and Avishai Lublin for his valuable feedback and discussions.
### Author Contribution
Teddy Lazebnik: Conceptualization, Software, Formal Analysis, Investigation, Methodology, Visualization, Project administration, Writing - Original Draft, Writing - Review & Editing.
Orr Spiegel: Conceptualization, Resources, Data curation, Validation, Investigation, Writing - Review & Editing.
|
2306.04917 | Soft Matrix: Extracting Inherent Length Scales in Sheared Amorphous
Solids | Amorphous solids yield upon crossing a strain threshold, after an initial
elastic response, when subjected to mechanical deformation. The yielding
process is characterized by local plastic events leading to non-affine
displacements, and their interactions. Despite the lack of long-range
structural order, these disordered materials exhibit long-range spatial
correlations in the non-affine displacement fields, which stems from the
underlying elasticity. Measuring a correlation length scale in deformed
amorphous solids, during the plastic process, is a non-trivial challenge, often
requiring an ad-hoc definition of localized regions. In this paper, we
introduce a novel computational approach called the "soft matrix" that enables
systematic analysis of mechanical response of local regions within a disordered
solid. In this method, we subject the amorphous solid to a quasistatic shear
and allow a local region of interest to relax freely while allowing for elastic
relaxation of the background. The dependence of the yield strain upon the size
of the probe region naturally reveals the existence of an intrinsic length
scale ($\zeta$) that governs the elasto-plastic properties, as observed in four
distinct model amorphous solids. This finding demonstrates the universality of
this characteristic length scale across a wide range of materials. We
investigate the dependence of this length scale on the material's preparation
history and find that $\zeta$ increases with better annealing. Furthermore, the
local mechanical properties measured within this framework provide more
accurate estimates compared to existing techniques. Our study paves the way for
a comprehensive understanding of amorphous solids and facilitates improved
characterization and design of these materials. | Monoj Adhikari, Pinaki Chaudhuri, Smarajit Karmakar, Vishnu V. Krishnan, Nandlal Pingua, Shilditya Sengupta, Aparna Sreekumari, Vishwas V. Vasisht | 2023-06-08T03:40:56Z | http://arxiv.org/abs/2306.04917v1 | # Soft Matrix: Extracting Inherent Length Scales in Sheared Amorphous Solids
###### Abstract
Amorphous solids yield upon crossing a strain threshold, after an initial elastic response, when subjected to mechanical deformation. The yielding process is characterized by local plastic events leading to non-affine displacements, and their interactions. Despite the lack of long-range structural order, these disordered materials exhibit long-range spatial correlations in the non-affine displacement fields, which stems from the underlying elasticity. Measuring a correlation length scale in deformed amorphous solids, during the plastic process, is a non-trivial challenge, often requiring an ad-hoc definition of localized regions. In this paper, we introduce a novel computational approach called the "soft matrix" that enables systematic analysis of mechanical response of local regions within a disordered solid. In this method, we subject the amorphous solid to a quasistatic shear and allow a local region of interest to relax freely while allowing for elastic relaxation of the background. The dependence of the yield strain upon the size of the probe region naturally reveals the existence of an intrinsic length scale (\(\zeta\)) that governs the elasto-plastic properties, as observed in four distinct model amorphous solids. This finding demonstrates the universality of this characteristic length scale across a wide range of materials. We investigate the dependence of this length scale on the material's preparation history and find that \(\zeta\) increases with better annealing. Furthermore, the local mechanical properties measured within this framework provide more accurate estimates compared to existing techniques. Our study paves the way for a comprehensive understanding of amorphous solids and facilitates improved characterization and design of these materials.
## I Introduction
Amorphous solids are disordered systems which lack long-range structural order. They encompass a broad range of materials ubiquitous in our daily lives, including colloids, foams, emulsions, granular matter as well as metallic and silicate glasses [1]. When subjected to an external load, such as shear deformation, they display an elastic response at low strain values and yield beyond a threshold strain [1; 2; 3; 4; 5; 6; 7; 8]. Unlike crystalline systems, where defects are known to be carriers of plasticity, the physical mechanisms underlying the shear response of an amorphous material still lack a complete understanding. The last two decades of research in theory and experiments have revealed that the plasticity in amorphous solids occurs through local non-affine displacements or rearrangements resulting in the redistribution of elastic stresses within the system [9; 10; 11]. Simulation works on athermal systems performed under quasistatic shearing conditions have shown that the displacement field exhibits a quadrupolar symmetry resembling the Eshelby inclusion model [12; 13; 14].
Amorphous materials exhibit structural and dynamical heterogeneities [15]. In recent years, there has been a growing interest in understanding the possible existence of a characteristic length scale associated with the relaxation process in driven amorphous solids. This interest stems from its impact on various aspects, including shear start-up properties [16; 17; 18; 19; 20], yielding mechanisms [21; 22; 23; 24], and flow heterogeneities [18; 19; 25; 26; 7; 27]. Previously, static correlation lengths have been extracted using non-affine shear deformation protocols for colloidal glasses and supercooled liquids [28; 29; 30], which demonstrated that the measured lengthscale is highly dependent on the applied shear deformation and likely describes the structural correlations in the initial configuration. Similar protocols have been employed to extract the correlation length in athermal colloidal suspensions under steady-state flow with finite-rate shearing [31], showing dependence on both shear strain and shear rate. Based on ideas of non-local fluidization, a similar dynamical correlation length has also been predicted and measured [32; 33; 34]. However, a comprehensive comparison between the extracted correlation length and other measures of static length scale that do not involve shear deformation protocols, such as the point-to-set length
scale[35], length scales obtained from finite-size scaling of relaxation times [36], or minimum eigenvalues [37], is lacking.
Despite the absence of a clear measure of the characteristic length scale, various approaches have been proposed and utilized to characterize the size of plastic events or the number of particles involved in non-affine rearrangements. These methods include analyzing structural properties (e.g., free volume [38], local ordering [39]) and linear response measures (e.g., elastic moduli [40], soft modes [41]). The frozen matrix method has recently gained widespread use in computing local mechanical properties, including yielding thresholds and viscoelastic properties. Initially proposed for assessing local elastic properties [42], this method involves allowing only the specific "target" local region of interest to relax independent of the background. Under external loading, the entire system undergoes a uniform affine deformation, except that the local region is permitted to relax freely, while the rest of the system is restricted from undergoing non-affine relaxation. By analyzing the stress response in the target region to the applied strain, the local mechanical properties can be determined.
The frozen matrix method has been widely utilized in various studies on amorphous solids [43; 44; 45; 46; 47; 48; 49]. However, in all cases, the size of the target region is chosen in an ad-hoc manner, and the background material is not allowed to undergo any relaxation and is made rigid - hence frozen. Although this approach provides reasonable estimates of the local yield stability distribution, it tends to overestimate the local yield stress and elastic modulus significantly due to the presence of a rigid background [43; 44; 46; 48]. This overestimation can have an impact on the predictability of local plastic events and the correlations between them. Additionally, the frozen matrix method does not enable the extraction of the length scale associated with the plastic event.
In this study, we introduce a novel approach called the "Soft matrix method" to investigate the local mechanical properties, as described in the following section. This method builds upon the original proposition of the frozen matrix approach and it actually provides us a route to extract a characteristic length scale \(\zeta\) in various types of amorphous materials, overcoming the limitations imposed by rigid or periodic boundary conditions [50; 51]. We observe that this length scale is influenced by the aging conditions of the material, and we demonstrate that it exhibits scale-free power law behavior when studying larger sub-systems. Consequently, this length scale provides a natural solution to the question of appropriate coarse-graining length, serving two main purposes: (a) enabling accurate computation of local mechanical properties crucial for understanding the yielding phenomenon and providing inputs for mesoscale modeling of amorphous materials and (b) facilitating comparisons between this length scale and other measures of static length scale in disordered systems.
## II Soft Matrix Method
In this approach, within a system having linear dimension \(L\), we choose a local region having size \(L_{s}\), whose mechanical properties we plan to measure. The rest of the system will hereafter be referred to as the background. The mechanical response of the solid is probed via the athermal quasi-static shear (AQS) protocol (_see Methods_). The entire system is first subjected to a simple shear deformation of magnitude \(\delta\gamma\approx 10^{-5}\), and hence an affine displacement is imposed on all particles in the system. Post deformation, the system is subjected to relaxation or energy minimization, and here we introduce an essential improvement over the existing frozen matrix method [43; 44; 45; 46] which is key to extracting the heterogeneity length scale \(\zeta\). Unlike the frozen matrix method, during the energy minimization the background can relax by affine rearrangements, but no plastic events are permitted in the background. On the other hand, the sub-system is allowed to relax without any constraints. Hence we term the procedure a _soft matrix method_. We implement this partial relaxation of the background by adding an additional spring force of the type \(F_{r}=-kr\) before minimization. Here the spring constant \(k\) is a tunable parameter determining the magnitude of the restoring force, and \(r\) is the displacement of a background particle from its reference position, taken to be the position after applying the step of affine displacement.

Figure 1: **Soft Matrix Methodology.** (a) Schematic demonstrating the soft matrix method. The sub-system (central blue box) is subjected to unconstrained relaxation, whereas the background (surrounding pink region) relaxes with a constraint imposed by the spring tether (see text for details). (b) Snapshot from a 2D BMLJ (65:35) simulation illustrating the soft matrix: the plastic event is confined to the subsystem, while the background is allowed to relax elastically. Here X is the shearing direction, and Y is the gradient direction. (bottom) Variation of stress (\(\sigma\)) as a function of strain (\(\gamma\)) for (c) fixed subsystem size (\(L_{s}=10a\)) and varying \(k\) value and (d) fixed \(k\) value (\(k=10\epsilon/a^{2}\)) and varying subsystem size \(L_{s}\).
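As a rough illustration of the partial background relaxation, the sketch below adds a harmonic restoring force on background particles during a force evaluation step; this is a simplified numpy sketch under assumed data structures (position arrays and a background mask), not the LAMMPS implementation used in the paper.

```python
import numpy as np

def total_forces(pos, ref_pos, background_mask, k, physical_forces):
    """Forces used during energy minimization in the soft matrix scheme.

    pos             : (N, dim) current particle positions
    ref_pos         : (N, dim) reference positions recorded right after the
                      affine shear step
    background_mask : (N,) boolean array, True for background particles
    k               : spring constant tethering background particles
    physical_forces : (N, dim) inter-particle forces from the potential
    """
    forces = physical_forces.copy()
    # Harmonic tether F_r = -k * (r - r_ref), applied only to the background;
    # the sub-system relaxes without any constraint.
    disp = pos - ref_pos
    forces[background_mask] += -k * disp[background_mask]
    return forces

# Toy usage: 5 particles in 2D, the last 3 belonging to the background.
pos = np.random.rand(5, 2)
ref = pos + 0.01 * np.random.randn(5, 2)
mask = np.array([False, False, True, True, True])
phys = np.zeros((5, 2))
print(total_forces(pos, ref, mask, k=10.0, physical_forces=phys))
```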
We test out the soft matrix method via extensive numerical simulations using the **2D BMLJ** model and later apply our understanding to study three other model amorphous solids, namely **3D Soft-Rep**, **3D SiO\({}_{2}\)**, and **3D Metallic Glass**. See _Methods_ for further details. All the simulations were carried out using LAMMPS molecular dynamics package [52].
In Fig. 1, we show (a) a schematic and (b) an actual simulation configuration for the 2D BMLJ model with \(L=90a\) and number of particles \(N=10000\). Here \(a\) represents the unit length scale. The background region (pink in Fig. 1(a)) is subjected to an additional restoring force during the relaxation, with an appropriate \(k\) value such that (almost) no irreversible plastic motion is observed in the background particles. In Fig. 1(b), the colored particles in the sub-system relax with both affine and non-affine displacements and can be involved in plastic events. The color bar is based on the magnitude of the displacement.
## III Shear startup response - first plastic event
In Fig.1(c), we show the load curves (shear stress \(\sigma\) vs. applied strain \(\gamma\)) for a range of \(k\) values at a fixed subsystem size (\(L_{s}=10a\)) for the 2D BMLJ model. For a chosen sample, prepared by the standard quench protocol (see _Methods_), we initially perform an AQS simulation of the entire system and obtain the bulk stress-strain curve (load curve), via which we recognise the first local yielding event (referred to herein as the first plastic event) and also identify the spatial location of the event as well as the particles participating in the yielding event (see _Methods_). We construct a sub-box of linear length \(L_{s}\) around the first plastic event and perform the partial relaxation of the surrounding. The bulk load curve corresponds to \(k=0\). In Fig.1(c), we show that with an increase in chosen \(k\) values, the system's shear modulus increases, as clearly observed via an increase in the slope of the stress-strain curve, which implies that the system becomes more and more rigid. Further, with increasing \(k\), the first plastic event is also delayed and occurs at a higher strain value; thereby the local yield stress value also increases. As \(k\rightarrow\infty\), we should reach the frozen matrix limit, and that is consistent with our observed trend of larger yield stress and larger strain thresholds with increasing \(k\).
In the current work, we are primarily interested in determining the heterogeneity length scale \(\zeta\). Hence, for further analysis, we fix the value of \(k\) by the following two competing requirements: (i) \(k\) should be small enough so that the probe itself does not significantly alter the local mechanical properties of the system, and (ii) \(k\) should be large enough such that (almost) no plasticity occurs in the background system during the stress relaxation. We verify (i) by monitoring the bulk modulus and pressure, which remain unchanged for a range of \(k\) values. For condition (ii), we monitor the displacement in the sub-system and surrounding and find that for appropriate \(k\) values, the background shows no large displacements. (See _SI_ for further details).
Having fixed \(k\), we now examine the effect of the sub-system size \(L_{s}\) on the load curve, specifically on the yielding strain. In Fig. 1(d) we show load curves for \(k=10\,\epsilon/a^{2}\) for varying \(L_{s}\). The solid black line represents the load curve for an unconstrained system, which shows a yield strain of \(\gamma_{y}\approx 0.002\) at the first plastic event. With varying \(L_{s}\), the initial linear response does not show much variation, but we observe a systematic change in the local yield strain \(\gamma_{y}\) and local yield stress \(\sigma_{y}\). With the increase in \(L_{s}\), \(\gamma_{y}\) and \(\sigma_{y}\) decrease. Approaching \(L_{s}\to L\), we recover the unconstrained load curve. We find similar features in the other three model systems that we have studied. In each case, the value of \(k\) was fixed according to the above-prescribed protocol. One can rationalise the increasing yield stress and strain with decreasing \(L_{s}\) in terms of interface effects, with the background curtailing the motion of particles in the sub-system and hence increasing the barrier height for the first plastic event. However, we find a more interesting observation when we look at the variation of \(\gamma_{y}\) with \(L_{s}\), which we discuss below.
Figure 2: **Measuring characteristic length scale.** Relative yield strain \(\Delta\gamma_{y}\) as a function of subsystem size \(L_{s}\) for four different models: (a) 2D BMLJ (b) 3D Soft-Rep (c) 3D SiO\({}_{2}\) and (d) 3D Metallic glass. The open symbols are data obtained from the soft matrix method. The opaque symbols are from the frozen matrix method. Different symbols correspond to different system sizes. The thick dashed line is the exponential fit, and the thin dashed line is just a guide to the eye.
## IV A characteristic length scale \(\zeta\)
In Fig. 2, we show the variation of the relative yield strain \(\Delta\gamma_{y}\) with the sub-system size \(L_{s}\) for the four different models of amorphous solids. The relative yield strain \(\Delta\gamma_{y}\) is defined as the deviation of \(\gamma_{y}\) from the bulk value: \(\Delta\gamma_{y}=\gamma_{y}(L_{s})-\gamma_{y}(L)\), where \(\gamma_{y}(L)\) is the yield strain for the bulk system (\(k=0\)). The data shown in Fig. 2 correspond to poorly aged samples, and we obtain the average \(\Delta\gamma_{y}\) over multiple initial configurations as well as different system sizes (whenever possible). In all four systems, \(\Delta\gamma_{y}\) decreases exponentially as we increase the size of the probing zone \(L_{s}\), suggesting the existence of a characteristic length scale. The exponential fit function \(a_{0}\exp(-L_{s}/\zeta)\) is shown as dashed lines. Here the fit parameter \(a_{0}\) represents the limiting value of the relative yield strain in the limit of zero sub-system size, and \(\zeta\) is the estimated value of the characteristic heterogeneity length scale intrinsic to each model system. In the 2D BMLJ and 3D Soft-rep models, which have short-range interactions, our fit finds \(\zeta\approx 5a\). Interestingly, this is similar in magnitude to values often assumed in the literature for local rheological measurements in similar models [40; 44; 46], where it is interpreted as the length scale below which Hooke's law for linear continuum elasticity breaks down. For the first time, we have provided the rationale for this choice by directly extracting an intrinsic scale via the proposed soft matrix method. For 3D \(SiO_{2}\) and 3D Metallic glass, we find \(\zeta\) to be \(\approx 12a\) and \(\approx 8a\), respectively. The higher values in these models are possibly due to the long-range nature of the inter-particle interactions, unlike in the Lennard-Jones-type models typically studied. In Fig. 2, we also show the corresponding data obtained using the frozen matrix method for the purpose of comparison. It is important to note that at small sub-system lengths, the stress shows a significant increase for the frozen matrix method, indicating the influence of how the background relaxation is modeled. This rise is expected to depend on the system size, and in larger system simulations the impact of the interface will diminish, which is what we observe. Thus, we argue that this approach is not suitable for accurately extracting any characteristic scale that is much smaller than the system size.
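A minimal sketch of how the characteristic length could be extracted from such data with a nonlinear least-squares fit is shown below; the data points are illustrative stand-ins for the measured \(\Delta\gamma_{y}(L_{s})\) values, not the actual simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_gamma(L_s, a0, zeta):
    """Exponential model for the relative yield strain."""
    return a0 * np.exp(-L_s / zeta)

# Illustrative data: sub-system sizes and relative yield strains (arbitrary).
L_s = np.array([4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0])
dgy = np.array([1.1e-2, 7.5e-3, 5.0e-3, 3.3e-3, 1.2e-3, 4.5e-4, 6.0e-5])

popt, pcov = curve_fit(delta_gamma, L_s, dgy, p0=(2e-2, 5.0))
a0, zeta = popt
print(f"a0 = {a0:.2e}, zeta = {zeta:.2f} (in units of a)")
```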
## V Age dependence of characteristic length scale, \(\zeta\)
After extracting the characteristic length scale from the exponential relationship between the local yield strain and the probe window size, we now investigate the influence of the preparation protocol on this length scale.
It is well known that the mechanical response varies dramatically with the increasing age of amorphous solid samples [20; 53; 54; 55; 2]. For example, a poorly aged or annealed sample often shows more homogeneous ductile-like yielding behavior, while a well-aged or ultra-stable glassy sample exhibits heterogeneous brittle-like yielding via shear banding [20; 56; 21]. Thus it is essential to understand how the characteristic length scale obtained using our soft matrix method depends on the sample age or annealing history, which is presented here for the 2D BMLJ glass. We study samples quenched at an infinite cooling rate (obtained by energy minimization of configurations generated at different initial temperatures \(T_{W}\) to instantaneous inherent structures) as well as ones prepared via finite cooling rates \(\Gamma\)[57]. We then compute \(\Delta\gamma_{y}(L_{s})\) via the soft matrix method protocol - see Fig. 3 (a) main panel; note that we have scaled the abscissa by the age-dependent fit parameter \(a_{0}\) for better clarity. We find that \(\Delta\gamma_{y}\) decays more slowly with \(L_{s}\) for increasing age of the system. The exponential nature of \(\Delta\gamma_{y}(L_{s})\) is clearly shown in the inset of Fig. 3 (a) on a linear-log scale. Therefore, it is evident that with increasing age, the characteristic length scale increases significantly. The extracted \(\zeta\) as a function of the energy of the sample at zero strain is shown in Fig. 3 (b); a lower energy of the initial state indicates a higher age of the sample. We see that the value \(\zeta\approx 5a\) occurs only for a range of poorly aged samples, and \(\zeta\) systematically increases with an increase in age. Hence our analysis clearly suggests that any measure of local mechanical properties should take into account the dependence of \(\zeta\) on age. Further work needs to be done to understand the functional form of the age dependence of \(\zeta\).

Figure 3: **Age dependence.** (a) Relative yield strain as a function of the subsystem size for different ages of the initial sample (shown for 2D BMLJ). The inset shows the same data on a log-linear scale, for selected ages. Dashed lines represent fits to exponential functions, from which the respective characteristic length \(\zeta\) is extracted. (b) Characteristic length \(\zeta\) as a function of the energy of the initial state (i.e., at zero strain), \(U_{\rm init}\); note that \(U_{\rm init}\) decreases with an increase in age. The dashed line is a guide to the eye.
## VI Large length scale behaviour - beyond \(\zeta\).
The macroscopic response of amorphous solids shows long-range elastic correlations [11] and power law avalanche statistics in the steady state [50]. Via numerical simulations using periodic boundary conditions, it has been shown that, in the steady state, the finite-size dependence of the average strain interval between two stress drops behaves as \(\langle\Delta\gamma\rangle\sim N^{-\beta}=L^{-\beta d}\), where \(d\) is the dimension and the exponent \(\beta\simeq 0.7\)[50]. One would thus expect that in \(2D\), the average strain interval for the first plastic drop should go as \(\langle\gamma_{y}\rangle\sim L^{-1.4}\). We test whether this behaviour can be recovered at large sizes of the sub-box, for which we have utilized large-system-size soft matrix simulations (\(N=125\)K; \(L=360\)) and obtained the average yield strain as a function of \(L_{s}\). In Fig. 4, we show the yield strain variation with changing \(L_{s}\) for the 2D BMLJ model (in a poorly aged sample) using two different system sizes (\(L=90\) and \(L=320\)). We observe two interesting features. Firstly, with the increase in \(L_{s}\), the average yield strain, \(\gamma_{y}(L_{s})\), crosses over from an exponential behaviour to a power law dependence with an exponent around 1.57. The crossover length is around \(10a\). Secondly, we find that in the limit of \(L_{s}\to L/2\), \(\gamma_{y}\) deviates from the power law and seems to saturate, which likely originates from effects due to the periodic boundary conditions. These observations highlight the effectiveness of the proposed soft matrix method in not only extracting the inherent length scale associated with sheared amorphous solids but also reproducing the large-length-scale behaviour. The power law exponent from our fit is not far from the expected value [50], but we note that the soft elastic confinement employed in our method is different from the standard periodic boundary condition. This might affect the power-law exponent, which is an interesting direction for future investigations. Also, finite-size effects cannot be ruled out. One needs to be sufficiently far away from the cross-over length scale to suppress all the effects of the underlying length scale. Hence, for well-annealed samples, one should go to larger system sizes to extract the same power-law exponent. Further, it has been recently pointed out [51] that for well-annealed (ultra-stable) glasses, the asymptotic power-law behaviour with an exponent close to \(\beta\simeq 0.7\) can only be obtained at system sizes much larger than those currently accessible via computer simulation. Further studies across various models in different dimensions might shed more light on this puzzle.
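To illustrate how the power-law exponent in the large-\(L_{s}\) regime might be estimated, the short sketch below performs a linear fit in log-log space over sub-system sizes above an assumed crossover length; the data values and the crossover choice are illustrative only.

```python
import numpy as np

# Illustrative (L_s, average yield strain) values above the crossover (~10a).
L_s = np.array([15.0, 20.0, 30.0, 50.0, 80.0, 120.0])
noise = 1 + 0.05 * np.random.default_rng(3).standard_normal(L_s.size)
gamma_y = 0.5 * L_s ** -1.57 * noise

# Power law gamma_y ~ L_s^(-alpha): a straight line in log-log coordinates.
slope, intercept = np.polyfit(np.log(L_s), np.log(gamma_y), 1)
print(f"estimated exponent alpha = {-slope:.2f}")
```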
## VII Local mechanical properties
After extracting the characteristic length scale \(\zeta\) using the soft matrix method, we employ this approach to determine the local yield threshold and local elastic modulus \(\mu_{i}\). The local yield threshold (X) is defined as \(X=\sigma_{y}^{i}-\sigma_{0}^{i}\), where \(\sigma_{y}^{i}\) and \(\sigma_{0}^{i}\) are the yield stress and zero strain stress at the local region \(i\), whose linear size is of the order of \(\zeta\). The local storage modulus \(\mu_{i}\) is obtained from the slope of the local load curve.
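As an illustration of how these two local quantities could be read off a measured local load curve, the following sketch estimates the modulus from the initial slope and the threshold from the stress at the first drop; the array names and the drop-detection rule are assumptions for illustration, not the exact procedure of the paper.

```python
import numpy as np

def local_properties(strain, stress, n_linear=5):
    """Estimate the local shear modulus and yield threshold from a load curve.

    strain, stress : 1D arrays sampled along the quasistatic loading
    n_linear       : number of initial points used for the linear (elastic) fit
    """
    # Local storage modulus: slope of the initial, linear part of the curve.
    mu = np.polyfit(strain[:n_linear], stress[:n_linear], 1)[0]
    # First plastic event: first point where the stress drops.
    drops = np.where(np.diff(stress) < 0)[0]
    i_yield = drops[0] if drops.size else stress.size - 1
    X = stress[i_yield] - stress[0]   # yield threshold X = sigma_y - sigma_0
    return mu, X

# Toy load curve: linear rise followed by a stress drop.
strain = np.linspace(0, 0.01, 11)
stress = np.where(strain < 0.008, 20 * strain, 20 * 0.008 - 5 * (strain - 0.008))
print(local_properties(strain, stress))
```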
In Fig. 5(a), we show, for the 3D Soft-Rep system (using \(N=97556\), \(L=42a\) and \(k=100\)), the probability distribution function P(X) (inset) and the cumulative distribution function F(X) (main panel) obtained from the soft matrix as well as the frozen matrix method [58]. We find that the frozen matrix method not only overestimates the yield threshold, which would be expected, but also yields a different functional behaviour. The fit lines in Fig. 5(a) correspond to the Weibull distribution \(F(X)=1-\exp[-(X/s_{1})^{s_{2}}]\), with the scale parameter \(s_{1}\) being 1.9 (soft matrix) and 1.5 (frozen matrix). The shape parameter \(s_{2}\), which is associated with the instantaneous yield rate [59; 60], varies by almost a factor of 3, with \(s_{2}=2.3\) (soft matrix) and \(s_{2}=6.2\) (frozen matrix). Examining the implications of these differences for mesoscale modeling results (where P(X) is a key input) would be an interesting aspect to explore.
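A possible way to fit such a Weibull form to measured local yield thresholds is sketched below using nonlinear least squares on the empirical CDF; the sample data are synthetic, and this routine is only one of several reasonable estimation choices (maximum likelihood via `scipy.stats.weibull_min.fit` is another).

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(x, s1, s2):
    """Two-parameter Weibull CDF, F(X) = 1 - exp[-(X/s1)^s2]."""
    return 1.0 - np.exp(-(x / s1) ** s2)

# Synthetic local yield thresholds X (stand-ins for measured values).
rng = np.random.default_rng(7)
X = 1.9 * rng.weibull(2.3, size=2000)

# Empirical CDF and a least-squares fit of the Weibull form.
x_sorted = np.sort(X)
ecdf = np.arange(1, x_sorted.size + 1) / x_sorted.size
(s1, s2), _ = curve_fit(weibull_cdf, x_sorted, ecdf, p0=(1.0, 1.0))
print(f"scale s1 = {s1:.2f}, shape s2 = {s2:.2f}")
```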
Figure 4: **Large length scale behaviour.** Absolute yield strain \(\gamma_{y}\) as a function of the subsystem size \(L_{s}\) for two different system sizes (opaque circles: L=90 and open circles: L=320) for the 2D BMLJ model (using poorly aged samples). Fit lines (red dashed line: exponential fit; solid blue line: power law fit with exponent 1.57) show a cross-over (at around \(15a\)) from an exponential regime at small sub-systems to a power law regime at large sub-systems.

In Fig. 5(b), we show \(P(\mu)\), the distribution of the local storage modulus, along with a Gaussian fit. Unlike the frozen matrix method, which drastically perturbs the elasticity of the background, the soft matrix method keeps the background elasticity close to the unconstrained bulk value, and hence \(P(\mu)\) is distributed around the bulk value of the storage modulus (marked by the green upper triangle in Fig. 5(b)), which is measured from the slope of the load curve obtained from unconstrained simulations.
In conclusion, the soft matrix method introduced in this study not only enables the extraction of the length scale \(\zeta\), but also provides improvements over the frozen matrix approach in terms of estimating local mechanical properties.
## VIII Summary and Discussion
We have devised a numerical technique for measuring local mechanical properties in amorphous solids. Labelled as the "soft matrix method", we have utilized it to successfully extract the characteristic length scale associated with plastic events or non-affine displacements occurring during the deformation process. Our approach provides an opportunity to explore the local mechanical response within a purely elastic background. By applying this method to four different models of amorphous solids, covering a wide range of materials, we have demonstrated the universal existence of this intrinsic length scale. Furthermore, we show that this length scale increases with the age of the sample, indicating its dependence on the preparation history. Importantly, our method allows us to accurately estimate the local yield threshold and local elastic modulus, greatly improving upon the accuracy of the existing frozen matrix method.
A natural question that arises from our results is the origin of the characteristic length scale, \(\zeta\), in the material. The simplest interpretation is that \(\zeta\) corresponds to the spatial scale over which local plastic relaxations are devoid of any interfacial effects; i.e. it captures the intrinsic scale over which the first localized irreversible event would occur in a bulk system under mechanical loading. Thereby, it also relates to the spatial scale beyond which interference from other plastic events would come into play leading to screening of power-law correlations, as has been proposed recently [61]. Previous studies have extensively explored the concept of an intrinsic static correlation length scale associated with a putative amorphous order in supercooled liquids and glasses [15]. In the context of shear amorphous solids, similar length scales have been extracted using non-affine shear deformation protocols [28; 29; 30; 31]. However, a comprehensive comparison with other measures of static length scale that do not involve shear deformation protocols, such as the point-to-set length scale [35], length scale derived from finite size scaling of relaxation times [36], and minimum eigenvalues [37], is currently lacking. It is expected that these different measures of static length scale in the supercooled liquid regime are proportional to each other [62].
Our work offers an opportunity to compare and investigate various length scales, shedding light on the origin of such correlations. The observed increase in \(\zeta\) with better annealing is consistent with the corresponding increase in the static correlation length obtained through analysis of non-affine displacement fields or finite size scaling of different observables in supercooled liquids. It also resembles the increase in the typical distance between soft spots as inferred from the measured number density of quasi-localized modes [63; 64]. However, without a detailed comparison, it is not immediately evident whether the length scale \(\zeta\) obtained using the soft matrix method is identical to these other length scales, although it is highly likely that \(\zeta\) is closely related to the length scale of quasi-localized modes. The connection to the static length scale of glass transition as determined by the point-to-set method is not apparent unless we assume that the point-to-set length scale and the length scale derived from finite size scaling of minimum eigenvalues are fundamentally the same [62].
Figure 5: **Local Mechanical Properties.** (a) Distribution of the local yield stress threshold (\(X=\sigma_{y}-\sigma_{0}\)) for the 3D soft-rep model (poorly aged), comparing the soft matrix (circles) and frozen matrix (squares) methods. The inset shows the probability distribution P(X), and the main figure shows the cumulative distribution F(X). The fit lines are drawn using the Weibull distribution (see main text for details). (b) Distribution of the local shear modulus (\(\mu\)) for the 3D soft-rep model (poorly aged) comparing the two methods, shown along with Gaussian fits (lines). Also marked is the bulk shear modulus (green triangle).

Despite the aforementioned limitations, it is noteworthy that the characteristic length scale \(\zeta\) obtained through the soft matrix method exhibits remarkable similarity to the coarse-graining length scales reported in various studies on elasto-plastic models. These models often define an appropriate coarse-graining scale, typically around 5 in suitable units, to denote the size of the elementary meso-block beyond which the propagation of plasticity occurs via the Eshelby kernel [44; 45; 46; 47; 49; 40]. It is interesting to observe that, in order to study the effect of ageing or annealing in these elasto-plastic models, systematically increasing the coarse-graining length scale is often required to achieve a comparable correspondence with microscopic simulations [51]. Hence, it appears that the length scale derived from the soft matrix method may serve as the desired length scale for the development of a robust mesoscopic elasto-plastic model capable of incorporating ageing or annealing effects in a more fundamental manner.
Overall, our work provides valuable insights into the behavior of amorphous solids at a local level, enhancing our understanding of their macroscopic properties and paving the way for improved mesoscale models required for material design.
## IX Methods
The four different models studied in this work are as follows.
(_1._) **2D BMLJ:** A binary mixture (65:35) of Lennard-Jones particles is a well-studied model glass forming system [65], whose mechanical properties are well characterised. Two different system sizes were considered, N=10K and 125K, at a reduced density \(\rho=1.2\). The scales of the A-A interactions within the model provide the units for length (\(a\)) and energy (\(\epsilon\)).
(_2._) **3D Soft-rep:** A system of 10% polydisperse soft spheres interacting via the WCA potential [66] is a typical colloidal model in 3D. At a volume fraction of \(\phi=0.7\), system sizes of N=1K, 5K, 10K and 100K were considered. The particles have an average diameter \(a\) (unit length), mass \(m\) (unit mass), and interaction energy \(\epsilon\) (unit energy).
(_3._) **3D SiO\({}_{2}\):** The most common molecular glass is modeled using the VSP-modified BKS potential for a 1:2 binary mixture of silicon and oxygen atoms [67]. System sizes of \(N=13824,46656\) particles at a density of \(\rho=2.8\,gm/cm^{3}\) are considered. In this model the unit length is \(a_{Si-O}\), the unit energy \(\epsilon_{Si-O}\), and the unit mass \(m_{Si}\).
(_4._) **3D Metallic Glass:** A metallic glass system \(Cu_{64.5}Zr_{35.5}\) alloy using the embedded atom model (EAM) [68]. System sizes \(N=50000,100000\) particles are considered at zero pressure. The units of length, mass, and energy are expressed in angstroms, grams/mole, and electron volts (eV), respectively. The lengths are scaled in terms of the diameter of Cu (2.56 angstroms).
### Sample preparation protocol
For 2D BMLJ and 3D Soft-rep systems, equilibrium NVT MD simulations were performed at \(T=5.0\,\epsilon/k_{B}\) with \(\rho=1.2\) and \(\phi=0.7\) respectively and equilibrated configurations were quenched to \(T=0.01\) at cooling rates \(\Gamma\) varying from \(10^{-2}\) to \(10^{-6}\). These configurations were then subjected to potential energy minimization protocol (FIRE or Conjugate gradient, CG). For 3D Silica, configurations were equilibrated by NVT MD simulations at \(T\approx 3100\,K\) and \(\rho=2.8gm/cm^{3}\) followed by instantaneous quench via potential energy minimization (CG). For 3D metallic glass, NPT MD simulations were performed at zero pressure and at temperatures \(T=1100,1200,1300\,K\) to equilibrate the system. Then configurations were quenched to \(T=300K\) at cooling rates \(\Gamma=10^{10},10^{11},10^{12}\,K/s\) followed by potential energy minimization by CG. In each model and cooling rate, a minimum of 10 to a maximum of 250 samples were prepared. The shear stress \(\sigma\) is computed from the off-diagonal term of the virial stress tensor, using the Irving-Kirkwood form [66].
### Shearing protocol
We probe the mechanical response using the Athermal Quasi-static Shear (AQS) protocol [69]. In this procedure, an elementary shear strain \(\delta\gamma\sim\mathcal{O}(10^{-5}-10^{-6})\) is applied to the sample via the uniform affine deformation rule \(x_{i}^{\prime}=x_{i}+\delta\gamma\,y_{i}\), \(y_{i}^{\prime}=y_{i}\), \(z_{i}^{\prime}=z_{i}\), applied to all particles \(i\). Following this step, the system is relaxed using an energy minimization technique (FIRE or CG). This sequence of deformation followed by minimization is then repeated. Here we chose \(\delta\gamma\) such that in most of the samples we find only a single plastic event within the incremental strain window. The maximum applied strain \(\gamma_{max}\) was chosen such that at least one plastic event occurs within \(\gamma_{max}\). In all our simulations, X is the shearing direction, Y is the gradient direction, and Z is the vorticity direction.
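A bare-bones sketch of a single AQS step is given below; the `minimize_energy` routine is a placeholder for FIRE or conjugate-gradient minimization and is not implemented here.

```python
import numpy as np

def aqs_step(positions, d_gamma, minimize_energy):
    """One athermal quasistatic shear step: affine shear then relaxation.

    positions       : (N, 3) array of particle coordinates (x, y, z)
    d_gamma         : elementary shear strain increment
    minimize_energy : callable performing FIRE/CG minimization (placeholder)
    """
    sheared = positions.copy()
    # Uniform affine deformation: x' = x + d_gamma * y, y' = y, z' = z.
    sheared[:, 0] += d_gamma * sheared[:, 1]
    # Relax to the nearest local energy minimum (non-affine displacements).
    return minimize_energy(sheared)

# Toy usage with an identity "minimizer" just to show the call structure.
pos = np.random.rand(10, 3)
new_pos = aqs_step(pos, d_gamma=1e-5, minimize_energy=lambda p: p)
```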
### Plastic event or Yielding
To detect the plastic or yielding event, we measure the largest non-affine displacement in the system. The displacement is computed from two minimized configurations (separated by the elementary strain \(\delta\gamma\)) and defined as \(\Delta z_{max}^{2}(\gamma)=\max_{i}[z_{i}(\gamma+\delta\gamma)-z_{i}(\gamma)]^{2}\), where \(z_{i}(\gamma)\) is the coordinate of particle \(i\) in the direction perpendicular to the shear (the gradient direction in 2D and the vorticity direction in 3D). By studying the corresponding drops in the stress and the potential energy, we put a threshold on \(\Delta z_{max}^{2}\) to identify the occurrence of the event; the threshold on \(\Delta z_{max}^{2}\) is model dependent. We have also verified this identification via the back-shear protocol [46], and we find that our method provides an excellent balance between sensitivity of detection and computational efficiency. More details are provided in the _SI_.
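The detection criterion can be written in a few lines, as sketched below; the threshold value is a model-dependent assumption.

```python
import numpy as np

def is_plastic_event(z_before, z_after, threshold=1e-2):
    """Flag a plastic event from the largest squared non-affine displacement.

    z_before, z_after : coordinates perpendicular to the shear in two
                        successive minimized configurations
    threshold         : model-dependent cutoff on Delta z_max^2
    """
    dz2_max = np.max((z_after - z_before) ** 2)
    return dz2_max > threshold, dz2_max

# Toy usage: one particle moves noticeably between minimized configurations.
z0 = np.zeros(100)
z1 = z0.copy()
z1[17] = 0.3
print(is_plastic_event(z0, z1))
```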
## Acknowledgements
PC, SK, SS and VVV acknowledge financial support from NSM grant DST/NSM/R&D_HPC_Applications/2021/29 as well as computational time on PARAM Yukti computing facility. SK acknowledges the Swarna Jayanti Fellowship, grants DST/SJF/PSA01/2018-19, and SB/SFJ/2019-20/05. SS acknowledge the Faculty Initiation Grant from IIT Roorkee and PARAM Ganga computational facility. VVV acknowledge the computational time on the Chandra cluster at IIT Palakkad. Further we would like to thank Srikanth Sastry and Peter Sollich for fruitful discussions.
|
2310.17733 | Kepler Bonus: Light Curves of Kepler Background Sources | NASA's \textit{Kepler} primary mission observed about 116 $deg^2$ in the sky
for 3.5 consecutive years to discover Earth-like exoplanets. This mission
recorded pixel cutouts, known as Target Pixel Files (TPFs), of over $200,000$
targets selected to maximize the scientific yield. The Kepler pipeline
performed aperture photometry for these primary targets to create light curves.
However, hundreds of thousands of background sources were recorded in the TPFs
and have never been systematically analyzed. This work uses the Linearized
Field Deblending (LFD) method, a Point Spread Function (PSF) photometry
algorithm, to extract light curves. We use Gaia DR3 as input catalog to extract
$606,900$ light curves from long-cadence TPFs. $406,548$ are new light curves
of background sources, while the rest are Kepler's targets. These light curves
have comparable quality as those computed by the Kepler pipeline, with CDPP
values $<100$ ppm for sources $G<16$. The light curve files are available as
high-level science products at MAST. Files include PSF and aperture photometry,
and extraction metrics. Additionally, we improve the background and PSF
modeling in the LFD method. The LFD method is implemented in the
\texttt{Python} library \texttt{psfmachine}. We demonstrate the advantages of
this new dataset with two examples; deblending of contaminated false positive
Kepler Object of Interest identifying the origin of the transit signal; and the
changes in estimated transit depth of planets using PSF photometry which
improves dilution when compared to aperture photometry. This new nearly
unbiased catalog enables further studies in planet search, occurrence rates,
and other time-domain studies. | Jorge Martinez-Palomera, Christina Hedges, Jessie Dotson | 2023-10-26T18:48:43Z | http://arxiv.org/abs/2310.17733v1 | # Kepler Bonus: Light Curves of Kepler Background Sources
###### Abstract
NASA's _Kepler_ primary mission observed about \(116\ deg^{2}\) in the sky for \(3.5\) consecutive years to discover Earth-like exoplanets. This mission recorded pixel cutouts, known as Target Pixel Files (TPFs), of over \(200,000\) targets selected to maximize the scientific yield. The Kepler pipeline performed aperture photometry for these primary targets to create light curves. However, hundreds of thousands of background sources were recorded in the TPFs and have never been systematically analyzed. This work uses the Linearized Field Deblending (LFD) method, a Point Spread Function (PSF) photometry algorithm, to extract light curves. We use Gaia DR3 as input catalog to extract \(606,900\) light curves from long-cadence TPFs. \(406,548\) are new light curves of background sources, while the rest are Kepler's targets. These light curves have comparable quality as those computed by the Kepler pipeline, with CDPP values \(<100\) ppm for sources \(G<16\). The light curve files are available as high-level science products at MAST. Files include PSF and aperture photometry, and extraction metrics. Additionally, we improve the background and PSF modeling in the LFD method. The LFD method is implemented in the Python library psfmachine. We demonstrate the advantages of this new dataset with two examples; deblending of contaminated false positive Kepler Object of Interest identifying the origin of the transit signal; and the changes in estimated transit depth of planets using PSF photometry which improves dilution when compared to aperture photometry. This new nearly unbiased catalog enables further studies in planet search, occurrence rates, and other time-domain studies.
Astronomy data analysis (1858); Astronomy databases (83); Time series analysis (1916); Exoplanets (498); Transits (1711); Light curves (918); Photometry (1234)

Jorge Martinez-Palomera

Christina Hedges

Jessie Dotson
## 1 Introduction
NASA's _Kepler_ mission delivered to the community one of the most precise time series datasets ever produced. During its primary mission, _Kepler_ observed more than \(200,000\) target stars (Borucki et al., 2010). _Kepler_ found more than \(2,600\) exoplanet candidates (Thompson et al., 2018), observed numerous supernovae from earliest stages of explosion (Olling et al., 2015; Garnavich et al., 2016; Li et al., 2019), and more than \(2,900\) eclipsing binary systems (Kirk et al., 2016). The Kepler mission had a significant impact on a range of astrophysical domains owing to its precise, accurate time series of a large sample of stars, for the \(3.5\) year prime mission. Yet its contribution to time domain astronomy is not finished. Thanks to the use of current catalogs and methods it is possible to significantly expand the volume of data products originating from Kepler's observations. This work presents a catalog of \(606,900\) light curves including \(406,548\) from new sources and \(200,352\) Kepler targets.
### Kepler _Target Selection Function_
Kepler's primary mission selected over 200,000 targets to maximize the yield of Earth-like exoplanet discoveries (Batalha et al., 2010). These targets were selected from approximately half a million stars brighter than 16th magnitude in the Kepler passband (\(K_{p}\)). The selection used stellar parameters to estimate the radius of the smallest planet detectable in the habitable zone, the
number of detectable transits, and samples per transit. These, combined with a crowding metric for the photometric aperture and the target brightness, resulted in a prioritization criterion that was used to rank and select the target list. The target list mainly focuses on main-sequence G-type stars (half of the target sample), with a large fraction of them brighter than magnitude \(K_{p}=14\). The target list also includes M-type dwarfs and a small sample of hot main-sequence O- and B-type stars. Using the Gaia DR2 catalog, Wolniewicz et al. (2021) found that Kepler's target selection is nearly complete for main-sequence stars brighter than \(K_{p}=14\) mag and that it is biased against binary systems. The same study found that, at the faint end, Kepler's selection favored cooler dwarfs. Additionally, the target selection effectively separated red giants from red dwarfs: there is a significant drop in the observed fraction of red giants at fainter magnitudes, particularly for low-luminosity, cool giants. The same work also found no significant bias in target kinematics.
### Kepler Data Products
Kepler data products are available in three categories, discussed below; Target Pixel Files (TPFs), Light Curve Files (LCFs) and Full Frame Images (FFIs).
During its primary mission, _Kepler_ observed seventeen 93-days periods named quarters. The Kepler instrument consisted of 21 science modules, each having 4 output channels, for a total of 84 CCD channels. The telescope rotated \(90^{\circ}\) every quarter which led to the same objects being observed in the same CCD channel every 4 quarters. The _Kepler_ telescope observed an approximately 116 squared-degree region of the sky at a cadence of 30 minutes. During the prime mission, pre-defined targets were downloaded as images and were converted to flux time-series on the ground. Target cutouts were centered on stars selected from the Kepler Input Catalog (KIC, Brown et al., 2011) and are typically 4 to 9 pixels around the target. The Kepler Science Data Processing Pipeline (Jenkins et al., 2010), produced two science products from these cutouts, the Target Pixel Files (TPFs) and the Light Curve Files (LCFs).
TPFs contain the time series at the pixel level and the aperture mask used to compute the photometry of the target. LCFs are flux time series of the target. Both data products were created for a short 1-minute and a long 30-minute cadence mode. Short cadence targets required more onboard storage and different processing on the ground, and so were used on high-value targets only. All short cadence targets also produced long cadence products. In this work, we will only consider the long cadence targets, as these are available for the full Kepler sample. We leave any discussion or treatment of short cadence targets to future work.
_Kepler_'s prime mission also downlinked single Full Frame Images (FFIs) of the entire field of view each month. FFIs were downloaded for calibration and diagnostic purposes (Van Cleve and Caldwell, 2016). FFIs have an exposure time of 30 minutes but were only captured each month, then in this work, we will not use them for time series. We leave any discussion of the benefits of FFI data for extracting time series for future work.
_Kepler_ light curves were extracted using Simple Aperture Photometry (SAP) with a pre-computed aperture mask. This aperture mask balanced the precision of the flux measurement while keeping the contamination from neighbors low. The LCFs also contain a corrected version of the SAP flux, the Presearch Data Conditioned Simple Aperture Photometry (PDCSAP, Smith et al., 2012), which corrects for the systematics of the instrument. PDCSAP light curves are corrected using vectors of common trends from targets on the same detector channel, and largely address the systematics introduced by effects such as differential velocity aberration and any spacecraft motion. Thanks to the instrument design, observation strategy, and data analysis, the pipeline delivered light curves with high precision, enabling the detection of transits with \(<10\) ppm depth.
### Improving Kepler Light Curves with Gaia and PSF Photometry
The _Kepler_ spacecraft was launched in 2009, after years of development. As such, the Kepler Input Catalog (KIC, Brown et al., 2011) predates the advent of the Gaia mission (Gaia Collaboration et al., 2016), and was assembled from earlier, less accurate catalogs.
Using the KIC, the _Kepler_ pipeline performed photometry and computed optimized apertures for every target source, providing metrics that characterize the completeness of the flux and the amount of contamination within the aperture. However, with updated knowledge from the Gaia catalog, we can now revisit these apertures and understand that many are significantly contaminated by fainter background sources (\(G>16\)) or by bright neighboring sources.
In total, the _Kepler_ pipeline produced light curves for more than \(206,000\) sources. However, more complete current catalogs such as Gaia Data Release 3 (Gaia DR3, Gaia Collaboration et al., 2022) list more than 1.4 million sources brighter than magnitude \(G=19\) around the pixel cutouts.
In this work, we revisit _Kepler_'s archival data to create a complete catalog of light curves using robust photometry, with our updated knowledge from Gaia. We use the Linearized Field Deblending (LFD) photometry method (Hedges et al., 2021) to create light curves of \(606,900\) sources. Of these, more than \(400,000\) correspond to newly extracted light curves of background sources, which doubles the number of _Kepler_ targets. The LFD method provides a fast yet robust approach to perform Point Spread Function (PSF) photometry in _Kepler_-like data. LFD models the image at the pixel level to create a PSF model of the sources in the scene. Here, the scene is defined as the collection of sources observed in a list of neighboring TPFs (hereafter also called a stack of TPFs). The LFD method introduces perturbations to a mean PSF model in order to correct instrumental signals such as spacecraft motion and optic changes. Both the PSF fitting and evaluation are modeled as a linear problem and solved using least-squares minimization. Through this, the LFD method is able to quickly estimate the PSF shape and perform PSF photometry.
The use of PSF photometry and current Gaia catalogs led to three main improvements over the original Kepler light curve catalogs. First, PSF photometry enables robust flux estimation and target deblending which is extremely relevant in crowded regions. These regions could be particularly problematic for aperture photometry due to close proximity of sources in the image and a varying range of source brightness contrast (the difference in magnitude between two nearby sources). Secondly, Gaia catalogs provide precise astrometry and an improved census of objects in the field when compared to the KIC which enables access to a larger volume of sources. Thirdly, a blind massive light curve extraction leads to a nearly unbiased catalog useful for a better characterization of planet occurrence rate as well as further time-domain studies.
Here we present Kepler Bonus (KBonus), a catalog of extracted light curves that includes Kepler Targets and background sources. All the light curves produced in this work are publicly available to the community as FITS Light Curve Files. These can be accessed via the Mikulski Archive for Space Telescopes (MAST) archive 1. We introduce new functionalities to the original LFD method to improve the PSF modeling and correction. These are available in version 1.1.4 of the Python package psfmachine2. Additionally, accompanying this article we publish the KBonus repository3 that shows examples of the processing pipeline and configuration files used for this work as well as an example of how to load the light curve files and its content.
Footnote 1: KBonus Kepler Background 10.17909/7jbr-w430
Footnote 2: psfmachine v1.1.4 [https://github.com/SSDataLab/psfmachine/tree/v1.1.4](https://github.com/SSDataLab/psfmachine/tree/v1.1.4)
Footnote 3: [https://github.com/jorgemarpa/KBonus/tree/main](https://github.com/jorgemarpa/KBonus/tree/main)
This article is structured as follows. Section 2 details the characteristics of the data used for this work as well as the steps followed to compute the PSF models, photometry, flux metrics, and light curves. In Section 3 we present our results: we characterize the quality of the extracted light curves, discuss the demographics of the resulting catalog, and showcase two science results using these light curves. In Section 4 we discuss the limitations of this work and in Section 5 the opportunities that this new unexplored dataset provides to the community. Finally, Section 6 summarizes this work.
## 2 Data Processing
We process the _Kepler_ data using the Python package psfmachine that performs Linearized Field Deblending (LFD) photometry Hedges et al. (2021), a newly introduced type of rapid PSF photometry. In this work, we further improve psfmachine by adding a background estimator to remove rolling band noise, PSF models estimated with Kepler's FFIs, and the use of custom basis vectors to correct the scene motion due to differential velocity aberration. In this section, we describe the data used for this work, the additional analysis introduced from the original LFD work as well as the new algorithms and modules added to psfmachine. For an in-depth explanation of how the photometry of each source is extracted, we direct the reader to Hedges et al. (2021).
### _Kepler's Target Pixel Files_
The Kepler pipeline delivered the observed data in the form of Target Pixel Files, a stack of pixels around each selected target for all observed cadences. We accessed a total of \(204,933\) TPFs from the MAST archive 4 as well as other relevant engineering data (see Section 2.3). Kepler's 17 observing quarters and the 84 output channels distributed across the focal plane provide a natural strategy to process the TPFs and isolate instrument systematics spatially and temporally. Therefore, we process the TPFs on a per-quarter, per-channel basis. Within each quarter/channel combination, we split the list of available TPFs into "batches" in order to make the model fit memory efficient.
Footnote 4: [https://archive.stsci.edu/missions-and-data/kepler/kepler-bulk-downloads](https://archive.stsci.edu/missions-and-data/kepler/kepler-bulk-downloads)
Each "batch" contains around 200 TPFs spatially sorted (i.e. 200 TPFs that are close on the detector). The batch size is not fixed due to the non-homogeneous
distribution of targets around the focal plane and the changing total number of targets observed across quarters. We found that using \(\sim 5\,000\) pixel time-series and \(\sim 400\) unique sources, which is typically reached with \(\sim 200\) TPFs, provides a robust fit of our mean and perturbed PSF model (see Sections 2.4 and 2.5). In some crowded regions like around the open clusters NGC 6819 and NGC 6791, fewer TPFs are needed to constrain the model, owing to the source density in these clusters.
### Source Catalog
The LFD method works by allowing the "scene" of stars to move as one, and each source to vary in brightness, but does not allow any individual source to move with respect to the others. The LFD method requires an astrometric catalog as input to fix the location of sources in the scene and to have a flux reference for each object. For this purpose, we use the Gaia DR 3 catalog. Gaia DR 3 provides a complete catalog between magnitudes \(G=12\) and \(G=17\). It offers an astrometric precision of 0.4 mas and a photometric precision of 6 mmag at magnitude \(G=20\)(Babusiaux et al., 2023). We query the Gaia DR 3 catalog with the center of each available TPF, a generous search radius of the cutout size plus \(16\arcsec\) (\(\approx\)4 _Kepler_ pixels) to allow sources off the TPF edge and a magnitude limit of \(G=19\). We propagate Gaia proper motions for every quarter observed by _Kepler_. We obtain a list of 1.4 million sources which acts as the input catalog for this work.
To increase efficiency, we perform a more conservative query to the input catalog using the psfmachine API. We allow sources up to \(4\arcsec\) away from the TPF edge, remove sources brighter than \(G=10\) to avoid saturated pixels and the nonlinear response of the CCD, and filter blended sources within \(1\arcsec\) by keeping the brighter object. Highly blended sources, closer than \(1\arcsec\), are difficult to successfully deblend, and imposing this filter helps to reduce the number of degenerate solutions. The resulting catalog of successfully extracted sources contains \(606,900\) entries. Of the total, \(200,352\) correspond to Kepler targets for which the Kepler pipeline produced light curve files. The remaining \(406,548\) objects correspond to background sources for which this work releases new light curves. Additionally, we perform a cross-match between the KIC and Gaia DR3 with a \(2\arcsec\) radius, accounting for proper motion, to identify original Kepler targets.
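A sketch of the per-TPF catalog query described above is shown below, using astroquery's ADQL interface to the Gaia archive; the magnitude limits are the ones quoted in the text, while the search radius and the simple epoch-propagation step are assumptions of this sketch (psfmachine handles these internally):

```python
import numpy as np
from astroquery.gaia import Gaia

Gaia.ROW_LIMIT = -1  # return all rows

def query_gaia_sources(ra0, dec0, radius_deg, epoch_yr):
    """Cone search around a TPF center, limited to 10 < G < 19 (see text)."""
    adql = f"""
        SELECT source_id, ra, dec, pmra, pmdec, phot_g_mean_mag
        FROM gaiadr3.gaia_source
        WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                           CIRCLE('ICRS', {ra0}, {dec0}, {radius_deg}))
          AND phot_g_mean_mag < 19 AND phot_g_mean_mag > 10
    """
    tab = Gaia.launch_job(adql).get_results().to_pandas()
    # propagate proper motions (mas/yr) from the Gaia DR3 reference epoch (2016.0)
    dt = epoch_yr - 2016.0
    tab["ra"] += dt * tab["pmra"].fillna(0) / 3.6e6 / np.cos(np.deg2rad(tab["dec"]))
    tab["dec"] += dt * tab["pmdec"].fillna(0) / 3.6e6
    return tab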
The apparent magnitude distribution of Kepler targets (Figure 1) shows evidence of the target selection from the prime mission. This is reflected in, for example, the number cutoff at \(G=16\) and the overdensity around \(G=13.8\) due to Sun-like stars being targeted. The apparent magnitude distribution of background sources shows no signs of selection bias based on star properties.
Figure 2 shows the spatial distribution of Kepler targets and background sources across the field of view. The two high-density regions in the Kepler targets are the NGC 6819 and NGC 6791 open clusters. In contrast, the density of background sources shows an increasing number count toward lower galactic latitudes.
We removed saturated pixels and bleed columns from the sample to avoid introducing uninformative data points to the fitting process. We used a conservative flux threshold of \(1.2\times 10^{5}\,e^{-}/s\) to flag saturated pixels and masked out up to three pixels around saturated ones to account for bleeding. Additionally, we masked out pixels within \(800\arcsec\) of extremely bright sources (\(G\leq 8\)), which typically exhibit halos due to internal reflections within the telescope. Sources that fall in these removed pixels are also ignored in the analysis.
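A minimal sketch of this masking step, assuming the flux image is a 2D NumPy array with the column (bleed) direction along axis 0; edge wrap-around from np.roll is ignored here for brevity:

```python
import numpy as np

SATURATION_LIMIT = 1.2e5  # e-/s, conservative threshold used in the text

def saturation_mask(flux_image, n_bleed=3):
    """Flag saturated pixels plus up to `n_bleed` pixels along the column
    direction to account for bleeding; True marks pixels to exclude."""
    saturated = flux_image > SATURATION_LIMIT
    mask = saturated.copy()
    for shift in range(1, n_bleed + 1):
        mask |= np.roll(saturated, shift, axis=0)
        mask |= np.roll(saturated, -shift, axis=0)
    return mask
```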
### Background Model
Kepler data show a moving background signal known as "rolling band" (Kepler Instrument Handbook, Van Cleve & Caldwell, 2016). This correlated signal is more likely to occur on certain channels, at certain times of the year due to changes in the thermal background, and is difficult to model or predict. The rolling band is observed as a shift in background level that moves almost parallel to the x-axis of the sensor. This artifact signal is small in amplitude, \(\sim 20\) counts per pixel, but coadds to a large signal for large aperture photometry, and can adversely affect quiet sources. Crucially, this background is an additive signal and so cannot be effectively removed by methods that divide out systematics (e.g. the CBV method).
Figure 1: G-band magnitude distribution of Kepler targets (red), KBonus background sources (green), and both (blue). Kepler targets are the stars defined by the primary mission. These are in the center of the TPF and the Kepler pipeline created light curves for each of them. KBonus Background sources are stars that are fully or partially contained within a Kepler TPF but are not primary targets and therefore the Kepler pipeline did not analyze them.
The pipeline-processed TPFs contain background-subtracted flux values as well as the subtracted background model computed by the _Kepler_ pipeline. Although the pipeline provides a good estimate of the background, it only addressed the rolling band issue by including a data quality flag (Clarke et al., 2020).
In order to model and remove this signal, we build a background model as a function of time and pixel rows.
Our method relies upon the strong row dependency of the rolling band signal to simplify the model and assume there is no signal in the orthogonal column direction. To constrain the model, we identify and model "background" pixels in the data set.
To identify "background" pixels, we use the source mask computed by psfmachine to find the pixels without a nearby source (see section 4.2 in Hedges et al., 2021), and we perform a sigma clipping to reject pixels that show significant variability. In addition, we augment the TPF pixel dataset with the mission background pixel data. The mission background data was taken during every quarter across every channel on a predefined grid distributed across each CCD (Van Cleve and Caldwell, 2016). Adding this dataset improve significantly the background model, especially in crowded regions where the TPF background pixel count is low. We take the median average of the pixels in the column direction to find the average time series at every unique observed pixel row.
We model the time series of the background pixels as two third-degree b-spline functions in both time (\(t\)) and pixel row number (\(y\)). We use knot spacings for the spline functions of 2 hours in the time direction, and 6 pixels in the row direction. This enables us to produce a flexible model that adapts to the fast-changing rolling band signal. This effectively builds a smooth model that averages values of pixels close in time and space.
We model the background of a batch of TPFs that have \(n_{tot}\) total pixels data, \(n_{bkg}\) of which are background pixels, and \(l\) cadences as follows. We build a design matrix \(\mathbf{X}_{bkg}\) using the combination of two spline functions in time (\(t\)) and pixel row positions of the background pixel time-series (\(y_{bkg}\)):
\[\mathbf{X}_{bkg}=vec\left(\begin{bmatrix}1\\ \mathbf{t}\\ \mathbf{t}^{2}\\ \mathbf{t}^{3}\end{bmatrix}[1\,\mathbf{y}_{bkg}\,\mathbf{y}_{bkg}^{2}\,\mathbf{y}_{bkg}^{3}]\right) \tag{1}\]
where \(vec()\) denotes the vectorization operation, which unrolls a matrix into a vector, and \(\mathbf{X}_{bkg}\) is a 2D vector with shape (\(l\times n_{bkg},16\)). We find the best fitting model using linear least squares (similarly as shown in Hedges et al., 2021, Appendix B). The resulting background model for each pixel and time \(\mathbf{\hat{f}}_{bkg}\) is given by:
\[\mathbf{\hat{f}}_{bkg}=\mathbf{X}_{\mathbf{bkg}}\cdot\mathbf{\hat{w}} \tag{2}\]
where \(\mathbf{\hat{w}}\) are the best-fitting weights and \(\mathbf{\hat{f}}_{bkg}\) denotes the best-fitting flux time series for the background pixels.
The same weights can now be applied to a design matrix \(\mathbf{X}\) of the pixel row positions of all pixels
Figure 2: Spatial distribution (2D histogram) of Kepler targets (left) and KBonus background sources (right) brighter than magnitude 19th in the G-band across the Kepler field of view. KBonus Background sources are stars that are fully or partially contained within a Kepler TPF but are not primary targets and therefore the Kepler pipeline did not analyze them. The over densities on the left correspond to particular stellar clusters, while on the right the over densities closely follow the true underlying distribution of stars.
\[\mathbf{X}=vec\left(\begin{bmatrix}1\\ \mathbf{t}\\ \mathbf{t}^{2}\\ \mathbf{t}^{3}\end{bmatrix}[1\,\mathbf{y}\,\mathbf{y}^{2}\,\mathbf{y}^{3}]\right) \tag{3}\]
to evaluate the model at every pixel as \(\mathbf{\hat{f}}=\mathbf{X}\cdot\mathbf{\hat{w}}\), where \(\mathbf{\hat{f}}\) is the background flux time series of every pixel in the batch of TPFs. Cadences where there is a significant, single deviation from this smooth model are identified and iteratively removed from the fit.
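As a concrete illustration, a minimal NumPy sketch of the polynomial form of this design matrix and its least-squares solution is given below; the released kbackground package additionally uses the b-spline bases and knot spacings described above, which this sketch omits:

```python
import numpy as np

def background_design_matrix(t, y):
    """Cubic-in-time x cubic-in-row design matrix, as in Eqs. (1) and (3).
    `t` and `y` are flattened per-sample arrays of the same length."""
    T = np.vstack([np.ones_like(t), t, t**2, t**3])   # (4, n)
    Y = np.vstack([np.ones_like(y), y, y**2, y**3])   # (4, n)
    return np.einsum("in,jn->nij", T, Y).reshape(len(t), 16)

# X_bkg = background_design_matrix(t_bkg, y_bkg)            # background pixels only
# w_hat, *_ = np.linalg.lstsq(X_bkg, f_bkg, rcond=None)     # Eq. (2) weights
# f_hat = background_design_matrix(t_all, y_all) @ w_hat    # evaluate on all pixels
```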
Figure 3 shows the column-wise binned flux data as a function of pixel row and time for both the data (left) and model (center), and average flux (right). The model is able to capture the rolling band signal at the end of the quarter moving vertically in the CCD (vertical pale blue lines).
To enable reproducibility, we package this model as a standalone simple Python package _kbackground_5.
Footnote 5: [https://github.com/SSDataLab/kbackground](https://github.com/SSDataLab/kbackground)
The background model is subtracted from the original flux data to model the rolling band and any background trend (e.g. see the slope in Figure 3) for each pixel. In this way, we obtain a zero-centered flux time series.
This rolling band model is adequate for our purposes but could be improved; we use only pixels that have no significant flux and do not model the sources simultaneously with the background. We use a simple spline model with fixed knot spacings rather than, for example, a Gaussian Process approach where hyper-parameters could be estimated. Finally, we average over the column dimension, removing any possibility of modeling the rolling band in the orthogonal direction. If there is any residual trend in this dimension, we will average over it.
### Point Spread Function Model
We used the PSF models computed in Martinez-Palomera et al. (2022), which used Kepler's FFIs to generate robust and detailed PSF models for each CCD channel and quarter. The FFIs are single-cadence (30-minute exposure) images over every pixel taken at the beginning, middle, and end of a quarter (approximately one per month). Martinez-Palomera et al. (2022) computed PSF models using Gaia EDR3 (Gaia Collaboration et al., 2021) sources with a limiting magnitude \(G=20\), which led to the use of \(\sim 12\,000\) sources and \(\sim 100\,000\) pixel data per CCD to fit the models. As noted in Martinez-Palomera et al. (2022), the PSF models were computed for quarters and channels where Kepler extended background (EXBA) masks are available, i.e. quarters 4 to 17 and all CCD channels but 5 to 8; therefore, we computed the missing models for all channels in quarters 0 to 3. The models are stored in a Zenodo 6 repository and are fully integrated into the psfmachine API.
Footnote 6: [https://doi.org/10.5281/zenodo.5504503](https://doi.org/10.5281/zenodo.5504503)
We evaluate the FFI PSF models in a pixel grid 10 times finer than the original Kepler pixel size of \(4^{\prime\prime}/pixel\) (i.e. \(0.4^{\prime\prime}/pixel\)) to find the factor by which the model needs to be scaled such that it integrates to one on the pixel grid from the stack of TPFs. This scaling factor encodes a combination of two effects, the finite integration due to the instrument pixel scale and the differences between Kepler \(K_{p}\) and Gaia \(G\) filters, as the model uses Gaia \(G\) band fluxes as prior (Hedges et al., 2021).
Figure 4 illustrates the PSF model and its residuals for quarter 5 channel 37. This corresponds to the PSF model fitted with the respective FFI data and evaluated at the positions of 250 TPFs. The PSF is fairly round for channels in the center of the Kepler field (like channel 37) with a slight elongation in one axis. The centroid of the PSF data is under \(\sim 0.3^{\prime\prime}\) in each axis, see red marker in Figure 4 top left panel. The scene centroid offset is computed as the mean of the offset values in each cadence. These cadence centroid offsets are estimated by averaging each data point (pixel) distance to its source coordinates (Gaia R.A. and Decl.) weighted by the Poisson uncertainty estimate. See Figure 2 in Martinez-Palomera et al. (2022) for a display of PSF models for all channels (quarter 5). This shows that in channels near the border of the field, the PSFs are significantly distorted, with elongation and characteristic spike patterns.
### Correcting the Scene Motion
In the original LFD method, the differential velocity aberration effect (Van Cleve and Caldwell, 2016) is corrected using a spatial model and a third-order polynomial in time to create a time-dependent model (see Section 4.5 in Hedges et al., 2021, for more details). This time-dependent model is used to "perturb" the PSF model at each cadence, shifting the scene in its entirety. In this work we refer to the model that extracts flux time-series using the average PSF as the "mean" model, and the model that extracts flux time-series using the average PSF having been perturbed as the "perturbed" or "corrected" model. The perturbed model accounts for small motions and slight changes in shape. However, this method is only applicable if the PSF is fairly stable and does not vary significantly. This is true for
Kepler's primary mission observations, but not for K2 observations where the reaction wheel failure caused a systematic jitter motion in the spacecraft.
An alternative approach to correct the scene motion is to fit the PSF model to each frame separately, building a unique PSF model for every cadence. This process would mean fitting every variable in the PSF model (\(<200\)) for all of the \(\sim 4,500\) _Kepler_ frames in a quarter, which adds up to \(\sim 900,000\) parameters. With the number of usable pixel data (\(\sim 3,000\)) in a stack of 250 TPFs, this problem is not sufficiently constrained, resulting in a noisier estimation of the PSF overall. This problem could be overcome with more sources and more pixels, in which case fitting PSFs individually per frame becomes more tractable and beneficial, but less computationally efficient. With the perturbation method approach, we fit a relatively small number of variables, \(<200\) for the mean PSF model and \(<1,000\) for the full perturbation model, leading to a well-constrained and robust model.
We found that the third-order polynomial in time used originally in the LFD method can be too flexible and can introduce large-scale, spurious trends in the corrected light curves. This polynomial also does not address systematics other than large-scale motion, for example, the characteristic "focus change" signal that happens after the spacecraft downlink data.
We implement an improved method to correct the scene motion and other instrumental signals. To find a reasonable solution we explored several approaches and their combinations. i) The centroid positions in each axis as basis vectors. These were either the mission-defined positional correction vectors (POS CORR, Van Cleve et al. 2016) or the centroid shifts computed by psfmachine via the momentum method (average weighted by the Poisson noise). ii) The components of common trends between the source pixels. These were estimated using principal component analysis (PCA) of the set of pixels belonging to aperture-extracted pixel time series of sources. iii) The mission Cotrending Basis Vectors (CBVs, Van Cleve et al. 2016). The CBVs were built by the mission pipeline (Smith et al. 2012). CBVs are built from the common trends across sources and contain multiple instrument systematic trends in sixteen basis vectors, systematics such as centroid shifts due to reaction wheel adjustments, focus changes due to data downlink, and others. For Kepler, single-scale CBVs are available in the MAST archive and combine systematics on different time scales, e.g. long time scales to capture trends such as differential velocity aberration or short time scales to capture focus changes. Based on our investigation, we find that, of the three approaches, CBVs perform best at removing velocity aberration and focus change without introducing spurious signals. We find that using the first four CBV vectors is sufficient to address the instrumental signal while keeping the dimensionality of the matrices low, and therefore computationally efficient. We apply a 2-day window smoothing b-spline function to each CBV vector to avoid introducing high-frequency noise from the CBVs into the corrected light curves. This smoothing step accounts for data discontinuities such as time gaps and value jumps. Figure 5 shows an example of the first four CBV components for channel 37, quarter 5.
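A minimal sketch of the CBV smoothing step, assuming the four CBV components have already been loaded as arrays sampled at the cadence times (in days); the gap and jump handling mentioned above is omitted:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def smooth_cbv(time, cbv, knot_spacing=2.0, k=3):
    """Smooth one CBV component with a cubic b-spline whose interior knots
    are spaced every ~2 days; `time` must be strictly increasing."""
    knots = np.arange(time[0] + knot_spacing, time[-1] - knot_spacing, knot_spacing)
    return LSQUnivariateSpline(time, cbv, t=knots, k=k)(time)

# smoothed = np.column_stack([smooth_cbv(time, cbv[:, i]) for i in range(4)])
```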
While our work and the _Kepler_ pipeline both use CBVs to address long-term trends, the methods used by each work are different. The _Kepler_ pipeline used CBVs to detrend each source individually (Van Cleve et al.2016). In our work, we apply CBVs to find the
Figure 3: Example of background model estimated using the kbackground package for a stack of 200 TPFs in channel 44 observed during quarter 5. The left (center) panel shows the pixel flux data (model) as a function of time, each row on the y-axis represents the median average time-series across column pixels. The color maps the flux deviation from the mean value across all cadences. Because the rolling band signal is more prominent in the row direction, we model it as a function of time and pixel row, and average over the column dimension. The right panel shows the flux value averaged from all background pixels as a function of time, the data is shown in black and the model in blue, red markers are cadences with an extra offset in the model due to outlier.
correction to the PSF model to best fit all sources in the batch of TPFs simultaneously, which is frequently \(>400\) sources. Fitting multiple sources prevents overfitting the velocity aberration for an individual source. Additionally, in our method, we fit a low-resolution model in time to improve computational efficiency. We use the CBVs to build our "perturbation matrix" which is then applied to the mean PSF model in all frames to track its changes. Figure 6 shows an example of the perturbation matrix. This matrix is multiplied into the PSF model in order to change it to best fit the data in time. The PSF changes from wider with significant wings to a narrower PSF, which the perturbation matrix is able to capture, see Hedges et al. (2021) for more discussion.
To build our perturbation matrix we bin data in time. This binning keeps the matrix small, thereby making memory usage and computing time low. By binning we reduced the time resolution from \(\sim 4,500\) frames in time to 90 frames. Once the binned version has been fit to the data to find the best fitting weights, the model can be evaluated at all the cadences. By binning the data
Figure 4: PSF data (left column), model (middle column), and residuals (right column) evaluated on a stack of 250 TPFs in channel 37 observed during quarter 5. The top row shows the data and model in Cartesian coordinates while the bottom row is in polar coordinates. Each data point corresponds to a pixel located \(\delta x\) (radius) and \(\delta y\) (angle) from the center of its corresponding source, and the color maps the normalized flux value. The red marker in the top left panel represents the magnitude of the PSF centroid offset which is \(<0.8^{\prime\prime}\) with respect to the origin. A small fraction of data points have larger residuals due to remaining contaminated pixels that are clipped during model fitting.
Figure 5: Example of first 4 CBV vectors for channel 37 and quarter 5. The colors are the different components, light colors are the original vector values, bold lines are the smooth version using a b-spline function, and vertical dashed lines show data gaps after the spacecraft performs data downlink.
in this way we are assuming the motion is smooth and uniform, and that any differences between the data and this model are Gaussian distributed, which holds largely true during Kepler's primary mission observations. In contrast, data from the K2 mission exhibit a strong pattern due to the roll motion of the spacecraft, therefore this binning approach may not be adequate.
The binned time sequence for each data point of the perturbed model is shown in Figure 7. The figure shows the changes in time of each uncontaminated pixel used to fit the model for a batch of 250 TPFs on channel 37 during quarter 5. The data in this figure are mean normalized, causing there to be a turnover point close to index 50.
The common trend (red to blue) is due largely to velocity aberration, focus change is evident as vertical clear stripes. Note that the magnitude and "sign" of this trend are different for each pixel, depending on whether a pixel is close to the center of a source or the wings, and whether the pixel is on the leading or lagging side of the target as it moves due to velocity aberration. The magnitude of the effect is commonly \(\approx\)20%.
Our time series model is shown in the middle panel, and is built from the PSF model for each source which has been perturbed and then fit to the image data to find the source flux. After the removal of the perturbed model in our approach, the pixel time series residuals markedly improved to \(\approx 2\%\).
Our perturbation matrix results in a well-regularized model that preserves real physical variability, such as stellar activity or long-period variables, and removes most of the systematic trends due to velocity aberration and focus change. This is crucially different to the _Kepler_ Pipeline approach, as we are using the pixel data to inform our fit of the systematics and prevent overfitting. See Appendix C for a direct comparison of long-period variable (LPV) light curves extracted with Kepler's PDCSAP and our PSF photometry.
### Flux Priors and Iteration
Since the LFD method uses linear modeling to fit the flux data, the solver can yield negative solutions that are mathematically correct. As negative flux values for stars are non-physical, we use narrowing priors to ensure the target flux remains positive. As discussed in Hedges et al. (2021), to estimate the flux of a source we solve the linear equation \(\mathbf{\hat{f}}=\mathbf{S}\cdot\mathbf{v}\), where \(\mathbf{\hat{f}}\) is our estimate of the pixel flux data, \(\mathbf{S}\) is the PSF model, and \(\mathbf{v}\) is a vector representing the intrinsic flux value of a source. Each source has a prior which is defined as a Gaussian with a mean (\(\mu_{\mathbf{v}}\)) and a standard deviation (\(\sigma_{\mathbf{v}}\)). \(\mu_{\mathbf{v}}\) and \(\sigma_{\mathbf{v}}\) are set as the Gaia G-band flux (\(F_{G}\)) and \(10\sqrt{F_{G}}\), respectively. The latter is a Poisson noise estimate, giving a fairly wide prior. In cases where the estimated flux of a source is negative, we narrow the priors for that source and its neighbors (up to \(5^{\prime\prime}\) apart) by reducing \(\sigma_{\mathbf{v}}\) by a factor of 2, constraining the fit. This narrowing is repeated three times, and any remaining negative sources are dropped from the source catalog. Then a final fit is done with only the remaining positive sources. While narrowing priors could potentially dampen intrinsic source variability in extreme cases, we find this approach to be adequate. With this iteration process, we are able to reduce the number of sources that return negative flux to about \(2-5\%\), depending on how crowded the area is. Ultimately, \(<5\%\) of the input sources are removed from the catalog due to negative flux estimations.
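The iteration can be summarized with the following sketch, where `solve_fluxes` stands in for the linear least-squares solve described above and `neighbors` lists, for each source, the indices of sources within \(5^{\prime\prime}\) (both are placeholders, not psfmachine API calls):

```python
import numpy as np

def narrow_priors(flux_gaia, solve_fluxes, neighbors, n_iter=3):
    """Iteratively narrow Gaussian flux priors for sources whose linear
    solution is negative; returns a keep-mask for the final positive fit."""
    mu = flux_gaia.copy()                  # prior mean: Gaia G-band flux
    sigma = 10.0 * np.sqrt(flux_gaia)      # wide, Poisson-like prior width
    for _ in range(n_iter):
        v_hat = solve_fluxes(mu, sigma)
        negative = np.flatnonzero(v_hat < 0)
        if negative.size == 0:
            break
        for i in negative:                 # narrow the offender and its neighbors
            sigma[i] /= 2.0
            sigma[neighbors[i]] /= 2.0
    return solve_fluxes(mu, sigma) >= 0    # drop sources that remain negative
```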
## 3 Results
This work presents a light curve catalog with \(606,900\) sources. Our light curve files provide three main types of photometry ("aperture photometry", "mean PSF", and "corrected PSF"), as well as centroid estimates, a background model, and a chi-square time series from the PSF model. Both the mean and corrected PSF photometry are computed with the methods explained above.
* "aperture" photometry is computed from an aperture mask estimated as in Martinez-Palomera et al. (2022), which optimizes contamination and completeness of the flux within the mask.
Figure 6: Perturbed PSF model for a stack of 250 TPFs in channel 37 quarter 5. The 4 panels show the spatial distribution of data points (top row) used to fit the perturbed model (bottom row) at the first (left column) and last cadence (right column). The shift in color shows the PSF profile change due to velocity aberration and other effects within the observing quarter.
* "mean PSF" photometry uses only the shape model loaded from the corresponding FFI and evaluated in the TPF stack data.
* "corrected PSF" photometry is obtained from the perturbed model fitted with the observed cadences in the TPF stack.
* centroid vectors are computed by correcting the Gaia coordinates with offsets estimated using the momentum method (weighted average by Poisson uncertainty estimate) at every cadence.
* background flux corresponds to the sum within the aperture of the model described in Section 2.3
* chi-square time-series corresponds to \(\chi^{2}=\sum{(\mathbf{f}_{model}-\mathbf{f}_{data})^{2}/\mathbf{f}_{data}}\), where \(\mathbf{f}_{model}\) is the perturbed PSF estimate of the pixel data, \(\mathbf{f}_{data}\) is the pixel flux, and the sum is over the pixel corresponding to the source. Chi-square time-series can be used both to diagnose where our extracted time-series may be imperfect, and any instances where the model does not fit well due to a changing PSF shape (Hedges et al., 2021).
Our light curve files contain the per-quarter light curves with the aforementioned measurements. Additionally, we provide a stitched version that contains the aperture, corrected PSF, and a flattened version of the PSF photometry. The latter was flattened with a 2-day window b-spline function, designed to better enable the community to perform transit searches. Appendix D provides a specification of the content of the light curve files.
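As a usage illustration, the sketch below opens one of the released multi-extension FITS files with astropy; the file name and the column names are placeholders, and the authoritative layout is given in Appendix D and on the MAST HLSP page (footnote 1):

```python
from astropy.io import fits
from lightkurve import LightCurve

# file name is a placeholder; download the actual files from the MAST HLSP page
with fits.open("hlsp_kbonus-bkg_example_lc.fits") as hdul:
    hdul.info()                          # inspect the available extensions first
    data = hdul[1].data                  # e.g. the stitched light curve extension
    # column names below are assumptions; check the header of the chosen extension
    lc = LightCurve(time=data["TIME"], flux=data["FLUX"], flux_err=data["FLUX_ERR"])

lc.normalize().plot()
```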
### Light Curve quality
To assess the quality of the photometric extraction and performance of the light curves presented in this work, we produce a series of metrics. This section details the extraction of quality metrics for both types of photometry (aperture and PSF) as well as noise metrics to measure the light curve accuracy.
#### 3.1.1 Quality Metrics
During the light curve extraction process, we compute two aperture quality metrics and three quality PSF metrics.
Similarly to the _Kepler_ pipeline we compute FLFRCSAP and CROWDSAP, as described in Martinez-Palomera et al. 2022. FLFRCSAP is the fraction of target flux contained in the photometric aperture over the total target flux. CROWDSAP is the ratio of target flux relative to the total flux within the photometric aperture including contaminating sources. These two metrics are computed using the evaluated PSF model on every source. It is important to highlight that extracted sources with only partial coverage in the pixel data, (i.e. sources partially outside of the pixel cutout) could have overestimated FLFRCSAP and CROWDSAP values. FLFRCSAP can be lower because it is estimated only with recorded pixel data, while the CROWDSAP values could not account for contaminants further than \(4^{\prime\prime}\) away from the TPF edge.
We generate three new metrics to describe the quality of our light curves:
* PSFFAC: how much of the total expected PSF was saved in the TPF (values of 0 to 1). Sources fully enclosed in a TPF will have values near 1
Figure 7: Mean normalized pixel time-series for uncontaminated pixel data in 250 TPFs in channel 37 quarter 5 (same as Figure 6). The 3 panels show the time series of each pixel data (left) after binning (as described in 2.5), the perturbed full model based on the _Kepler_ pipeline CBVs (middle), and the normalized residuals (left). The pixel time series are sorted by flux value with fainter pixels at the bottom. The middle panel shows the ”perturbed” version of the PSF model fit for all sources. Our model results in flux time series for each pixel, where the sources in the pixels are fit for flux at each time. The perturbed version of the full model is a good estimate of the data, as shown by the small residuals (\(\leq 2\%\)). Data points with significant residuals are either remaining contaminated pixels or belong to true variable sources.
(because finite integration values are slightly lower than 1). Background sources that are partially on the TPF have values between 0 and 1.
* PERTRATI: the ratio between the average flux from the mean model, and the average flux from the perturbed model. Sources with stable perturbed model have values close to 1. Values significantly different than 1 suggest a poor perturbation model mostly due to sparse fit data for the source.
* PERTSTD: the ratio between the standard deviation of the mean model, and the standard deviation of the perturbed model. Small values indicate a stable perturbed model that does not introduce large variations to the extracted light curve.
From the PSF model, we estimated the object PSF fraction (PSFFRAC) on the pixel data, this is how much of the expected PSF was saved in the TPF. By design, the Kepler targets have the entire PSF inside the TPF. Background sources can have partial PSF, especially objects near or outside the TPF edges. For these sources, the PSF fraction can also vary between observation seasons due to the change in the spacecraft pointing or changes in TPF size. This can yield to changes in photometric accuracy, leading to noisier light curves when the PSF fraction decreases. Due to this effect, we only provide stitched light curves from quarters with a PSF fraction larger than 0.5 to avoid the use of low-quality photometry. We still include all the extracted quarters in the light curve FITS file, regardless of low PSFFRAC values.
To measure the effects of introducing the perturbation PSF model, we compare it against the mean PSF model estimated early in the process. We computed the ratio between the mean PSF and the perturbed PSF and took the mean (PERTRATI) and the standard deviation (PERTSTD). These metrics measure how much the perturbed model deviates from the mean PSF and the introduced variance. Both metrics can be used to filter light curves where the perturbed model introduces artifacts. In this case, we recommend defaulting to the photometry fitted with the mean PSF model.
#### 3.1.2 Photometric Noise
The _Kepler_ pipeline introduced a metric to estimate the noise quality of light curves: the Combined Differential Photometric Precision (CDPP). We use the estimate of the CDPP metric included in the lightkurve Python package, which implements a simpler version of CDPP (Gilliland et al., 2011; Van Cleve et al., 2016). We compute this metric for every extracted source in this work. Figure 8 shows the estimated CDPP values as a function of G-band magnitude for all sources with a PSF fraction larger than 0.5. A large number of sources brighter than \(G=16\) have CDPP values under 100 ppm, which is comparable to values estimated from the PDCSAP light curves computed by the _Kepler_ pipeline.
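For reference, the 6-hour CDPP of any of the released light curves can be re-estimated with lightkurve as in the sketch below; with _Kepler_'s 30-minute cadence, a 6-hour window corresponds to roughly 12 cadences (`time` and `flux` are placeholder arrays):

```python
from lightkurve import LightCurve

lc = LightCurve(time=time, flux=flux).remove_nans().normalize()
cdpp_ppm = lc.estimate_cdpp(transit_duration=12)   # ~6 hours at 30-minute cadence
print(f"6h-CDPP: {cdpp_ppm:.1f}")
```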
Figure 8 shows there is a turnover point where PSF photometry becomes more accurate than aperture photometry, at approximately G=13.25, indicating a significant benefit in precision. The high-density horizontal ridge at \(log_{10}(6\)h-CDPP) \(\sim-3.5\) between 12th and 14th magnitude shows CDPP values about one order of magnitude larger than the main trend. An inspection of the
Figure 8: 6-hour estimated CDPP in parts-per-million as a function of G-band magnitude from the perturbed PSF (top) and aperture (middle) light curves. CDPP estimates the noise properties of _Kepler_ data, where lower values indicate a more precise light curve. The bottom panel shows a direct comparison of PSF and aperture CDPP values. The black and orange lines follow the CDPP distribution maximum in a brightness bin (0.1 magnitude width). As expected, the noise increases as flux decreases, reaching the 100 ppm value for objects fainter than \(G=16\), where the Kepler target selection bias is shown as a hard cutoff in bin counts. The gray dashed line shows the point at 13.25 mag where on average PSF photometry achieves lower CDDP values than aperture photometry. See Section 3.1.2 for details.
Color-Magnitude diagram (CMD, see Section 3.2 for details) showed that these sources correspond to red giant stars near the horizontal branch, see Figure 9. This sample of red giants is about 1.6% of the total catalog.
* The PSFs of each source vary in shape in a way that is not captured in the model, (e.g. sources of different colors have weakly different PSF shapes)
To assess when light curves are significantly correlated with each other, we compute the Pearson coefficient \(r\) between pairs of light curves.
We found all pairs of time series within \(60\arcsec\) from each other and then removed the long-term trend from the light curves using a third-degree polynomial in time, (removing any long-term variability due to residual systematics, while preserving periodic variability).
We compute the Pearson correlation coefficient between the time-series pairs. Figure 10 shows the distribution of statistically significant (p-value \(<0.05\)) coefficients as a function of pair distance. High values of \(r\) indicate that the pairs are significantly correlated. The majority of pairs have values \(r<0.15\), meaning no significant correlation. Pairs with values \(r\sim 0.4\) demonstrated no visual correlation after inspection. Almost all correlated pairs (\(r>0.5\)) are within \(25\arcsec\), which relates to the typical size of a TPF (\(\sim 5\) pixels across) meaning correlated pairs are likely found in the same TPF. Less than 1% of pairs fall in the correlated region (\(r>0.5\) and \(d<25\arcsec\)). Pairs with \(r>0.5\) beyond \(25\arcsec\) do not show correlated signals and the r values are likely due to remaining monotonic trends. For a correlated pair, we assume the brighter source is the true variable, and the fainter gets contaminated. We opt to remove from our light curve catalog the faint source (\(\sim 1\%\) from the total data set) from every correlated pair.
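A compact sketch of the pair test, assuming the two fluxes are sampled on the same cadences; the polynomial detrending mirrors the third-degree fit described above:

```python
import numpy as np
from scipy.stats import pearsonr

def pair_correlation(time, flux_a, flux_b, deg=3):
    """Pearson r (and p-value) between two light curves after removing a
    low-order polynomial trend from each, as done for the pair analysis."""
    res_a = flux_a - np.polyval(np.polyfit(time, flux_a, deg), time)
    res_b = flux_b - np.polyval(np.polyfit(time, flux_b, deg), time)
    return pearsonr(res_a, res_b)

# pairs with r > 0.5 at separations < 25 arcsec are treated as contaminated,
# and the fainter member is removed from the catalog
```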
### Sources Demographic
This work presents the first catalog of light curves using observations from _Kepler_'s prime mission nearly without a selection bias. The number of new light curves (\(>400,000\)) doubles the total delivered by the _Kepler_ pipeline (\(\sim 200,000\)). Figure 11 shows the color-magnitude diagrams (CMD) using Gaia DR3 photometric bands and distances computed by Bailer-Jones et al. (2021). As a comparison, Figure 11 shows _Kepler_ targets only (top left), new sources (top right), all sources in the catalog (bottom left), and the ratio between both samples (bottom right). The KBonus Background sample has significantly more sources around the main sequence region, particularly toward redder colors and the binary sequence. The number of new sources is smaller than _Kepler_'s target for some evolved stars, particularly for luminous red giants, where the addition of new sources is \(\sim 10\%\). However, there is a significant increase in new low-luminosity red giants. This reflects the target selection bias imposed by the Kepler mission
Figure 9: _Top_: color-magnitude diagram of KBonus sources (black) and a sample of evolved red giant and horizontal branch stars (blue) that show consistently higher CDPP values with respect to the main distribution. _Bottom_: 6h-CDDP values as a function of G-band magnitude from the perturbed PSF model. The figure shows sources after removing the above-mentioned evolved stars. The distinctive high-density band around \(log_{10}\)(6h-CDPP) \(\sim-3.5\) seen in Figure 8 is not present.
that favored luminous red giants instead of cool, low-luminosity giants.
The Kepler mission targeted approximately \(3,700\) M-dwarf stars. In this work, we expand the catalog with almost \(27,500\) new light curves in the M-type dwarf region of the CMD. We follow the prescription presented in Bentley et al. (2018), based on Gaia, WISE, and 2MASS bands (if available), to select potential M-dwarfs, combined with cuts over the characteristic stellar temperature range of M-type stars using Gaia's effective temperatures. Additionally, there are 50 new light curves in the white dwarf (WD) sequence, in addition to the previously extracted 41 WD Kepler targets.
### Confirmed Exoplanets
We compared the estimated transit depth of previously confirmed Kepler exoplanets between Kepler's PDCSAP and our PSF light curves. We select all exoplanets with Archive Disposition 'CONFIRMED' from the NASA Exoplanet Archive (NASA Exoplanet Archive, 2022). To compare the transit depth measured directly on PDCSAP and PSF light curves, we performed a Box Least-square (BLS, Kovacs et al., 2002) periodogram using a dense grid around the reported periods. The BLS method searches for periodic variability by fitting the data with an upside-down top-hat periodic model and it has been extensively used to analyze transiting signals (e.g. Prsa et al., 2011; Akeson et al., 2013; Foreman-Mackey et al., 2015). Both PDCSAP and PSF light curves were flattened beforehand to remove stellar variability using 2-day window b-spline functions while masking out cadences with transits.
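The depth comparison can be reproduced with astropy's BLS implementation, as sketched below; the period grid width and duration grid are illustrative choices, and `time` and `flat_flux` stand for the flattened light curve described above:

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

def bls_depth(time, flat_flux, reported_period):
    """Transit depth from a BLS periodogram on a dense grid around the
    literature period (durations in days are illustrative values)."""
    periods = np.linspace(0.99 * reported_period, 1.01 * reported_period, 2001)
    durations = np.linspace(0.05, 0.5, 10)
    result = BoxLeastSquares(time, flat_flux).power(periods, durations)
    best = np.argmax(result.power)
    return result.depth[best], result.period[best]
```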
Figure 12 shows the comparison in transit depth between both PDCSAP and PSF light curves. Overall the computed transit depths are consistent. A Skewness value of 5 for the ratio between depth values (PSF over PDCSAP) means the PSF light curves yield slightly deeper transits. This is expected, as some of the Kepler apertures could be contaminated by nearby sources, which is addressed by the PSF photometry. The 6h-CDPP values from both light curves are also consistent.
We measure the impact of the transit depth change on the estimated planet radius by scaling literature planet sizes by a factor given by the ratio between the radius estimates from the PSF and the PDCSAP light curves. We found a minor change in the exoplanet population towards a tighter distribution in planet size (see Figure 13), particularly for orbital periods longer than 20 days, but no significant change in the overall distribution of planet sizes. Further analysis of planet size populations will require complete exoplanet modeling using up-to-date host stellar parameters and Bayesian inference. We leave this analysis for a future study.
### Revisiting False Positives KOIs
To demonstrate the potential unlocked by this new light curve catalog, we examine a sub-sample of Kepler Object of Interest (KOIs) that were flagged as centroid offset false positive exoplanet candidates. These are light curves where it is likely the aperture is contaminated by a background eclipsing source, likely an eclipsing stellar binary, one of the main sources of contamination when searching for exoplanets via the transit method. Thanks to the use of a (nearly) complete source catalog (down to G magnitude 19th) and PSF photometry we are able to separate blended sources up to \(1\arcsec\). In this section, we present two KOI examples where we are able to separate the eclipsing sources. See Appendix Section B for more KOI examples. A full analysis of all the false positive KOIs (\(>1800\)) is left for future work.
#### 3.4.1 Koi 770.01
KOI 770.01 is a false positive candidate with a reported transit period of 1.506 days and a 2211 ppm depth, but it was flagged with a centroid offset. The top panels in Figure 14 show the Kepler PDCSAP light curve (red line) computed from a 4-pixel aperture mask (red mask on the pixel image). We generate two PSF time-series for this dataset, for the two sources blended in the image (shown in the pixel image by a black and blue
Figure 10: Pearson correlation coefficient between light curve pairs as a function of distance during quarter 5. Light curve pairs are made of every possible pair of sources within \(60\arcsec\). Higher coefficients mean a significant correlation between light curves. The break in density around \(25\arcsec\) is due to the typical size of a TPF. The high-density feature with \(r<0.2\) represents the majority of non-correlated pairs. The bulge feature between \(7-20\arcsec\) and \(r<0.3\) is due to the typical distance between pairs in a TPF, while the drop in density within \(5\arcsec\) is due to isolated sources in a TPF and the rejection of predicted negative fluxes (see Section 2.6).
marker). Our PSF light curve (black line) of this source (black marker) does not show the transiting signal. The neighbor source _Gaia DR3 2134870879540928896_ (blue marker in the pixel image) is 1.36\({}^{\prime\prime}\) from the KOI and 2.6 magnitudes fainter. The PSF photometry for the contaminant shows a clear eclipse signal at the same period (blue light curve). Given the shape of the transit and its depth, the contaminant is a potential EB. Our PSF photometry successfully separates these two highly blended sources at high contrast.
#### 3.4.2 KOI 909.01
Similar to the previous case, KOI 909.01 is also a centroid-offset false positive. Figure 15 shows the pixel image and the light curves of the target and neighbor. The transit depth is 4147 ppm with a period of 16.37 days. The source of contamination in the target aperture is a neighbor Kepler target, KIC 8256044, flagged as an EB. While these two sources are not highly blended (the separation between the stars is 8\({}^{\prime\prime}\), or 2 pixels), flux from KIC 8256049 leaked into the candidate's aperture. The PSF photometry is able to successfully deblend both light curves. The difference in amplitude of the stellar variability seen between the Kepler PDCSAP and
Figure 11: Color-magnitude diagrams of the KBonus catalog presented in this work. Only Kepler targets are in the top left panel, new background sources are in the top right panel, and the combination of both data sets is in the bottom left panel. These three panels are 2D histograms with a 10 counts threshold and scatter data points elsewhere. The fraction of new sources over Kepler targets in the CMD is shown in the bottom right panel. The discrete colors map regions where the background sources are less than 10%, or more than 10%, 50%, 100%, 200%, and 1000% when compared to the number of Kepler targets. Only sources with Gaia parallaxes \(>\) 0.001 mas and valid BP and RP magnitudes are displayed. Notable regions in the CMD are the red clump, the binary sequence, and the main sequence, where the addition of new sources is substantial. This is a result of an unbiased selection of sources in the KBonus catalog.
our PSF photometry for KIC 8256049 is mostly due to aperture contamination.
## 4 Limitations
Due to the assumptions made throughout this work, some limitations arise for the PSF model, the perturbation model, and the light curves. Here we list and discuss some of these limitations.
1. The light curve catalog is limited to sources fainter than G magnitude 10 and brighter than G magnitude 19. For blended sources within 1'' of each other, only the brighter object was extracted while the fainter one was removed. Users can use the psfmachine API to extract sources outside the aforementioned ranges, although fine-tuning of model parameters could be required for the PSF model to work outside the linear response range of the CCDs.
2. Sources that are near the edge of the TPFs or outside of them are fitted using partial data. Although PSF photometry is still able to extract them, the precision is not optimal. Due to seasonal pointing accuracy, changes in the TPF shape, or high proper motion, sources around the edge of the TPFs can have a different fraction of their flux on the pixel data across quarters. This is reflected as a change in photometric precision between quarters. To minimize the risk of using subpar-precision light curves, we only stitched quarters with PSFRAC \(\geq 0.5\). Users can still access the light curves for all quarters, as they are provided in the multi-extension FITS files. Additionally, the aperture photometry, as well as the extraction metrics FLFRCSAP and CROWDSAP, for these partial sources are underestimated.
3. The PSF models are fitted, as described in Hedges et al. (2021), by solving a linear model as a function of positions. This approach does not account
Figure 12: _Top_: comparison of confirmed exoplanet transit depths measured from Kepler PDCSAP light curves (x-axis), as delivered on the NASA Exoplanet Archive, and the ratio (y-axis) of the transit depths measured in our PSF light curves and the PDCSAP light curves, from quarter 5 only. The right panel shows the distribution of ratios, which is centered near one with a median value of 1.003 and slightly skewed (5.05) to values above 1, i.e., slightly larger transit depths are recovered by our PSF light curves, which is to be expected if some targets are contaminated. _Bottom_: the density distribution shows the ratio, in logarithmic space, of the CDPP values computed from the PSF and PDCSAP light curves. The red vertical line marks the ratio value 1 where both light curves have the same CDPPs. Negative values mean the PSF light curve has less noise, while positive values mean higher CDPP values than the PDCSAP light curves. The distribution is offset towards negative values, indicating PSF light curves are less noisy.
Figure 13: Planet radius as a function of the orbital period for the sample of confirmed exoplanets shown in Figure 12. The red squares correspond to the planet radii reported in the NASA Exoplanet Archive. The black points are the planet radii scaled by the ratio between the measured radius from the PSF and PDCSAP light curves. The right panel shows the planet size density distributions in the same color scheme. A small change in the planet size distribution around \(R_{P}\sim 2\) for periods longer than 20 days appears to tighten up the distribution. The significance of this effect needs to be tested with a complete exoplanet analysis, which is out of the scope of this work. The figure is intentionally limited to \(R_{P}<10\) in order to see the most populated region of the parameter space.
for the change in shape due to source brightness. Brighter sources can have a slightly different profile shape than fainter sources. We evaluated the option of a flux-dependent PSF model, but the changes were noticeable only in the outer regions of the PSF wings and at a minor scale. The latter can become relevant for sources showing high-amplitude variability, such as LPVs, where the change in the PSF profile can impact amplitude measurements.
Figure 14: KOI 770.01 false positive due to centroid offset. The image shows an example cadence of the pixel data (TPF), highlighted in red is the aperture selected by the Kepler pipeline, the black dot shows the position of the candidate target, and the blue dot shows the position of the neighbor. The light curve panels show the time-based (middle) and period folded (right) time series for the Kepler PDCSAP light curve (top), our PSF photometry for the candidate (middle), and the contaminant (bottom) following the same color code as above. KIC or Gaia identifiers, period values, magnitudes, and pair distances are detailed in the panel titles.
Figure 15: KOI 909.01 false positive due to centroid offset. Legends are the same as Figure 14. The contaminant is KIC 8256044 which was labeled as an EB.
4. The PSF model could also depend on the CCD location. We tested a PSF model with an additional dependency on the pixel column and row position with respect to the center of the field of view and did not find significant changes in PSF shape across the CCD. This is expected for Kepler observations, where the sky coverage of a single CCD (\(\sim 1.2\)\(deg^{2}\)) is relatively small. However, for larger field-of-view instruments such as the cameras of the Transiting Exoplanet Survey Satellite (TESS, Ricker et al., 2015), whose CCDs cover \(\sim 144\,deg^{2}\), the PSF changes significantly within the CCD, making this model dependency necessary.
5. The PSF model for CCD channels at the border of the field of view often exhibits extreme distortions and prominent features (see Figure 2 in Martinez-Palomera et al. (2022) for a display of PSF models across CCDs) that could affect the model performance. We tested light curves created with PSF models varying their flexibility (number of spline knots) and their center. These alterations mimic possible miscalculations of the centroids due to distorted PSF shapes and a lack of model fidelity when steep gradients in the PSF profile are present. We computed several metrics, such as median flux, linear trend slope, amplitude, CDPP, and multiple flux percentile ratios, to assess the stability of the extracted light curves. We found that even when the PSF centroid is missed by less than 2\({}^{\prime\prime}\) (half a pixel) or when the model struggles with drastic gradient changes (e.g. a PSF shape with two close "leg" features), the light-curve metric distributions are consistent between models. This shows that our models are statistically robust to model parameters and small imperfections. However, some exceptions occur for highly blended and high-contrast sources. The latter is the case for Tabby Star (KIC 8462852, G = 11.6; Boyajian et al., 2016) and a fainter (G = 17.6) contaminant star (Gaia DR3 2081900944807842560) located less than 2\({}^{\prime\prime}\) away. For both, our method produced imprecise photometry levels across quarters that could only be overcome by fitting each source alone (i.e. removing the other from the input catalog). Although this approach is useful when working with a specific target, it is not optimal when performing massive source extraction.
6. As described in Section 3.1.3, correlations can still be found between highly variable bright sources and fainter neighbors, especially when they are close on the detector. We computed a correlation metric across pairs of light curves and removed all faint sources that showed a correlation metric above the threshold. Although this metric is effective in removing correlated sources, small residual correlations can still be present, leading to attributing the variability to the wrong source. We encourage users of these light curves to further analyze neighbor sources to secure the true origin of the variability.
7. Although the perturbation model removes most of the velocity aberration and focus-change trends, because the model is fitted for the entire scene these trends are not fully suppressed in every target. This is particularly true for highly blended sources and for sources with partial data, where only the wings of the PSF are used to fit the models.
8. The LFD photometry method relies on solving a linear model by means of least-squares minimization. This simplifies and enables rapid model fitting and light curve extraction, but no physical constraints are placed on the expected flux values, therefore negative solutions are mathematically possible. We mitigate this issue by iterating the solving step while narrowing the priors (see Section 2.6) for sources with predicted negative fluxes and their contaminating neighbors (which force the negative solution). Sources that still have negative fluxes after the iteration is completed are rejected from the final catalog. We found this affects mainly sources with mid- to high-contrast (\(\geq 2\) mag) neighbors within 15\({}^{\prime\prime}\).
Figure 16 illustrates the combined extraction biases discussed above in (1), (2), (4), and (7). The figure shows the magnitude contrast and distance distribution for pairs of sources from the input and extracted catalogs. The former is denser for low-contrast blends between 5 and 13\({}^{\prime\prime}\), mainly because sources slightly outside the TPFs are rejected from the extracted catalog due to insufficient pixel data (2) and, to a lesser extent, due to predicted negative fluxes (7). The decrease in density beyond 15\({}^{\prime\prime}\) seen in the extracted catalog is due to the typical size of TPFs. The absence of pairs within 1\({}^{\prime\prime}\) is due to the selection bias described in (1). The apparent larger number counts in the extracted catalog for pairs with mid- to high-contrast are mainly due to (1) and (2) and partially to (4).
## 5 Future Work
A natural step forward is to use the psfmachine library to extract light curves from the K2 (Howell et al., 2014) and TESS missions.
K2 data presents one major challenge: the failure of the spacecraft's reaction wheels caused a loss of telescope pointing precision, leading to a strong and characteristic jitter motion with a half-day timescale. This jitter motion drastically affects our perturbed model. First, the scene motion is no longer smooth, and the binning done to fit the perturbed model needs a higher time resolution to capture the motion. Increasing the time resolution of the perturbed model increases memory usage and computing time. Secondly, the CBV vectors are likely not the best basis vectors to fit the perturbed model. Preliminary results have shown that using the centroid (or the mission positional corrector pos_corr) vectors leads to better-corrected light curves. An alternative approach is to compute PSF models and offset corrections for every cadence. Fitting a PSF model per cadence requires a large number of objects and pixels available, which can only be achieved by increasing the number of TPFs or when working with K2 superstamp pixel masks (Cody et al., 2018), adding to computing costs.
By its design, the LFD method and its Python implementation psfmachine work well with TESS data after fine-tuning model parameters that account for the differences in pixel scale (TESS is \(21^{\prime\prime}\)/pix), integration times, and crowding effects. Although it is tempting to compile large catalogs of light curves for the entire TESS archive (several TB of TPF data), we believe that providing users with a well-built and robust Python library able to quickly extract light curves (by using pre-computed models) or with full control of model parameters represents a bigger contribution to the community. Moreover, there are other active pipelines extracting similar PSF photometry from TESS primary and extended mission data. Han and Brandt (2023) follow a similar approach using Gaia DR3 as the input catalog and fitting the effective PSF and background signal as a single linear model. This model is then fitted to every source but the extracted target to create a model of the full image; this model is subtracted from the data, and photometry is then performed on the decontaminated image of the target source. This approach is limited by the assumption that the background level is constant at the target's location and that the stars around it are constant. By design, the LFD method does not assume this and could improve on light curve precision.
The current version of the psfmachine API implements loading PSF profile models pre-computed from Kepler's FFIs, as discussed in Section 2.4. This enables users to quickly perform PSF photometry on single sources or on a small number of TPFs. However, this is limited to the use of the mean-PSF model and not the full perturbed-PSF model. This limitation is acceptable for Kepler data, where the perturbed-PSF model can only be fitted with a moderate number (\(\geqslant 150\)) of TPFs and not with the FFIs, due to the low number of FFI cadences per quarter. TESS FFIs are observed with a 30- or 10-minute cadence. These data present the opportunity to compute and save the perturbed model for posterior extraction of light curves from any TESS data. We plan to extend the psfmachine API to implement the saving and loading of the perturbed-PSF model, resulting in a way to extract time-series from individual TPFs using our best-fit perturbation model. These new methods will speed up the photometry extraction using the fully corrected model, especially when extracting a small number of targets.
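As a rough illustration of this use case (not part of our pipeline), the sketch below downloads a handful of Kepler TPFs with lightkurve and extracts PSF light curves with psfmachine; the target name, quarter, and search parameters are arbitrary examples, and the exact call signatures should be checked against the psfmachine documentation, as they may change between versions.

```python
import lightkurve as lk
import psfmachine as pm

# Gather a set of neighboring long-cadence Kepler TPFs around an arbitrary target.
tpfs = lk.search_targetpixelfile(
    "Kepler-16", mission="Kepler", quarter=5, radius=1000, limit=200, cadence="long"
).download_all(quality_bitmask=None)

# Build the scene model from the TPF collection and fit PSF light curves
# (this uses the mean-PSF model; the perturbed model needs many more TPFs).
machine = pm.TPFMachine.from_TPFs(tpfs)
machine.fit_lightcurves()  # the extracted light curves are then available on the machine object
```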
Light curve extraction from TESS FFIs is also possible with psfmachine, with the caveat that this process is considerably more memory intensive due to the loading of thousands of 2048 x 2048 pixel images. A tractable solution is to combine processing the FFIs in small cutouts (e.g. a 200 x 200 pixel cutout has sufficient sources to estimate robust PSF models) with the use of pre-computed models.
## 6 Summary
Kepler's primary mission consisted of eighteen 90-day quarters during which the telescope constantly observed the same field of view for almost 4 years. These observations enabled the community to find thousands of new exoplanets using the transit method, as well as to perform numerous stellar variability and transient studies. The Kepler mission delivered more than 200,000 image cutouts around previously selected targets together with their aperture photometry light curves. In this work, we reanalyze the
Figure 16: Histogram of magnitude contrast as a function of distance for pairs of sources. The left panel shows the density of all the available Gaia sources (\(G\leq 19\)) on the TPFs, which constitute our input catalog. The right panel shows the density of extracted sources in the KBonus catalog presented in this work. This figure shows the extraction biases discussed in Section 4 in (1), (2), (4), and (7).
image cutouts and extract PSF photometry light curves for all sources detected in the pixel data. We created a catalog with 606,900 extracted sources, of which 406,548 are new light curves from background sources. These background sources are objects detected in the pixel data that do not correspond to Kepler targets. In our extraction pipeline, we used the LFD photometry method (Hedges et al., 2021). The LFD method performs PSF photometry on a collection of TPFs by modeling the scene simultaneously. It leverages the accuracy and precision of the Gaia catalogs to fix the source locations and estimate a PSF model of the scene. The method also computes corrections to the PSF model to account for the scene motion due to the velocity aberration effect, focus change, and pointing instabilities. Our extraction pipeline includes background modeling and subtraction, PSF fitting and photometry, aperture photometry using the PSF profile shape, and numerous extraction metrics useful to characterize the quality of the data. The light curves produced in this work are available for public access via the MAST archive. We added new methods and routines to the Python package psfmachine, such as the background modeling, PSF model loading from models pre-computed using FFI data, and user-defined basis vectors for the perturbation model. These new features are included in v1.1.4 of psfmachine.
We demonstrated that the quality of our light curves reaches accuracy levels similar to those delivered by the Kepler pipeline. The computed CDPP values range from tens of ppm for sources brighter than G = 14 to hundreds of ppm for sources between 16th and 18th magnitude. Statistically, PSF photometry performs up to 40% better in CDPP value compared to aperture photometry for sources fainter than G = 13.25. We listed and discussed the limitations of our extraction pipeline and the resulting light curves. These serve as guidelines for users of this dataset.
We show two applications as examples of what can be accomplished with these high-level science products. First, we compared the transit depths and estimated exoplanet radii between the PDCSAP and our PSF light curves. The results suggest that the PSF photometry yields slightly deeper transits and therefore larger planets, although we did not find significant changes to the planet size-period relationship. Secondly, we show examples of the power of PSF photometry to deblend contaminated sources by revisiting KOI false positives due to background binary contamination. The LFD photometry method successfully separates highly blended sources at high contrast, which is relevant to distinguishing false positives from real exoplanet candidates. This new dataset presents numerous other opportunities to the community, not limited to exoplanet studies: to name a few, there are 50 new white dwarf light curves which add to the original 41 in the Kepler target list, thousands of light curves of potential M-dwarf stars, and the possibility of expanding asteroseismic analyses of rotating stars.
This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. Funding for this work for JMP is provided by grant number 80NSSC20K0874, through NASA ROSES.

Facility: Kepler

Software: astropy (Astropy Collaboration et al., 2013), lightkurve (Lightkurve Collaboration et al., 2018), numpy (Harris et al., 2020), psfmachine (Hedges & Martinez-Palomera, 2021), scipy (Virtanen et al., 2020)
## Appendix C KBonus Light Curve Examples: Long Period Variables
We selected ten LPVs from the Gaia DR3 variable catalog (Lebzelter et al., 2022) to illustrate the inter-quarter photometry. The figures in this appendix show the light curve examples together with the PDCSAP photometry for comparison. Thanks to the perturbed model (see Section 2.5), which fits the scene velocity aberration and long-term instrumental trends, stellar long-term variability is preserved, and the photometry from consecutive quarters matches in almost all cases.
## Appendix D KBonus Light Curve Files
The light curve files are delivered as multi-extension FITS files following a similar organization as the original Kepler LCFs. Each file is named with the following pattern: hlsp_kbonus-bkg_kepler_kepler_<source_id>_kepler_v1.0_lc.fits, where source_id is the corresponding Kepler Input Catalog number (e.g. kic_005018361) if the source exists in that catalog, or the Gaia DR3 number (e.g. gaia_dr3_2077321719392686592) if not. Table 4 shows a description of the FITS files. The LIGHTCURVE_STITCHED extension is the main extension containing the fully stitched time series using quarters with a PSF fraction greater than 0.5; Table D2 details the columns in this extension. The LIGHTCURVE_Q extensions contain the light curves for single quarters; Table D3 details their content, which includes per-quarter metrics and other measurements such as centroid values. The APERTURE_Q extension contains the pixel mask of the corresponding quarter used for the aperture photometry, in the shape of the TPF of origin. The FITS files only have LIGHTCURVE and APERTURE extensions for quarters where the source was detected; therefore, the number of extensions varies.
The FITS files are structured to work seamlessly with the lightkurve (Lightkurve Collaboration et al., 2018) package. In this way, users can easily load the stitched light curve into a LightCurve object. See the KBonus documentation7 for further details on how to work with these files.
Footnote 7: [https://github.com/jorgemarpa/KBonus/tree/main](https://github.com/jorgemarpa/KBonus/tree/main)
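A minimal usage sketch is given below; the file name is the KIC 5018361 example from the pattern above, and we assume that lightkurve's generic reader recognizes the HLSP format, as implied by the statement above.

```python
import lightkurve as lk
from astropy.io import fits

fname = "hlsp_kbonus-bkg_kepler_kepler_kic_005018361_kepler_v1.0_lc.fits"

# List the extensions (LIGHTCURVE_STITCHED plus per-quarter LIGHTCURVE_Q / APERTURE_Q).
with fits.open(fname) as hdul:
    hdul.info()

# Load the stitched light curve into a lightkurve LightCurve object and plot it.
lc = lk.read(fname)
lc.plot()
```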
Figure A4: Confirmed exoplanet Kepler-770 b with an orbital period of 18.93 days. Similar to Figure A1.
## Appendix E KBonus Source Catalog
The catalog released with this work contains the list of extracted sources, each resulting in a light curve FITS file. It also contains extraction metrics and availability flags that can be used to filter sources. Table 1 shows all the fields available in this catalog.
## Appendix F Data Bundles
To facilitate data access to specific source types, we have created the following data bundles containing the light curves and the source catalog:
* **M-dwarfs**: contains a total of \(29,800\) sources. We follow the object selection described in Section 3.2.
* **KOIs and neighbors**: packages the light curves of objects listed in the NASA Exoplanet Archive, including confirmed and false positive candidates. It also includes the light curves of neighbor sources within a 30'' radius around each KOI. This data bundle is useful for users who wish to explore false positive candidates and their neighbors.
* **White dwarfs**: contains a total of 91 light curves as described in Section 3.2.
The files are stored in the Mikulski Archive for Space Telescopes (MAST)8 and can be downloaded in bulk mode.
Footnote 8: KBonus Kepler Background 10.17909/7jbr-w430
|
2308.02612 | A pedestrian approach to Einstein's formula $E=mc^2$ with an application
to photon dynamics | There are several ways to derive Einstein's celebrated formula for the energy
of a massive particle at rest, $E=mc^2$. Noether's theorem applied to the
relativistic Lagrange function provides an unambiguous and straightforward
access to energy and momentum conservation laws but those tools were not
available at the beginning of the twentieth century and are not at hand for
newcomers even nowadays. In a pedestrian approach, we start from relativistic
kinematics and analyze elastic and inelastic scattering processes in different
reference frames to derive the relativistic energy-mass relation. We extend the
analysis to Compton scattering between a massive particle and a photon, and a
massive particle emitting two photons. Using the Doppler formula, it follows
that $E=\hbar \omega$ for photons at angular frequency $\omega$ where $\hbar$
is the reduced Planck constant. We relate our work to other derivations of
Einstein's formula in the literature. | A. V. Nenashev, S. D. Baranovskii, F. Gebhard | 2023-08-04T13:57:39Z | http://arxiv.org/abs/2308.02612v1 | # A pedestrian approach to Einstein's formula \(E=mc^{2}\) with an application to photon dynamics
###### Abstract
There are several ways to derive Einstein's celebrated formula for the energy of a massive particle at rest, \(E=mc^{2}\). Noether's theorem applied to the relativistic Lagrange function provides an unambiguous and straightforward access to energy and momentum conservation laws but those tools were not available at the beginning of the twentieth century and are not at hand for newcomers even nowadays. In a pedestrian approach, we start from relativistic kinematics and analyze elastic and inelastic scattering processes in different reference frames to derive the relativistic energy-mass relation. We extend the analysis to Compton scattering between a massive particle and a photon, and a massive particle emitting two photons. Using the Doppler formula, it follows that \(E=\hbar\omega\) for photons at angular frequency \(\omega\) where \(\hbar\) is the reduced Planck constant. We relate our work to other derivations of Einstein's formula in the literature.
## I Introduction
The energy-mass relation for a particle of mass \(m\) at rest (\(c\): speed of light),
\[E_{0}=mc^{2} \tag{1}\]
is one of the most popular formulas in physics. It is the basis for our understanding of the energy production by fusion in stars and by fission in nuclear power plants. For this reason, it is desirable to make it accessible to beginners, not only at university level but preferably already at high-school level. Consequently, physicists seek to provide an elementary derivation of the famous formula (1), as reflected in the actual titles of Einstein's paper from 1935 [1] and of Rohrlich's paper from 1990 [2].
The notion of "elementary derivation" implies at least three points.
1. all physical concepts are stated clearly;
2. notions extrinsic to mechanics, e.g., those from electrodynamics or even quantum mechanics, are avoided;
3. sophisticated mathematics are kept to a minimum.
Naturally, all derivations must be correct. This is not always guaranteed; see the comment by Ruby and Reynolds [3], who pointed out that the non-relativistic Doppler formula used by Rohrlich is insufficient to derive the relativistic relation (1). Moreover, the derivations should be self-contained and not use short-cuts that are justified only a posteriori.
The second requirement appears impossible to meet because Einstein's formula invokes the speed of light \(c\). In the preface to the celebrated textbook The Classical Theory of Fields by Landau and Lifshitz [4], where relativity and electromagnetism are treated in one volume, the authors explicitly state that 'A complete, logically connected theory of the electromagnetic field includes the special theory of relativity, so the latter has been taken as the basis of the presentation.' Likewise, Einstein's original considerations [5] in 1905 are not based on classical mechanics alone. Einstein considers the emission of electromagnetic waves by a massive object. He concludes from the energy balance that when the object loses some amount \(\Delta E\) of its energy, it simultaneously loses the amount \(\Delta m=\Delta E/c^{2}\) of its mass,
\[\Delta E_{0}=\Delta mc^{2}. \tag{2}\]
In his arguments, Einstein relies on Maxwell's electrodynamics, more precisely on the results from the electrodynamic part of his famous work 'Zur Elektrodynamik bewegter Körper' [6]. Also, the relativistic formula for kinetic energy is obtained by Einstein from electrodynamic arguments, namely, from the equations of motion of a charged particle in an electric field [6]. This little historical excursion demonstrates how closely the relativistic theory is tied to the theory of electromagnetism.
Already in 1906, Planck recognized that the relativistic dynamics fits into the framework of the principle of least action [7]; according to Pais [8] this was most probably the first work on Einstein's special relativity not written by Einstein himself. Then, in 1907, Minkowski gave a talk in which he introduced four-vectors in spacetime and showed that the kinetic energy of a massive body is related to the temporal component of the body's four-velocity [9]. Thus, very soon after the invention of special relativity there appeared at least three 'points of support' that permit one to derive relativistic dynamics (energy, momentum, etc.) from relativistic kinematics (time dilation, length contraction, Lorentz transformations, etc.). These 'points of support' are electrodynamics, the principle of least action, Noether's theorem [10], and vectors in Minkowski spacetime.
Albeit of fundamental importance, the concepts introduced by Planck and Minkowski are much more involved than relativistic kinematics alone, and could be seen as an obstacle for beginners who just want to understand relativistic dynamics. One line of argument that avoids electrodynamics, the principle of least action, Noether's theorem, and the notion of four-vectors is based on the analysis of particle collisions. Already in 1909, Lewis and Tolman [11] used a collision argument to prove the relativistic expression for the momentum. Their proof is generally adopted in various textbooks, such as Spacetime Physics by Taylor and Wheeler [12], The Feynman Lectures on Physics [13], and, in a somewhat restricted version, the Berkeley Physics Course [14]. Interestingly, even when they rely on particle collisions, the authors of textbooks prefer to introduce the relativistic formulas for momentum and energy _ad hoc_, and use thought experiments with collisions only as supporting arguments. For example, in Spacetime Physics [12] the authors postulate that momentum and energy are just parts of the four-vector of relativistic momentum, and write down the corresponding expressions for them. Only at the end of the chapter, as an exercise, they provide a derivation of the relativistic momentum through a collision experiment. This reflects a 'Babylonian' approach to physics [15, p. 47] rather than the 'Greek' method in mathematics [16].
In this work we shall employ relativistic kinematics and analyze two-particle scattering processes to derive the energy-momentum relation that leads to Einstein's formula (1). Since we shall analyze scattering processes with very simple geometries, we require only first- and second-order Taylor expansions and the solution of simple first-order differential equations as mathematical tools to turn the Babylonian approach into a Euclidean one.
Our paper is organized as follows. In Sec. II we review the properties of point particles in classical mechanics and collect the Lorentz transformation and relativistic Doppler formulas. In Sec. III, to set a point of reference, we briefly derive the dynamics of a single particle using the principle of least action, and identify momentum and energy using Noether's theorem [4; 10]. In Sec. IV we provide the pedestrian derivation of those formulas using only basic concepts of classical mechanics, as outlined in Sec. II, applied to two-particle scattering. Using the relativistic Doppler effect, we show in Sec. V from particle-photon (Compton) scattering that a photon with angular frequency \(\omega\) has the energy \(E=\hbar\omega\), where \(\hbar\) is the reduced Planck constant [17]. The same result can be obtained from particle-antiparticle annihilation. Furthermore, in Sec. VI, we briefly review other approaches to derive Einstein's formula. Short conclusions, Sec. VII, close our presentation. Mathematical derivations are deferred to three appendices.
## II Kinematics: point particle and Lorentz transformation
Momentum and energy are concepts of particle dynamics. Before we address the equations of motion of a single particle in Sect. III and derive Einstein's formula in Sect. IV, we first recall the basic concept of a point particle in classical mechanics. Next, we collect the Lorentz transformation formulas for the transformation of coordinates and velocities between two inertial systems. Since we address photon dynamics in Sect. V, we also collect the formulas for the relativistic Doppler effect in the present section.
### Point particle in classical mechanics
The first axiom of mechanics defines the setting of space-time: space-time is four-dimensional, i.e., an event is given by a point \(P\) with four coordinates \((t,x,y,z)\) in some (inertial) reference frame.
The second axiom in classical mechanics states that a particle is at some spatial point \(\mathbf{r}_{1}\) at time \(t_{1}\) and arrives at some other spatial point \(\mathbf{r}_{2}\) at time \(t_{2}>t_{1}\) whereby the world-line that contains all intermediate points \(P(t)=(t_{1}\leq t\leq t_{2},\mathbf{r}(t))\) is continuous and (at least) twice differentiable with respect to the time \(t\).
Kinematics describes the functional dependence of \(\mathbf{r}(t)\) on the time \(t\).
### Lorentz transformation
Apparently, kinematics requires the use of coordinate systems. However, the choice of a reference system seems to prefer one coordinate system over the other. The third axiom of classical mechanics, the Galilean principle of special relativity, states that this must not be the case: the equations of motion must have the same functional form in all inertial reference frames. A frame that moves with constant velocity \(\mathbf{u}\) with respect to an inertial frame also is an inertial frame. The necessity of inertial frames is overcome in the theory of general relativity.
What remains unspecified in the axiom is the transformation of coordinates between two such inertial frames. Maxwell's equations describe the propagation of light. They are form-invariant under coordinate transformations if time and space transform according to the Lorentz transformation formulas. Lorentz transformations guarantee that the relativistic distance between two events is the same in all coordinate systems. An event \(P\) is described in \(K\) by the coordinates \((t,x,y,z)\) and in \(K^{\prime}\) by the coordinates \((t^{\prime},x^{\prime},y^{\prime},z^{\prime})\). The invariant distance between two events \(P_{1}\) and \(P_{2}\) is given by
\[s_{12}^{2}=c^{2}(t_{2}-t_{1})^{2}-(x_{2}-x_{1})^{2}-(y_{2}-y_{1})^{2}-(z_{2}- z_{1})^{2}\;, \tag{3}\]
and \(s_{12}^{2}=(s_{12}^{\prime})^{2}\) must hold with
\[(s_{12}^{\prime})^{2}=c^{2}(t_{2}^{\prime}-t_{1}^{\prime})^{2}-(x_{2}^{\prime}-x_ {1}^{\prime})^{2}-(y_{2}^{\prime}-y_{1}^{\prime})^{2}-(z_{2}^{\prime}-z_{1}^{ \prime})^{2}\;, \tag{4}\]
where we use the coordinates of the two events \(P_{1}\) and \(P_{2}\) in the two different reference frames. When the two events are infinitesimally close, \(({\rm d}s)^{2}=c^{2}({\rm d}t)^{2}-({\rm d}x)^{2}-({\rm d}y)^{2}-({\rm d}z)^{2}\) is the invariant distance. Note that the velocity of light is the same in all reference frames, \(c^{\prime}=c\) (Einstein's axiom of the invariance of the speed of light).
To simplify the discussion, we assume that the two reference frames \(K\) and \(K^{\prime}\) coincide at time \(t=t^{\prime}=0\), and that \(K^{\prime}\) moves with velocity \(\mathbf{u}=u\mathbf{e}_{x}\). An event \(P\) is then described in \(K\) by the coordinates \((t,x,y,z)\) and in \(K^{\prime}\) by the coordinates \((t^{\prime},x^{\prime},y^{\prime},z^{\prime})\). The Lorentz transformation provides the relation between the coordinates,
\[x=\frac{x^{\prime}+ut^{\prime}}{\sqrt{1-u^{2}/c^{2}}}\;,\quad y=y^{\prime}\;, \quad z=z^{\prime}\;,\quad t=\frac{t^{\prime}+ux^{\prime}/c^{2}}{\sqrt{1-u^{2} /c^{2}}}\;. \tag{5}\]
For infinitesimal distances, one simply has to replace \((t,x,y,z)\) by \(({\rm d}t,{\rm d}x,{\rm d}y,{\rm d}z)\) and \((t^{\prime},x^{\prime},y^{\prime},z^{\prime})\) by \(({\rm d}t^{\prime},{\rm d}x^{\prime},{\rm d}y^{\prime},{\rm d}z^{\prime})\).
The velocities of a particle are given by \(\mathbf{v}={\rm d}\mathbf{r}/({\rm d}t)\) in \(K\) and \(\mathbf{v}^{\prime}={\rm d}\mathbf{r}^{\prime}/({\rm d}t^{\prime})\) in \(K^{\prime}\). Therefore, they transform according to
\[v_{x}=\frac{v_{x}^{\prime}+u}{1+v_{x}^{\prime}u/c^{2}}\;,\quad v _{y}=v_{y}^{\prime}\frac{\sqrt{1-u^{2}/c^{2}}}{1+v_{x}^{\prime}u/c^{2}}\;,\\ v_{z}=v_{z}^{\prime}\frac{\sqrt{1-u^{2}/c^{2}}}{1+v_{x}^{\prime}u /c^{2}}\;, \tag{6}\]
when we use the Lorentz transformation for the coordinates (5) in their infinitesimal form.
### Doppler effect
Light is described by electromagnetic plane waves with frequency \(\omega\) and wave vector \(\mathbf{k}\) as solutions of the Maxwell equations in the absence of external sources, e.g.,
\[\mathbf{E}(\mathbf{r},t)=\mathbf{E}_{0}\cos(\omega t-\mathbf{k}\cdot\mathbf{r}) \tag{7}\]
for the vector of the electric field where \(\mathbf{E}_{0}\) is a three-dimensional vector with real components. Apparently, \(\mathbf{E}(\mathbf{r},t)\) has extrema and zeros when the phase
\[\varphi(\mathbf{r},t)=\omega t-\mathbf{k}\cdot\mathbf{r} \tag{8}\]
is a multiple of \(\pi/2\). The number of zeros or the number of maxima/minima is independent of the reference frame so that the phase must be a Lorentz scalar. This implies that \((\omega,\mathbf{k})\) form a relativistic four-vector in the same way as \((t,\mathbf{r})\) so that its components transform analogously to eq. (5),
\[k_{x}=\frac{k_{x}^{\prime}+u\omega^{\prime}/c^{2}}{\sqrt{1-u^{2} /c^{2}}}\;,\quad k_{y}=k_{y}^{\prime}\;,\quad k_{z}=k_{z}^{\prime}\;,\\ \omega=\frac{\omega^{\prime}+uk_{x}^{\prime}}{\sqrt{1-u^{2}/c^{2}}} \tag{9}\]
when \(K\) and \(K^{\prime}\) move with constant velocity \(\mathbf{u}=u\mathbf{e}_{x}\) relative to each other. When the light also travels along the \(x\)-axis to the right, we have \(k_{y}=k_{z}=0\) and \(k_{x}=k=\omega/c\) from the dispersion relation
\[\omega=|\mathbf{k}|c\;. \tag{10}\]
Therefore, we obtain the relativistic Doppler formula for the frequency shift,
\[\omega=\omega^{\prime}\sqrt{\frac{1+u/c}{1-u/c}} \tag{11}\]
between the frequencies measured in \(K\) and \(K^{\prime}\). When the light travels to the left (\(k_{y}=k_{z}=0\) and \(k_{x}=-k=-\omega/c\)), the signs '\(+\)' and '\(-\)' in eq. (11) swap their places.
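This last step can be double-checked with a short symbolic computation (a Python/sympy sketch, not part of the derivation; since both sides of eq. (11) are positive, it suffices to compare their squares):

```python
import sympy as sp

u, c, w_p = sp.symbols("u c omega_prime", positive=True)

# Insert k'_x = omega'/c into the frequency transformation of eq. (9) ...
omega_lab = (w_p + u * w_p / c) / sp.sqrt(1 - u**2 / c**2)
# ... and compare with the Doppler formula of eq. (11).
omega_doppler = w_p * sp.sqrt((1 + u / c) / (1 - u / c))

print(sp.simplify(omega_lab**2 - omega_doppler**2))  # prints 0
```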
## III Dynamics: Lagrange formalism
The ultimate goal in classical mechanics is to derive the motion of particles from basic principles, i.e., to formulate equations from which the particle trajectory \(\mathbf{r}(t)\) can be deduced. Newton's original formulation was superseded by the Lagrange and Hamilton formulation because the underlying Hamilton principle of least action constitutes the basis of present-day theoretical physics.
### Particle mass
To describe particle dynamics, Newton assigns a second defining property to a point particle, namely its (inertial) mass \(m\). Below we shall assume that
1. the non-relativistic momentum and (kinetic) energy of a particle are proportional to the mass \(m\);
2. the mass is a scalar under Lorentz transformations;
3. a particle with mass \(M\) can decay into two particles with mass \(m\leq M/2\).
Property 1 seems self-evident because it requires twice the force to push two mugs of beer over a counter compared to pushing a single one. Moreover, we tacitly assume that we do not gain or lose liquid when looking at the mug from different reference frames (property 2), and we know that we can split a liquid into equal volumes without losing any (property 3).
On a more fundamental level, the generation of inertial mass requires an understanding of the interaction of relativistic particle fields with the Higgs field. Even more intricate is the notion of a gravitational mass and its equivalence to the inertial mass. We will not dwell on these fundamental issues here but move on to the equations of motion that govern the particle dynamics.
### Euler-Lagrange equations
Modern physics is based on the principle of least action. For a point particle in classic mechanics, the action \(S\) along a path \(\mathbf{R}(t)\) with velocity \(\dot{\mathbf{R}}(t)\) within the time interval \([t_{1},t_{2}]\) reads
\[S=\int_{t_{1}}^{t_{2}}\mathrm{d}tL(\mathbf{R},\dot{\mathbf{R}},t)\;. \tag{12}\]
Here, \(L\) is the Lagrange function that depends only on the particle coordinates \(\mathbf{R}(t)\), velocities \(\dot{\mathbf{R}}(t)\), and time \(t\). Consequently, the particle acceleration must be a function of the particle position and velocity only, i.e., the particle motion is deterministic.
To find the realized trajectory \(\mathbf{r}(t)\), the principle of least action states that \(S\) is stationary with respect to small variations of the realized trajectory whereby all trajectories start and end at the points \(P_{1}=\mathbf{r}(t_{1})\) and \(P_{2}=\mathbf{r}(t_{2})\), respectively. For this reason, the Lagrange function is not unique. For example, the variation does not change when we add a constant \(C\) to \(L\), i.e., using \(\tilde{L}=L+C\) in \(S\) leads to the same realized trajectory as using \(L\).
As shown in textbooks, the realized trajectory \(\mathbf{r}(t)\) fulfills the Euler-Lagrange equations
\[\frac{\mathrm{d}}{\mathrm{d}t}\left.\frac{\partial L}{\partial\dot{\mathbf{R}}} \right|_{\mathbf{R}=\mathbf{r},\dot{\mathbf{R}}=\dot{\mathbf{r}}}\;=\left.\frac{\partial L}{ \partial\mathbf{R}}\right|_{\mathbf{R}=\mathbf{r},\dot{\mathbf{R}}=\dot{\mathbf{r}}}\;. \tag{13}\]
The equations (13) constitute Newton's second law.
For example, in non-relativistic mechanics, a single point-particle has the Lagrange function
\[L^{\mathrm{nr}}(\mathbf{R},\dot{\mathbf{R}},t)=\frac{m}{2}\dot{\mathbf{R}}^{2}\;. \tag{14}\]
When inserted in eq. (13), the Euler-Lagrange equations read
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(m\dot{\mathbf{r}}\right)=\mathbf{0}\;, \tag{15}\]
i.e., a free particle moves along a straight line (Newton's first law).
### Energy and momentum
One big advantage of the Lagrange formalism over Newton's formulation of classical mechanics lies in the fact that conserved quantities like _momentum_ and _energy_ are well defined. As shown by Noether [10], when the Lagrange function is invariant under translations in space (time), there is a conserved quantity called _momentum_ (_energy_). Thus, these objects and their conservation laws result from the homogeneity of space and time.
The simplest example is the non-relativistic point particle in Sec. III.2 where eq. (15) expresses that the momentum
\[\mathbf{p}^{\mathrm{nr}}=m\mathbf{v}\;, \tag{16}\]
with \(\dot{\mathbf{r}}=\mathbf{v}\) on the realized trajectory, is a conserved quantity for a non-relativistic free particle, i.e., \(\mathbf{p}^{\mathrm{nr}}\) does not change in time.
Apparently, momentum conservation is based on the fact that the Lagrange function does not depend on \(\mathbf{r}\). The theory reflects the fact that space is homogeneous so that \(L\) does not change under translations. Therefore, for a general Lagrange function \(L=L(\dot{\mathbf{R}})\) the conserved momentum is defined by
\[\mathbf{p}=\left.\frac{\partial L(\dot{\mathbf{R}})}{\partial\dot{\mathbf{R}}}\right|_{\dot{\mathbf{R}}=\dot{\mathbf{r}}}\;. \tag{17}\]
We shall derive the relativistic Lagrange function in the next subsection III.4 and thus readily find the expression for the relativistic momentum of a point particle.
Homogeneity in time implies that \(L\) does not explicitly depend on time, \(L=L(\mathbf{R},\dot{\mathbf{R}})\). The resulting conserved quantity is the energy (or Jacobi integral),
\[E=\dot{\mathbf{r}}\cdot\left.\frac{\partial L(\mathbf{R},\dot{\mathbf{R}})}{\partial\dot{ \mathbf{R}}}\right|_{\mathbf{R}=\mathbf{r},\dot{\mathbf{R}}=\dot{\mathbf{r}}}-L(\mathbf{r},\dot{\mathbf{r} })\;. \tag{18}\]
For the non-relativistic point particle in Sec. III.2 we thus find \((\dot{\mathbf{r}}=\mathbf{v})\)
\[E^{\mathrm{nr}}=\dot{\mathbf{r}}\cdot(m\dot{\mathbf{r}})-\frac{m}{2}\dot{\mathbf{r}}^{2}= \frac{m}{2}\mathbf{v}^{2}\equiv T^{\mathrm{nr}}\;, \tag{19}\]
the well-known expression for the (kinetic) energy of a non-relativistic point particle.
### Action for a particle in Minkowski space
Axiomatically, the action \(S\) is a scalar under Lorentz transformations. The only infinitesimal scalar for a single particle on the world line from \(P_{1}\) to \(P_{2}\) is the infinitesimal distance \(\mathrm{d}s\) between two points on the world-line. Therefore,
\[S=-\alpha\int_{P_{1}}^{P_{2}}\mathrm{d}s=-\alpha c\int_{t_{1}}^{t_{2}} \mathrm{d}t\sqrt{1-\dot{\mathbf{R}}^{2}/c^{2}}\;. \tag{20}\]
Thus, we know the relativistic Lagrange function up to a constant \(\alpha>0\) that is determined from the comparison
with the non-relativistic limit. Indeed, we can read off the relativistic Lagrange function from eq. (20),
\[L(\mathbf{R},\dot{\mathbf{R}},t)\equiv L(\dot{\mathbf{R}})=-\alpha c\sqrt{1-\dot{\mathbf{R}}^{2}/c ^{2}}. \tag{21}\]
For small velocities \(|\dot{\mathbf{R}}|\ll c\), the Taylor expansion leads to
\[L(\dot{\mathbf{R}})\approx-\alpha c\left(1-\frac{\dot{\mathbf{R}}^{2}}{2c^{2}}\right)= -\alpha c+\frac{\alpha}{2c}\dot{\mathbf{R}}^{2}. \tag{22}\]
The comparison with the non-relativistic Lagrange function in eq. (14) shows that we must set \(\alpha=mc\) to arrive at \(L(\dot{\mathbf{R}})\approx L^{\text{nr}}(\dot{\mathbf{R}})+C\).
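The Taylor expansion (22) and the matching of the quadratic term can be reproduced with a few lines of sympy (a sketch; the symbol names are ours):

```python
import sympy as sp

v, c, alpha, m = sp.symbols("v c alpha m", positive=True)

L_rel = -alpha * c * sp.sqrt(1 - v**2 / c**2)

# Expand eq. (21) for small velocities, cf. eq. (22).
print(sp.series(L_rel, v, 0, 4))   # -alpha*c + alpha*v**2/(2*c) + O(v**4)

# Matching the quadratic term with the non-relativistic m*v**2/2 fixes alpha.
print(sp.solve(sp.Eq(alpha / (2 * c), m / 2), alpha))   # [c*m], i.e. alpha = m*c
```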
### Energy and momentum for a single particle
With the relativistic Lagrange function from eq. (21),
\[L(\dot{\mathbf{R}})=-mc^{2}\sqrt{1-\dot{\mathbf{R}}^{2}/c^{2}} \tag{23}\]
and the results from Sec. III.3 we can readily determine the conserved momentum of a single point particle with mass \(m\), see eq. (17),
\[\mathbf{p}=\frac{m\mathbf{v}}{\sqrt{1-\mathbf{v}^{2}/c^{2}}}=\gamma m\mathbf{v} \tag{24}\]
with the relativistic factor
\[\gamma=\frac{1}{\sqrt{1-\mathbf{v}^{2}/c^{2}}}. \tag{25}\]
The particle's conserved energy reads, see eq. (18),
\[E=\mathbf{v}\cdot\frac{m\mathbf{v}}{\sqrt{1-\mathbf{v}^{2}/c^{2}}}+mc^{2}\sqrt{1-\mathbf{v}^{ 2}/c^{2}}=\gamma mc^{2}. \tag{26}\]
For a particle at rest, \(\mathbf{v}=\mathbf{0}\), the energy is finite,
\[E_{0}\equiv E(\mathbf{v}=\mathbf{0})=mc^{2}\, \tag{27}\]
the famous Einstein formula for the rest energy of a particle.
Equations (24) and (26) constitute the main results that need to be proven using the pedestrian approach outlined in the next section.
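For completeness, both results can be verified symbolically for one-dimensional motion, where eq. (17) reduces to a single derivative and eq. (18) to \(E=vp-L\) (a sympy sketch, not part of the derivation):

```python
import sympy as sp

v, c, m = sp.symbols("v c m", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

L = -m * c**2 * sp.sqrt(1 - v**2 / c**2)   # eq. (23), one-dimensional motion
p = sp.diff(L, v)                          # eq. (17)
E = v * p - L                              # eq. (18)

print(sp.simplify(p - gamma * m * v))      # 0, i.e. eq. (24)
print(sp.simplify(E - gamma * m * c**2))   # 0, i.e. eq. (26)
```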
## IV Pedestrian derivation
The derivation in Sect. III is concise and elegant, but it uses a number of concepts of theoretical physics that were not common knowledge at the beginning of the 20th century and are not familiar to newcomers nowadays. For this reason, we collect the main ideas to derive the Einstein formula comprehensively, using only basic concepts of classical mechanics as outlined in Secs. II and III.
We pursue the following strategy.
**A.**: Derive the relativistic momentum from elastic scattering of two particles;
**B.**: Derive the relativistic kinetic energy from elastic scattering of two particles;
**C.**: Derive the mass defect formula from fission of a heavy particle into two equal light particles;
**D.**: Derive the Einstein energy-mass relation.
We only invoke the concepts of relativistic classical mechanics and do not refer to classical electrodynamics (electromagnetic waves) or concepts of quantum mechanics (photons).
### Elastic scattering: momentum conservation
We start our investigation with an elastic scattering of two identical classical particles, as shown in Fig. 1. The particles approach each other on the \(x\)-axis and move along the \(y\)-axis after the scattering. Elastic scattering means that there is no loss of energy and the particles remain the same so that the speed of each particle after the impact will remain the same (\(v=|\mathbf{v}|\)), only its direction will change. It is intuitively clear that the laws of momentum and energy conservation are fulfilled: the total momentum is zero before and after the collision, and the total energy of the two particles remains the same.
We now demand that momentum conservation is also fulfilled in a different frame of reference. Let there be two reference frames, one 'primed' where all values will be marked with primes, the other 'unprimed'. Let the primed frame move relative to the unprimed one in the direction of the \(x\)-axis with velocity \(\mathbf{u}=u\mathbf{e}_{x}\). Then, the connection between the 'primed' particle velocity \(\mathbf{v}^{\prime}=(v^{\prime}_{x},v^{\prime}_{y},v^{\prime}_{z})\) and its 'unprimed' velocity \(\mathbf{v}=(v_{x},v_{y},v_{z})\) is given by eq. (6) in Sect. II.
As 'primed frame' we choose the center-of-mass frame as in Fig. 1 so that the 'unprimed frame' is the laboratory frame in which the observer may be viewed at rest. A
Figure 1: Collision of two identical particles with velocities \(\pm\mathbf{v}\) in the center-of-mass frame.
short calculation convinces the reader that the momentum of a particle cannot be given by \(\mathbf{p}^{\rm nr}=m\mathbf{v}\) because \(m\mathbf{v}_{1}+m\mathbf{v}_{2}\neq m\mathbf{v}_{3}+m\mathbf{v}_{4}\). Since the system is rotationally invariant, the modulus of the momentum of a particle must be a function of the modulus of the velocity,
\[|\mathbf{p}(\mathbf{v})|=p(v)\;, \tag{28}\]
where \(p=\sqrt{p_{x}^{2}+p_{y}^{2}+p_{z}^{2}}\) and \(v=\sqrt{v_{x}^{2}+v_{y}^{2}+v_{z}^{2}}\) so that
\[p_{x}=\frac{v_{x}}{v}p(v)\;,\quad p_{y}=\frac{v_{y}}{v}p(v)\;,\quad p_{z}= \frac{v_{z}}{v}p(v)\;, \tag{29}\]
with the Cartesian components, \(v_{x,y,z}=\mathbf{e}_{x,y,z}\cdot\mathbf{v}\), \(p_{x,y,z}=\mathbf{e}_{x,y,z}\cdot\mathbf{p}\).
To find the unknown function \(p(v)\), we use the conservation of momentum in a collision,
\[\mathbf{p}_{1}+\mathbf{p}_{2}=\mathbf{p}_{3}+\mathbf{p}_{4} \tag{30}\]
for which we only need to consider the \(x\)-projection of this equality because the other components just give \(0=0\). We consider the elastic scattering event in the laboratory frame, see Fig. 2. We note that
\[p_{1,x}=p_{1}=p(v_{1})=p\left(\frac{v+u}{1+vu/c^{2}}\right) \tag{31}\]
because the velocity \(\mathbf{v}_{1}\) is just directed along the \(x\)-axis. Next,
\[p_{2,x}=-p_{2}=-p(v_{2})=-p\left(\frac{v-u}{1-vu/c^{2}}\right)\;. \tag{32}\]
To be definite, we assume that \(u\) is less than \(v\) so that the velocity \(\mathbf{v}_{2}\) is directed towards the negative \(x\)-axis, and hence \(p_{2,x}=-p_{2}\). Lastly,
\[p_{3,x}=\frac{v_{3,x}}{v_{3}}p(v_{3})=\\ =\frac{u}{\sqrt{v^{2}+u^{2}-v^{2}u^{2}/c^{2}}}p\left(\sqrt{v^{2} +u^{2}-v^{2}u^{2}/c^{2}}\right)\;, \tag{33}\]
and \(p_{4,x}=p_{3,x}\). When we insert these expressions into \(p_{1,x}+p_{2,x}=p_{3,x}+p_{4,x}\), we obtain
\[p\left(\frac{v+u}{1+vu/c^{2}}\right)-p\left(\frac{v-u}{1-vu/c^{ 2}}\right)=\\ =2\frac{u}{\sqrt{v^{2}+u^{2}-v^{2}u^{2}/c^{2}}}p\left(\sqrt{v^{2 }+u^{2}-v^{2}u^{2}/c^{2}}\right)\;. \tag{34}\]
This is the equation for the unknown function \(p(v)\). The unique solution for \(p(v)\) for _all_ \(0<u<v<c\) is
\[p(v)=C_{p}\frac{v}{\sqrt{1-v^{2}/c^{2}}} \tag{35}\]
with some constant \(C_{p}\); that eq. (35) indeed solves eq. (34) is readily verified after some algebra, or by using Mathematica [18]. A derivation of eq. (35) is given in appendix A.
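Readers without access to Mathematica can also convince themselves numerically that eq. (35) solves eq. (34); since eq. (34) is linear in \(p\), the constant \(C_{p}\) drops out, and we set \(C_{p}=1\) and work in units with \(c=1\) (a sketch of ours, not part of the derivation):

```python
import numpy as np

def p(x):                      # eq. (35) with C_p = 1 and c = 1
    return x / np.sqrt(1 - x**2)

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = np.sort(rng.uniform(0.01, 0.99, size=2))     # 0 < u < v < c = 1
    lhs = p((v + u) / (1 + v * u)) - p((v - u) / (1 - v * u))
    w = np.sqrt(v**2 + u**2 - v**2 * u**2)
    rhs = 2 * u / w * p(w)
    assert np.isclose(lhs, rhs)                          # eq. (34) holds
print("eq. (34) verified for 1000 random (u, v) pairs")
```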
It remains to determine the constant \(C_{p}\) in eq. (35). It is known that, for small velocities, \(|\mathbf{v}|\ll c\), we have the non-relativistic relation \(\mathbf{p}^{\rm nr}=m\mathbf{v}\) for the relation between particle velocity and its momentum, see eq. (16). The expansion of eq. (35) for small \(v\) gives \(p(v\ll c)\approx C_{p}v\) so that the constant is the mass of the particle, \(C_{p}=m\). Thus, \(p(v)=mv/\sqrt{1-v^{2}/c^{2}}\), \(p=\gamma mv\), or, in vector form,
\[\mathbf{p}=\gamma m\mathbf{v}\;. \tag{36}\]
In sum, we re-derived the relativistic momentum as a function of the particle velocity as given in eq. (24).
### Elastic scattering: kinetic energy conservation
We reconsider the scattering experiment in Figs. 1 and 2 in the context of energy conservation. The collision in this experiment is _elastic_, i.e., the internal states of the particles do not change. The only difference between the states of the particles results from their velocities. Therefore, the energy \(E\) of each particle is a function of its velocity \(v\) only, \(E=E(v)\), again assuming rotational invariance. Energy conservation thus implies
\[E(v_{1})+E(v_{2})=E(v_{3})+E(v_{4}) \tag{37}\]
in the scattering event in both reference frames.
It is convenient to divide the particle's energy \(E\) into its internal energy (also called 'potential energy' or 'rest energy') \(E_{0}\), and its kinetic energy \(T\),
\[E(v)=E_{0}+T(v)\;. \tag{38}\]
The internal energy is nothing else but the energy at zero velocity, \(E_{0}\equiv E(0)\). Since the internal energy does not change in elastic collisions, we can rewrite the energy conservation law for our scattering experiment as conservation of the kinetic energy,
\[T(v_{1})+T(v_{2})=T(v_{3})+T(v_{4})\;. \tag{39}\]
Figure 2: Elastic collision in the ‘primed’ center-of-mass reference frame (left) and in the ‘unprimed’ laboratory reference frame (right). The ‘primed’ frame moves to the right with velocity \(\mathbf{u}=u\mathbf{e}_{x}\) relative to the ‘unprimed’ one.
Apparently, the value of \(E_{0}\) cannot be determined using elastic scattering.
Now, we derive the function \(T(v)\) from the condition (39). In the 'primed' reference frame (center-of-mass system, left part of Fig. 2), it is evident that the kinetic energy is conserved, \(T(v_{1}^{\prime})+T(v_{2}^{\prime})=T(v_{3}^{\prime})+T(v_{4}^{\prime})\), because the velocity moduli are all equal, \(v_{1}^{\prime}=v_{2}^{\prime}=v_{3}^{\prime}=v_{4}^{\prime}=v\).
For the laboratory frame, things are a little bit more complicated. The velocity moduli are
\[v_{1}=\frac{v+u}{1+vu/c^{2}}\;, v_{2}=\frac{v-u}{1-vu/c^{2}}\;,\] \[v_{3}=v_{4}=\sqrt{v^{2}+u^{2}-\frac{v^{2}u^{2}}{c^{2}}}\;. \tag{40}\]
As for the case of the momentum, the non-relativistic expression, \(T^{\rm nr}(v)=mv^{2}/2\) in eq. (19), does not fulfill eq. (39) when eq. (40) is used.
We find the correct expression for \(T(v)\) in the same way as for the momentum function \(p(v)\). For general \(0<u<v<c\), we must solve eq. (39) for the velocities given in eq. (40),
\[T\left(\frac{v+u}{1+vu/c^{2}}\right)+T\left(\frac{v-u}{1-vu/c^{2 }}\right)=\\ =2T\left(\sqrt{v^{2}+u^{2}-\frac{v^{2}u^{2}}{c^{2}}}\right)\;. \tag{41}\]
The unique solution that reproduces the non-relativistic limit, \(T(v\ll c)\approx mv^{2}/2\), obeys
\[T(v)=(\gamma-1)mc^{2}\;. \tag{42}\]
This is shown explicitly in appendix B. The proof that \(T(v)\) from eq. (42) solves eq. (41) requires only some basic algebra, or can be done using Mathematica [18].
Eq. (42) provides the relativistic formula for the kinetic energy \(T(v)\) of a particle with mass \(m\) and velocity \(|\mathbf{v}|=v\). We recall that the mass is considered as independent of the velocity. The velocity \(v\) appears here in the \(\gamma\)-factor only, \(\gamma=1/\sqrt{1-v^{2}/c^{2}}\).
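The same kind of numerical spot check, again in units with \(c=1\) and with \(m=1\), confirms that eq. (42) solves eq. (41) with the velocities of eq. (40) (a sketch of ours):

```python
import numpy as np

def T(x):                                   # eq. (42) with m = 1 and c = 1
    return 1 / np.sqrt(1 - x**2) - 1

rng = np.random.default_rng(1)
for _ in range(1000):
    u, v = np.sort(rng.uniform(0.01, 0.99, size=2))      # 0 < u < v < c = 1
    v1 = (v + u) / (1 + v * u)
    v2 = (v - u) / (1 - v * u)
    v3 = np.sqrt(v**2 + u**2 - v**2 * u**2)
    assert np.isclose(T(v1) + T(v2), 2 * T(v3))          # eq. (41) holds
print("eq. (41) verified for 1000 random (u, v) pairs")
```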
### Particle fission: mass defect
Thus far, we used elastic collisions as a convenient tool to derive the relativistic formulas for the momentum and the kinetic energy. But elastic processes, by definition, do not change internal states of particles. Therefore, we need to address _inelastic_ processes to get access to the internal energy. The simplest example process is the fission of a particle of mass \(M\) into two lighter identical particles of mass \(m\leq M/2\). This process is shown in Fig. 3 in two different reference frames. When we reverse the arrow of time, the process describes particle fusion.
The left part of Fig. 3 shows the particle decay in the center-of-mass frame. The particle with mass \(M\) splits into two particles that fly away in opposite directions with velocities \(\mathbf{v}=\pm v\mathbf{e}_{x}\). Before the decay, the kinetic energy of the heavy particle is equal to zero. After the decay, the total kinetic energy is equal to \(2mc^{2}(\gamma-1)\), where \(\gamma=1/\sqrt{1-v^{2}/c^{2}}\). Hence, due to energy conservation, the total internal energy decreases by the same amount,
\[\Delta E_{0}(m)=2mc^{2}(\gamma-1)\;. \tag{43}\]
This decrease in internal energy is proportional to the mass defect as we shall show next.
To this end, we apply the momentum conservation law in the unprimed (laboratory) frame. The right part of Fig. 3 shows the same fission process in a reference frame, where the center-of-mass frame moves to the right with velocity \(u<v<c\). In this frame, the heavy particle moves to the right with velocity \(u\), one light particle moves to the right with velocity
\[v_{1}=\frac{v+u}{1+vu/c^{2}}\;, \tag{44}\]
and the other light particle moves to the left with velocity
\[v_{2}=\frac{v-u}{1-vu/c^{2}}\;. \tag{45}\]
The momentum conservation law along the \(x\)-axis reads
\[\frac{Mu}{\sqrt{1-u^{2}/c^{2}}}=\frac{mv_{1}}{\sqrt{1-v_{1}^{2}/c^{2}}}-\frac {mv_{2}}{\sqrt{1-v_{2}^{2}/c^{2}}}\;. \tag{46}\]
Therefore, the heavy mass reads
\[M=m\frac{\sqrt{1-u^{2}/c^{2}}}{u}\left(\frac{v_{1}}{\sqrt{1-v_{1}^{2}/c^{2}}}- \frac{v_{2}}{\sqrt{1-v_{2}^{2}/c^{2}}}\right)\;. \tag{47}\]
We substitute \(v_{1,2}\) from eqs. (44) and (45) and find after some algebra
\[M=\frac{2m}{\sqrt{1-v^{2}/c^{2}}}\;. \tag{48}\]
We see that the mass \(M\) of the heavy particle is larger than the total mass of the debris particles \(2m\), assuming that the decay particles move away from each other, \(v>0\).

Figure 3: Decay of a particle of mass \(M\) into two identical particles of mass \(m\leq M/2\). The left part depicts this process in the (primed) center-of-mass reference frame, and the right part in the unprimed frame, in which the center of mass moves to the right with velocity \(u\) (laboratory frame).

Due to the decay, the total mass decreases by
\[\Delta M(m)=M-2m=2m\left(\frac{1}{\sqrt{1-v^{2}/c^{2}}}-1\right)=2m(\gamma-1)\;, \tag{49}\]
which is sometimes called the _mass defect_.
Now we can compare the mass defect \(\Delta M(m)\) in eq. (49) with the decrease of internal energy \(\Delta E_{0}(m)\) in eq. (43), which leads to
\[\Delta E_{0}(m)=\Delta M(m)c^{2}\;. \tag{50}\]
Apparently, every change of the total internal energy \(\Delta E_{0}(m)\) of the system is accompanied by a change of its total mass \(\Delta M(m)\), such that eq. (50) holds.
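The "some algebra" between eqs. (47) and (48) can likewise be spot-checked with SymPy; the short script below is an illustration we add here, not part of the original text, and it confirms \(M=2m\gamma(v)\) for exact rational test values:

```python
# Check that substituting v1, v2 of eqs. (44)-(45) into eq. (47) gives eq. (48).
import sympy as sp

v, u, c, m = sp.symbols('v u c m', positive=True)
gamma = lambda s: 1/sp.sqrt(1 - s**2/c**2)

v1 = (v + u)/(1 + v*u/c**2)      # eq. (44)
v2 = (v - u)/(1 - v*u/c**2)      # eq. (45)
M = m*sp.sqrt(1 - u**2/c**2)/u*(gamma(v1)*v1 - gamma(v2)*v2)   # eq. (47)

residual = M - 2*m*gamma(v)      # should vanish by eq. (48)
print(sp.simplify(residual.subs({v: sp.Rational(3, 5), u: sp.Rational(1, 3), c: 1, m: 1})))  # 0
```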
### Energy-mass formula
A point particle has no internal degrees of freedom like, e.g., a molecule. Point particles in classical physics are solely characterized by their mass. Therefore, the internal energy of a point particle can only depend on \(m\), \(E_{0}\equiv E_{0}(m)\). We apply this Ansatz to eq. (43) and write explicitly
\[\Delta E_{0}(m)=E_{0}(M)-2E_{0}(m)=2mc^{2}(\gamma-1)\;. \tag{51}\]
Note that the velocity \(v\) was never specified because, in a real experiment, it is measured and used to interpret the fission process and the internal structure of the particle of mass \(M\).
In our thought experiment, we can consider the extreme case that the heavy particle just splits into two halves that do not separate from each other, \(v=0\). Therefore, the internal energy must obey the relation
\[\Delta E_{0}(m)=E_{0}(2m)-2E_{0}(m)=0 \tag{52}\]
because there is no mass defect in this case, \(M=2m\), as \(\Delta M(m)=0\), see eq. (49). Since eq. (52) must hold for all \(m\), it follows that
\[E_{0}(m)=C_{0}m\;, \tag{53}\]
and the constant must be \(C_{0}=c^{2}\) in view of eq. (50). Therefore, we finally arrive at the desired result
\[E_{0}(m)=mc^{2} \tag{54}\]
for the internal energy of a point particle of mass \(m\).
The total energy for a moving particle is then
\[E(v)=\gamma mc^{2}=\frac{mc^{2}}{\sqrt{1-v^{2}/c^{2}}}\;, \tag{55}\]
according to eqs. (38) and (42), and in agreement with the expression from the Lagrange formalism outlined in Sec. III, see eq. (26).
## V Photon dynamics
So far we considered collisions of massive particles. In this section, we study the collision of a massive particle with a photon, i.e., Compton scattering, and the emission of two photons by a massive particle.
### Compton scattering
First, we show that energy and momentum conservation in the Compton scattering event provides enough information to prove
\[E_{\rm ph}=\hbar\omega\quad,\quad E_{\rm ph}=|\mathbf{p}_{\rm ph}|c \tag{56}\]
for photons. In this sense, particle scattering alone also proves Einstein's second famous formula (56), without resorting to the photoelectric effect.
Our derivation is based on the assumption that the photon energy is some (yet unknown) function \(f(\omega)\) of its (angular) frequency \(\omega\), and that the absolute value of its momentum is given by another function \(g(\omega)\),
\[E_{\rm ph}(\omega)=f(\omega)\quad,\quad|\mathbf{p}_{\rm ph}|=g(\omega)\;. \tag{57}\]
The latter also depends only on \(\omega\) due to the dispersion relation (10).
For simplicity, we direct the photon momentum along its wave vector. As in the previous sections, we assume that the functions \(f(\omega)\) and \(g(\omega)\) permit Taylor expansions at any \(\omega>0\).
Let us consider a collision of a photon with a massive particle (an electron for brevity) in the center-of-mass frame where the net momentum is equal to zero. To be definite, we let the particles travel towards each other along the \(x\)-axis before the collision, the electron to the right with some velocity \(\mathbf{v}=v\mathbf{e}_{x}\), and the photon to the left. The condition of zero net momentum relates the electron velocity \(v\) to the photon frequency \(\omega\)
\[\gamma(v)mv=g(\omega)\;, \tag{58}\]
where \(m\) is the electron mass, and \(\gamma(v)=(1-v^{2}/c^{2})^{-1/2}\) as in Sect. IV. The conservation laws permit that both particles fly apart along the \(x\)-axis after the collision, the electron to the left with velocity \(-v\mathbf{e}_{x}\), and the photon to the right with the same frequency \(\omega\) as before the collision. This process is schematically depicted in the left part of Fig. 4.
Now we look at this very process from another reference frame \(K\), see Fig. 4, right part, that moves to the left along the \(x\)-axis with some velocity \(u\), as seen from the center-of-mass frame \(K^{\prime}\). In the frame \(K\), the initial and the final electron velocities are
\[\mathbf{v}_{1}=\frac{v+u}{1+vu/c^{2}}\,\mathbf{e}_{x}\quad\text{and}\quad\mathbf{v}_{3}=- \frac{v-u}{1-vu/c^{2}}\,\mathbf{e}_{x}\;, \tag{59}\]
correspondingly, as follows from the relativistic velocity addition rules (6). The photon frequency undergoes the
Doppler shift when switching from frame \(K^{\prime}\) to frame \(K\), see eq. (11),
\[\omega_{2}=\omega\sqrt{\frac{1-u/c}{1+u/c}}\quad\text{and}\quad\omega_{4}=\omega \sqrt{\frac{1+u/c}{1-u/c}} \tag{60}\]
before and after the collision, respectively.
The energy conservation law in the laboratory frame \(K\) therefore reads
\[\gamma\left(\frac{v+u}{1+vu/c^{2}}\right)mc^{2}+f\left(\omega \sqrt{\frac{1-u/c}{1+u/c}}\right)=\\ =\gamma\left(\frac{v-u}{1-vu/c^{2}}\right)mc^{2}+f\left(\omega \sqrt{\frac{1+u/c}{1-u/c}}\right)\;. \tag{61}\]
Similarly, the momentum conservation law in the frame \(K\) has the following form when projected onto the \(x\)-axis
\[L(v,u,\omega)=R(v,u,\omega)\;,\]
\[L(v,u,\omega)=\\ =\gamma\left(\frac{v+u}{1+vu/c^{2}}\right)m\,\frac{v+u}{1+vu/c^{ 2}}-g\left(\omega\sqrt{\frac{1-u/c}{1+u/c}}\right)\;,\]
\[R(v,u,\omega)=\\ =-\gamma\left(\frac{v-u}{1-vu/c^{2}}\right)m\,\frac{v-u}{1-vu/c^ {2}}+g\left(\omega\sqrt{\frac{1+u/c}{1-u/c}}\right)\;. \tag{62}\]
Equations (61) and (62) must be fulfilled for all positive values of \(\omega\) and for all values of \(u\), provided that the electron velocity \(v\) is related to \(\omega\) by eq. (58). They allow us to determine the functions \(f(\omega)\) and \(g(\omega)\), up to a few parameters.
As in the previous section, we expand eqs. (61) and (62) in a power series in \(u\) which provides differential equations for the functions \(f(\omega)\) and \(g(\omega)\), see Appendix C. Their solution leads to the following expressions for the photon energy \(f(\omega)\),
\[f(\omega)=C_{1}\omega-\frac{C_{2}}{\omega}+C_{3}\;, \tag{63}\]
and for the photon momentum \(g(\omega)\),
\[g(\omega)=\left(C_{1}\omega+\frac{C_{2}}{\omega}\right)\frac{1}{c}\;, \tag{64}\]
where \(C_{1}\), \(C_{2}\), and \(C_{3}\) are some constants. To fix these constants, we note that the energy and pressure of light are always positive, thence \(f(\omega)>0\) and \(g(\omega)>0\) for all positive frequencies \(\omega\). Therefore,
\[C_{1}>0,\quad C_{2}=0,\quad\text{and}\quad C_{3}\geq 0\;. \tag{65}\]
Also, knowing that photons can have arbitrarily small energies, we conclude that \(C_{3}=0\). Hence, the only free constant is \(C_{1}\) so that from \(f(\omega)=cg(\omega)\) we find that
\[E_{\text{ph}}=|\mathbf{p}_{\text{ph}}|c \tag{66}\]
holds. Moreover, the result
\[E_{\text{ph}}(\omega)=C_{1}\omega \tag{67}\]
is nothing but Planck's fundamental assertion that light comes in quanta (called photons) with energy \(E_{\text{ph}}(\omega)=\hbar\omega\), i.e., the constant \(C_{1}\) is Planck's constant, \(C_{1}=\hbar\). Using this identification, eqs. (66) and (67) prove eq. (56) entirely.
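As a consistency check of this result (added by us for illustration), one can verify with SymPy that \(f(\omega)=\hbar\omega\) and \(g(\omega)=\hbar\omega/c\) satisfy the conservation laws (61) and (62) once the frequency is fixed by the zero-net-momentum condition (58):

```python
# Verify eqs. (61) and (62) for f(w) = hbar*w and g(w) = hbar*w/c, with w fixed by eq. (58).
import sympy as sp

v, u, c, m, hbar = sp.symbols('v u c m hbar', positive=True)
gamma = lambda s: 1/sp.sqrt(1 - s**2/c**2)
f = lambda w: hbar*w          # photon energy, eq. (67)
g = lambda w: hbar*w/c        # photon momentum, eq. (66)

w = gamma(v)*m*v*c/hbar                      # eq. (58): g(w) = gamma(v) m v
w2 = w*sp.sqrt((1 - u/c)/(1 + u/c))          # Doppler shift, eq. (60)
w4 = w*sp.sqrt((1 + u/c)/(1 - u/c))
v1 = (v + u)/(1 + v*u/c**2)                  # electron speeds in frame K, eq. (59)
v3 = (v - u)/(1 - v*u/c**2)

energy   = gamma(v1)*m*c**2 + f(w2) - gamma(v3)*m*c**2 - f(w4)   # eq. (61), LHS - RHS
momentum = gamma(v1)*m*v1 - g(w2) + gamma(v3)*m*v3 - g(w4)       # eq. (62), L - R

vals = {v: sp.Rational(3, 5), u: sp.Rational(1, 3), c: 1, m: 1, hbar: 1}
print(sp.simplify(energy.subs(vals)), sp.simplify(momentum.subs(vals)))   # 0 0
```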
### Two-photon emission
An alternative approach to the mass-defect formula (49) modifies Einstein's original thought experiment [5] by making it independent of the knowledge of electrodynamics. In the original setting, a body at rest with mass \(M\) emits two equal bunches of light in opposite directions, see Fig. 5a. Rohrlich [2] argues that even nineteenth-century non-relativistic physics, supplemented by the photons' energy and momentum formulas, makes it possible to derive the mass-defect formula (49) by an analysis of the experiment depicted in Fig. 5a. Feigenbaum and Mermin [19] suggest replacing the light by massive particles, which makes the problem purely mechanical, and we are left with the problem analyzed in Sec. IV.3.

Figure 4: Compton scattering process as seen in the center-of-mass frame \(K^{\prime}\) (left part) and in another reference frame \(K\) that moves to the left relative to \(K^{\prime}\) with velocity \(u\) (right part).
Here, we retrace the thought experiment of particle annihilation into two photons. Note that we make use of the information that a photon of frequency \(\omega\) has energy \(E=\hbar\omega\) and momentum \(|\mathbf{p}|=\hbar\omega/c\).
In the frame \(K^{\prime}\) where the particle is at rest, see Fig. 5b, both photons must have the same absolute value of momentum and, consequently, the same frequency \(\omega\). The energy balance in \(K^{\prime}\) therefore reads
\[E_{0}=2\hbar\omega\, \tag{68}\]
where \(E_{0}\) is the particle's internal energy, and \(\hbar\omega\) is the energy of one photon. In another frame \(K\), see Fig. 5c, that moves with speed \(u\) to the left relative to \(K^{\prime}\), the photons undergo the Doppler shift and acquire the frequencies
\[\omega_{1}=\omega\sqrt{\frac{1+u/c}{1-u/c}}\quad\text{and}\quad\omega_{2}= \omega\sqrt{\frac{1-u/c}{1+u/c}}\, \tag{69}\]
according to eq. (11). Next, we apply momentum conservation in the frame \(K\). The particle's momentum \(\gamma(u)mu\) turns into the photon momenta, \(\hbar\omega_{1}/c\) and \(\hbar\omega_{2}/c\), which are directed to the right and to the left, respectively. Therefore,
\[\gamma(u)\,mu=\frac{\hbar\omega}{c}\sqrt{\frac{1+u/c}{1-u/c}}-\frac{\hbar \omega}{c}\sqrt{\frac{1-u/c}{1+u/c}}. \tag{70}\]
We divide both sides by \(\gamma(u)=1/\sqrt{(1+u/c)(1-u/c)}\) and obtain
\[mu=\frac{2\hbar\omega u}{c^{2}}. \tag{71}\]
This gives \(\hbar\omega=mc^{2}/2\) so that substituting into eq. (68) finally gives
\[E_{0}=mc^{2}. \tag{72}\]
Note that, instead of the momentum conservation in frame \(K\), we can also consider the energy conservation in this frame which leads to the same conclusion.
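The small amount of algebra between eqs. (70) and (72) can also be delegated to SymPy; the following illustrative snippet (ours, not the authors') solves the momentum balance for \(\hbar\omega\) at a sample velocity and recovers \(E_{0}=mc^{2}\):

```python
# Solve the momentum balance (70) for hbar*omega; with c = m = 1 the result is 1/2,
# i.e. hbar*omega = m c^2 / 2, and eq. (68) then gives E0 = m c^2.
import sympy as sp

u, c, m = sp.symbols('u c m', positive=True)
hw = sp.Symbol('hw', positive=True)          # shorthand for hbar*omega

gamma_u = 1/sp.sqrt(1 - u**2/c**2)
balance = sp.Eq(gamma_u*m*u,
                hw/c*sp.sqrt((1 + u/c)/(1 - u/c)) - hw/c*sp.sqrt((1 - u/c)/(1 + u/c)))

vals = {u: sp.Rational(1, 3), c: 1, m: 1}
print(sp.solve(balance.subs(vals), hw))      # [1/2]
```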
## VI Relation to other work
In the literature, one can find many clever designs that aim at the construction or derivation of the relativistic momentum, the kinetic energy, or the mass-to-energy relation from thought experiments with collisions, including a version by Einstein himself, dated from the year 1935 [1]. Many of them are cited and discussed by Hu [20] who also suggests two additional collision schemes. Here, we briefly relate to those that employ particle collisions.
### Relativistic momentum
Lewis and Tolman [11] consider the elastic collision of two identical particles that approach each other at an infinitely small angle \(\theta\) relative to the \(x\)-axis, and move at the same angle \(\theta\) after the collision, see Fig. 6a. When we look at this process from the reference frame \(K_{1}\) that moves to the right relative to the center-of-mass frame \(K^{\prime}\) along with the lower particle, see Fig. 6b, one recognizes that the lower particle moves up and down at some small, non-relativistic speed \(w\). The higher particle moves at some speed \(v\). In the frame \(K_{2}\) that moves to the left along with the upper particle, see Fig. 6c, the particles exchange their roles: the upper one moves at the low speed \(w\), and the lower one at the high speed \(v\). The comparison of Figs. 6b and 6c shows that the vertical projection of the upper particle's velocity in frame \(K_{1}\) is equal to \(\pm w/\gamma(v)\), where the denominator arises from the relativistic time dilation. Therefore the momentum conservation law in the projection to the vertical axis reads
\[mw-p(v)\frac{w/\gamma(v)}{v}=-mw+p(v)\,\frac{w/\gamma(v)}{v}\,, \tag{73}\]
where \(m\) is the particle mass. Resolving this equation with respect to the particle's momentum \(p(v)\), one readily recovers the relativistic formula \(p(v)=\gamma(v)mv\), eq. (24).

Figure 5: (a) Einstein’s seminal thought experiment [5] modified later by Feigenbaum and Mermin [19] and by Rohrlich [2]; (b, c) its simplified version, where a particle annihilates into two photons.

Figure 6: Thought experiment by Lewis and Tolman [11; 12; 13] that proves the relativistic formula for the momentum, eq. (24).
A number of schemes use the following convenient rule for the transformation of the \(\gamma\)-factor between the 'primed' and 'unprimed' reference frames,
\[\gamma(v)=\gamma(v^{\prime})\gamma(u)\left(1+\frac{\mathbf{v}^{\prime}\cdot\mathbf{u}}{c ^{2}}\right)\;, \tag{74}\]
where \(\mathbf{u}\) is the velocity of the 'primed' frame relative to the 'unprimed' one. Eq. (74) is a direct consequence of the velocity transformation rule (6). Using eq. (74) one can rewrite the transformation rules for the components of the velocity in the \(y,z\)-directions, assuming that \(\mathbf{u}\) is parallel to the \(x\)-axis,
\[v_{y}=v_{y}^{\prime}\;\frac{\gamma(v^{\prime})}{\gamma(v)}\;,\quad v_{z}=v_{z} ^{\prime}\;\frac{\gamma(v^{\prime})}{\gamma(v)}\;. \tag{75}\]
Finkler [21] considers two identical particles that move in the \(xy\)-plane with opposite velocities \(\mathbf{v}_{1}^{\prime}\) and \(\mathbf{v}_{2}^{\prime}=-\mathbf{v}_{1}^{\prime}\) relative to the center-of-mass frame \(K^{\prime}\), see Fig. 7a. In an 'unprimed' frame \(K\), see Fig. 7b, that moves along the \(x\)-axis with respect to \(K^{\prime}\), the \(y\)-component of the total momentum \(p_{1y}+p_{2y}\) is equal to zero. Indeed, one may imagine that the particles originally move along the \(y\)-axis in frame \(K^{\prime}\), as shown by dashed lines, and were scattered elastically. Then, in frame \(K\), the \(y\)-components of their momenta before the collision compensated each other by symmetry, as seen in Fig. 7b. We write the \(y\)-component of a particle's momentum as \(p_{y}=p(v)\,v_{y}/v\), and express the vanishing of the total \(y\)-momentum in frame \(K\) as
\[p(v_{1})\,\frac{v_{1y}}{v_{1}}=-p(v_{2})\,\frac{v_{2y}}{v_{2}}\,. \tag{76}\]
The \(y\)-components of the particle velocities \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) in the 'unprimed' frame \(K\) can be transformed to frame \(K^{\prime}\) using the rule (75),
\[v_{1y}=v_{1y}^{\prime}\,\frac{\gamma(v_{1}^{\prime})}{\gamma(v_{1})}\,,\quad v _{2y}=v_{2y}^{\prime}\,\frac{\gamma(v_{2}^{\prime})}{\gamma(v_{2})}\,. \tag{77}\]
Since \(v_{1}^{\prime}=v_{2}^{\prime}\) and \(v_{1y}^{\prime}=-v_{2y}^{\prime}\) it follows from eq. (77) that
\[\gamma(v_{1})\,v_{1y}=-\gamma(v_{2})\,v_{2y}\;. \tag{78}\]
Finally, dividing each side of eq. (76) by the corresponding side of eq. (78) we find
\[\frac{p(v_{1})}{\gamma(v_{1})\,v_{1}}=\frac{p(v_{2})}{\gamma(v_{2})\,v_{2}}\;. \tag{79}\]
Varying the conditions of this thought experiment with velocities and inclination angles in frame \(K^{\prime}\), and the velocity of frame \(K\) relative to \(K^{\prime}\), we can independently vary \(v_{1}\) and \(v_{2}\) in eq. (79). Therefore, eq. (79) implies that both sides are constants,
\[F=\frac{p(v)}{\gamma(v)\,v} \tag{80}\]
is independent of \(v\). In other words, it is proven that the relativistic momentum \(p(v)\) is proportional to \(\gamma(v)v\). Using the non-relativistic limit it is seen that \(F\equiv m\) is the particle mass.
### Relativistic kinetic energy
In his unpublished lectures in the 1920s, Langevin uses the collision scheme as in Fig. 2 to derive the relativistic formula for the kinetic energy. Later, his arguments were reconstructed by Penrose, Rindler, and Ehlers [22; 23]. Here, we only provide a brief summary of their arguments.
Energy conservation during the elastic collision in the 'unprimed' frame, see the right part of Fig. 2, states that \(T(v_{1})+T(v_{2})=T(v_{3})+T(v_{4})\) where \(T(v)\) denotes the kinetic energy. Since \(v_{3}=v_{4}\) by symmetry, we have
\[T(v_{3})=\frac{T(v_{1})+T(v_{2})}{2}\;. \tag{81}\]
Then, applying the rule (74) to the velocities \(\mathbf{v}_{1}\), \(\mathbf{v}_{2}\), \(\mathbf{v}_{3}\), and \(\mathbf{v}_{4}\), one obtains
\[\gamma(v_{1}) = \gamma(v)\gamma(u)\left(1+\frac{vu}{c^{2}}\right)\;, \tag{82}\] \[\gamma(v_{2}) = \gamma(v)\gamma(u)\left(1-\frac{vu}{c^{2}}\right)\;,\] (83) \[\gamma(v_{3}) = \gamma(v_{4})=\gamma(v)\gamma(u)\;, \tag{84}\]
whence

\[\gamma(v_{3})=\frac{\gamma(v_{1})+\gamma(v_{2})}{2}\;. \tag{85}\]

Figure 7: Thought experiment adopted from the paper by Finkler [21] that proves the relativistic formula for the momentum, eq. (24).

Figure 8: Kinetic energy as a function of the \(\gamma\)-factor.
When we plot the kinetic energy \(T(v)\) versus \(\gamma(v)\), see Fig. 8, eqs. (81) and (85) imply that the point \(\big{(}\gamma(v_{3}),T(v_{3})\big{)}\) in the plot is always located exactly in the middle between the points \(\big{(}\gamma(v_{1}),T(v_{1})\big{)}\) and \(\big{(}\gamma(v_{2}),T(v_{2})\big{)}\). Since \(v_{1}\) and \(v_{2}\) can be varied independently, the only possible shape of the curve \(T(v)\) as a function of \(\gamma(v)\) is a straight line,
\[T(v)=c_{1}\,\gamma(v)+c_{2}\;. \tag{86}\]
Here, \(c_{1}\) and \(c_{2}\) are some coefficients to be found from comparison with the non-relativistic expression \(T(v\ll c)\approx mv^{2}/2\), namely, \(c_{1}=mc^{2}=-c_{2}\). Using these coefficients, eq. (86) reduces to the relativistic expression \(T(v)=mc^{2}(\gamma(v)-1)\) for the relativistic kinetic energy, see eq. (42).
### Energy-mass relation
With relativistic formulas for the momentum and the kinetic energy in our hands, we can go ahead and find the relation between mass and energy. In the Feynman Lectures on Physics [13] this is done in the following way.
Imagine that two equal masses \(m\) approach each other along the \(x\)-axis with equal speeds \(|\mathbf{v}|=v\). When they meet, they coalesce into a larger body of some mass \(M\), see Fig. 9a. Let us look at this process from another reference frame \(K\), see Fig. 9b, that moves along the \(y\)-axis relative to the center-of-mass frame \(K^{\prime}\) with a very small, non-relativistic speed \(u\). Conservation of the \(y\)-component of the momentum in the frame \(K\) reads
\[2\gamma(\tilde{v})mu=Mu\;, \tag{87}\]
where \(\tilde{v}\) is the speed of the initial masses in the frame \(K\). Neglecting the difference between \(v\) and \(\tilde{v}\), we find the fused mass \(M\) from eq. (87),
\[M=2\gamma(v)m\;. \tag{88}\]
Therefore, the fused mass exceeds the original particle masses by
\[\Delta m=M-2m=2m(\gamma(v)-1)\;. \tag{89}\]
At the same time, the sum of the internal energies of the particles has increased by
\[\Delta E_{0}=2mc^{2}(\gamma(v)-1) \tag{90}\]
because all the kinetic energy \(T(v)=2mc^{2}(\gamma(v)-1)\) of the two masses, as seen from the center-of-mass frame \(K^{\prime}\), transformed into internal energy. From comparison between equations (89) and (90) we come to the conclusion that
\[\Delta E_{0}=\Delta mc^{2}\;, \tag{91}\]
the mass defect formula (49).
## VII Conclusions
In this work we used relativistic kinematics of point particles and two-particle collisions to derive the energy-mass relation that reduces to Einstein's famous formula (1) for a particle at rest. Our derivation offers several advantages over other derivations briefly discussed in Sec. VI.
1. Since it does not appeal to electrodynamics, it is conceptually simpler than Einstein's original argument that involves electromagnetic radiation. Moreover, using the relativistic Doppler formula, we can derive the energy of photons as massless particles and Planck's formula (67) from an analysis of Compton scattering or from particle annihilation into two photons.
2. We address very simple geometries for the two-particle scattering, see Fig. 2, which already provide us with the relativistic expressions for the particle momentum and its kinetic energy.
3. The mass defect formula straightforwardly follows from the decay of a massive particle at rest into two identical particles, see Fig. 3. With the plain assumption that the internal energy of a point particle can only depend on its mass, Einstein's formula (1) readily follows.
The price of the conceptual simplicity is the use of some standard elements of calculus, namely (second-order) Taylor expansion and ordinary first-order differential equations to find the unique solutions of eqs. (34), (41), (61), and (62). The proof that the given expressions solve these equations requires only elementary mathematics.
Students who are scared off by calculus and prefer physically motivated shortcuts may resort to the literature, some of which we briefly reviewed above. We presume that the most straightforward 'derivation' of Einstein's energy formula starts from the four-vector of the relativistic velocity. Scattering thought experiments
show that the three spatial components are conserved in a scattering experiment, i.e., they must be the particle momentum up to a mass factor. Therefore, the 'Babylonian approach' inspires the notion that the zero component must be the particle energy, up to a mass factor [1; 9].

Figure 9: Thought experiment considered in the Feynman Lectures on Physics [13] for obtaining the energy-mass relation.
In this work we argue that the pedestrian but still Euclidean way to Einstein's formula is neither short nor simple. Instead, it requires the detailed analysis of scattering experiments and some calculus to extract the correct formulas for the relativistic momentum and kinetic energy.
## Appendix A Derivation of the momentum modulus
In this appendix we derive eq. (35). We expand eq. (34) in a Taylor series in \(u\) around \(u=0\) up to first order in \(u\). First, we find
\[p\left(\frac{v+u}{1+vu/c^{2}}\right)=p(v)+(1-v^{2}/c^{2})p^{\prime}(v)u+\dots\;, \tag{41}\]
where the ellipsis denotes further terms that are proportional to \(u^{2}\), \(u^{3}\), and so on. The prime denotes the derivative with respect to \(v\). Next,
\[p\left(\frac{v-u}{1-vu/c^{2}}\right)=p(v)-(1-v^{2}/c^{2})p^{\prime}(v)u+\dots \tag{42}\]
and
\[2\frac{u}{\sqrt{v^{2}+u^{2}-v^{2}u^{2}/c^{2}}}p\left(\sqrt{v^{2} +u^{2}-v^{2}u^{2}/c^{2}}\right)=\] \[=\frac{2u}{v}p(v)+\dots\;. \tag{43}\]
We collect the terms to first order in \(u\) in eq. (34) and find the condition
\[2(1-v^{2}/c^{2})p^{\prime}(v)=\frac{2}{v}p(v)\;. \tag{44}\]
Writing \(p^{\prime}(v)={\rm d}p/{\rm d}v\), this differential equation can be solved by separation of variables,
\[\frac{dp}{p}=\frac{dv}{v(1-v^{2}/c^{2})}\;, \tag{45}\]
so that the integration of both sides leads to
\[\ln p(v)=\ln\left[\frac{v}{\sqrt{1-v^{2}/c^{2}}}\right]+{\rm const}\;, \tag{46}\]
or
\[p(v)=C_{p}\frac{v}{\sqrt{1-v^{2}/c^{2}}}\;, \tag{47}\]
which proves eq. (35).
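For readers who prefer to let a computer-algebra system do the work, here is a short SymPy sketch (our illustration) that checks the solution of the separable equation above and lets `dsolve` reproduce the same one-parameter family:

```python
# Check that p(v) = C_p * v / sqrt(1 - v^2/c^2) solves the first-order equation
# 2 (1 - v^2/c^2) p'(v) = 2 p(v) / v, and let dsolve find the general solution.
import sympy as sp

v, c, Cp = sp.symbols('v c C_p', positive=True)
p = Cp*v/sp.sqrt(1 - v**2/c**2)

residual = 2*(1 - v**2/c**2)*sp.diff(p, v) - 2*p/v
print(sp.simplify(residual))                 # -> 0

pf = sp.Function('p')
ode = sp.Eq(2*(1 - v**2/c**2)*pf(v).diff(v), 2*pf(v)/v)
print(sp.dsolve(ode, pf(v)))                 # a family equivalent to the expression above
```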
## Appendix B Derivation of the kinetic energy
Eq. (41) defines the yet unknown function \(T(v)\) for the kinetic energy. As in appendix A, we expand each term of this equation in a Taylor series around \(u=0\) to second order in \(u\),
\[T\left(\frac{v+u}{1+vu/c^{2}}\right)\approx T(v)+u\left(1-\frac{ v^{2}}{c^{2}}\right)T^{\prime}(v)\] \[+\frac{u^{2}}{2}\left(1-\frac{v^{2}}{c^{2}}\right)\left[-\frac{2 v}{c^{2}}T^{\prime}(v)+\left(1-\frac{v^{2}}{c^{2}}\right)T^{\prime\prime}(v)\right] \tag{48}\] \[T\left(\frac{v-u}{1-vu/c^{2}}\right)\approx T(v)-u\left(1-\frac {v^{2}}{c^{2}}\right)T^{\prime}(v)\] \[+\frac{u^{2}}{2}\left(1-\frac{v^{2}}{c^{2}}\right)\left[-\frac{2 v}{c^{2}}T^{\prime}(v)+\left(1-\frac{v^{2}}{c^{2}}\right)T^{\prime\prime}(v)\right]\] (49) \[T\left(\sqrt{v^{2}+u^{2}-\frac{v^{2}u^{2}}{c^{2}}}\right)\approx T (v)+\frac{u^{2}}{2}\left(1-\frac{v^{2}}{c^{2}}\right)\frac{T^{\prime}(v)}{v} \tag{50}\]
where \(T^{\prime}(v)\) and \(T^{\prime\prime}(v)\) are first and second derivatives of function \(T(v)\), and higher orders in the Taylor expansion were ignored. When inserted into eq. (41), the constant and linear terms drop out, and the quadratic terms lead to the differential equation
\[-\frac{2v}{c^{2}}T^{\prime}(v)+\left(1-\frac{v^{2}}{c^{2}}\right)T^{\prime \prime}(v)=\frac{T^{\prime}(v)}{v}\;. \tag{51}\]
We temporarily denote \(T^{\prime}(v)\) as \(f\), and \(T^{\prime\prime}(v)\) as \(df/dv\) and find the first-order differential equation
\[-\frac{2v}{c^{2}}f+\left(1-\frac{v^{2}}{c^{2}}\right)\frac{df}{dv}=\frac{f}{v} \tag{52}\]
that is solved by separation of variables. It has the solution
\[f=f_{0}\frac{v}{(1-v^{2}/c^{2})^{3/2}}\;, \tag{53}\]
where \(f_{0}\) is an arbitrary constant. We recall that \(f=T^{\prime}(v)\), and integrate once more to find
\[T(v)=\int^{v}{\rm d}x\,T^{\prime}(x) = \int^{v}{\rm d}x\,f_{0}\frac{x}{(1-x^{2}/c^{2})^{3/2}}= \tag{54}\] \[=C_{1}\frac{1}{\sqrt{1-v^{2}/c^{2}}}+C_{2}\;,\]
or
\[T(v)=\gamma C_{1}+C_{2}\;, \tag{55}\]
where \(C_{1}\) and \(C_{2}\) are some constants.
We can fix \(C_{2}\) by considering the particle at rest, when \(v=0\), \(T=0\), and \(\gamma=1\),
\[0=C_{1}+C_{2}\;, \tag{56}\]
hence \(C_{2}=-C_{1}\) and
\[T(v)=(\gamma-1)C_{1}\;. \tag{57}\]
To determine \(C_{1}\), we consider small but non-zero velocities \(v/c\ll 1\). The expansion of the \(\gamma\)-factor near \(v=0\) gives \(\gamma(v)\approx 1+v^{2}/(2c^{2})\) so that
\[T(v\ll c)\approx\frac{C_{1}}{2c^{2}}\,v^{2}\;. \tag{111}\]
The non-relativistic formula reads \(T^{\rm nr}=mv^{2}/2\), see eq. (19), so that we must set \(C_{1}=mc^{2}\). Hence,
\[T(v)=(\gamma-1)mc^{2}\;, \tag{112}\]
as given in eq. (42).
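Analogously to Appendix A, the differential equation for \(T(v)\) can be checked by machine; the following SymPy lines (an illustrative addition of ours) verify both the intermediate solution for \(f=T^{\prime}(v)\) and the final result:

```python
# T(v) = (gamma - 1) m c^2 satisfies the second-order equation obtained from the
# u^2 terms, and its derivative has the claimed form f = m v / (1 - v^2/c^2)^(3/2).
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)
T = (1/sp.sqrt(1 - v**2/c**2) - 1)*m*c**2

lhs = -2*v/c**2*sp.diff(T, v) + (1 - v**2/c**2)*sp.diff(T, v, 2)
print(sp.simplify(lhs - sp.diff(T, v)/v))                                   # -> 0
print(sp.simplify(sp.diff(T, v) - m*v/(1 - v**2/c**2)**sp.Rational(3, 2)))  # -> 0
```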
## Appendix C Derivation of photon energy and momentum
For better readability, we set the speed of light \(c\) to unity within this appendix. Hence, equations (61) and (62) take the simpler forms
\[\gamma\left(\frac{v+u}{1+vu}\right)m+f\left(\omega\sqrt{\frac{1- u}{1+u}}\right)=\\ =\gamma\left(\frac{v-u}{1-vu}\right)m+f\left(\omega\sqrt{\frac{1 +u}{1-u}}\right)\;, \tag{113}\]
and
\[\gamma\left(\frac{v+u}{1+vu}\right)m\,\frac{v+u}{1+vu}-g\left( \omega\sqrt{\frac{1-u}{1+u}}\right)=\\ =-\gamma\left(\frac{v-u}{1-vu}\right)m\,\frac{v-u}{1-vu}+g\left( \omega\sqrt{\frac{1+u}{1-u}}\right)\;, \tag{114}\]
where \(\gamma(v)=(1-v^{2})^{-1/2}\).
The Taylor expansions in eq. (113) at \(u=0\) up to and including linear terms in \(u\) require
\[\gamma\left(\frac{v\pm u}{1\pm vu}\right)m = \gamma(v)m\left[1\pm uv+\mathcal{O}(u^{2})\right]\;, \tag{115}\] \[f\left(\omega\sqrt{\frac{1\pm u}{1\mp u}}\right) = f(\omega)\pm u\omega f^{\prime}(\omega)+\mathcal{O}(u^{2})\;. \tag{116}\]
Substituting these expansions into eq. (113), we see that the constant terms cancel each other, and the linear terms lead to
\[\gamma(v)mv=\omega f^{\prime}(\omega)\;. \tag{117}\]
The left-hand side of eq. (117) can be replaced with \(g(\omega)\), see eq. (58). Hence, from eqs. (113) and (58) we obtain a differential relation between the photon energy \(f(\omega)\) and its momentum \(g(\omega)\),
\[\omega f^{\prime}(\omega)=g(\omega)\;. \tag{118}\]
Note that, for a massive particle traveling with some velocity \(v\), the momentum-to-energy ratio fulfills \(p/E=\gamma mv/\gamma mc^{2}=v/c^{2}\). Into the latter relation we might insert \(v=c\) for a photon traveling with velocity \(c\). Then, \(g(\omega)/f(\omega)\equiv p/E=1/c\) immediately follows, i.e., \(g(\omega)=f(\omega)\) when \(c=1\). Therefore, eq. (118) takes the form \(\omega f^{\prime}(\omega)=f(\omega)\) that immediately gives rise to the conclusion that the photon energy \(f(\omega)\) is proportional to the frequency \(\omega\). However, we shall not follow this shortcut. Instead, we will employ the momentum conservation law, eq. (114).
The Taylor expansion of the terms contributing to eq. (114) at \(u=0\) require
\[\gamma\left(\frac{v\pm u}{1\pm vu}\right)m\,\frac{v\pm u}{1\pm vu}=\gamma(v)m \left[v\pm u+\frac{u^{2}v}{2}+\mathcal{O}(u^{3})\right] \tag{119}\]
and
\[g\left(\omega\sqrt{\frac{1\pm u}{1\mp u}}\right)=\\ =g(\omega)\pm u\omega g^{\prime}(\omega)+\frac{u^{2}}{2}\left[ \omega g^{\prime}(\omega)+\omega^{2}g^{\prime\prime}(\omega)\right]+\mathcal{ O}(u^{3}) \tag{120}\]
up to and including second order in \(u\). Substituting these expansions into eq. (114), we see that the constant terms reproduce the known eq. (58) whereas the terms proportional to \(u\) cancel each other. Collecting the terms proportional to \(u^{2}\) we find
\[\gamma(v)mv=\omega g^{\prime}(\omega)+\omega^{2}g^{\prime\prime}(\omega)\;. \tag{121}\]
In eq. (121), the left-hand side can be replaced by \(g(\omega)\) due to eq. (58). Therefore, we arrive at a differential equation for the function \(g(\omega)\),
\[\omega^{2}g^{\prime\prime}(\omega)+\omega g^{\prime}(\omega)-g(\omega)=0\;. \tag{122}\]
This is a homogeneous second-order linear equation with the two independent solutions \(g_{1}(\omega)=\omega\) and \(g_{2}(\omega)=\omega^{-1}\). Hence, the general solution is
\[g(\omega)=C_{1}\omega+\frac{C_{2}}{\omega}\;, \tag{123}\]
where \(C_{1}\) and \(C_{2}\) are arbitrary constants. Substituting this solution into eq. (118) and integrating over \(\omega\) leads to the function \(f(\omega)\),
\[f(\omega)=C_{1}\omega-\frac{C_{2}}{\omega}+C_{3}\;, \tag{124}\]
where \(C_{3}\) is yet another constant. When we return to physical units, we have to divide the right-hand side of eq. (123) by \(c\) to account for the difference between the unit of energy and that of momentum. In this way, we obtain eqs. (63) and (64) of the main text. |
2308.14070 | DETDet: Dual Ensemble Teeth Detection | The field of dentistry is in the era of digital transformation. Particularly,
artificial intelligence is anticipated to play a significant role in digital
dentistry. AI holds the potential to significantly assist dental practitioners
and elevate diagnostic accuracy. In alignment with this vision, the 2023 MICCAI
DENTEX challenge aims to enhance the performance of dental panoramic X-ray
diagnosis and enumeration through technological advancement. In response, we
introduce DETDet, a Dual Ensemble Teeth Detection network. DETDet encompasses
two distinct modules dedicated to enumeration and diagnosis. Leveraging the
advantages of teeth mask data, we employ Mask-RCNN for the enumeration module.
For the diagnosis module, we adopt an ensemble model comprising DiffusionDet
and DINO. To further enhance precision scores, we integrate a complementary
module to harness the potential of unlabeled data. The code for our approach
will be made accessible at https://github.com/Bestever-choi/Evident | Kyoungyeon Choi, Jaewon Shin, Eunyi Lyou | 2023-08-27T11:04:26Z | http://arxiv.org/abs/2308.14070v1 | # DETDet: Dual Ensemble Teeth Detection
###### Abstract
The field of dentistry is in the era of digital transformation. Particularly, artificial intelligence is anticipated to play a significant role in digital dentistry. AI holds the potential to significantly assist dental practitioners and elevate diagnostic accuracy. In alignment with this vision, the 2023 DENTEX challenge aims to enhance the performance of dental panoramic X-ray diagnosis and enumeration through technological advancement. In response, we introduce **DETDet**, a Dual Ensemble Teeth Detection network. DETDet encompasses two distinct modules dedicated to enumeration and diagnosis. Leveraging the advantages of teeth mask data, we employ Mask-RCNN for the enumeration module. For the diagnosis module, we adopt an ensemble model comprising DiffusionDet and DINO. To further enhance precision scores, we integrate a complementary module to harness the potential of unlabeled data. The code for our approach will be made accessible at [https://github.com/Bestever-choi/Evident](https://github.com/Bestever-choi/Evident)
Keywords: Detection, Artificial Intelligence, Diagnosis.
## 1 Method
_Data preprocessing_ All of the training X-ray images were normalized using a mean of 0.5 and a standard deviation of 0.1. We employed random resizing in the enumeration module and included a random horizontal flip in the diagnosis module. For model validation and testing, we split the enumeration dataset of 634 panoramic dental X-rays into 534 images for training, 50 images for validation, and 50 images for testing. Similarly, the diagnosis dataset containing 705 images was divided into 605 training images, along with 50 instances each for validation and testing.
_Overview_ In our proposed method, illustrated in Fig. 1, we input a panoramic radiograph into the **enumeration module**, utilizing mask RCNN to predict dental bounding boxes and notations. Simultaneously, the same radiograph is fed into the **diagnosis module**, an ensemble of DiffusionDet and DINO, which diagnoses diseases and predicts bounding boxes. These outputs are integrated to yield comprehensive dental notations and bounding boxes for teeth affected by diseases.
### Enumeration module
Given the 705 training images for the quadrant-enumeration-disease data, we observed that there are only a few enumeration labels in the disease data. To address this scarcity of labels, HierarchicalDet[1] applied weight transfer to learn enumeration bounding boxes from the 634 enumeration-only images. Instead, we used an instance segmentation method to increase the mean average precision (mAP). Generally, instance segmentation yields better mAP than detection algorithms, as the segmentation network learns additional mask information[2], enabling it to capture finer-grained signals in images. Specifically, we trained Mask-RCNN with the SwinT backbone on the 634 enumeration-only images. As Mask-RCNN with the SwinT backbone outperformed state-of-the-art methods (HTC[3] and Cascade Mask-RCNN[4]) on the given enumeration data, we adopted it to predict the quadrant and numerical index of each tooth, along with the corresponding bounding boxes. Finally, we only considered the bounding box predictions with scores higher than 0.7 to exclude low-scoring enumeration outputs.
Figure 1: Model Architecture of DETDet.

### Ensembling diagnosis module

In our preliminary experiment, two state-of-the-art methods, DiffusionDet[5] and DINO[7], a DETR variant with improved denoising anchor boxes, were evaluated on the diagnosis dataset. While DiffusionDet yields a higher Average Precision than other object detection algorithms, DINO leads to a higher Average Recall than any other detection method. Empirically, the large number of low-score DINO detections, approximately 3000 detections per image, contributes to the high Average Recall, whereas DiffusionDet predicts a substantial number of high-scoring true positives, leading to the high Average Precision. Therefore, we employ an ensemble approach that combines both methods to achieve high scores in both Average Precision and Average Recall. Specifically, we utilize DiffusionDet predictions with bounding box scores higher than 0.05 and DINO predictions with scores lower than 0.05 to increase the Average Recall while maintaining precision. Additionally, it is noteworthy that, according to our test settings, DiffusionDet with the ResNet50 backbone outperformed SwinT. Hence, we use the ResNet50 backbone for both DINO and DiffusionDet in the diagnosis module.
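A minimal sketch of this score-threshold ensembling is given below; the function and field names are ours (the released code may differ), and each prediction is assumed to carry a bounding box, a confidence score, and a category id.

```python
# Illustrative ensembling of DiffusionDet and DINO outputs by a score threshold:
# high-confidence DiffusionDet detections are kept, and DINO fills in the
# low-score regime to raise Average Recall.
from typing import Dict, List

def ensemble_detections(diffdet_preds: List[Dict],
                        dino_preds: List[Dict],
                        threshold: float = 0.05) -> List[Dict]:
    """Each prediction: {'bbox': [x1, y1, x2, y2], 'score': float, 'category_id': int}."""
    kept = [p for p in diffdet_preds if p['score'] > threshold]
    kept += [p for p in dino_preds if p['score'] <= threshold]
    return kept
```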
### Dual stage integration
In order to integrate the enumeration and diagnosis modules, we introduce a method called _closest bounding box center matching_. This method operates in two stages: it first predicts the enumeration bounding boxes and then matches these boxes to the nearest diagnosis bounding boxes. Through this integration, we are able to derive category IDs for all three elements: quadrant, enumeration, and disease. The combined bounding box score is computed by multiplying the scores from the enumeration and diagnosis modules. In summary, DETDet integrates enumeration and diagnosis through a dual-stage approach, ensembling two models to achieve high precision and recall.
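The following sketch illustrates how such closest-center matching could look in code; the helper names and the box format are hypothetical and only meant to mirror the description above.

```python
# Pair each disease detection with the enumeration box whose center is nearest,
# and combine the two scores by multiplication (boxes in (x1, y1, x2, y2) format).
import numpy as np

def center(box):
    return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

def integrate(enum_preds, disease_preds):
    merged = []
    if not enum_preds:
        return merged
    for d in disease_preds:
        dists = [np.linalg.norm(center(d['bbox']) - center(e['bbox'])) for e in enum_preds]
        e = enum_preds[int(np.argmin(dists))]
        merged.append({'bbox': d['bbox'],
                       'enumeration': e['category_id'],   # quadrant + tooth index
                       'disease': d['category_id'],
                       'score': d['score'] * e['score']})  # combined box score
    return merged
```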
### Complementary module
Figure 2: Training process of the complementary module
Furthermore, the dual-stage integration offers the advantage of explicitly utilizing the 1571 unlabeled images, addressing the insufficient amount of original training data. The training data suffers from significant imbalance, as there is a notable disparity in the amount of caries data compared to other diseases. In particular, the dataset contains very few Periapical Lesion samples. To address this data imbalance, we introduce a complementary module, as depicted in Fig. 2, which is designed to leverage unlabeled data to increase the number of training samples. The complementary module is trained using a pseudo-labeling approach. Specifically, we segment all the dental images using the enumeration module and assign each tooth to its corresponding category. Subsequently, an EfficientNetB4[6] classifier is trained to classify teeth into five categories: normal, caries, deep caries, impacted, and periapical lesion. We apply this classifier to the unlabeled data, assigning pseudo labels to all teeth. Through pseudo labeling, we are able to augment the periapical lesion data, thereby achieving a more balanced dataset overall. While the complementary module generates fewer predictions compared to the diagnosis module, it effectively compensates for the predictions that the diagnosis module may miss.
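A hedged sketch of this pseudo-labeling loop is shown below; the crop utility, the classifier interface, and the confidence cut-off are placeholders of our own and are not taken from the released implementation.

```python
# Pseudo-labeling with the enumeration module and a tooth classifier:
# crop each detected tooth, classify it, and keep confident predictions as
# pseudo-labels for the unlabeled panoramic X-rays. The 0.9 cut-off is an
# assumption for illustration, not a value reported in the paper.
def pseudo_label(unlabeled_images, enumeration_module, tooth_classifier,
                 min_confidence=0.9):
    pseudo_labels = []
    for image in unlabeled_images:
        for box in enumeration_module(image):            # tooth bounding boxes
            crop = image.crop(box)                        # e.g. a PIL-style crop
            label, confidence = tooth_classifier(crop)    # normal / caries / ...
            if confidence >= min_confidence:
                pseudo_labels.append({'image': image, 'bbox': box, 'label': label})
    return pseudo_labels
```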
## 2 Results
### Enumeration results
The enumeration module, trained using Mask-RCNN with the SwinT backbone, is evaluated on our set of 50 test images. The outcomes are detailed in Table 1. The AP50 score stands at 0.987, indicating that the module adeptly classifies and detects teeth with a high level of precision. Nevertheless, the AP75 score is relatively lower. This discrepancy can be attributed to variations in the ground truth bounding boxes, as they were labeled by different individuals, resulting in potential differences in exact bounding box coordinates across all images.
\begin{table}
\begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{Task} \\ \cline{2-3} & Bounding box & Segmentation \\ \hline mAP & 0.589 & 0.548 \\ AP50 & 0.987 & 0.976 \\ AP75 & 0.636 & 0.552 \\ AR & 0.665 & 0.615 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of Enumeration Module

### Ensemble diagnosis results

The diagnosis module, trained using DiffusionDet and DINO with a ResNet50 backbone, is evaluated on our set of 50 test images. The outcomes are outlined in Table 2. The mAP, AP50, and AP75 scores are notably higher in DiffusionDet compared to DINO. However, the AR score is significantly superior in DINO, owing to its output of 32% more detections than DiffusionDet. Consequently, we combine these two models in an ensemble approach to capitalize on their strengths. The ensemble module selects the DiffusionDet model when the prediction score exceeds a certain threshold, otherwise it selects DINO. After conducting experimentation on the test data, we set the threshold at 0.05. This strategic choice yields a slight increase in mAP and AP50, along with a substantial improvement in AR. The results indicate a notable enhancement in overall performance.
### Dual stage integration results
The outcomes of the dual-stage integration are outlined in Table 3. We employed the closest bounding box matching method to integrate the enumeration and diagnosis modules, enabling the prediction of category IDs 1, 2, and 3.
### Complementary module results
The inclusion of the complementary module yields an overall enhancement in the metrics. The outcomes are consolidated in Table 4. Notably, there is a substantial increase in the AP50 scores for both disease and enumeration. The complementary module is trained using an expanded dataset, which encompasses unlabeled data. In particular, we augmented the periapical lesion and deep caries data by a factor of two, thereby mitigating the training data's imbalance. Consequently, the complementary module contributes supplementary predictions to the dual-stage integration outputs, resulting in improved precision and recall.

\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Metric} \\ \cline{2-5} & mAP & AP50 & AP75 & AR \\ \hline DiffusionDet & 0.373 & **0.614** & 0.436 & 0.634 \\ DINO & 0.282 & 0.438 & 0.323 & 0.705 \\ Ensemble & **0.380** & 0.610 & **0.451** & **0.715** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of Diagnosis Module

\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Metric} \\ \cline{2-5} & mAP & AP50 & AP75 & AR \\ \hline Quadrant & 0.404 & 0.638 & 0.479 & 0.749 \\ Enumeration & 0.749 & 0.365 & 0.246 & 0.643 \\ Disease & 0.380 & 0.610 & 0.451 & 0.715 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of Dual-Stage Integration
|
2304.03801 | Towards Inclusive Fairness Evaluation via Eliciting Disagreement
Feedback from Non-Expert Stakeholders | Traditional algorithmic fairness notions rely on label feedback, which can
only be elicited from expert critics. However, in most practical applications,
several non-expert stakeholders also play a major role in the system and can
have distinctive opinions about the decision making philosophy. For example, in
kidney placement programs, transplant surgeons are very wary about accepting
kidney offers for black patients due to genetic reasons. However, non-expert
stakeholders in kidney placement programs (e.g. patients, donors and their
family members) may misinterpret such decisions from the perspective of social
discrimination. This paper evaluates group fairness notions from the viewpoint
of non-expert stakeholders, who can only provide binary
\emph{agreement/disagreement feedback} regarding the decision in context.
Specifically, two types of group fairness notions have been identified: (i)
\emph{definite notions} (e.g. calibration), which can be evaluated exactly
using disagreement feedback, and (ii) \emph{indefinite notions} (e.g. equal
opportunity) which suffer from uncertainty due to lack of label feedback. In
the case of indefinite notions, bounds are presented based on disagreement
rates, and an estimate is constructed based on established bounds. The efficacy
of all our findings are validated empirically on real human feedback dataset. | Mukund Telukunta, Venkata Sriram Siddhardh Nadendla | 2023-04-07T18:09:43Z | http://arxiv.org/abs/2304.03801v1 | # Towards Inclusive Fairness Evaluation via Eliciting Disagreement Feedback from Non-Expert Stakeholders
###### Abstract
Traditional algorithmic fairness notions rely on label feedback, which can only be elicited from expert critics. However, in most practical applications, several non-expert stakeholders also play a major role in the system and can have distinctive opinions about the decision making philosophy. For example, in kidney placement programs, transplant surgeons are very wary about accepting kidney offers for black patients due to genetic reasons. However, non-expert stakeholders in kidney placement programs (e.g. patients, donors and their family members) may misinterpret such decisions from the perspective of social discrimination. This paper evaluates group fairness notions from the viewpoint of non-expert stakeholders, who can only provide binary _agreement/disagreement feedback_ regarding the decision in context. Specifically, two types of group fairness notions have been identified: (i) _definite notions_ (e.g. calibration [5]), which can be evaluated exactly using disagreement feedback, and (ii) _indefinite notions_ (e.g. equal opportunity [14], predictive parity [5]) which suffer from uncertainty due to lack of label feedback. In the case of indefinite notions, bounds are presented based on disagreement rates, and an estimate is constructed based on established bounds. The efficacy of all our findings is validated empirically on a real human feedback dataset.
## 1 Introduction
Social discrimination and biases in trained machine learning (ML) models have been investigated extensively using group fairness notions, where the ML-based classifier is compared against ground truth labels elicited from expert critics. However, most practical systems comprise multiple stakeholders with heterogeneous expertise and backgrounds who are either decision-makers, or participants who get impacted by these decisions. Unfortunately, the state-of-the-art algorithmic fairness notions rely on label feedback which can only be elicited from expert stakeholders. This selective feedback elicitation policy has raised concerns amongst other non-expert stakeholders, since their opinions are neglected in the fairness evaluation. Therefore, the main goal of this paper is to develop an inclusive fairness evaluation approach that relies on non-expert stakeholders' feedback to assess various group fairness notions [14, 5] in ML-based predictive systems.
However, fairness evaluation through non-expert feedback elicitation comes with many challenges. Firstly, non-expert stakeholders lack the expertise to fathom the technical attributes of a given input. Revealing such attributes can only lead to cognitive overloading, thereby discouraging non-expert critics from participating in the fairness evaluation process. Secondly, the ambiguity in non-expert critic feedback increases with the number of classes within the classification problem. For example, in a score-based classifier (e.g. COMPAS' recidivism score predictors), non-expert critics may not decipher the nuances between two scores with little gap between them. Thirdly, most practical systems comprise several stakeholders with diverse opinions. This makes it economically infeasible to collect feedback using large surveys.
Due to the aforementioned reasons, this paper proposes a simple feedback elicitation model for \(M\)-ary classifiers (where the magnitude of the output space is greater than 2), in which non-expert stakeholders are requested to reveal binary feedback regarding their agreement/disagreement with the classifier's outcome label. The proposed feedback model does not necessarily align with any one fairness notion. Given such a generalized feedback elicitation model, traditional group fairness notions are sorted broadly into two categories: (i) _definite notions_ (e.g. calibration [5]), which can be precisely evaluated from disagreement feedback, and (ii) _indefinite notions_ (e.g. equal opportunity [14], predictive parity [5]), which can be estimated from disagreement feedback along with given system information. Both upper and lower bounds on indefinite group fairness notions are computed based on elicited disagreement rates. Using these bounds, indefinite group fairness notions are estimated and validated empirically on simulated disagreements constructed using a real dataset collected from 400 crowd workers on the COMPAS system [9]. Results demonstrate that the proposed estimates of indefinite group fairness notions based on disagreement feedback exhibit low error across a wide range of critics in the crowd.
## 2 Case Study: Kidney Placement in United States
Most patients with end-stage renal disease (ESRD) prefer kidney transplantation over long-term dialysis to improve their survival rates. Unfortunately, a significant fraction of procured kidneys in the U.S. are discarded for diverse reasons (e.g. 20% of procured kidneys were discarded in 2019), in spite of a severe shortage of kidneys available for transplantation [16]. As a result, certain social groups (e.g. the African American population, patients with hard-to-place kidneys) are severely disadvantaged in finding a match with deceased donor organs. Various researchers, including the United Network for Organ Sharing (UNOS), have developed recommender systems [20, 28, 26] using machine learning (ML) algorithms which provide appropriate insights to the experts (e.g. doctors, surgeons) to help make quick and reliable decisions regarding kidney offers and minimize discards. One notable example is the model developed by [2], which predicts the probability of a patient being offered a deceased donor kidney of some quality within some time-frame, given patient characteristics. However, the presence of biases in the
data collected at various stages within the kidney matching workflow continues to be a major cause of the significantly high kidney discard rate. For example, Kidney Donor Risk Index (KDRI) scores quantify the quality of a deceased donor kidney based on several factors such as age, race, diabetes, and hypertension. However, the KDRI also explicitly introduces racial bias against African American donors at the organ procurement organizations (OPOs) during the measurement of the donor's kidney quality. Although such a bias is associated with lower allograft or patient survival [3] amongst Black donors, models trained on such data continue to feed the disparate treatment of the Black population, thus leading to distrust in such communities. On the other end, herding behavior is observed at the transplant centers (TXCs), when healthcare professionals blindly reject kidney offers from OPOs if they have been rejected repeatedly in the past. Such unintended biases incorporated into the training workflow result in a discriminatory model regardless of its accuracy. Moreover, many surgeons could have optimism bias, which causes them to overestimate the availability and promptness of a better-matched organ [20].
In the past, biases in such systems have been investigated based on feedback elicited from experts (e.g. transplant surgeons), who possess enough knowledge regarding the application at hand. On the contrary, it is also essential to consider the opinions of non-expert critics (e.g. patients, donors) who are potential stakeholders and, more importantly, the victims of social discrimination. Unlike with experts, it is infeasible to elicit label feedback from non-experts due to their lack of domain knowledge. For example, given the characteristics of a deceased donor kidney (e.g. KDRI, history of diabetes and cancer), it is impossible for a non-expert critic to decide whether or not to accept the kidney for transplantation. In order to address this challenge, we propose a novel feedback elicitation model based on _disagreements_. When presented with the decision made by the system for a specific deceased donor, the non-expert critic can either _agree_ or _disagree_ with the decision based on their intrinsic and unknown fairness relation. We show that various group fairness notions can be estimated from the provided disagreements.
## 3 Preliminaries and Related Work
### Group Fairness
Over the past decade, several statistical group fairness notions have been proposed to measure the biases in a given system. Such fairness notions seek parity of some statistical measure (e.g. true positive rate, predictive parity value) across all the sensitive attributes (e.g. race, gender) present in the data. Specifically, group fairness notions measure the difference in a specific statistical measure between protected (e.g. Caucasians) and unprotected (e.g. African-Americans) groups of a sensitive attribute. Different versions of group-conditional metrics led to different statistical definitions of fairness [4, 6, 21, 23]. Consider \(\mathcal{X}\) and \(\mathcal{Y}\) as input and output spaces respectively, where \(|\mathcal{Y}|>2\). Let \(y=g(x)\in\mathcal{Y}\) be the outcome label given by the ML-based system for some input \(x\in\mathcal{X}\). On the other hand, let \(z=f(x)\) be the label given by an alternate classifier (e.g. expert/non-expert critic) for the input \(x\). Furthermore, let \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\) denote the protected and unprotected sensitive groups respectively. Inspired by prior work [8], we define various group fairness notions in the case of \(M\)-ary classification (i.e. \(|\mathcal{Y}|>2\)) as follows.
Statistical parity [10]:This measure seeks to compute the probability difference of individuals who are predicted to be positive across different sensitive groups. Formally, if \(SP_{m,k}=\mathbb{P}(y=k\mid x\in\mathcal{X}_{m})\) denotes the conditional probability of the sensitive group \(\mathcal{X}_{m}\) to receive a label \(k\in\mathcal{Y}\), the statistical parity of the system \(g\) can be quantified as
\[\max_{k}\left(\max_{m,m^{\prime}}\ SP_{m,k}-SP_{m^{\prime},k}\right), \tag{1}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\). If the difference \(SP_{m,k}-SP_{m^{\prime},k}\) is greater than 0, then the protected group is benefited. On the other hand, if the difference is less than 0, the unprotected group is benefited. Note that the statistical parity of any system can be directly measured from the system's outcome labels, without any need for an alternative classifier. However, this is not the case with other statistical fairness metrics.
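For concreteness, a small NumPy helper of our own (not from the paper) that evaluates eq. (1) from the system's outcomes and the group memberships could look as follows:

```python
# Statistical parity, eq. (1): the largest gap across labels in P(y = k | group).
import numpy as np

def statistical_parity(y, groups, labels):
    """y: system outcomes, groups: sensitive-group id per sample (NumPy arrays)."""
    gaps = []
    for k in labels:
        rates = [np.mean(y[groups == g] == k) for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```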
Calibration [5]:A classifier is said to satisfy calibration if both protected and unprotected groups have almost similar positive predictive values (PPV). The PPV represents the probability of an individual with a positive prediction actually experiencing a positive outcome. Formally, if \(C_{m,k}=\mathbb{P}(z=k\mid y=k,x\in\mathcal{X}_{m})\) denote the positive predictive rate for the group \(\mathcal{X}_{m}\), the calibration of the system \(g\) with respect to the classifier \(f\) is computed as
\[\max_{k}\left(\max_{m,m^{\prime}}\ C_{m,k}-C_{m^{\prime},k}\right), \tag{2}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\).
Accuracy Equality [1]: This statistical measure computes the probability that both the classifiers \(g\) and \(f\) yield the same label. Specifically, if \(AE_{m,k}=\mathbb{P}(y=z\mid x\in\mathcal{X}_{m})\) denotes the conditional probability that both classifiers output the same label for the sensitive group \(\mathcal{X}_{m}\), the accuracy equality of the system \(g\) with respect to the alternative classifier \(f\) is quantified as
\[\max_{k}\left(\max_{m,m^{\prime}}\ AE_{m,k}-AE_{m^{\prime},k}\right), \tag{3}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\).
Equal Opportunity [13]:A classifier is said to satisfy equal opportunity when both protected and unprotected groups have similar true positive rates (TPR). Formally, if \(EO_{m,k}=\mathbb{P}(y=k\mid z=k,x\in\mathcal{X}_{m})\) denotes the equal opportunity
rate for the group \(\mathcal{X}_{m}\), the equal opportunity of the system \(g\) with respect to the label \(z=f(x)\) is given as
\[\max_{k}\left(\max_{m,m^{\prime}}\ EO_{m,k}-EO_{m^{\prime},k}\right), \tag{4}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\).
Predictive Equality [7]: A classifier is said to satisfy predictive equality when both protected and unprotected groups have similar false positive rates (FPR). Formally, if \(PE_{m,k}=\mathbb{P}(y=k\ |\ z\neq k,x\in\mathcal{X}_{m})\) denotes the predictive equality rate of the group \(\mathcal{X}_{m}\), the predictive equality of the system \(g\) with respect to the classifier \(f\) is computed as
\[\max_{k}\left(\max_{m,m^{\prime}}\ PE_{m,k}-PE_{m^{\prime},k}\right), \tag{5}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\).
Overall Misclassification Rate [24]: If \(OMR_{m,k}=\mathbb{P}(y\neq k\ |\ z=k,x\in\mathcal{X}_{m})\), the overall misclassification rate of the system \(g\) with respect to the classifier \(f\) is given as
\[\max_{k}\left(\max_{m,m^{\prime}}\ OMR_{m,k}-OMR_{m^{\prime},k}\right), \tag{6}\]
for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\).
### Survey on Feedback Elicitation for Group Fairness Evaluation
In the past, several researchers have attempted to model human perception of fairness, but have always tried to fit their revealed feedback to one of the traditional fairness notions. For instance, in an experiment performed by [27], critics were asked to choose between two different models to identify which notion of fairness (demographic parity or equalized odds) best captures people's perception in the context of both risk assessment and medical applications. Likewise, another team surveyed 502 workers on Amazon's MTurk platform and observed a preference towards _equal opportunity_ in [15]. Dressel and Farid [9] showed that COMPAS is as accurate and fair as untrained human auditors at predicting recidivism scores. On the other hand, [29] proposed a novel fairness notion, equality of opportunity (EOP), which requires that the distribution of utility should be the same for individuals with similar desert. Based on eliciting human judgments, they learned the proposed EOP notion in the context of criminal risk assessment. Results show that EOP performs better than existing notions of algorithmic fairness in terms of equalizing utility distribution across groups. Another interesting work is by [12], who discovered that people's fairness concerns are typically multi-dimensional (relevance, reliability, and volitionality), especially when binary feedback was elicited. This means that
modeling human feedback should consider several factors beyond social discrimination. A major drawback of these approaches is that the demographics of the participants involved in the experiments [29, 12, 15, 25] are not evenly distributed. For instance, the conducted experiments ask how models treated Caucasians and African-Americans, but there were insufficient non-Caucasian participants to assess whether there was a relationship between the participant's own demographics and what group was disadvantaged. Moreover, in the existing literature the participants are presented with multiple questions, an approach which cannot be scaled to larger decision-based models [29]. Similar efforts have also been carried out in the case of individual fairness notions [10]. Since individual fairness is beyond the scope of this work, a survey on this topic is omitted for the sake of brevity. Interested readers may refer to [18, 11, 25, 17, 19] for more details.
## 4 Non-Expert Disagreement Model
As defined earlier in Section 3.1, let \(\mathcal{X}\) denote the high-dimensional input space where each \(x_{i}\in\mathcal{X}\) represents the input characteristics (e.g. gender, race, history of cancer/history of crime) for all \(i=\{1,\cdots,N\}\) input samples. On the other hand, let \(\mathcal{Y}\) denote the output space (e.g. donor kidney quality score, decile score) where \(|\mathcal{Y}|>2\). Consider \(g:\mathcal{X}\rightarrow\mathcal{Y}\) a ML-based system and \(\hat{y}=g(x)\in\mathcal{Y}\) denote the score given to the input profile \(x\in\mathcal{X}\). The goal of this paper is to evaluate the social biases present in the system \(g\) from the outcome disagreements elicited from a non-expert stakeholder who evaluates using an intrinsic classifier \(f\). Without any loss of generality, let \(z=f(x)\) denote the intrinsic label of the non-expert critic, who uses an unknown classifier \(f\) on the given input \(x\). Henceforth, we regard _outcome label_ as the label \(y\) given by the recommender system, and _intrinsic label_ as the unknown label \(z\) of the non-expert critic. The non-expert classifier \(f\) is typically deemed unreliable since the critics often lack technical knowledge and make amateur judgements. Therefore, this paper assumes that, given an input profile \(x\) and outcome label \(y=g(x)\), feedback from non-expert critics is elicited in the form of a binary _disagreement_\(s\in\{0,1\}\), as defined below.
Definition 1 (Non-Expert Disagreement Model): Given the outcome label of the system \(y=g(x)\), the disagreement feedback at the non-expert critic is given by
\[s(y)\ =\ \begin{cases}1,&\text{if }z\neq y,\\ 0,&\text{otherwise.}\end{cases} \tag{7}\]
If the non-expert critic agrees with the outcome, we assume that the true intrinsic label coincides with the outcome label given by the system, i.e. \(z=y\). If the non-expert critic disagrees with the outcome, the intrinsic label can be any other outcome label. For example, assume that the recidivism tool COMPAS predicts a decile score (usually on a scale of 1-10) of 8 for a male,
African-American defendant. Assuming that the non-expert disagrees with this outcome given by COMPAS, his/her true intrinsic outcome may lie anywhere in the range \([1,8)\cup(8,10]\). Note that this uncertainty in the non-expert's true intrinsic labels can increase with the number of labels in the outcome space.
Furthermore, assume that the input population space \(\mathcal{X}\) is partitioned into \(M\) groups, namely \(X_{0},\cdots X_{M-1}\), where \(X_{0}\) represents the non-sensitive group, while all other groups are sensitive in nature. For example, if there are two types of attributes in input profiles, namely gender (male vs. non-male) and race (Caucasian vs. others), \(\mathcal{X}\) can be partitioned into \(\mathcal{X}_{0}\triangleq\mathcal{X}_{M,C}\) (a group of Caucasian males), \(\mathcal{X}_{1}\triangleq\mathcal{X}_{NM,C}\) (a group of Caucasian non-males), \(\mathcal{X}_{2}\triangleq\mathcal{X}_{M,O}\) (a group of males from other races) and \(\mathcal{X}_{3}\triangleq\mathcal{X}_{NM,O}\) (a group of non-males from other races). In such a partition, \(\mathcal{X}_{0}\triangleq\mathcal{X}_{M,C}\) represents the non-sensitive group, while all other groups are sensitive. Then, the \(\epsilon\)-disagreement rate with respect to the group \(\mathcal{X}_{m}\) is defined as
\[\begin{split} DR_{m}&=\mathbb{P}(s=1\ |\ x\in\mathcal{X}_{m})\\ &=\mathbb{P}(z\neq y\ |\ x\in\mathcal{X}_{m})\end{split} \tag{8}\]
where \(s\) is the \(\epsilon\)-disagreement from Equation (7). Similarly, let the conditional probability of _disagreements_ for a given outcome label \(k\in\mathcal{Y}\) be denoted as
\[\begin{split} DR_{m,k}&=\mathbb{P}(s=1\ |\ y=k,x\in \mathcal{X}_{m})\\ &=\mathbb{P}(z\neq k\ |\ y=k,x\in\mathcal{X}_{m})\end{split} \tag{9}\]
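For concreteness, the disagreement rates in Equations (8) and (9) can be estimated directly from the elicited binary feedback; the snippet below is a minimal sketch (the function and variable names are ours, not the paper's).

```python
import numpy as np

def disagreement_rates(s, y, groups, labels):
    """Empirical DR_m = P(s = 1 | x in X_m) and DR_{m,k} = P(s = 1 | y = k, x in X_m)."""
    s, y, groups = map(np.asarray, (s, y, groups))
    dr_m, dr_mk = {}, {}
    for m in np.unique(groups):
        in_group = groups == m
        dr_m[m] = float(np.mean(s[in_group]))
        for k in labels:
            mask = in_group & (y == k)
            # undefined when group m never receives label k; stored as NaN in that case
            dr_mk[(m, k)] = float(np.mean(s[mask])) if mask.any() else float("nan")
    return dr_m, dr_mk
```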
## 5 Definite Notions
The set of group fairness notions which can be precisely computed from disagreement rates (and/or statistical parity rates of the system) is identified as the set of definite notions. We identify accuracy equality [1] and calibration [5] as definite notions, which are quantified as follows.
### Accuracy Equality
Proposition 1: _Given the disagreement rates of the non-expert \(DR_{m,k},DR_{m^{\prime},k}\) for sensitive groups \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\), the accuracy equality of the system \(g\) can be precisely computed as_
\[\begin{split}\max_{k}&\left(\max_{m,m^{\prime}}\ \ AE_{m,k}-AE_{m^{\prime},k}\right)\\ &\triangleq\max_{k}\left(\max_{m,m^{\prime}}\ \sum_{k\in\mathcal{Y}}DR_{m,k} \cdot SP_{m,k}-\sum_{k\in\mathcal{Y}}DR_{m^{\prime},k}\cdot SP_{m^{\prime},k} \right),\end{split} \tag{10}\]
_for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\)._
Proof: Using the accuracy equality rate \(AE_{m,k}\) for the label \(k\in\mathcal{Y}\) and the group \(\mathcal{X}_{m}\in\mathcal{X}\) from the Equation (3),
\[AE_{m,k}=\mathbb{P}(y=z\ |\ x\in\mathcal{X}_{m})=1-\mathbb{P}(y\neq z\ |\ x\in \mathcal{X}_{m}) \tag{11}\]
Considering all the possible labels \(k\in\mathcal{Y}\),
\[\begin{split} AE_{m,k}&=1-\sum_{k\in\mathcal{Y}} \mathbb{P}(y=k,z\neq k\ |\ x\in\mathcal{X}_{m})\\ &=1-\sum_{k\in\mathcal{Y}}\mathbb{P}(z\neq k\ |\ y=k,x\in \mathcal{X}_{m})\cdot\mathbb{P}(y=k\ |\ x\in\mathcal{X}_{m})\end{split} \tag{12}\]
Substituting the disagreement rate \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) for the label \(k\) and group \(\mathcal{X}_{m}\), we obtain
\[AE_{m,k}=1-\sum_{k\in\mathcal{Y}}DR_{m,k}\cdot SP_{m,k} \tag{13}\]
\(\sqcap\)\(\sqcup\)
### Calibration
Proposition 2: _Given the disagreement rates of the non-expert \(DR_{m,k},DR_{m^{\prime},k}\) for sensitive groups \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\), the calibration of the system \(g\) can be precisely computed as_
\[\max_{k}\left(\max_{m,m^{\prime}}\ C_{m,k}-C_{m^{\prime},k}\right)\triangleq \max_{k}\left(\max_{m,m^{\prime}}\ DR_{m,k}-DR_{m^{\prime},k}\right) \tag{14}\]
_for all \(k\in\mathcal{Y}\) and \(\mathcal{X}_{m},\mathcal{X}_{m^{\prime}}\in\mathcal{X}\)._
Proof: By definition, \(C_{m,k}=\mathbb{P}(z=k\mid y=k,x\in\mathcal{X}_{m})=1-DR_{m,k}\). Hence, using the definition of calibration from Equation (2),
\[\begin{split}|C_{m,k}-C_{m^{\prime},k}|&=|1-DR_{m,k}-1+DR_{m^{\prime},k}|\\ &=|DR_{m,k}-DR_{m^{\prime},k}|.\end{split} \tag{15}\]
\(\sqcap\)\(\sqcup\)
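Propositions 1 and 2 make the two definite notions directly computable from the disagreement and statistical parity rates; the following sketch (our own helpers, continuing the hypothetical dictionaries `dr_mk` and `sp_mk` keyed by group and label) implements Equations (10), (13) and (14).

```python
def accuracy_equality_gap(dr_mk, sp_mk, groups, labels):
    """AE_m = 1 - sum_k DR_{m,k} * SP_{m,k}; largest pairwise gap across groups (Prop. 1)."""
    ae = {m: 1.0 - sum(dr_mk[(m, k)] * sp_mk[(m, k)] for k in labels) for m in groups}
    return max(ae.values()) - min(ae.values())

def calibration_gap(dr_mk, groups, labels):
    """max over k of max_{m,m'} |DR_{m,k} - DR_{m',k}| (Prop. 2)."""
    return max(
        max(dr_mk[(m, k)] for m in groups) - min(dr_mk[(m, k)] for m in groups)
        for k in labels
    )
```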
## 6 Indefinite Notions: Bounds and Estimates
The set of group fairness notions which _cannot_ be computed precisely but can be estimated from disagreement rates (along with statistical parity rates) is identified as the set of indefinite notions. We classify the notions of equal opportunity [14], predictive equality [7], and overall misclassification rate [24] as indefinite notions. Given any indefinite group fairness rate \(R_{m,k}\) for a given label \(k\) under a sensitive group \(\mathcal{X}_{m}\), the generalized group fairness (GF) notion is defined as
\[GF=\max_{k}\Big{(}\max_{m,m^{\prime}}R_{m,k}-R_{m^{\prime},k}\Big{)}. \tag{16}\]
Then, we have the following estimate for \(GF\) based on lower \(L_{m,k}\) and upper \(U_{m,k}\) bounds computed using disagreement rates.
**Theorem 1**: _Suppose an indefinite group fairness rate \(R_{m,k}\) is bounded below by \(L_{m,k}\) and above by \(U_{m,k}\), where both \(L_{m,k}\) and \(U_{m,k}\) are computed using disagreement feedback. In other words, suppose_
\[L_{m,k}\leq R_{m,k}\leq U_{m,k},\]
_then the group fairness notion is bounded by_
\[\max_{k}\Big{(}\max_{m,m^{\prime}}L_{m,k}-U_{m^{\prime},k}\Big{)}\ \leq\ GF\ \leq\ \max_{k}\Big{(}\max_{m,m^{\prime}}U_{m,k}-L_{m^{\prime},k}\Big{)}. \tag{17}\]
_Furthermore, an estimate for \(GF\) based on these bounds is given by_
\[\hat{GF}=\frac{1}{2}\left[\max_{k}\Big{(}\max_{m,m^{\prime}}L_{m,k}-U_{m^{ \prime},k}\Big{)}+\max_{k}\Big{(}\max_{m,m^{\prime}}U_{m,k}-L_{m^{\prime},k} \Big{)}\right]. \tag{18}\]
Unfortunately, the upper bound computation from disagreement rates is found to be non-trivial. Therefore, we consider \(U_{m,k}=1\) for every indefinite notion.
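Theorem 1 reduces the estimation of any indefinite notion to combining per-group lower and upper bounds; a minimal sketch of this combination, using the convention \(U_{m,k}=1\) adopted above, is given below (names are ours).

```python
def gf_bounds_and_estimate(lower, groups, labels, upper=None):
    """Bounds of Eq. (17) and midpoint estimate of Eq. (18) for an indefinite notion.

    lower : dict mapping (m, k) to L_{m,k}; upper defaults to U_{m,k} = 1 for all (m, k).
    """
    if upper is None:
        upper = {(m, k): 1.0 for m in groups for k in labels}
    lo = max(lower[(m, k)] - upper[(mp, k)]
             for k in labels for m in groups for mp in groups)
    hi = max(upper[(m, k)] - lower[(mp, k)]
             for k in labels for m in groups for mp in groups)
    return lo, hi, 0.5 * (lo + hi)
```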
### Equal Opportunity
**Proposition 3**: _Given the rate of disagreements \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) of the system, the lower bound on equal opportunity of the recommender system \(g\) is given by_
\[EO_{m,k}\geq\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP_{m,k}+\sum_{ l\neq k}SP_{m,l}} \tag{19}\]
_for every sensitive group \(\mathcal{X}_{m}\) and output score \(k\in\mathcal{Y}\)._
_Proof._ Take the equal opportunity rate of the sensitive group \(\mathcal{X}_{m}\) and expand it using Bayes' theorem:
\[\begin{split} EO_{m,k}&=\mathbb{P}[y=k\ |\ z=k,x\in\mathcal{X}_{m}]\\ &=\frac{\mathbb{P}(y=k,z=k\ |\ x\in\mathcal{X}_{m})}{\mathbb{P}(z=k \ |\ x\in\mathcal{X}_{m})}.\end{split} \tag{20}\]
Expanding the denominator of Equation (20) over all possible labels \(l\in\mathcal{Y}\), we obtain
\[EO_{m,k}=\frac{\mathbb{P}(z=k\ |\ y=k,x\in\mathcal{X}_{m})\cdot\mathbb{P}(y=k\ |\ x\in\mathcal{X}_{m})}{\sum_{ l\in\mathcal{Y}}\mathbb{P}(z=k\ |\ y=l,x\in\mathcal{X}_{m})\cdot\mathbb{P}(y=l\ |\ x\in \mathcal{X}_{m})} \tag{21}\]
Substituting the disagreement rate \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) for the label \(k\) and group \(\mathcal{X}_{m}\), we obtain
\[EO_{m,k} =\frac{(1-DR_{m,k})\cdot SP_{m,k}}{\sum_{l\in\mathcal{Y}}\mathbb{P }(z=k\ |\ y=l,x\in\mathcal{X}_{m})\cdot SP_{m,l}}\] \[=\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP_{m,k}+ \sum_{l\neq k}\mathbb{P}(z=k\ |\ y=l,x\in\mathcal{X}_{m})\cdot SP_{m,l}} \tag{22}\] \[=\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP_{m,k}+ \sum_{l\neq k}\mathbb{P}(z=k,y=l\ |\ x\in\mathcal{X}_{m})}\]
We know that, \(\mathbb{P}(z=k,y=l\ |\ x\in\mathcal{X}_{m})=\mathbb{P}(z=k\ |\ y=l,x\in \mathcal{X}_{m})\cdot SP_{m,l}\leq SP_{m,l}\),
\[EO_{m,k}\geq\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP_{m,k}+\sum_ {l\neq k}SP_{m,l}} \tag{23}\]
Therefore, from Equation (17), the notion of equal opportunity is bounded by
\[\max_{k}\left(\max_{m,m^{\prime}}\frac{\phi_{m,k}}{\phi_{m,k}+\sum_{l\neq k} SP_{m,l}}-1\right)\ \leq\ EO\ \leq\ \max_{k}\left(\max_{m,m^{\prime}}1-\frac{\phi_{m^{\prime},k}}{\phi_{m^{\prime},k}+\sum_{l\neq k}SP_{m^{\prime},l}}\right) \tag{24}\]
and, from Equation (18), the estimate of equal opportunity from the computed bounds is as follows.
\[\hat{EO}=\frac{1}{2}\left[\max_{k}\left(\max_{m,m^{\prime}}\frac{\phi_{m,k}}{ \phi_{m,k}+\sum_{l\neq k}SP_{m,l}}-1\right)+\max_{k}\left(\max_{m,m^{\prime}}1 -\frac{\phi_{m^{\prime},k}}{\phi_{m^{\prime},k}+\sum_{l\neq k}SP_{m^{\prime}, l}}\right)\right] \tag{25}\]
where \(\phi_{m,k}=(1-DR_{m,k})\cdot SP_{m,k}\).
### Predictive Equality
**Proposition 4**: _Given the rate of disagreements \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) of the system, the lower bound on predictive equality of the recommender system \(g\) can be estimated as_
\[PE_{m,k}\geq\frac{DR_{m,k}\cdot SP_{m,k}}{DR_{m,k}\cdot SP_{m,k}+\sum_{l\neq k }SP_{m,l}} \tag{26}\]
_for every sensitive group \(\mathcal{X}_{m}\) and output score \(k\in\mathcal{Y}\)._
Proof: Take the predictive equality rate of the sensitive group \(\mathcal{X}_{m}\) and expand it using Bayes' theorem:
\[\begin{split} PE_{m,k}&=\mathbb{P}(y=k\ |\ z\neq k,x\in\mathcal{X}_{m})\\ &=\frac{\mathbb{P}(z\neq k\ |\ y=k,x\in\mathcal{X}_{m})\cdot \mathbb{P}(y=k\ |\ x\in\mathcal{X}_{m})}{\mathbb{P}(z\neq k\ |\ x\in\mathcal{X}_{m})}\end{split} \tag{27}\]
Substituting the disagreement rate \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) for the label \(k\) and group \(\mathcal{X}_{m}\), we obtain
\[\begin{split} PE_{m,k}&=\frac{DR_{m,k}\cdot SP_{m, k}}{\mathbb{P}(z\neq k\ |\ x\in\mathcal{X}_{m})}\\ &=\frac{DR_{m,k}\cdot SP_{m,k}}{\sum_{l\in\mathcal{Y}}\mathbb{P}( z\neq k,y=l\ |\ x\in\mathcal{X}_{m})}\\ &=\frac{DR_{m,k}\cdot SP_{m,k}}{DR_{m,k}\cdot SP_{m,k}+\sum_{l \neq k}\mathbb{P}(z\neq k,y=l\ |\ x\in\mathcal{X}_{m})}\\ &\geq\frac{DR_{m,k}\cdot SP_{m,k}}{DR_{m,k}\cdot SP_{m,k}+\sum_{l \neq k}SP_{m,l}}\end{split} \tag{28}\]
From Equation (17), the notion of predictive equality is bounded by
\[\max_{k}\left(\max_{m,m^{\prime}}\frac{\mu_{m,k}}{\mu_{m,k}+\sum_{l\neq k}SP_{m,l}}-1\right)\ \leq\ PE\ \leq\ \max_{k}\left(\max_{m,m^{\prime}}1-\frac{\mu_{m^{\prime},k}}{\mu_{m^{\prime}, k}+\sum_{l\neq k}SP_{m^{\prime},l}}\right) \tag{29}\]
and, from Equation (18), the estimate of predictive equality from the computed bounds is as follows.
\[\hat{PE}=\frac{1}{2}\left[\max_{k}\left(\max_{m,m^{\prime}}\frac{\mu_{m,k}}{ \mu_{m,k}+\sum_{l\neq k}SP_{m,l}}-1\right)+\max_{k}\left(\max_{m,m^{\prime}}1- \frac{\mu_{m^{\prime},k}}{\mu_{m^{\prime},k}+\sum_{l\neq k}SP_{m^{\prime},l}} \right)\right] \tag{30}\]
where \(\mu_{m,k}=DR_{m,k}\cdot SP_{m,k}\).
### Overall Misclassification Rate
**Proposition 5**: _Given the rate of disagreements \(DR_{m,k}\) and statistical parity rate \(SP_{m,k}\) of the system, the lower bound on overall misclassification rate of the
recommender system \(g\) is given by_
\[OMR_{m,k}\geq\frac{\sum_{l\neq k}SP_{m,l}}{(1-DR_{m,k})\cdot SP_{m,k}+\sum_{l\neq k }SP_{m,l}} \tag{31}\]
_for every sensitive group \(\mathcal{X}_{m}\) and output score \(k\in\mathcal{Y}\)._
Proof: From the lower bound on equal opportunity established in Proposition 3, we have
\[EO_{m,k}\geq\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP_{m,k}+\sum_ {l\neq k}SP_{m,l}} \tag{32}\]
We know that, \(OMR_{m,k}=1-EO_{m,k}\). Therefore we have,
\[OMR_{m,k}\geq 1-\frac{(1-DR_{m,k})\cdot SP_{m,k}}{(1-DR_{m,k})\cdot SP _{m,k}+\sum_{l\neq k}SP_{m,l}} \tag{33}\] \[=\frac{\sum_{l\neq k}SP_{m,l}}{(1-DR_{m,k})\cdot SP_{m,k}+\sum_{l \neq k}SP_{m,l}}\]
Therefore, from Equation (17), the notion of overall misclassification is bounded by
\[\max_{k}\left(\max_{m,m^{\prime}}\frac{\Omega_{m,k}}{\phi_{m,k}+\Omega_{m,k}} -1\right)\ \leq\ OMR\ \leq\ \max_{k}\left(\max_{m,m^{\prime}}1-\frac{\Omega_{m^{\prime},k}}{\phi_{m^{ \prime},k}+\Omega_{m^{\prime},k}}\right) \tag{34}\]
and, from Equation (18), the estimate of overall misclassification from the computed bounds is as follows.
\[\widehat{OMR}=\frac{1}{2}\left[\max_{k}\left(\max_{m,m^{\prime}}\frac{\Omega _{m,k}}{\phi_{m,k}+\Omega_{m,k}}-1\right)+\max_{k}\left(\max_{m,m^{\prime}}1- \frac{\Omega_{m^{\prime},k}}{\phi_{m^{\prime},k}+\Omega_{m^{\prime},k}}\right)\right] \tag{35}\]
where \(\Omega_{m,k}=\sum_{l\neq k}SP_{m,l}\) and \(\phi_{m,k}=(1-DR_{m,k})\cdot SP_{m,k}\).
## 7 Validation Methodology and Results
We validate our theoretical findings using the real human feedback collected by Dressel and Farid [9]. In this data acquisition experiment, a short description of the defendant (gender, age, race, and previous criminal history) is provided to the human critics. A total of 1000 defendant descriptions are used, drawn randomly from the original ProPublica COMPAS dataset. Furthermore, these descriptions were divided into 20 subsets of 50 each. The experiment consisted of 400 different critics recruited from Amazon Mechanical Turk, and each one of them was randomly assigned to see one of these 20 subsets. The participants were then asked to respond either _yes_ or _no_ to the question "Do you think this person will commit another crime within 2 years?". From these responses, we process the dataset to obtain the disagreement feedback of each critic. Since the responses are binary, the disagreement feedback can be extracted as \(s=y\oplus z\), where \(y\) denotes the COMPAS outcome label, \(z\) denotes the critic's response, and \(\oplus\) represents the XOR operation. In other words, if the critic predicts the label correctly, then he/she _agrees_ with the outcome label. On the other hand, if their prediction is incorrect, the critic _disagrees_ with the outcome. We also assume that the outcome labels generated by COMPAS are presented to the critics.
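The processing of the binary responses described above amounts to an element-wise XOR between the COMPAS labels and each critic's answers; the short sketch below (variable names are ours) illustrates this step for a single critic.

```python
import numpy as np

def extract_disagreements(y, z):
    """Binary disagreement feedback s = y XOR z for one critic's subset of defendants."""
    y = np.asarray(y, dtype=int)   # COMPAS outcome labels (0/1) for the shown defendants
    z = np.asarray(z, dtype=int)   # the critic's yes/no responses encoded as 1/0
    return np.bitwise_xor(y, z)    # s = 1 exactly where the critic disagrees with COMPAS

# e.g. extract_disagreements([1, 0, 1], [1, 1, 0]) -> array([0, 1, 1])
```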
Figure 1: Comparison of True Group Fairness (Grey), Proposed Estimate (Black), Upper (Red) and Lower bounds (Blue), and their respective mean error averaged over all 400 critics
Given the disagreement feedback from 400 critics, we evaluated COMPAS for different group fairness notions across both race and gender. Figure 1(a) demonstrates that the estimated equal opportunity (black line) from lower and upper bounds is close to 0, as opposed to the true equal opportunity (blue line), for most of the critics. Moreover, a few violations in both upper and lower bounds can be observed. These violations may arise because of deviations from Bayes' rule in human behavior, which have been documented in the psychology literature [22]. Similar to equal opportunity, the estimated predictive equality remains close to zero for the majority of the critics. Figure 1(b) depicts that the estimated predictive equality follows the trends of the true predictive equality for the majority of the critics. Unfortunately, the number of violations increased in the case of overall misclassification rates, where the predicted overall misclassification lies around 0.2.
Figure 1(d) demonstrates the mean absolute error of the 400 critics across the three indefinite notions. In the case of estimated equal opportunity, the error varies from about 5% to 22%, with a mean error of 12%. On the other hand, the mean error in estimating predictive equality is about 17%, with a maximum error of 30%. Similarly, the mean absolute error in estimating overall misclassification is around 15%. The minimum error rate (around 2.5%) is similar for all three group fairness notions.
## 8 Conclusions and Future Work
In this paper, we proposed a novel feedback elicitation model based on non-expert disagreements. We identified two sets of group fairness notions: one which can be precisely quantified from disagreement rates, and another which can be estimated from computed lower and upper bounds. Moreover, we validated our theoretical findings using real human feedback data across different group fairness notions and sensitive groups. In future work, the objective is to apply the proposed feedback elicitation model to the kidney placement application by collecting actual disagreement feedback from patients and donors. Additionally, we hope to explore the relation between individual fairness and disagreements as well.
|
2303.01069 | Implicit Neural Representations for Modeling of Abdominal Aortic
Aneurysm Progression | Abdominal aortic aneurysms (AAAs) are progressive dilatations of the
abdominal aorta that, if left untreated, can rupture with lethal consequences.
Imaging-based patient monitoring is required to select patients eligible for
surgical repair. In this work, we present a model based on implicit neural
representations (INRs) to model AAA progression. We represent the AAA wall over
time as the zero-level set of a signed distance function (SDF), estimated by a
multilayer perception that operates on space and time. We optimize this INR
using automatically extracted segmentation masks in longitudinal CT data. This
network is conditioned on spatiotemporal coordinates and represents the AAA
surface at any desired resolution at any moment in time. Using regularization
on spatial and temporal gradients of the SDF, we ensure proper interpolation of
the AAA shape. We demonstrate the network's ability to produce AAA
interpolations with average surface distances ranging between 0.72 and 2.52 mm
from images acquired at highly irregular intervals. The results indicate that
our model can accurately interpolate AAA shapes over time, with potential
clinical value for a more personalised assessment of AAA progression. | Dieuwertje Alblas, Marieke Hofman, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink | 2023-03-02T08:43:40Z | http://arxiv.org/abs/2303.01069v1 | # Implicit Neural Representations for Modeling of Abdominal Aortic Aneurysm Progression
###### Abstract
Abdominal aortic aneurysms (AAAs) are progressive dilatations of the abdominal aorta that, if left untreated, can rupture with lethal consequences. Imaging-based patient monitoring is required to select patients eligible for surgical repair. In this work, we present a model based on implicit neural representations (INRs) to model AAA progression. We represent the AAA wall over time as the zero-level set of a signed distance function (SDF), estimated by a multilayer perceptron that operates on space and time. We optimize this INR using automatically extracted segmentation masks in longitudinal CT data. This network is conditioned on spatiotemporal coordinates and represents the AAA surface at any desired resolution at any moment in time. Using regularisation on spatial and temporal gradients of the SDF, we ensure proper interpolation of the AAA shape. We demonstrate the network's ability to produce AAA interpolations with average surface distances ranging between 0.72 and 2.52 mm from images acquired at highly irregular intervals. The results indicate that our model can accurately interpolate AAA shapes over time, with potential clinical value for a more personalised assessment of AAA progression.
Keywords: Abdominal aortic aneurysm, Implicit neural representation, Deep learning, Aneurysm progression
## 1 Introduction
Abdominal aortic aneurysms (AAAs) are progressive local dilatations of the abdominal aorta of at least 30 mm that most frequently occur below the renal arteries. AAAs are mostly asymptomatic, but rupture of an AAA has a mortality rate of 70-80% [3]. To avert rupture, patients can undergo elective repair via either open surgery or an endovascular procedure. Patients become eligible for surgical repair if the diameter of the AAA exceeds a threshold (5.5 cm in men,
5.0 cm in women) or if the AAA diameter has increased more than 1 cm in a year [16].
Prior to elective repair, patients are monitored via periodic outpatient clinic visits and imaging with ultrasound or CT. Although these longitudinal images are primarily used to measure the diameter of the aneurysm, they contain a wealth of information that may be leveraged to better model AAA progression in individual patients [6]. Detailed insight into personalised AAA progression has the potential to aid the physician in clinical decision-making by filling in the gaps in surveillance data. Previous efforts to model the progression of AAAs based on longitudinal imaging include models based on Gaussian processes that represent an underlying deformation field [4], Markov chains [21], deep belief networks [9], or CNNs operating on the surface of the AAA [10].
Recently, implicit neural representations (INRs) have gained traction as natural representations for signals on a spatial or spatiotemporal domain [20]. INRs are multilayer perceptrons that take continuous coordinates as input and output the value of the signal or function at that point [14]. INRs are attractive representation models as derivatives of the signal can be analytically computed using automatic differentiation. In medical imaging, INRs have been used for, e.g., sparse-view CT reconstruction [13, 15] and image registration [19]. Moreover, INRs can be used to accurately represent shapes [12], which has led to applications in cell shape synthesis [18], statistical shape modeling [11, 2], or surface fitting based on point cloud annotations [1].
In this work, we propose to use INRs with a time coordinate to represent a longitudinal 3D AAA model of a patient and investigate to what extent such a model can be used to _interpolate_ and _extrapolate_ the AAA surface in time.
## 2 Methods
We represent the evolving AAA surface as the zero level set of its temporal signed distance function (SDF). We parametrize this function by a neural network \(f(\mathbf{x},t;\theta)\), with weights \(\theta\).
### Signed distance function
A surface can be implicitly represented by the zero level set of its signed distance function. We consider a manifold evolving over time, that we represent by a temporal SDF: \(SDF(\mathbf{x},t):\mathbb{R}^{3}\times\mathbb{R}\mapsto\mathbb{R}\). The value of the \(SDF(\mathbf{x},t)\) represents the minimum distance to the surface at location \(\mathbf{x}\) at time \(t\). The temporal SDF of an evolving 2D manifold \(\mathcal{M}\) embedded in \(\mathbb{R}^{3}\times\mathbb{R}\) is defined as:
\[SDF_{\mathcal{M}}(\mathbf{x},t)=\begin{cases}-d(\mathbf{x},\mathcal{M})&\mathbf{x}\text{ inside }\mathcal{M}\text{ at time }t\\ 0&\mathbf{x}\text{ on }\mathcal{M}\text{ at time }t\\ d(\mathbf{x},\mathcal{M})&\mathbf{x}\text{ outside }\mathcal{M}\text{ at time }t.\end{cases} \tag{1}\]
Moreover, the signed distance function is a solution to the Eikonal equation at each instance in time: \(||\nabla_{x}SDF(\mathbf{x},t)||=1,\forall\mathbf{x},t\).
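As a simple sanity check (our own toy example, not taken from the paper), a sphere whose radius grows linearly in time has the temporal SDF \(\|\mathbf{x}\|-r(t)\), which is negative inside, zero on, and positive outside the surface, and satisfies the Eikonal equation in \(\mathbf{x}\) at every \(t\):

```python
import numpy as np

def sphere_sdf(x, t, r0=0.3, growth=0.1):
    """Temporal SDF of a sphere centred at the origin with radius r(t) = r0 + growth * t."""
    x = np.atleast_2d(x)                          # shape (n_points, 3)
    return np.linalg.norm(x, axis=1) - (r0 + growth * t)

# a point on the surface at t = 1 (radius 0.4) has SDF 0
print(sphere_sdf([[0.4, 0.0, 0.0]], t=1.0))       # -> [0.]
```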
### Implicit Neural Representations
In previous work, it has been shown that an SDF of a manifold can be represented by a neural network [1, 7, 12, 18]. Similarly, we embed the remodeling of the AAA over time in an implicit neural representation (INR). We use 4D coordinates from the spatiotemporal domain \(\Omega:=[-1,1]^{3}\times[-1,1]\) as input to the network \(f(\mathbf{x},t;\theta)\). The output node of our INR approximates the SDF value at the input coordinate. Figure 1 shows a schematic overview of our INR.
We aim to reconstruct \(SDF_{\text{AAA}}(\mathbf{x},t)\) given a sequence of point clouds of the AAA surface, representing the aneurysm shape of a single patient over \(J\) scans: \(\{\mathcal{X}_{j}\}_{j=1,\dots,J}\), where \(\mathcal{X}_{j}\subset[-1,1]^{3}\). We denote individual points on the \(j^{\text{th}}\) AAA surface by \(\mathbf{x}_{i}^{j}\). To optimise the INR, we sample points on and off the AAA surface at multiple instances in time.
The loss function we use to optimize the INR consists of two terms: a term \(\mathcal{L}_{\text{data}_{j}}\) at each time point \(t_{j}\) where we have ground-truth scan data, and a term \(\mathcal{L}_{\text{reg}}\) that regularises the SDF at times the surface is unknown.
\[\mathcal{L}(\theta) =\sum_{1\leq j\leq J}\mathcal{L}_{\text{data}_{j}}(\theta)+ \mathcal{L}_{\text{reg}}(\theta), \tag{2}\] \[\mathcal{L}_{\text{data}_{j}}(\theta) =\frac{1}{N_{j}}\sum_{1\leq i\leq N_{j}}|f(\mathbf{x}_{i}^{j},t_{j}; \theta)|+\lambda_{1}\mathbb{E}(||\nabla_{x}f(\mathbf{x},t_{j};\theta)||-1)^{2}\] (3) \[+\lambda_{2}\mathbb{E}(|\nabla_{t}f(\mathbf{x},t_{j};\theta)|)\] \[\mathcal{L}_{\text{reg}} =\lambda_{3}\mathbb{E}\left(||\nabla_{x}f(\mathbf{x},\tilde{t}; \theta)||-1\right)^{2}+\lambda_{4}\mathbb{E}\left(|\nabla_{t}f(\mathbf{x},t;\theta )|\right). \tag{4}\]
The first term of \(\mathcal{L}_{\text{data}_{j}}\) was introduced in [7]. It ensures \(SDF(\mathbf{x}_{i}^{j},t_{j})=0\) for all points \(\mathbf{x}_{i}^{j}\) in pointcloud \(\mathcal{X}_{j}\), i.e. that points that are known to be on the AAA surface are indeed on the zero level set of the SDF. The remaining terms in both parts of the loss function regularise the INR's spatial and temporal gradient. As
Figure 1: Schematic representation of our INR, taking spatiotemporal coordinates \((\mathbf{x},t)\) as an input, outputting \(SDF(\mathbf{x},t)\) of the AAA surface. Note that a single INR represents the complete evolving AAA of a patient.
these terms do not depend on pointcloud data, we evaluate them both at times \(t_{j}\) as well as times data is unavailable. Regularising the norm of the spatial gradient was also introduced in [7] and enforces the INR to be a solution to the Eikonal equation. We evaluate this term at time \(t_{j}\) in \(\mathcal{L}_{\mathrm{data}_{j}}\), and at an arbitrary time point \(\tilde{t}\) in \(\mathcal{L}_{\mathrm{reg}}\). The temporal regularisation term is introduced in this work to restrict temporal changes of the INR. These are evaluated at time \(t_{j}\) and at multiple arbitrary time points in \(\mathcal{L}_{\mathrm{data}_{j}}\) and \(\mathcal{L}_{\mathrm{reg}}\) respectively.
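A compact PyTorch-style sketch of the loss in Equations (2)-(4) is shown below. It is illustrative only: the sampling strategy and helper names are ours, and the spatial and temporal gradients are obtained via automatic differentiation as described above.

```python
import torch

def inr_loss(model, surf_xt, off_xt, lambdas=(0.1, 0.1, 0.1, 0.1)):
    """Data term at scan times plus Eikonal and temporal regularisation (Eqs. 2-4).

    surf_xt : (N, 4) on-surface samples (x, y, z, t_j) at scan times
    off_xt  : (M, 4) samples at arbitrary points and times without scan data
    """
    l1, l2, l3, l4 = lambdas

    def sdf_and_grads(xt):
        xt = xt.clone().requires_grad_(True)
        sdf = model(xt)
        g = torch.autograd.grad(sdf.sum(), xt, create_graph=True)[0]
        return sdf, g[:, :3], g[:, 3]             # SDF value, spatial grad, temporal grad

    sdf_s, gx_s, gt_s = sdf_and_grads(surf_xt)
    _, gx_o, gt_o = sdf_and_grads(off_xt)

    data = sdf_s.abs().mean() \
        + l1 * ((gx_s.norm(dim=1) - 1.0) ** 2).mean() \
        + l2 * gt_s.abs().mean()
    reg = l3 * ((gx_o.norm(dim=1) - 1.0) ** 2).mean() \
        + l4 * gt_o.abs().mean()
    return data + reg
```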
### Data
We retrospectively included longitudinal CT data of four patients scanned at Amsterdam AMC (Amsterdam, The Netherlands) between 2011 and 2020. Three patients were scanned four times, and one patient was scanned five times (Fig. 2). Scan dates were shifted so that the first scan date of each patient became day 0. Patient 1 was scanned three times between day 0 and day 103, followed by a gap of almost three years. The first follow-up image of Patient 2 was after 851 days, after which two additional follow-up images were acquired relatively soon. Patient 3 was scanned more regularly. The follow-up for Patient 4 is the longest, with over 75 months of follow-up. CT scans were a mixture of non-contrast and contrast-enhanced images.
We obtained automatic segmentations of the AAA and vertebrae in each of these patients. All CT scans were processed using TotalSegmentator [17], a Python library based on nn-UNet [8] that segments \(>100\) structures in 3D CT images. This library segmented the vertebrae with good accuracy in both non-contrast and contrast-enhanced images and the AAA with high accuracy in all non-contrast images. However, segmentation of the AAA in contrast-enhanced images was unsatisfactory. Instead, we used an in-house dataset of 80 contrast-enhanced CT images of AAA patients with annotations of the AAA ranging between the top of the T12 vertebra and the iliac bifurcation to train an additional nn-UNet model. This model achieved a mean Dice similarity coefficient of 0.90 on a separate test set consisting of 13 contrast-enhanced CT scans.
Figure 2: Timeline of the CT scans of the four patients with longitudinal data, showing scan instances in days. Non-contrast scans are indicated with \({}^{\mathrm{NC}}\).
### Preprocessing
In order to evaluate local changes in shape over time, all shapes should be aligned in the same coordinate system. For this, we used rigid registration in ITK on the vertebra segmentations [4]. Subsequently, the surface of each aorta was extracted from the mask and represented as a point set. This resulted in aligned pointcloud representations of the AAA surface for each scan. Finally, before serving as input to the network, the spatial coordinates of the pointclouds of each patient were jointly normalized to the \([-1,1]^{3}\) domain. Similarly, the time scale of each patient was normalized to the \([-1,1]\) interval.
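The final normalisation step can be written compactly; the helper below is a sketch under the assumption of a joint isotropic scaling of all point clouds of one patient (the exact scaling used in the paper is not specified beyond mapping into \([-1,1]^{3}\) and \([-1,1]\)).

```python
import numpy as np

def normalise_patient(pointclouds, scan_days):
    """Jointly map a patient's point clouds into [-1, 1]^3 and scan times into [-1, 1]."""
    pts = np.concatenate(pointclouds, axis=0)
    center = (pts.max(axis=0) + pts.min(axis=0)) / 2.0
    scale = (pts.max(axis=0) - pts.min(axis=0)).max() / 2.0   # isotropic scale factor
    normalised = [(pc - center) / scale for pc in pointclouds]
    t = np.asarray(scan_days, dtype=float)
    t = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    return normalised, t
```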
## 3 Experiments and Results
In all cases, we used an MLP with six fully connected layers containing 256 nodes with ReLU activations and a final node representing the estimated SDF of the AAA surface. Like [1, 7], we used a skip connection, connecting the input to the third hidden layer. The regularization coefficients were set to \(\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda_{4}=0.1\). We used an Adam optimizer with a learning rate of 0.0001 to train our network for 25,000 epochs on an NVIDIA Quadro RTX 6000 GPU. The batch sizes depended on the size of point clouds and ranged between 2877 and 6027.
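The stated architecture can be rendered as follows in PyTorch (our own sketch of six hidden layers of 256 units with ReLU activations, a skip connection feeding the 4D input into the third hidden layer, and a single SDF output node).

```python
import torch
import torch.nn as nn

class TemporalSDFNet(nn.Module):
    """MLP f(x, t) -> SDF with the input concatenated back in at the third hidden layer."""

    def __init__(self, width=256):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(4, width), nn.ReLU(),
                                 nn.Linear(width, width), nn.ReLU())
        self.post = nn.Sequential(nn.Linear(width + 4, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU(),
                                  nn.Linear(width, width), nn.ReLU(),
                                  nn.Linear(width, 1))

    def forward(self, xt):                        # xt has shape (batch, 4): (x, y, z, t)
        h = self.pre(xt)
        return self.post(torch.cat([h, xt], dim=-1))
```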
### Interpolation and Extrapolation
For each patient, we first optimised a single INR (Fig. 1) based on point clouds from all available scans. Because the spatiotemporal input coordinates to the INR are continuous, we can retrieve a shape at any point in time at any resolution. We visualize this in Fig. 3(_left_), where we show ten AAA shapes of Patient 4 at regularly spaced intervals. In Fig. 3(_right_) we compare the diameters along the AAA centerlines of the ground-truth segmentation masks to the
Figure 3: An optimised INR can be used to extract shape interpolations at an arbitrary number of time points, here we show results for Patient 4. _Left_: We show extracted shapes at ten regularly spaced intervals in time. _Right:_ Diameter plots along the centerlines of the aorta, comparing the ground-truth segmentation mask (solid) to the surface fitted by the network at five time points where reference CT scans are available (dashed).
AAA surfaces reconstructed by the network, represented by solid and dashed lines respectively. We observe that the model accurately represents the AAA shapes at scan instances and will thus be used to evaluate the next experiment.
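Because the representation is continuous, a triangle mesh at an arbitrary time can be obtained by sampling the SDF on a regular grid and applying marching cubes; the sketch below uses scikit-image and an arbitrary grid resolution of our choosing.

```python
import numpy as np
import torch
from skimage.measure import marching_cubes

def extract_surface(model, t, res=128):
    """Mesh of the zero level set of f(., t) sampled on a res^3 grid over [-1, 1]^3."""
    xs = np.linspace(-1.0, 1.0, res, dtype=np.float32)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
    xt = np.concatenate([grid, np.full((grid.shape[0], 1), t, dtype=np.float32)], axis=1)
    with torch.no_grad():
        sdf = model(torch.from_numpy(xt)).numpy().reshape(res, res, res)
    verts, faces, _, _ = marching_cubes(sdf, level=0.0, spacing=(xs[1] - xs[0],) * 3)
    return verts - 1.0, faces         # shift vertex coordinates back into [-1, 1]^3
```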
Next, we performed a series of leave-one-out experiments in which we optimised an INR for a patient but left out one of the time points. We used the optimised INR to estimate what the surface would have been at that time point,
Figure 4: _Left:_ Inter- and extrapolated AAA shapes for each scan. Colors indicate distances to reference shapes, averages are indicated below each AAA. _Right:_ Diameter profiles along each aorta. Solid lines represent reference diameters, dashed lines show interpolated or extrapolated diameters. A complete set of diameter plots can be found in the Appendix.
and compare it to the real reconstruction. Figure 4 shows the results of this leave-one-out experiment. Results for individual patients are visualized per row. The left column in each row shows the interpolated or extrapolated AAA shapes predicted by the network when that scan was left out of the training data. Colors indicate the minimal surface distance between the interpolated AAA surface and the reference AAA surface, where lower is better. The right column contains diameter plots for each aorta along its centerline estimated based on an inscribed sphere method [5, 9]. Solid lines represent the diameters of the reference AAA surfaces, and the dashed line represents the diameter of the AAA from a scan that was left out. Note that we here show the diameter profile for one leave-one-out experiment per patient and that a full set of diameter profiles can be found in the Appendix.
Figure 4 shows that the INR model can _interpolate_ AAA shapes to a decent extent. For example, in Patient 3, the interpolated surfaces at \(t=407\) and \(t=573\) had average surface distances of \(1.23\) and \(1.01\) mm, respectively, compared to the ground-truth shapes. This is also reflected in the diameter plot for Patient 3, where the interpolated (dashed) line for \(t=407\) days closely follows the reference (solid) line. The results in Figure 4 also indicate that interpolation might work better in cases where the interval between scans is shorter. For example, interpolation results for Patient 1 at \(t=15\), which is only \(15\) and \(88\) days apart from two other scans, have an average surface distance of \(0.91\) mm. In contrast, interpolation results for Patient 4 at \(t=547\), which is \(547\) and \(707\) days apart from two other scans, show relatively large errors on the aneurysm sac. However, this is not consistently the case. For Patient 2, the ASD is \(1.47\) mm when interpolating at \(t=900\) days, which is larger than the ASD when interpolating for \(t=851\) days. From the diameter plots shown for Patients 3 and 4, we see that interpolations of the model consistently lie between the surrounding two scans and are close to the diameters of the reference shape.
Results also indicate that extrapolation is challenging for the model. The INR particularly struggles to extrapolate over bigger time gaps. For Patient 1, we observe that the extrapolations at \(t=0\) days and \(t=1022\) days have worse results than the interpolations. Moreover, the extrapolation at \(t=1022\) days differs more from the reference shape than at \(t=0\) days due to the difference in time gaps. The diameter profiles for Patient 1 and Patient 2 reveal that the model tends to reconstruct the surface of the last known shape. We hypothesize that this might be due to the temporal regularization term in Eq. 4.
Finally, Figure 4 indicates that our INR model reacts strongly to small misalignments of the original AAA shapes. Following [4], we register AAA shapes based on segmentation masks of the vertebrae, but this alignment might lead to small local shifts of the AAA. For example, the result for Patient 2, \(t=0\) in Fig. 4 shows errors on the healthy part of the aorta, an area that, in principle, should not show growth over time.
## 4 Discussion and Conclusion
In this work, we have obtained a personalised model for AAA progression, based on longitudinal CT data. We combine fully automatic state-of-the-art image segmentation methods, registration, and shape modeling with implicit neural representations and adequate regularisation terms to build personalised models of an evolving anatomical structure. In experiments with four longitudinally scanned AAA patients, we have demonstrated how the model represents the evolving shape of an AAA over time. This may impact patient monitoring and treatment; accurate knowledge about the progression of an AAA allows the physician to personalise surveillance and time intervention better based on AAA diameter and growth rate [16].
One appealing aspect of our approach is the continuity of the implicit neural representations. This allows us to reconstruct an AAA mesh at any point in time, at any desired resolution. We have here modeled shape changes over multiple years with sparse and irregularly spaced shape data. Modeling this change through linear interpolation of alternative surface representations, such as meshes or point clouds, would require point-to-point correspondence, a challenging problem that we here circumvent. Moreover, since our network relies on pointcloud data, it is agnostic to imaging modality. This is important for longitudinal studies of AAAs, where imaging modalities such as MRI and 3D US are increasingly used. All these scans can be incorporated into this framework as long as we can extract AAA surfaces. Furthermore, because we represent an evolving shape in space and time in a differentiable neural network, we can add any gradient-based regularisation term to the loss function. We have here included an Eikonal term and temporal regularization, but this framework could be further extended. Lastly, we found that our model is sensitive to errors in the initial alignment of AAA shapes. Although we have followed [4] in registering based on the location of the vertebrae, better results can likely be achieved by registering based on other landmarks, such as the renal arteries and iliac bifurcation.
One limitation of the current approach is the relatively limited test set of four longitudinally scanned patients, which we aim to increase in future work. Moreover, our approach is purely based on morphology and does not include other biomarkers for AAA growth and rupture [6]. In future work, we will investigate if we can incorporate, e.g., results of computational fluid dynamics and fluid-structure interaction modeling in our model [10]. Furthermore, additional optimization constraints could more properly model the pathophysiology of aneurysms. Whereas our temporal regularization term now aimed to minimize the gradient of the SDF, in future work, we could optimize this gradient within biologically plausible growth rates. This kind of regularization could also be obtained in a data-driven way, by learning a generalisable model from a larger set of patients with longitudinal data. Finally, there is evidence that intraluminal thrombus shape plays a key role in AAA remodeling [21], and it might be beneficial to explicitly represent thrombus in our INR [1].
In conclusion, we have shown that INRs are promising tools in modeling AAA evolution. In future work, this flexible model could be extended with biologically plausible regularization terms and hemodynamic parameters.
## 5 Acknowledgements
Jelmer M. Wolterink was supported by the NWO domain Applied and Engineering Sciences VENI grant (18192).
|
2303.06706 | Constructing Galois representations with prescribed Iwasawa
$λ$-invariant | Let $p\geq 5$ be a prime number. We consider the Iwasawa $\lambda$-invariants
associated to modular Bloch-Kato Selmer groups, considered over the cyclotomic
$\mathbb{Z}_p$-extension of $\mathbb{Q}$. Let $g$ be a $p$-ordinary cuspidal
newform of weight $2$ and trivial nebentype. We assume that the $\mu$-invariant
of $g$ vanishes, and that the image of the residual representation associated
to $g$ is suitably large. We show that for any number greater $n$ greater than
or equal to the $\lambda$-invariant of $g$, there are infinitely many newforms
$f$ that are $p$-congruent to $g$, with $\lambda$-invariant equal to $n$. We
also prove quantitative results regarding the levels of such modular forms with
prescribed $\lambda$-invariant. | Anwesh Ray | 2023-03-12T16:56:06Z | http://arxiv.org/abs/2303.06706v2 | # Constructing Galois representations with prescribed Iwasawa \(\lambda\)-invariant
###### Abstract.
Let \(p\geq 5\) be a prime number. We consider the Iwasawa \(\lambda\)-invariants associated to modular Bloch-Kato Selmer groups, considered over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). Let \(g\) be a \(p\)-ordinary cuspidal newform of weight \(2\) and trivial nebentype. We assume that the \(\mu\)-invariant of \(g\) vanishes, and that the image of the residual representation associated to \(g\) is suitably large. We show that for any number \(n\) greater than or equal to the \(\lambda\)-invariant of \(g\), there are infinitely many newforms \(f\) that are \(p\)-congruent to \(g\), with \(\lambda\)-invariant equal to \(n\). We also prove quantitative results regarding the levels of such modular forms with prescribed \(\lambda\)-invariant.
Key words and phrases: Iwasawa theory, modular forms, Selmer groups, Galois representations, Bloch-Kato Selmer groups. 2020 Mathematics Subject Classification: 11R23, 11F80 (primary); 11F11, 11R18 (secondary)
## 1. Introduction
The Iwasawa theory of Selmer groups associated to Galois representations captures significant arithmetic information about motives. Let \(p\) be a prime number. Given an elliptic curve \(E_{/\mathbb{Q}}\), and a number field extension \(F/\mathbb{Q}\), the \(p\)-primary Selmer group of \(E\) over \(F\) is of fundamental importance and captures information about the Mordell-Weil group \(E(F)\) and the \(p\)-primary part of the Tate-Shafarevich group \(\operatorname{\mathrm{III}}(E/F)\). The fundamental object of study in the Iwasawa theory of elliptic curves is the \(p\)-primary Selmer group over the cyclotomic \(\mathbb{Z}_{p}\)-extension, denoted \(\operatorname{\mathrm{Sel}}_{p^{\infty}}(E/\mathbb{Q}_{\mathrm{cyc}})\). Mazur [10] initiated the Iwasawa theory of elliptic curves \(E\) with good ordinary reduction at \(p\), and introduced structural invariants associated with these Selmer groups.
### Main results
In this paper we consider \(\lambda\)-invariants associated to Bloch-Kato Selmer groups attached to modular Galois representations. We prove certain qualitative and quantitative results about the levels of modular forms that arise in natural families, for which the \(\lambda\)-invariant is prescribed to be a fixed value. We fix a cuspidal Hecke newform \(g\) of weight \(2\) on \(\Gamma_{0}(N_{g})\). Associated with a fixed choice of embedding \(\iota_{p}:\bar{\mathbb{Q}}\hookrightarrow\bar{\mathbb{Q}}_{p}\), let \(\rho_{g}\) be the associated Galois representation. It is assumed that the image of the residual representation \(\bar{\rho}_{g}:\operatorname{\mathrm{Gal}}(\bar{\mathbb{Q}}/\mathbb{Q}) \to\operatorname{\mathrm{GL}}_{2}(\bar{\mathbb{F}}_{p})\) is up to conjugation, equal to \(\operatorname{\mathrm{GL}}_{2}(\mathbb{F}_{p})\). When we assume that \(p\) is odd and \(g\) is \(p\)-ordinary, the Iwasawa \(\mu\) and \(\lambda\)-invariants are well defined, and denoted by \(\mu_{p}(g)\) and \(\lambda_{p}(g)\) respectively (cf. section 3.2).
**Theorem 1.1**.: _Let \(p\geq 5\) be a prime and let \(g\) be a normalized newform of weight \(2\) on \(\Gamma_{0}(N_{g})\). We assume the following conditions:_
1. _We shall assume that the image of the residual representation_ \(\bar{\rho}_{g}\) _lies in_ \(\operatorname{\mathrm{GL}}_{2}(\mathbb{F}_{p})\)_. Moreover, the Galois representation_ \(\bar{\rho}_{g}:\operatorname{\mathrm{Gal}}(\bar{\mathbb{Q}}/\mathbb{Q})\to \operatorname{\mathrm{GL}}_{2}(\mathbb{F}_{p})\) _is surjective._
2. _The modular form_ \(g\) _is_ \(p\)_-ordinary and_ \(p\nmid N_{g}\)_,_
3. \(g\) _has optimal level, i.e.,_ \(N_{g}\) _is the prime to_ \(p\) _part of the Artin conductor of the residual representation,_
4. \(\mu_{p}(g)=0\)_._
_Associated with \(g\), let \(\Pi_{g}\) be the set of prime numbers \(\ell\) such that the following conditions are satisfied:_
1. \(\ell\nmid N_{g}p\)_,_
2. \(\ell\not\equiv\pm 1\mod p\) _and_ \(\ell^{p-1}\not\equiv 1\mod p^{2}\)_,_
3. \(\bar{\rho}_{g}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&0\\ 0&1\end{array}\right)\) _for a suitable choice of basis for the residual representation._
_Let \(\Omega_{g}\) be the set of prime numbers such that_
1. \(\ell\nmid N_{g}p\)_,_
2. \(\ell\not\equiv\pm 1\mod p\)_,_
3. \(\bar{\rho}_{g}(\sigma_{\ell})=\left(\begin{array}{cc}-\ell&0\\ 0&-1\end{array}\right)\) _for a suitable choice of basis for the residual representation._
_The primes \(\Pi_{g}\) and \(\Omega_{g}\) have Dirichlet density \(\frac{(p-3)}{p(p-1)}\) and \(\frac{(p-3)}{(p-1)^{2}}\) respectively._
_Let \(n,r\in\mathbb{Z}_{\geq 0}\) be such that \(\max\{n,r\}>0\). Then for any sets of primes \(\{q_{1},\ldots,q_{n}\}\subset\Pi_{g}\) and \(\{\ell_{1},\ldots,\ell_{r}\}\subset\Omega_{g}\), there exists a normalized newform \(f\) of weight \(2\) of level \(N_{f}=N_{g}\times\prod_{i=1}^{n}q_{i}\times\prod_{j=1}^{r}\ell_{j}\) such that_
1. \(f\) _has good ordinary reduction at_ \(p\)_,_
2. \(\bar{\rho}_{g}\simeq\bar{\rho}_{f}\)_,_
3. \(\mu_{p}(f)=0\)_,_
4. \(\lambda_{p}(f)=\lambda_{p}(g)+n\)_._
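The congruence conditions (i)-(ii) defining \(\Pi_{g}\) and \(\Omega_{g}\) are elementary and can be checked directly for a candidate prime \(\ell\); the sketch below is purely illustrative (condition (iii), which involves the image of the Frobenius \(\sigma_{\ell}\) under \(\bar{\rho}_{g}\), cannot be verified from \(\ell\) alone and is therefore omitted).

```python
def pi_congruence_conditions(ell, p, N_g):
    """Conditions (i)-(ii) for Pi_g: ell does not divide N_g * p, ell is not +-1 mod p,
    and ell^(p-1) is not congruent to 1 mod p^2."""
    if ell == p or N_g % ell == 0:
        return False
    if ell % p in (1, p - 1):
        return False
    return pow(ell, p - 1, p * p) != 1

def omega_congruence_conditions(ell, p, N_g):
    """Conditions (i)-(ii) for Omega_g: ell does not divide N_g * p and ell is not +-1 mod p."""
    return ell != p and N_g % ell != 0 and ell % p not in (1, p - 1)
```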
In the special case when \(\mu_{p}(g)=0\) and \(\lambda_{p}(g)\leq 1\), one is able to realize an infinite family of modular forms \(f\) that are \(p\)-congruent to \(g\), for which the Bloch-Kato Selmer group over \(\mathbb{Q}\) has prescribed corank. We refer to Theorem 5.5 for the statement of the result.
The following result further illustrates Theorem 1.1.
**Theorem 1.2**.: _There is a positive density set of primes \(p\), such that for any \(n\geq 0\), there exist infinitely many normalized Hecke cuspidal newforms \(f\) of weight \(2\) such that_
1. \(f\) _has good ordinary reduction at_ \(p\)_,_
2. \(\bar{\rho}_{f}\) _is surjective,_
3. \(\mu_{p}(f)=0\)_,_
4. \(\lambda_{p}(f)=n\)_._
### Relationship with previous work
The existence of Galois representations for which the associated Selmer groups have large \(\lambda\)-invariant has been studied by various authors, cf. [10, 11]. Our results are significantly stronger since we are able to explicitly realize any large enough integer as a \(\lambda\)-invariant. If at a given prime \(p\geq 5\) there exists a newform \(g\) satisfying the conditions of Theorem 1.1 and such that \(\lambda_{p}(g)=0\), then every integer \(n\geq 0\) is seen to arise from a modular form which is \(p\)-congruent to \(g\). This is indeed the case for a density \(1\) set of primes \(p\), as shown by the proof of Theorem 1.2. Furthermore, not only is one able to construct infinitely many modular Galois representations giving rise to a prescribed \(\lambda\)-invariant, one also
obtains an explicit and satisfactory quantitative description of their levels. We contrast Theorem 1.1 with the results of recent work by Hatley and Kundu [14], where it is shown that there are infinitely many modular forms \(f\) that are \(p\)-congruent to a fixed modular form \(g\), for which \(\lambda_{p}(f)=\lambda_{p}(g)\). This \(\lambda\)-stability result requires an additional assumption on \(g\): that the \(\lambda\)-invariant of \(g\) is minimal in the family of all \(\lambda\)-invariants for modular forms that are \(p\)-congruent to \(g\). For further details, we refer to p. 15 of _loc. cit._ This assumption is clearly satisfied when \(\lambda_{p}(g)\) is \(0\). This \(\lambda\)-stability result follows from Theorem 1.1 in the special case when \(n=0\), without the additional hypothesis. The method used in proving our results does draw some inspiration from [13, 14].
### Organization
Including the introduction, the manuscript consists of \(5\) sections. In section 2, we set up basic notation and review the level raising theorems of Carayol (cf. Theorem 2.1) and Diamond-Taylor (cf. Theorem 2.3). In section 3, we describe the relationship between the Bloch-Kato and Greenberg Selmer groups over the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). We describe the Iwasawa invariants associated to these Selmer groups. In section 4, we recall the results of Greenberg and Vatsal, and prove Theorem 4.8, which paves the way to the proof of Theorem 1.1. In section 5, we compute the densities of the sets \(\Pi_{g}\) and \(\Omega_{g}\), and prove Theorems 1.1 and 1.2.
### Acknowledgment
The author's research is supported by the CRM Simons postdoctoral fellowship.
## 2. Preliminaries
Fix an algebraic closure \(\bar{\mathbb{Q}}/\mathbb{Q}\), and for each prime \(\ell\), let \(\bar{\mathbb{Q}}_{\ell}\) be an algebraic closure and fix an inclusion of \(\iota_{\ell}:\bar{\mathbb{Q}}\hookrightarrow\bar{\mathbb{Q}}_{\ell}\). Set \(\bar{\mathbb{Z}}\) to be the integral closure of \(\mathbb{Z}\) in \(\bar{\mathbb{Q}}\). The choice of embedding \(\iota_{\ell}\) corresponds to a choice of prime \(\mathfrak{l}|\ell\) of \(\bar{\mathbb{Z}}\). The inclusion \(\iota_{\ell}\) induces an isomorphism of \(\bar{\mathbb{Z}}_{\mathfrak{l}}\) with \(\bar{\mathbb{Z}}_{\ell}\). Let \(\mathrm{G}_{\ell}\) denote the absolute Galois group \(\mathrm{Gal}(\bar{\mathbb{Q}}_{\ell}/\mathbb{Q}_{\ell})\). The inclusion \(\iota_{\ell}\) induces an inclusion of \(\mathrm{G}_{\ell}\) into \(\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\), whose image is the decomposition group of \(\mathfrak{l}\). Let \(\mathrm{I}_{\ell}\) be the inertia group of \(\mathrm{G}_{\ell}\), and choose a Frobenius element \(\sigma_{\ell}\in\mathrm{G}_{\ell}\). Given a Galois representation \(\rho:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\mathrm{GL}_{2}(\cdot)\), let \(\rho_{|\ell}\) be the restriction of \(\rho\) to \(\mathrm{G}_{\ell}\). The representation \(\rho\) is _unramified_ at \(\ell\) if the restriction of \(\rho_{|\ell}\) to \(\mathrm{I}_{\ell}\) is trivial. Fix a prime \(p\geq 5\) and let \(\mathfrak{p}|p\) be the prime above \(p\), corresponding to the inclusion \(\iota_{p}\). Let \(f=\sum_{n=1}^{\infty}a_{n}(f)q^{n}\) be a normalized cuspidal newform of weight \(k\geq 2\) and \(\rho_{f}:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\mathrm{GL}_{2}(\bar{ \mathbb{Q}}_{p})\) be the associated Galois representation. The Hodge-Tate weights for the Galois representation are \(\{k-1,0\}\). We note here that this is simply a matter of convention (instead of the Hodge-Tate weights being \(\{0,1-k\}\)). There is a finite extension \(K\) over \(\mathbb{Q}_{p}\) such that w.r.t a suitable choice of basis, the image of \(\rho_{f}\) lies in \(\mathrm{GL}_{2}(K)\). In greater detail, let \(F\) be a number field containing the Fourier coefficients of \(f\), and \(K\) is the completion \(F_{\mathfrak{p}}\). We set \(V_{f}\simeq K^{2}\) to denote the underlying Galois module for the representation \(\rho_{f}\). Set \(\mathcal{O}\) to denote the valuation ring of \(K\), \(\varpi\) be a uniformizer of \(\mathcal{O}\), and let \(\kappa:=\mathcal{O}/(\varpi)\) be the residue field of \(\mathcal{O}\). There exists a Galois stable \(\mathcal{O}\)-lattice \(T_{f}\subset V_{f}\). We shall also denote the integral representation on \(T_{f}\) by \(\rho_{f}:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\mathrm{GL}_{2}(\mathcal{O})\). We denote its mod-\(\varpi\) reduction by \(\bar{\rho}_{f}:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\mathrm{GL}_{2}(\kappa)\). We shall assume throughout the \(\bar{\rho}_{f}\) is absolutely irreducible. In this setting, it is easy to see that the Galois stable \(\mathcal{O}\)-lattice \(T_{f}\) is uniquely determined, and hence, there is no ambiguity in the notation used.
### The level raising results of Diamond-Taylor
Let \(p\) be an odd prime number and \(\bar{\rho}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname{GL}_{2}( \bar{\mathbb{F}}_{p})\) be an irreducible Galois representation. Let \(c\in\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\) denote the complex conjugation; \(\bar{\rho}\) is _odd_ if \(\det\bar{\rho}(c)=-1\). Serre [10] conjectured that any odd and irreducible representation is modular. In greater detail, the strong form of the conjecture states that \(\bar{\rho}\) arises from a cuspidal newform \(g\) of weight \(k\geq 2\) on \(\Gamma_{1}(N_{\bar{\rho}})\), where \(N_{\bar{\rho}}\) is the prime to \(p\) part of the Artin-conductor of \(\bar{\rho}\) (cf. [11, p.II]). The weight \(k=k(\bar{\rho})\) is prescribed according to [10, section 2]. Khare and Wintenberger [11, 12] proved Serre's conjecture, building upon prior work of Ribet [13]. Suppose that \(f\) is a newform of weight \(k\) and level \(N_{f}\) coprime to \(p\), such that the associated \(p\)-adic Galois representation \(\rho_{f}\) lifts \(\bar{\rho}\). Then, the optimal level \(N_{\bar{\rho}}\) divides \(N_{f}\). A theorem of Carayol [1] proves necessary conditions for an integer \(N_{f}\) to arise in this way from a newform \(f\).
**Theorem 2.1** (Carayol).: _Suppose there exists a modular form \(f\) of weight \(k\) of level \(N_{f}\) such that \(\bar{\rho}=\bar{\rho}_{f}\) is absolutely irreducible. The level \(N_{f}\) admits a factorization \(N_{f}=N_{\bar{\rho}}\prod_{\ell}\ell^{\alpha(\ell)}\), and for each \(\ell\) with \(\alpha(\ell)>0\), one of the following holds:_
1. \(\ell\nmid N_{\bar{\rho}}\)_,_ \(\ell\left(\operatorname{trace}\bar{\rho}\left(\sigma_{\ell}\right)\right)^{2} =(1+\ell)^{2}\det\bar{\rho}\left(\sigma_{\ell}\right)\) _in_ \(\bar{\mathbb{F}}_{p}\)_, and_ \(\alpha(\ell)=1\)_;_
2. \(\ell\equiv-1\mod p\) _and one of the following holds:_ 1. \(\ell\nmid N_{\bar{\rho}}\)_,_ \(\operatorname{trace}\bar{\rho}\left(\sigma_{\ell}\right)\equiv 0\) _in_ \(\bar{\mathbb{F}}_{p}\)_, and_ \(\alpha(\ell)=2\)_;_ 2. \(\ell\mid N_{\bar{\rho}}\)_,_ \(\det\bar{\rho}\) _is unramified at_ \(\ell\)_, and_ \(\alpha(\ell)=1\)_;_
3. \(\ell\equiv 1\mod p\) _and one of the following holds:_ 1. \(\ell\nmid N_{\bar{\rho}}\) _and_ \(\alpha(\ell)=2\)_;_ 2. \(\ell\nmid N_{\bar{\rho}}\) _and_ \(\alpha(\ell)=1\)_, or_ \(\ell\mid N_{\bar{\rho}}\) _and_ \(\alpha(\ell)=1\)_._
**Definition 2.2**.: _The set of levels satisfying the conditions outlined in Theorem 2.1 is denoted by \(\mathcal{S}(\bar{\rho})\)._
**Theorem 2.3** (Diamond-Taylor [14]).: _Let \(p\geq 5\) be a prime,_
\[\bar{\rho}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname{GL}_ {2}(\bar{\mathbb{F}}_{p})\]
_be an irreducible Galois representation. Assume that \(\bar{\rho}\) arises from a newform \(g\) on \(\Gamma_{1}(N_{\bar{\rho}})\) of weight \(k\). Assume that \(k\) lies in the range \(2\leq k\leq p-2\), and let \(M\in\mathcal{S}(\bar{\rho})\). Then there exists a cuspidal newform \(f\) of weight \(k\) on \(\Gamma_{1}(M)\), such that \(\bar{\rho}_{f}=\bar{\rho}\)._
The result of Carayol and the above level-raising result of Diamond and Taylor show that \(\mathcal{S}(\bar{\rho})\) is precisely the set of levels for the cuspidal newforms \(f\) such that \(\bar{\rho}_{f}\simeq\bar{\rho}\).
## 3. Selmer groups associated to modular forms
Let \(f=\sum_{n=1}^{\infty}a_{n}(f)q^{n}\) be a normalized new cuspform on \(\Gamma_{1}(N_{f})\) and \(p\) be an odd prime which is coprime to \(N_{f}\). Let \(\rho_{f}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname{GL}_ {2}(\mathcal{O})\) be the associated Galois representation and assume that the residual representation is absolutely irreducible. Here, \(K/\mathbb{Q}_{p}\) is a finite extension and \(\mathcal{O}\) is the valuation ring of \(K\). Let \(V_{f}\) and \(T_{f}\) be as defined in previous sections. We shall assume throughout that \(p\) is an ordinary prime, i.e., \(p\nmid a_{p}(f)\). Set \(\mu_{p^{n}}\) to denote the \(p^{n}\)-th roots of unity in \(\bar{\mathbb{Q}}\), and let \(\mathbb{Q}(\mu_{p^{n}})\) be the cyclotomic extension of \(\mathbb{Q}\) generated by \(\mu_{p^{n}}\). Note that the Galois group \(\operatorname{Gal}(\mathbb{Q}(\mu_{p^{n}})/\mathbb{Q})\) is isomorphic to \(\left(\mathbb{Z}/p^{n}\mathbb{Z}\right)^{\times}\). Let \(\mathbb{Q}(\mu_{p^{\infty}})\) be the union of cyclotomic fields \(\mathbb{Q}(\mu_{p^{n}})\). The Galois group \(\operatorname{Gal}(\mathbb{Q}(\mu_{p^{\infty}})/\mathbb{Q})\) is isomorphic to \(\mathbb{Z}_{p}^{\times}\) and decomposes into a product \(\Delta\times\mathbb{Z}_{p}\)
where \(\Delta\) is isomorphic to \(\mathbb{Z}/(p-1)\mathbb{Z}\). The cyclotomic \(\mathbb{Z}_{p}\)-extension \(\mathbb{Q}_{\mathrm{cyc}}\) is the field \(\mathbb{Q}(\mu_{p^{\infty}})^{\Delta}\) and the Galois group \(\mathrm{Gal}(\mathbb{Q}_{\mathrm{cyc}}/\mathbb{Q})\) is isomorphic to \(\mathbb{Z}_{p}\). For \(n\geq 0\), the _\(n\)-th layer_ \(\mathbb{Q}_{n}/\mathbb{Q}\) is the extension contained in \(\mathbb{Q}_{\mathrm{cyc}}\) such that \([\mathbb{Q}_{n}:\mathbb{Q}]=p^{n}\).
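For instance, for \(p=5\) the first layer \(\mathbb{Q}_{1}\) is the unique subfield of \(\mathbb{Q}(\mu_{25})\) of degree \(5\) over \(\mathbb{Q}\).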
### Definition of Selmer groups
Let \(S\) be a finite set of rational primes containing \(\{p,\infty\}\), and \(\mathbb{Q}_{S}\subset\bar{\mathbb{Q}}\) be the maximal extension in which all primes \(\ell\notin S\) are unramified. Setting \(A_{f}:=V_{f}/T_{f}\), we note that \(A_{f}\simeq\left(K/\mathcal{O}\right)^{2}\). Let \(d^{\pm}\) be the dimensions of the \((\pm 1)\)-eigenspaces of complex conjugation acting on \(V_{f}\); we note that \(d^{+}=d^{-}=1\). Since we assume that \(f\) is \(p\)-ordinary, there exists a \(1\)-dimensional \(K\)-subspace \(W_{f}\subseteq V_{f}\) which is stable under the action of \(\mathrm{G}_{p}\). Let \(C\) be the image of \(W_{f}\) in \(A_{f}\) under the natural quotient map \(V_{f}\to A_{f}\), and set \(D=A_{f}/C\).
We assume that the set \(S\) contains all primes dividing \(N_{f}\). Let \(L\) be an extension of \(\mathbb{Q}\) which is contained in \(\mathbb{Q}_{\mathrm{cyc}}\). Thus \(L\) is the field \(\mathbb{Q}_{n}\) for some \(n\geq 0\), or is the field \(\mathbb{Q}_{\mathrm{cyc}}\). Given a rational prime \(\ell\), set \(\ell(L)\) to be the set of primes of \(L\) that lie above \(\ell\). We note that the set \(\ell(\mathbb{Q}_{\mathrm{cyc}})\) is finite. Given a prime \(w\) of \(\mathbb{Q}_{\mathrm{cyc}}\), let \(\mathbb{Q}_{\mathrm{cyc},w}\) be the union of the completions at \(w\) of the number fields contained in \(\mathbb{Q}_{\mathrm{cyc}}\). Given a prime \(\ell\in S\), the Greenberg local Selmer condition \(\mathcal{H}_{\ell}(L,A_{f})\) is defined as follows. For \(\ell\neq p\), the local condition is defined by
\[\mathcal{H}_{\ell}(L,A_{f}):=\prod_{w\in\ell(L)}H^{1}(L_{w},A_{f}).\]
The local condition at \(\ell=p\) is defined differently. Let \(\eta_{p}\) be the unique prime of \(L\) that lies above \(p\); let \(\mathrm{I}_{\eta_{p}}\) denote the inertia subgroup of \(\mathrm{Gal}(\bar{L}_{\eta_{p}}/L_{\eta_{p}})\). Let
\[H_{\eta_{p}}:=\ker\left(H^{1}(L_{\eta_{p}},A_{f})\longrightarrow H^{1}( \mathrm{I}_{\eta_{p}},D)\right),\]
and set
\[\mathcal{H}_{p}(L,A_{f}):=H^{1}(L_{\eta_{p}},A_{f})/H_{\eta_{p}}.\]
Following [10, 11], the Greenberg Selmer group \(\mathrm{Sel}^{\mathrm{Gr}}(L,A_{f})\) is then defined as follows
\[\mathrm{Sel}^{\mathrm{Gr}}(L,A_{f}):=\ker\left(H^{1}(\mathbb{Q}_{S}/L,A_{f}) \longrightarrow\bigoplus_{\ell\in S}\mathcal{H}_{\ell}(L,A_{f})\right).\]
For \(\ell\in S\), \(\mathcal{H}_{\ell}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) is a cofinitely generated \(\mathbb{Z}_{p}\)-module (cf. [11, p.37]). We let \(\sigma_{\ell}(f)\) denote the \(\mathcal{O}\)-corank of \(\mathcal{H}_{\ell}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\). For \(\ell\neq p\), we describe an algorithm to compute \(\sigma_{\ell}(f)\). Let \(V_{f}^{\prime}\) be the quotient \(\left(V_{f}\right)_{\mathrm{I}_{\ell}}\) and \(\widetilde{P}_{\ell}(f;X)\) be the mod-\(\varpi\) reduction of \(P_{\ell}(f;X):=\det\left(\mathrm{Id}-\sigma_{\ell}X\mid V_{f}^{\prime}\right)\). Let \(s_{\ell}\) be the number of primes of \(\mathbb{Q}_{\mathrm{cyc}}\) that lie above \(\ell\). Note that \(s_{\ell}\) is the maximal power of \(p\) such that \(\ell^{p-1}\equiv 1\mod ps_{\ell}\). Let \(d_{\ell}(f)\) be the multiplicity of \(\ell^{-1}\) as a root of \(\widetilde{P}_{\ell}(f;X)\). By [11, Proposition 2.4], it follows that \(\sigma_{\ell}(f)\) is given by
\[\sigma_{\ell}(f)=s_{\ell}d_{\ell}(f). \tag{3.1}\]
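As a purely numerical illustration of this recipe (a small sketch of our own, not part of the argument; the function name is ours), the factor \(s_{\ell}\) can be computed directly from its defining congruence, while \(d_{\ell}(f)\) must then be read off from the reduced polynomial \(\widetilde{P}_{\ell}(f;X)\):

```python
def s_ell(ell: int, p: int) -> int:
    """Number of primes of the cyclotomic Z_p-extension of Q lying above ell.

    s_ell = p**m, where m is the largest integer with ell**(p-1) == 1 (mod p**(m+1)).
    Assumes ell is a prime different from p.
    """
    assert ell != p and ell % p != 0
    m = 0
    while pow(ell, p - 1, p ** (m + 2)) == 1:
        m += 1
    return p ** m

# Examples for p = 5: 7**4 = 2401 is 1 (mod 25) but not (mod 125), so s_7 = 5,
# while 11**4 = 14641 is not 1 (mod 25), so s_11 = 1.
print(s_ell(7, 5), s_ell(11, 5))   # -> 5 1
```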
Given a number field \(L/\mathbb{Q}\) contained in \(\mathbb{Q}_{\mathrm{cyc}}\), we set \(\mathrm{Sel}^{\mathrm{BK}}(L,A_{f})\) to denote the Bloch-Kato Selmer group associated to \(A_{f}\); cf. [10] or [1, p.73, l.-1] for the precise
definition. The Bloch-Kato Selmer group over \(\mathbb{Q}_{\mathrm{cyc}}\) is defined as the direct limit
\[\mathrm{Sel}^{BK}(\mathbb{Q}_{\mathrm{cyc}},A_{f}):=\varinjlim_{n}\mathrm{Sel}^{ BK}(\mathbb{Q}_{n},A_{f}).\]
**Proposition 3.1**.: _With respect to notation above, there is a natural map_
\[\mathrm{Sel}^{\mathrm{BK}}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\to\mathrm{Sel}^{ \mathrm{Gr}}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\]
_with finite kernel and cokernel._
Proof.: The result above is [1, Corollary 4.3].
From the point of view of Iwasawa theory, we may work with either Selmer group \(\mathrm{Sel}^{\mathrm{BK}}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) or \(\mathrm{Sel}^{\mathrm{Gr}}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\). It is convenient for us to work with the Greenberg Selmer group, and for ease of notation, we simply set
\[\mathrm{Sel}(\mathbb{Q}_{\mathrm{cyc}},A_{f}):=\mathrm{Sel}^{\mathrm{Gr}}( \mathbb{Q}_{\mathrm{cyc}},A_{f}).\]
### Iwasawa invariants
We set \(\Gamma:=\mathrm{Gal}(\mathbb{Q}_{\mathrm{cyc}}/\mathbb{Q})\) and let \(\Lambda\) denote the Iwasawa algebra \(\mathcal{O}[\![\Gamma]\!]:=\varprojlim_{n}\mathcal{O}\left[\Gamma/\Gamma^{p^{n}}\right]\). Given a module \(M\) over \(\Lambda\), we shall set \(M^{\vee}:=\mathrm{Hom}_{\mathrm{cnts}}\left(M,\mathbb{Q}_{p}/\mathbb{Z}_{p}\right)\). A module \(M\) over \(\Lambda\) is said to be cofinitely generated (resp. cotorsion) if \(M^{\vee}\) is finitely generated (resp. torsion) as a module over \(\Lambda\). The Selmer group \(\mathrm{Sel}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) is a cofinitely generated module over \(\Lambda\). There are many cases of interest for which it is known to be cotorsion over \(\Lambda\), which we shall further discuss in the next subsection. Given \(\Lambda\)-modules \(M\) and \(M^{\prime}\), a _pseudo-isomorphism_ is a map of \(\Lambda\)-modules \(M\to M^{\prime}\) whose kernel and cokernel both have finite cardinality. Let \(M\) be a cofinitely generated and cotorsion \(\Lambda\)-module; then, as is well known, there is a pseudo-isomorphism
\[M\longrightarrow\left(\bigoplus_{i=1}^{s}\Lambda/(\varpi^{m_{i}})\right) \oplus\left(\bigoplus_{j=1}^{t}\Lambda/(f_{j}(T)^{n_{j}})\right).\]
In the above sum, \(s,t,m_{i},n_{j}\in\mathbb{Z}_{\geq 0}\), and \(f_{j}(T)\) is an irreducible distinguished polynomial in \(\Lambda\) (which we identify with \(\mathcal{O}[\![T]\!]\) by choosing a topological generator \(\gamma\) of \(\Gamma\) and setting \(T:=\gamma-1\)), i.e., an irreducible monic polynomial whose non-leading coefficients are divisible by \(\varpi\). The \(\mu\)-invariant is given by
\[\mu_{p}(M):=\begin{cases}\sum_{i}m_{i}&\text{ if }s>0;\\ 0&\text{ if }s=0.\end{cases}\]
On the other hand, the \(\lambda\)-invariant is given by
\[\lambda_{p}(M):=\begin{cases}\sum_{j}n_{j}\deg(f_{j})&\text{ if }t>0;\\ 0&\text{ if }t=0.\end{cases}\]
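As a simple illustration of these definitions (an example chosen for concreteness, not attached to any particular modular form): if \(M\) is pseudo-isomorphic to \(\Lambda/(\varpi^{2})\oplus\Lambda/(T^{2}-\varpi)\), then \(s=t=1\), \(m_{1}=2\), \(n_{1}=1\) and \(\deg f_{1}=2\), so that \(\mu_{p}(M)=2\) and \(\lambda_{p}(M)=2\).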
It follows from results of Kato [10] that \(\mathrm{Sel}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) is a cotorsion module over \(\Lambda\). Then, the \(\mu\) (resp. \(\lambda\)) invariant of \(\mathrm{Sel}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) is denoted by \(\mu_{p}(f)\) (resp. \(\lambda_{p}(f)\)). We note that \(\mu_{p}(f)=0\) if and only if \(\mathrm{Sel}(\mathbb{Q}_{\mathrm{cyc}},A_{f})\) is cofinitely generated as an \(\mathcal{O}\)-module, and in this case,
\[\lambda_{p}(f)=\mathrm{corank}_{\mathcal{O}}\left(\mathrm{Sel}(\mathbb{Q}_{ \mathrm{cyc}},A_{f})\right).\]
It follows from Proposition 3.1 that the \(\lambda\)-invariant of \(\operatorname{Sel}(\mathbb{Q}_{cyc},A_{f})\) is equal to the \(\lambda\)-invariant of \(\operatorname{Sel}^{\operatorname{BK}}(\mathbb{Q}_{cyc},A_{f})\). We shall set \(\operatorname{rank}^{\operatorname{BK}}(f)\) to denote the \(\mathcal{O}\)-corank of the Bloch-Kato Selmer group \(\operatorname{Sel}^{\operatorname{BK}}(\mathbb{Q},A_{f})\).
**Proposition 3.2**.: _Assume that \(\mu_{p}(f)=0\). The following assertions hold:_
1. \(\operatorname{rank}^{\operatorname{BK}}(f)\leq\lambda_{p}(f)\)_,_
2. \(\operatorname{rank}^{\operatorname{BK}}(f)\equiv\lambda_{p}(f)\mod 2\)_,_
3. _suppose that_ \(\lambda_{p}(f)\leq 1\)_, then,_ \(\operatorname{rank}^{\operatorname{BK}}(f)=\lambda_{p}(f)\)_._
Proof.: Part (1) follows as a direct consequence of [1, Theorem A]. Part (2) follows from a standard argument due to Greenberg [1, Proposition 3.10], see [1, p.1288, proof of Theorem 5.7, ll. 9-15]. Part (3) is a direct consequence of (1) and (2).
**Remark 3.3**.: _We note that the conventions used here are the same as those in [1, section 2]. Note that the ring \(\mathcal{O}\) is chosen to be the valuation ring of \(F_{\mathfrak{p}}\). It is easy to see that the definition of the \(\lambda\)-invariant \(\lambda_{p}(f)\) is independent of the choice of field \(F\), and thus the valuation ring \(\mathcal{O}\). Also, we note that the \(\mu\)-invariant vanishes for the Selmer group over \(\mathcal{O}\), if and only if it vanishes after base-change by any valuation ring \(\mathcal{O}^{\prime}/\mathcal{O}\)._
## 4. Congruence relations between Selmer groups
We recall some results of Greenberg and Vatsal [1], and apply these results to study the structure of Selmer groups of ordinary Galois representations. Let \(g\) be a Hecke newform of weight \(2\) on \(\Gamma_{0}(N_{g})\). Here, \(N_{g}\) is the level of \(g\). Assume that the following conditions are satisfied
1. \(p\nmid N_{g}\),
2. \(g\) is ordinary at \(p\),
3. \(\bar{\rho}:=\bar{\rho}_{g}\) is absolutely irreducible,
4. \(g\) has optimal level, i.e., \(N_{g}\) is the prime to \(p\) part of the Artin conductor of \(\bar{\rho}\).
Let \(f\) be a Hecke new cuspform of weight \(2\) such that \(\bar{\rho}_{f}\simeq\bar{\rho}_{g}\). The understanding here is that the field \(F\) is chosen so that all the Fourier coefficients of both \(f\) and \(g\) are contained in \(F\). Remark 3.3 establishes that one may choose any sufficiently large field \(F\) when studying the \(\lambda\)-invariant and the vanishing of the \(\mu\)-invariant. Let \(N_{f}\) be the level of \(f\). We assume that \(p\nmid N_{f}\). Note that by Theorem 2.1, \(N_{f}\in\mathcal{S}(\bar{\rho})\). We note that \(N_{g}\) divides \(N_{f}\). We note that \(f\) has ordinary reduction at \(p\) (cf. [1, Lemma 3.3]). Let \(\epsilon_{f}\) be the nebentype of \(f\), and \(\bar{\epsilon}_{f}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\kappa^{\times}\) its mod-\(\varpi\) reduction. We find that \(\det\bar{\rho}_{g}=\bar{\chi}\), and \(\det\bar{\rho}_{f}=\bar{\chi}\bar{\epsilon}_{f}\) where \(\bar{\chi}\) is the mod-\(p\) cyclotomic character. Since \(\bar{\rho}_{f}\simeq\bar{\rho}_{g}\), we find that \(\bar{\epsilon}_{f}=1\). The principal units in \(\mathcal{O}^{\times}\) are the units that reduce to \(1\) modulo \((\varpi)\). It is clear that if a principal unit is a root of unity, then it is equal to \(1\). Since \(\epsilon_{f}\) is a finite order character, \(\bar{\epsilon}_{f}=1\) implies that \(\epsilon_{f}=1\). Therefore, \(f\) is a weight \(2\) cuspform on \(\Gamma_{0}(N_{f})\).
**Theorem 4.1** (Greenberg-Vatsal [1]).: _With respect to the above notation, assume that \(\mu_{p}(g)=0\). Then, \(\mu_{p}(f)=0\), and_
\[\lambda_{p}(f)-\lambda_{p}(g)=\sum_{\ell\mid N_{f}}\left(\sigma_{\ell}(g)- \sigma_{\ell}(f)\right),\]
_where the sum ranges over the primes \(\ell\) dividing \(N_{f}\)._
We study the numbers \(\left(\sigma_{\ell}(g)-\sigma_{\ell}(f)\right)\) for primes \(\ell|N_{f}.\) Let \(A\) be a local \(\mathcal{O}\)-algebra, with maximal ideal \(\mathfrak{m}_{A}\), and residue field \(A/\mathfrak{m}_{A}\) isomorphic to \(\kappa\). Fix an \(\mathcal{O}\)-algebra isomorphism \(\varphi_{0}:A/\mathfrak{m}_{A}\to\kappa\). Let \(\varphi:A\to\kappa\) be the map obtained upon composing the reduction modulo \(\mathfrak{m}_{A}\) map, with \(\varphi_{0}\). We set
\[\widehat{\mathrm{GL}}_{2}(A):=\left\{\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{GL}_{2}(A)\mid\left(\begin{array}{cc} \varphi(a)&\varphi(b)\\ \varphi(c)&\varphi(d)\end{array}\right)=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\right\}.\]
For \(n\geq 1\), the map \(\varphi:\mathcal{O}/\varpi^{n}\to\kappa\) is the reduction modulo \(\varpi\) map.
**Definition 4.2**.: _Associated with \(g\), let \(\Pi_{g}\) be the set of prime numbers \(\ell\) such that the following conditions are satisfied:_
1. \(\ell\nmid N_{g}p\)_,_
2. \(\ell\not\equiv\pm 1\mod p\) _and_ \(\ell^{p-1}\not\equiv 1\mod p^{2}\)_,_
3. \(\bar{\rho}_{g}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&0\\ 0&1\end{array}\right)\) _for a suitable choice of basis for the residual representation._
_Let \(\Omega_{g}\) be the set of prime numbers \(\ell\) such that_
1. \(\ell\nmid N_{g}p\)_,_
2. \(\ell\not\equiv\pm 1\mod p\)_,_
3. \(\bar{\rho}_{g}(\sigma_{\ell})=\left(\begin{array}{cc}-\ell&0\\ 0&-1\end{array}\right)\) _for a suitable choice of basis for the residual representation._
In section 5, we shall introduce further assumptions on the image of \(\bar{\rho}_{g}\), so that the sets \(\Pi_{g}\) and \(\Omega_{g}\) have positive density. For \(\ell\in\Pi_{g}\) (resp. \(\Omega_{g}\)), we note that \(\rho_{g}\) is unramified at \(\ell\). Since \(\bar{\rho}_{f}\simeq\bar{\rho}_{g}\), we find that \(\bar{\rho}_{f}\) is unramified at \(\ell\) as well. Thus, \(\rho_{f}(\mathrm{I}_{\ell})\) is contained in \(\widehat{\mathrm{GL}}_{2}(\mathcal{O})\), which is a pro-\(p\) group. Hence, \(\rho_{f|\ell}\) factors through \(\mathrm{Gal}(\mathbb{Q}_{\ell}^{p-tr}/\mathbb{Q}_{\ell})\), where \(\mathbb{Q}_{\ell}^{p-tr}\subset\bar{\mathbb{Q}}_{\ell}\) is the maximal tamely ramified extension of \(\mathbb{Q}_{\ell}\) with pro-\(p\) inertia. The Galois group \(\mathrm{Gal}(\mathbb{Q}_{\ell}^{p-tr}/\mathbb{Q}_{\ell})\) is generated by a Frobenius \(\sigma_{\ell}\), and the pro-\(p\) tame inertia generator \(\tau_{\ell}\), subject to the relation \(\sigma_{\ell}\tau_{\ell}\sigma_{\ell}^{-1}=\tau_{\ell}^{\ell}\) (cf. for instance [2, p.123, ll.15-16]).
**Lemma 4.3**.: _Let \(\ell\in\Pi_{g}\) be a prime such that \(\rho_{f}\) is ramified at \(\ell\). Then, up to conjugation by a matrix in \(\mathrm{GL}_{2}(\mathcal{O})\),_
\[\rho_{f}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x&\\ &x\end{array}\right)\text{ and }\rho_{f}(\tau_{\ell})=\left(\begin{array}{cc}1&y \\ &1\end{array}\right),\]
_where \(x\equiv 1\mod\varpi\) and \(y\equiv 0\mod\varpi\)._
Proof.: We choose an integral basis for \(T_{f}\) so that \(\bar{\rho}_{f}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&0\\ 0&1\end{array}\right)\). We show that there is a matrix \(A\in\mathrm{GL}_{2}(\mathcal{O})\) so that \(\rho:=A\rho_{f}A^{-1}\) satisfies the above conditions. In fact, the matrix \(A\) can be chosen to be in \(\widehat{\mathrm{GL}}_{2}(\mathcal{O})\).
For ease of notation, we set \(\rho^{\prime}:=\rho_{f}\) and let \(\rho_{n}^{\prime}\) denote the \(\mod\varpi^{n}\) reduction of \(\rho^{\prime}\). We shall inductively specify matrices \(A_{n}\in\widehat{\mathrm{GL}}_{2}(\mathcal{O}/\varpi^{n})\) such that \(A_{m}\equiv A_{n}\mod\varpi^{n}\) for all integers \(m>n\). We shall then let \(A\) denote the matrix \((A_{n})\) in the inverse limit
\[\widehat{\mathrm{GL}}_{2}(\mathcal{O})=\varprojlim_{n}\widehat{\mathrm{GL}}_{2}(\mathcal{O}/\varpi^{n}).\]
Let us set \(\rho_{n}:=A_{n}\rho_{n}^{\prime}A_{n}^{-1}\). The matrices \(A_{n}\) shall have the property that there exist \(x_{n},y_{n},z_{n}\in\mathcal{O}/\varpi^{n}\) such that \(\rho_{n}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x_{n}&\\ &z_{n}\end{array}\right)\) and \(\rho_{n}(\tau_{\ell})=\left(\begin{array}{cc}1&y_{n}\\ &1\end{array}\right)\), where \(x_{n},z_{n}\equiv 1\mod\varpi\) and \(y_{n}\equiv 0\mod\varpi\). Since \(A_{m}\equiv A_{n}\mod\varpi^{n}\) for all \(m>n\), it shall then follow that \(x_{m}\equiv x_{n}\mod\varpi^{n}\), \(y_{m}\equiv y_{n}\mod\varpi^{n}\) and \(z_{m}\equiv z_{n}\mod\varpi^{n}\). Setting \(x,y,z\) to denote the inverse limits \((x_{n})\), \((y_{n})\) and \((z_{n})\) respectively, we find that
\[\rho(\sigma_{\ell})=\left(\begin{array}{cc}\ell x&\\ &z\end{array}\right)\text{ and }\rho(\tau_{\ell})=\left(\begin{array}{cc}1&y \\ &1\end{array}\right).\]
The relation \(\sigma_{\ell}\tau_{\ell}\sigma_{\ell}^{-1}=\tau_{\ell}^{\ell}\) then implies that \(x=z\).
Suppose that for some \(n\geq 1\),
\[\rho_{n}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x_{n}&\\ &z_{n}\end{array}\right)\text{ and }\rho_{n}(\tau_{\ell})=\left(\begin{array}{cc}1&y_{n}\\ &1\end{array}\right).\]
It suffices to lift \(A_{n}\) to \(A_{n+1}\in\widehat{\mathrm{GL}}_{2}(\mathcal{O}/\varpi^{n+1})\), so that
\[\rho_{n+1}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x_{n+1}&\\ &z_{n+1}\end{array}\right)\text{ and }\rho_{n+1}(\tau_{\ell})=\left( \begin{array}{cc}1&y_{n+1}\\ &1\end{array}\right),\]
where \(x_{n+1},y_{n+1},z_{n+1}\) lift \(x_{n},y_{n}\) and \(z_{n}\) respectively. Let \(B_{n+1}\in\widehat{\mathrm{GL}}_{2}(\mathcal{O}/\varpi^{n+1})\) be a lift of \(A_{n}\), and set \(r_{n+1}:=B_{n+1}\rho^{\prime}_{n+1}B_{n+1}^{-1}\). Note that \(\rho_{n}=r_{n+1}\mod\varpi^{n}\). Therefore, we may write
\[r_{n+1}(\sigma_{\ell})=\left(\begin{array}{cc}\ell\tilde{x}_{n}&\\ &\tilde{z}_{n}\end{array}\right)+\varpi^{n}\left(\begin{array}{cc}a&b\\ c&d\end{array}\right);\] \[r_{n+1}(\tau_{\ell})=\left(\begin{array}{cc}1&\widetilde{y}_{n }\\ 0&1\end{array}\right)+\varpi^{n}\left(\begin{array}{cc}e&f\\ g&h\end{array}\right).\]
Let \(B:=\left(\mathrm{Id}+\varpi^{n}\left(\begin{array}{cc}a^{\prime}&b^{\prime} \\ c^{\prime}&d^{\prime}\end{array}\right)\right)\in\widehat{\mathrm{GL}}_{2}( \mathcal{O}/\varpi^{n+1})\), we set \(r_{n+1}^{\prime}:=Br_{n+1}B^{-1}\). Note that since \(\varpi^{2n}=0\) in \(\mathcal{O}/\varpi^{n+1}\), \(B^{-1}=\left(\mathrm{Id}-\varpi^{n}\left(\begin{array}{cc}a^{\prime}&b^{ \prime}\\ c^{\prime}&d^{\prime}\end{array}\right)\right)\). Let \(B^{\prime}:=\left(\begin{array}{cc}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{array}\right)\), write \(B:=\mathrm{Id}+\varpi^{n}B^{\prime}\). Given a matrix \(M\in\mathrm{GL}_{2}(\mathcal{O}/\varpi^{n+1})\), we find that
\[BMB^{-1} =(\mathrm{Id}+\varpi^{n}B^{\prime})M(\mathrm{Id}-\varpi^{n}B^{ \prime})\] \[=M+\varpi^{n}[B^{\prime},M],\]
where \([B^{\prime},M]:=B^{\prime}M-MB^{\prime}\). Note that \(\varpi^{n}[B^{\prime},M]\) is determined by the reduction of \(M\) modulo \(\varpi\). In particular, if \(M\equiv\mathrm{Id}\mod\varpi\), then, \([B^{\prime},M]=0\), and \(BMB^{-1}=M\). On the other hand, if \(M\equiv\left(\begin{array}{cc}\ell&\\ &1\end{array}\right)\mod\varpi\), then,
\[BMB^{-1}=M+\varpi^{n}\left(\begin{array}{cc}&(1-\ell)b^{\prime}\\ (\ell-1)c^{\prime}&\end{array}\right).\]
From the above computations, we find that
\[r^{\prime}_{n+1}(\sigma_{\ell}) =\left(\begin{array}{cc}\ell\tilde{x}_{n}&\\ &\tilde{z}_{n}\end{array}\right)+\varpi^{n}\left(\begin{array}{cc}a&b+(1- \ell)b^{\prime}\\ c+(\ell-1)c^{\prime}&d\end{array}\right);\] \[r^{\prime}_{n+1}(\tau_{\ell}) =\left(\begin{array}{cc}1&\widetilde{y}_{n}\\ 0&1\end{array}\right)+\varpi^{n}\left(\begin{array}{cc}e&f\\ g&h\end{array}\right).\]
Since \(\ell\not\equiv 1\mod p\), we may choose \(B\) so that \(r^{\prime}_{n+1}(\sigma_{\ell})\) is diagonal. In greater detail, \(b^{\prime}:=\frac{b}{(\ell-1)}\) and \(c^{\prime}:=\frac{c}{(1-\ell)}\), and set \(a^{\prime}=0\) and \(d^{\prime}=0\). Thus, we write
\[r^{\prime}_{n+1}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x_{n+1}&\\ &z_{n+1}\end{array}\right).\]
We find that
\[r^{\prime}_{n+1}(\sigma_{\ell}\tau_{\ell}\sigma_{\ell}^{-1})= \left(\begin{array}{cc}\ell x_{n+1}&\\ &z_{n+1}\end{array}\right)\left(\begin{array}{cc}1+\varpi^{n}e&\widetilde{y} _{n}+\varpi^{n}f\\ \varpi^{n}g&1+\varpi^{n}h\end{array}\right)\left(\begin{array}{cc}\ell x_{n+ 1}&\\ &z_{n+1}\end{array}\right)^{-1},\] \[= \left(\begin{array}{cc}1+\varpi^{n}e&\ell x_{n+1}z_{n+1}^{-1} \left(\widetilde{y}_{n}+\varpi^{n}f\right)\\ \varpi^{n}\ell^{-1}x_{n+1}z_{n+1}^{-1}g&1+\varpi^{n}h\end{array}\right),\] \[= \left(\begin{array}{cc}1+\varpi^{n}e&\ell x_{n+1}z_{n+1}^{-1} \widetilde{y}_{n}+\varpi^{n}\ell f\\ \varpi^{n}\ell^{-1}g&1+\varpi^{n}h\end{array}\right),\] \[r^{\prime}_{n+1}(\tau_{\ell})^{\ell}= \left(\begin{array}{cc}1+\varpi^{n}\ell e&\ell\widetilde{y}_{n} +\varpi^{n}\ell f\\ \varpi^{n}\ell g&1+\varpi^{n}\ell h\end{array}\right).\]
Since \(\ell\not\equiv\pm 1\mod p\), we find that \(e=g=h=0\). Hence, we find that
\[r^{\prime}_{n+1}(\sigma_{\ell})=\left(\begin{array}{cc}\ell x_{n+1}&\\ &z_{n+1}\end{array}\right);\] \[r^{\prime}_{n+1}(\tau_{\ell})=\left(\begin{array}{cc}1&y_{n+1} \\ &1\end{array}\right).\]
We set \(A_{n+1}:=BB_{n+1}\), we note that \(A_{n}\equiv B_{n+1}\mod\varpi^{n}\), and \(B\equiv\mathrm{Id}\mod\varpi^{n}\). With respect to this choice, \(\rho_{n+1}=r^{\prime}_{n+1}\), and \(A_{n}=A_{n+1}\mod\varpi^{n}\). This completes the inductive lifting argument. By previous remarks in the proof, this is enough to establish the result.
**Lemma 4.4**.: _Let \(\ell\nmid N_{g}p\) be a prime that divides \(N_{f}\). Then, the following assertions hold._
_(1) If \(\ell\in\Pi_{g}\), then, \(\sigma_{\ell}(g)=1\) and \(\sigma_{\ell}(f)=0\)._
_(2) If \(\ell\in\Omega_{g}\), then, \(\sigma_{\ell}(g)=0\) and \(\sigma_{\ell}(f)=0\)._
Proof.: We begin by proving part (1). Since \(\rho_{g}\) is unramified at \(\ell\), we find that \(V^{\prime}_{g}=V_{g}\). Since \(\bar{\rho}_{g}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&\\ &1\end{array}\right)\), it follows that \(\widetilde{P}_{\ell}(g;X)=(1-X)(1-\ell X)\). Since \(\ell\not\equiv 1\mod p\), it follows that \(\ell^{-1}\) is a root of \(\widetilde{P}_{\ell}(g;X)\) with multiplicity \(1\). Therefore, \(d_{\ell}(g)=1\). On the other hand, it follows from Lemma 4.3 that \(V^{\prime}_{f}\) is the trivial \(\mathrm{G}_{\ell}\)-module; we find that \(\widetilde{P}_{\ell}(f;X)=1-X\), and \(\ell^{-1}\) is not a root of \(\widetilde{P}_{\ell}(f;X)\). Therefore, \(d_{\ell}(f)=0\). Since \(\ell^{p-1}\not\equiv 1\mod p^{2}\), it follows that \(s_{\ell}=1\). Since \(\sigma_{\ell}(\cdot)=s_{\ell}d_{\ell}(\cdot)\), we find that \(\sigma_{\ell}(g)=1\) and \(\sigma_{\ell}(f)=0\). The second assertion can be argued in the same way.
Let \(Q=\{q_{1},\ldots,q_{n}\}\) be a set of primes contained in \(\Pi_{g}\) and \(Q^{\prime}=\{\ell_{1},\ldots,\ell_{r}\}\) a set of primes contained in \(\Omega_{g}\). For \((n,r)\in\mathbb{Z}_{\geq 0}^{2}\), we set \(\mathfrak{T}(n,r)\) to be the collection of all sets \(\Sigma=Q\cup Q^{\prime}\), where \(Q=\{q_{1},\ldots,q_{n}\}\) (resp. \(Q^{\prime}=\{\ell_{1},\ldots,\ell_{r}\}\)) is a subset of \(\Pi_{g}\) (resp. \(\Omega_{g}\)). The understanding is that when \(n=0\) (resp. \(r=0\)), the set \(Q\) (resp. \(Q^{\prime}\)) is empty. When we write \(\Sigma=\{q_{1},\ldots,q_{n},\ell_{1},\ldots,\ell_{r}\}\), we shall implicitly mean that \(\{q_{1},\ldots,q_{n}\}\) is contained in \(\Pi_{g}\) and \(\{\ell_{1},\ldots,\ell_{r}\}\) is contained in \(\Omega_{g}\).
**Definition 4.5**.: _For \(\Sigma\in\mathfrak{T}(n,r)\), set \(N_{\Sigma}:=\prod_{i=1}^{n}q_{i}\times\prod_{j=1}^{r}\ell_{j}\). Let \(\mathcal{F}(\Sigma)\) be the set of newforms \(f\) of weight \(2\) such that_
_(1) \(\bar{\rho}_{f}\simeq\bar{\rho}_{g}\),_
_(2) \(N_{f}=N_{g}N_{\Sigma}\)._
**Proposition 4.6**.: _Let \(g\) be a Hecke newform of optimal weight \(k=2\) and optimal level. Then, for \((n,r)\) such that \(n>0\) or \(r>0\), and \(\Sigma\in\mathfrak{T}(n,r)\), the set \(\mathcal{F}(\Sigma)\) is nonempty._
Proof.: The result is a direct consequence of Theorem 2.3.
**Lemma 4.7**.: _Let \(f\in\mathcal{F}(\Sigma)\), and let \(\ell\neq p\) be a prime which divides \(N_{g}\). Then, \(\sigma_{\ell}(f)=\sigma_{\ell}(g)\)._
Proof.: It suffices for us to show that \(d_{\ell}(f)=d_{\ell}(g)\). Let \(h\in\{g,f\}\), and recall that \(V_{h}^{\prime}:=(V_{h})_{\mathrm{I}_{\ell}}\). Note that as modules over the inertia group \(\mathrm{I}_{\ell}\), \(V_{h}\) and \(A_{h}[\varpi]\) are self-dual. Since \(\mathrm{ord}_{\ell}(N_{f})=\mathrm{ord}_{\ell}(N_{g})\), and \(N_{g}\) is the prime to \(p\) part of the Artin-conductor of \(\bar{\rho}\), it follows that
\[\dim A_{h}[\varpi]_{\mathrm{I}_{\ell}}=\dim(V_{h})_{\mathrm{I}_{\ell}}, \tag{4.1}\]
(cf. [1, proof of Lemma 4.1.2]). Recall that \(P_{\ell}(h;X):=\det\left(\operatorname{Id}-\sigma_{\ell}X\mid V_{h}^{\prime}\right)\) and that \(\widetilde{P}_{\ell}(h;X)\) is the mod-\(\varpi\) reduction of \(P_{\ell}(h;X)\). Set \(T_{h}^{\prime}:=(T_{h})_{\mathrm{I}_{\ell}}\). We identify \(A_{h}[\varpi]\) with \(T_{h}/\varpi T_{h}\), and thus \(A_{h}[\varpi]_{\mathrm{I}_{\ell}}\) with \(T_{h}^{\prime}/\varpi T_{h}^{\prime}\). The equality (4.1) implies that \(T_{h}^{\prime}\) is torsion free. We identify \(T_{h}^{\prime}\otimes_{\mathcal{O}}K\) with \(V_{h}^{\prime}\), and since \(T_{h}^{\prime}\) is torsion free, we find that \(T_{h}^{\prime}\) is an \(\mathcal{O}\)-lattice in \(V_{h}^{\prime}\), and that \(P_{\ell}(h;X)=\det\left(\operatorname{Id}-\sigma_{\ell}X\mid T_{h}^{\prime}\right)\). Therefore, the mod-\(\varpi\) reduction of \(P_{\ell}(h;X)\) is given by
\[\widetilde{P}_{\ell}(h;X)= \det\left(\operatorname{Id}-\sigma_{\ell}X\mid\left(T_{h}^{\prime}/\varpi T_{h}^{\prime}\right)\right),\] \[= \det\left(\operatorname{Id}-\sigma_{\ell}X\mid A_{h}[\varpi]_{\mathrm{I}_{\ell}}\right).\]
Since \(A_{g}[\varpi]\simeq A_{f}[\varpi]\) as \(\mathrm{G}_{\ell}\)-modules, we find that \(A_{g}[\varpi]_{\mathrm{I}_{\ell}}\simeq A_{f}[\varpi]_{\mathrm{I}_{\ell}}\) as \(\mathrm{G}_{\ell}/\operatorname{I}_{\ell}\)-modules. It thus follows that \(\widetilde{P}_{\ell}(f;X)=\widetilde{P}_{\ell}(g;X)\), from which we deduce that \(d_{\ell}(f)=d_{\ell}(g)\).
**Theorem 4.8**.: _Let \(g\) be a Hecke newform of optimal level \(N_{g}\), trivial nebentype and weight \(2\). Assume that the following conditions are satisfied_
1. \(\bar{\rho}_{g}\) _is absolutely irreducible,_
2. \(g\) _is_ \(p\)_-ordinary and_ \(p\nmid N_{g}\)_,_
3. \(g\) _has optimal level,_
4. \(\mu_{p}(g)=0\)_._
_Let \(\Sigma\in\mathfrak{T}(n,r)\) and \(f\in\mathcal{F}(\Sigma)\). Then, the following assertions hold_
1. \(f\) _has good ordinary reduction at_ \(p\)_,_
2. \(\mu_{p}(f)=0\)_,_
3. \(\lambda_{p}(f)=\lambda_{p}(g)+n\)
Proof.: As noted earlier, that \(f\) has ordinary reduction at \(p\) follows from [13, Lemma 3.3]. From Theorem 4.1, we find that \(\mu_{p}(f)=0\) and
\[\lambda_{p}(f)=\lambda_{p}(g)+\sum_{i=1}^{n}\left(\sigma_{q_{i}}(g)-\sigma_{q_{i }}(f)\right)+\sum_{j=1}^{r}\left(\sigma_{\ell_{j}}(g)-\sigma_{\ell_{j}}(f) \right)+\sum_{\ell\mid N_{g}}\left(\sigma_{\ell}(g)-\sigma_{\ell}(f)\right).\]
For \(\ell|N_{g}\), it follows from Lemma 4.7 that
\[\sum_{\ell\mid N_{g}}\left(\sigma_{\ell}(g)-\sigma_{\ell}(f)\right)=0.\]
It follows from (1) of Lemma 4.4 that
\[\left(\sigma_{q_{i}}(g)-\sigma_{q_{i}}(f)\right)=1\]
and it follows from (2) of Lemma 4.4 that
\[\left(\sigma_{\ell_{j}}(g)-\sigma_{\ell_{j}}(f)\right)=0.\]
It therefore follows that
\[\lambda_{p}(f)=\lambda_{p}(g)+n.\]
## 5. Constructing Galois representations with prescribed \(\lambda\)-invariant
Throughout this section, \(p\geq 5\). We introduce our assumptions. Let \(g\) be a normalized newform of weight \(2\) on \(\Gamma_{0}(N_{g})\). Throughout this section, we assume the following conditions.
1. The residue field \(\kappa=\mathcal{O}/\varpi\) is \(\mathbb{F}_{p}\), i.e., \(f(\mathfrak{p}/p)=1\) where \(\mathfrak{p}\) is the prime above \(p\) prescribed by \(\iota_{p}\).
2. The Galois representation \(\bar{\rho}_{g}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname {GL}_{2}(\mathbb{F}_{p})\) is surjective.
3. The modular form \(g\) is \(p\)-ordinary and \(p\nmid N_{g}\),
4. \(g\) has optimal level,
5. \(\mu_{p}(g)=0\).
We show that the sets \(\Pi_{g}\) and \(\Omega_{g}\) both have positive density. Furthermore, we estimate these densities. We let \(Y\) (resp. \(Y^{\prime}\)) be the subset of \(\operatorname{GL}_{2}(\mathbb{F}_{p})\) consisting of semisimple matrices conjugate to \(\left(\begin{array}{cc}a&\\ &1\end{array}\right)\) (resp. \(\left(\begin{array}{cc}a&\\ &-1\end{array}\right)\)), where \(a\neq\pm 1\). It is easy to see that
\[\#Y=\#Y^{\prime}=(p-3)\frac{\#\operatorname{GL}_{2}(\mathbb{F}_{p})}{\# \operatorname{T}(\mathbb{F}_{p})},\]
where \(\operatorname{T}\) denotes the diagonal torus. Therefore, we find that
\[\frac{\#Y}{\#\operatorname{GL}_{2}(\mathbb{F}_{p})}=\frac{\#Y^{\prime}}{\#\operatorname{GL}_{2}(\mathbb{F}_{p})}=\frac{(p-3)}{(p-1)^{2}}. \tag{5.1}\]
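As a quick sanity check of this proportion (a brute-force count of our own, not part of the argument; the helper function is ours), one can verify (5.1) directly for small primes:

```python
from itertools import product

def density_Y(p: int) -> tuple:
    """Count matrices in GL_2(F_p) conjugate to diag(a, 1) with a != ±1.

    Such a matrix is semisimple with eigenvalues {a, 1}; equivalently its
    characteristic polynomial is (X - 1)(X - a) with a = det not in {1, -1},
    i.e. trace = 1 + det and det not in {1, -1}.
    """
    count_Y, count_GL = 0, 0
    for a, b, c, d in product(range(p), repeat=4):
        det = (a * d - b * c) % p
        if det == 0:
            continue
        count_GL += 1
        if det not in (1, p - 1) and (a + d) % p == (1 + det) % p:
            count_Y += 1
    return count_Y, count_GL

for p in (5, 7):
    y, g = density_Y(p)
    print(p, y / g, (p - 3) / (p - 1) ** 2)   # the two ratios agree
```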
Let \(\bar{\rho}\) denote the residual representation \(\bar{\rho}_{g}\), and let \(\mathbb{Q}(\bar{\rho})\) be the Galois extension of \(\mathbb{Q}\) which is the fixed field of the kernel of \(\bar{\rho}\). We refer to \(\mathbb{Q}(\bar{\rho})\) as the field _cut out by \(\bar{\rho}\)_. We set \(G\) to denote the Galois group \(\operatorname{Gal}(\mathbb{Q}(\bar{\rho})/\mathbb{Q})\). The residual representation induces an
isomorphism \(\Phi:G\xrightarrow{\sim}\operatorname{GL}_{2}(\mathbb{F}_{p})\). Let \(Z\) (resp. \(Z^{\prime}\)) denote \(\Phi^{-1}(Y)\) (resp. \(\Phi^{-1}(Y^{\prime})\)). It follows from (5.1) that
\[\frac{\#Z}{\#G}=\frac{\#Z^{\prime}}{\#G}=\frac{(p-3)}{(p-1)^{2}}. \tag{5.2}\]
**Lemma 5.1**.: _Let \(\ell\nmid N_{g}p\) be a prime. The following assertions hold_
1. \(\sigma_{\ell}\in Z\) _if and only if_ \(\bar{\rho}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&0\\ 0&1\end{array}\right)\) _up to conjugation and_ \(\ell\not\equiv\pm 1\mod p\)_;_
2. \(\sigma_{\ell}\in Z^{\prime}\) _if and only if_ \(\bar{\rho}(\sigma_{\ell})=\left(\begin{array}{cc}-\ell&0\\ 0&-1\end{array}\right)\) _up to conjugation and_ \(\ell\not\equiv\pm 1\mod p\)_._
Proof.: Suppose that \(\sigma_{\ell}\in Z\) (resp. \(\sigma_{\ell}\in Z^{\prime}\)). Then, since the weight of \(g\) is \(2\) and \(g\) has trivial nebentype, we find that \(\det\bar{\rho}(\sigma_{\ell})=\ell\). It follows that \(\bar{\rho}(\sigma_{\ell})=\left(\begin{array}{cc}\ell&0\\ 0&1\end{array}\right)\) (resp. \(\bar{\rho}(\sigma_{\ell})=\left(\begin{array}{cc}-\ell&0\\ 0&-1\end{array}\right)\)) and \(\ell\not\equiv\pm 1\mod p\).
**Proposition 5.2**.: _Let \(\ell\nmid N_{g}p\) be a prime. Then, \(\sigma_{\ell}\in Z^{\prime}\) if and only if \(\ell\in\Omega_{g}\). As a consequence, it follows tbat \(\Omega_{g}\) has positive density equal to \(\frac{(p-3)}{(p-1)^{2}}\)._
Proof.: We find that \(\sigma_{\ell}\in Z^{\prime}\) if and only if \(\bar{\rho}(\sigma_{\ell})=\left(\begin{array}{cc}-\ell&0\\ 0&-1\end{array}\right)\) up to conjugation and \(\ell\not\equiv\pm 1\mod p\). In other words, \(\sigma_{\ell}\in Z^{\prime}\) if and only if \(\ell\in\Omega_{g}\). By the Chebotarev density theorem, \(\Omega_{g}\) has positive density equal to \(\frac{\#Z^{\prime}}{\#G}=\frac{(p-3)}{(p-1)^{2}}\).
Recall that \(\mathbb{Q}_{1}\) is the \(\mathbb{Z}/p\mathbb{Z}\)-extension of \(\mathbb{Q}\) contained in \(\mathbb{Q}_{\mathrm{cyc}}\). Let \(\mathcal{L}/\mathbb{Q}\) be the compositum \(\mathbb{Q}(\bar{\rho})\cdot\mathbb{Q}_{1}\).
**Lemma 5.3**.: _The extensions \(\mathbb{Q}(\bar{\rho})\) and \(\mathbb{Q}_{1}\) are linearly disjoint._
Proof.: Let \(E:=\mathbb{Q}(\bar{\rho})\cap\mathbb{Q}_{1}\) and \(N:=\operatorname{Gal}(\mathbb{Q}(\bar{\rho})/E)\). Note that \(N\) is a normal subgroup of \(G\simeq\operatorname{GL}_{2}(\mathbb{F}_{p})\) and that \([G:N]\) divides \(p\). For \(p\geq 5\), the group \(\operatorname{PSL}_{2}(\mathbb{F}_{p})\) is simple. It is easy to see that \(\operatorname{GL}_{2}(\mathbb{F}_{p})\) does not contain an index \(p\) normal subgroup. Therefore, \(N=G\) and thus, \(\mathbb{Q}(\bar{\rho})\) and \(\mathbb{Q}_{1}\) are linearly disjoint.
Set \(\Gamma_{1}:=\operatorname{Gal}(\mathbb{Q}_{1}/\mathbb{Q})\); we find that \(\operatorname{Gal}(\mathcal{L}/\mathbb{Q})\simeq G\times\Gamma_{1}\). Let \(W\) be the product \(Z\times(\Gamma_{1}\backslash\{0\})\).
**Proposition 5.4**.: _Let \(\ell\nmid N_{g}p\) be a prime. Then, \(\sigma_{\ell}\in W\) if and only if \(\ell\in\Pi_{g}\). As a result, \(\Pi_{g}\) has density equal to \(\frac{(p-3)}{p(p-1)}\)._
Proof.: Note that the prime \(\ell\) is nonsplit in \(\mathbb{Q}_{1}\) if and only if \(\ell^{p-1}\not\equiv 1\mod p^{2}\). The result is therefore a direct consequence of Lemma 5.1. By the Chebotarev density theorem, \(\Pi_{g}\) has density equal to
\[\frac{\#Z\times(\#\Gamma_{1}-1)}{\#G\times\#\Gamma_{1}}=\frac{(p-3)}{p(p-1)}.\]
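For instance, for \(p=5\) these densities are \(\frac{2}{16}=\frac{1}{8}\) for \(\Omega_{g}\) and \(\frac{2}{20}=\frac{1}{10}\) for \(\Pi_{g}\), while for \(p=7\) they are \(\frac{4}{36}=\frac{1}{9}\) and \(\frac{4}{42}=\frac{2}{21}\), respectively.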
Proof of Theorem 1.1.: The theorem is a direct consequence of Proposition 5.2, Proposition 5.4 and Theorem 4.8.
**Theorem 5.5**.: _Let \(p\geq 5\) be a prime and let \(g\) be a normalized newform of weight \(2\) on \(\Gamma_{0}(N_{g})\) satisfying all of the conditions of Theorem 1.1. Furthermore, assume that \(\lambda_{p}(g)\leq 1\). Then, for any set of primes \(\{\ell_{1},\ldots,\ell_{r}\}\subset\Omega_{g}\), there is a Hecke newform \(f\) of weight \(2\) of level \(N_{f}=N_{g}\ell_{1}\ldots\ell_{r}\) such that_
1. \(f\) _has good ordinary reduction at_ \(p\)_,_
2. \(\bar{\rho}_{g}\simeq\bar{\rho}_{f}\)_,_
3. \(\mu_{p}(f)=0\)_,_
4. \(\operatorname{rank}^{\operatorname{BK}}(f)=\lambda_{p}(g)\)_._
Proof.: It follows from Theorem 1.1 that there exists \(f\) satisfying the first three assertions, and such that \(\lambda_{p}(f)=\lambda_{p}(g)\). Proposition 3.2 implies that
\[\operatorname{rank}^{\operatorname{BK}}(f)=\lambda_{p}(f)=\lambda_{p}(g).\]
This proves the last assertion.
Proof of Theorem 1.2.: It suffices to show that there is a non-CM Hecke newform \(g\) of weight \(2\) on \(\Gamma_{0}(N_{g})\) and a density \(1\) set of primes \(\Sigma\) such that for all \(p\in\Sigma\),
1. \(p\geq 5\),
2. The residue field \(\kappa=\mathcal{O}/\varpi\) is \(\mathbb{F}_{p}\), i.e., \(f(\mathfrak{p}/p)=1\).
3. The Galois representation \(\bar{\rho}_{g}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname {GL}_{2}(\mathbb{F}_{p})\) is surjective.
4. The modular form \(g\) is \(p\)-ordinary and \(p\nmid N_{g}\),
5. \(g\) has optimal level, i.e., \(N_{g}\) is the prime to \(p\) part of the Artin conductor of the residual representation,
6. \(\mu_{p}(g)=0\) and \(\lambda_{p}(g)=0\).
It then follows from Theorem 1.1 that for \(p\in\Sigma\), there are infinitely many Hecke newforms of weight \(2\) such that
1. \(f\) has good ordinary reduction at \(p\),
2. \(\bar{\rho}_{f}\) is surjective,
3. \(\mu_{p}(f)=0\),
4. \(\lambda_{p}(f)=n\).
Let \(E_{/\mathbb{Q}}\) be any non-CM elliptic curve with Mordell-Weil rank \(0\), and let \(g\) be the associated Hecke newform. Consider the following observations.
* The field of Fourier coefficients of \(g\) is \(\mathbb{Q}\), since it is associated to an elliptic curve over \(\mathbb{Q}\). In particular, it follows that the residue field \(\kappa\) is isomorphic to \(\mathbb{F}_{p}\) for all primes \(p\).
* The set of primes at which \(E\) has good ordinary reduction has density \(1\), by a result of Serre [10].
* Serre's open image theorem [10] shows that for all but finitely many primes \(p\), the residual representation \(\bar{\rho}_{g}:\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\to\operatorname {GL}_{2}(\mathbb{F}_{p})\) is surjective.
* If at a given prime \(p\), \(N_{g}\) is not optimal, then there must be a \(p\)-congruence between \(g\) and a newform \(h\) of weight \(2\) and strictly smaller level. This follows from Ribet's level lowering theorem. There are only finitely many newforms of weight \(2\) with level strictly less than \(N_{g}\). Also, there are only finitely many primes \(p\) for which two newforms are \(p\)-congruent. Therefore, for all but finitely many primes \(p\), the level \(N_{g}\) is optimal.
* Finally, it follows from a theorem of Greenberg [10, Theorems 4.1, 5.1] that for a density \(1\) set of primes \(p\), \(\mu_{p}(g)=\lambda_{p}(g)=0\).
Therefore, there is a set of primes \(\Sigma\) of density \(1\) such that the above conditions are satisfied. This completes the proof.
|
2301.08152 | Neutrino oscillations in vortex and twisting magnetic fields | The behavior of the neutrino flux in vortex and twisting magnetic fields is
considered within the left-right symmetric model. By way of illustration of the
magnetic fields we discuss the magnetic fields of the coupled sunspots (CS's)
which are the sources of the future solar flares. It is expected that the
neutrinos have such multipole moments as the charge radius, the magnetic and
anapole moments. The evolution equation in the Schrodinger-like form is found
and all magnetic-induced resonance conversions are analyzed. It is demonstrated
that in the case of the high-energy flares the sizeable depletion of the
$\nu_{eL}$ neutrinos caused by their resonance absorptions takes place.
Possibilities of observing this phenomenon are investigated at neutrino
telescopes whose work is based on the reaction of the coherent elastic
neutrino-nucleus scattering. | O. M. Boyarkin, I. O. Boyarkina | 2023-01-19T16:14:27Z | http://arxiv.org/abs/2301.08152v3 | # Neutrino oscillations in vortex and twisting magnetic fields
###### Abstract
The behavior of the neutrino flux in vortex and twisting magnetic fields is considered within the left-right symmetric model. By way of illustration of the magnetic fields we discuss the magnetic fields of the coupled sunspots (CS's) which are the sources of the future solar flares. It is expected that the neutrinos have such multipole moments as the charge radius, the magnetic and anapole moments. The evolution equation in the Schrodinger-like form is found and all magnetic-induced resonance conversions are analyzed. It is demonstrated that in the case of the super flares one may detect the depletion of the \(\nu_{eL}\) neutrinos caused by their resonance absorptions when they travel through the CS magnetic fields. Observations of this phenomenon could be carried out at neutrino telescopes of the next generation whose work is based on the reaction of the coherent elastic neutrino-nucleus scattering.
PACS number(s): 12.60.Cn, 14.60.Pg, 96.60.Kx, 95.85.Qx, 96.60.Rd.
Key words: Solar flares, flare forecasting, neutrino oscillations, magnetic and anapole moments, neutrino charge radius, neutrino telescopes, coherent elastic neutrino-nucleus scattering, RED-100.
## 1 Introduction
Interaction of neutrinos with external electromagnetic fields is defined by the multipole moments (MM's) which are caused by the radiative corrections. The neutrino MM's have been drawing considerable attention of physicists for many years. However, for a long time there was no evidence in favor of nonzero neutrino MM's, either from laboratory experiments with ground-based neutrino sources or from observations of astrophysical neutrino fluxes. It should be stressed that until recently all that had been obtained were upper bounds on the MM values. The situation was reversed after the XENON collaboration presented results of the search for new physics with low-energy electronic recoil data obtained with the XENON1T detector [1]. One of the possible explanations of their results allows the presence of a sizable neutrino magnetic moment having a value of the order of the existing laboratory bounds. It should be stressed that due to the smallness of the MM's the electromagnetic interaction of a neutrino becomes essential only in the case of intense fields. Examples of such fields are the Sun's magnetic fields. In that case, of special interest are the magnetic fields of the sunspots which are the sources of the future solar flares (SF's). The energy generated during the SF could be as large as \(10^{28}-10^{33}\) erg. It is believed that the magnetic field is the main energy source of the SF's [2, 3]. During the periods of high activity of the Sun, a magnetic flux of \(\sim 10^{24}\) G \(\cdot\) cm\({}^{2}\)[4] is thrown up from the solar interior and accumulates within the sunspots. In so doing, big sunspots of opposite polarity could be paired, forming the so-called coupled sunspots (CS's). Then the process of magnetic energy storage begins. The length of the initial SF stage ranges from several to dozens of hours. The data concerning centimeter radiation above a spot are indicative of gas heating up to temperatures of coronal order, leading to a high value of the solar plasma conductivity. Therefore, the longitudinal electric current \(J_{z}\) might be large enough in a region above sunspots. In Ref. [5] it was shown that when the magnetic field of a newly emerged sunspot \(B_{cs}\) takes the value 2000 G, \(J_{z}\) can reach \((0.7-4)\times 10^{12}\) A. Therefore, for the CS's whose magnetic fields could increase up to values of \(10^{6}\) G, \(J_{z}\) could be \(10^{14}\) A and above. For the big CS's with \(R_{s}\simeq 10^{8}\) cm the electric current density could be as large as \(10^{-1}\) A/cm\({}^{2}\). The more powerful the SF is, the greater the magnetic field strength of the CS's will be. For example, in the case of the super-SF's [6], whose energy could be of the order of \(10^{36}\) erg, \(B_{cs}\) may reach values of \(10^{8}\) G.
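A rough consistency check of these current estimates can be obtained from a simple Ampère-law, order-of-magnitude sketch (our own estimate, not the model of Ref. [5]; the choice of flux-tube radius \(R\sim 10^{8}\) cm is an assumption here):

```python
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A

def ampere_current(B_gauss: float, R_cm: float) -> float:
    """Order-of-magnitude current I ~ 2*pi*R*B/mu0 threading a flux tube of radius R."""
    B = B_gauss * 1e-4             # gauss -> tesla
    R = R_cm * 1e-2                # cm -> m
    return 2 * math.pi * R * B / MU0

# B ~ 2000 G with R ~ 10^8 cm gives I ~ 10^12 A, while B ~ 10^6 G gives I ~ 5*10^14 A,
# consistent with the values quoted above.
print(f"{ampere_current(2e3, 1e8):.1e} A, {ampere_current(1e6, 1e8):.1e} A")
```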
It is clear that the high-power SF's are very destructive when they are focussed on the Earth as it was in 1859 (the Carrington Flare [7]). It might be worth pointing out that the flare events are also at work in other Sun-like stars (first-generation stars). Flares at these stars are also dangerous for crew members of interplanetary spaceships. Consequently, for our increasingly technologically dependent society the SF forecasting has great practical value. Moreover, the study of the SF's helps to elucidate the structure and evolution of the Universe as well. There are special cosmic projects [8] which are focussed on investigation of the flares happening at the Sun-like stars. For example, by now the Kepler mission [9] surveying the \(\sim 10^{5}\) stars has accumulated a great deal of data concerning the large flares with energies of order of \(10^{33}\) erg.
For the most part, the methods of the SF forecasting are based on measurements of the magnetic fields of the active regions in the solar photosphere being made with \(\gamma\)-telescopes observing the Sun continuously (see, for example [10]). Worthy of mention is the recent breakthrough in the SF predictions made by a team of Japanese physicists [11].
They presented the so-called kappa-scheme, a physics-based model forecasting large SF's through a critical condition of magnetohydrodynamic instability triggered by magnetic reconnection. The group tested the method using observations of the Sun from 2008 to 2019. In most cases, the method correctly identifies which regions will produce large SF's within the next 20 hours. The method also provides the exact location where each SF will begin and limits on how powerful it will be.
This paper is a continuation of the works [12, 13, 14] in which the correlation between the SF's and the behavior of the neutrino beam was discussed. It is clear that if the electron neutrino beam passing through the magnetic field of the CS's changes its composition and we could detect this change, then the problem of the SF forecasting with the help of neutrino telescopes would be resolved. In the three flavor basis this problem was discussed in Ref. [14] for the Dirac neutrinos. However, in that work it was assumed that the nondiagonal neutrino anapole moments are equal to zero and the neutrino charge radius was ignored. Contrary to that work, our consideration is of a more general character, namely, we take into account all multipole moments describing the neutrino interaction with magnetic fields and do not make any constraints on the anapole moments. Our analysis will be fulfilled both for the Majorana and Dirac neutrinos. In the next Section we discuss the general form of the electromagnetic interaction of Majorana and Dirac neutrinos and address the phenomenology of the neutrino multipole moments in laboratory experiments. Over the course of the work we assume that the multipole moments have values close to the experimental limits. In the third Section we obtain the evolution equation for the neutrino beams in the two flavor approximation and find all the possible resonance conversions. In the fourth Section we do the same working in the three flavor basis. Finally, in Section 5, some conclusions are drawn.
## 2 Electromagnetic neutrino properties
In the one-photon approximation, the effective interaction Hamiltonian satisfying the demands both of the Lorentz and of the electromagnetic gauge invariance is determined by the following expression [15, 16]
\[{\cal H}^{(\nu)}_{em}(x)=\sum_{i,f}\overline{\nu}_{i}(x)\{i\sigma_{\mu\lambda }q^{\lambda}[F^{if}_{M}(q^{2})+iF^{if}_{E}(q^{2})\gamma_{5}]+(\gamma_{\mu}-q_{ \mu}q^{\lambda}\gamma_{\lambda}/q^{2})[F^{if}_{Q}(q^{2})+\]
\[+F^{if}_{A}(q^{2})q^{2}\gamma_{5}]\}\nu_{f}(x)A^{\mu}(x), \tag{1}\]
where \(q_{\mu}=p^{\prime}_{\mu}-p_{\mu}\) is the transferred 4-momentum, while \(F^{if}_{Q},F^{if}_{M},F^{if}_{E},\) and \(F^{if}_{A}\) are the real charge, dipole magnetic, dipole electric, and anapole neutrino form factors. The form-factors with \(i=f\) (\(i\neq f\)) are named "diagonal" ("off-diagonal" or "transition") ones. In the static limit (\(q^{2}=0\)), \(F^{if}_{M}(q^{2})\), \(F^{if}_{E}(q^{2})\) and \(F^{if}_{A}(q^{2})\) determine the dipole magnetic, dipole electric and anapole moments, respectively. The second term in the expansion of
the \(F_{Q}^{if}(q^{2})\) in series of powers of \(q^{2}\) is connected with the neutrino charge radius
\[<r_{if}^{2}>=6\frac{dF_{Q}^{if}(q^{2})}{dq^{2}}\bigg{|}_{q^{2}=0}. \tag{2}\]
In what follows, amongst the neutrino electromagnetic characteristics, we shall be interested in the dipole magnetic moments (DMM), the anapole moments (AM) and the charge radii (NCR).
For the first time the behavior of the neutrino endowed with the DMM in an external magnetic field was discussed in Ref. [17]. Since then many works have appeared in which the problems of the solar neutrinos were investigated with inclusion of the DMM [18, 19, 20, 21] (see [22] for a review). It should be recorded that examination of the effects produced by the neutrino DMM's could help to find out the neutrino nature (Dirac or Majorana). The Dirac neutrinos may have both the diagonal and off-diagonal DMM's while the Majorana neutrinos could possess only the off-diagonal DMM's with the property \(\mu_{ll^{\prime}}=-\mu_{l^{\prime}l}\).
Manifestations of neutrino DMM's are being searched for in reactor (MUNU, TEXONO and GEMMA) [23, 24, 25], accelerator (LSND) [26, 27], and solar (Super-Kamiokande and Borexino) [28, 29] experiments. The current best sensitivity limits on the diagonal DMM's obtained in laboratory measurements are as follows
\[\mu_{ee}^{exp}\leq 2.9\times 10^{-11}\mu_{B},\qquad 90\%\ C.L.\qquad\mbox{[ GEMMA]} \tag{3}\]
\[\mu_{\mu\mu}^{exp}\leq 6.8\times 10^{-10}\mu_{B},\qquad 90\%\ C.L.\qquad\mbox{[ LSND]} \tag{4}\]
For the \(\tau\)-neutrino, the limits on \(\mu_{\tau\tau}\) are less stringent (see, for example, [30]), and the current upper bound on it is \(3.9\times 10^{-7}\mu_{B}\).
Astrophysical and cosmological arguments are more stringent. For example, in Ref. [31] it was demonstrated that the absence of high-energy events in the SN1987A neutrino signal leads to the inequality \(\mu_{\nu_{e}\nu_{e}}\leq 10^{-12}\mu_{B}\) at 90% C.L. Cooling rates of red giants [32] result in a comparable limit \(\mu_{\nu_{e}\nu_{e}}\leq 3\times 10^{-12}\mu_{B}\) at 90% C.L., whereas analysis of cooling rates of white dwarfs [33] puts a bound of \(\mu_{\nu_{e}\nu_{e}}\leq 10^{-11}\mu_{B}\) at 90% C.L. It should be stressed that what is measured in real experiments is the effective DMM \(\mu_{\nu_{l}\nu_{l}}^{exp}\) whose value is a rather composite function of the transition magnetic moments. Moreover, the dipole electric transition moments, if these quantities do not vanish, could give a contribution to \(\mu_{\nu_{l}\nu_{l}}^{exp}\) as well. We emphasize that reliable bounds on transition DMM's could be obtained only from detailed studying of the processes with partial lepton flavor violation. At present, in the Majorana neutrino case the global fit of the reactor and solar neutrino data gave the result [34]
\[\mu_{12},\mu_{13},\mu_{23}\leq 1.8\times 10^{-10}\mu_{B}. \tag{5}\]
Even though the neutrino has zero electric charge, it could possess a superposition of two charge distributions of opposite signs, which is characterized by an electric form factor. Then the second term in the expansion of this form factor in series of powers of \(q^{2}\) is connected with the NCR. The NCR influences the processes of
the neutrino scattering on charged particles. The limits on the NCR's could be obtained from studying the elastic neutrino-electron scattering. For example, investigation of this process at the TEXONO experiment results in the following bounds on the NCR [35]
\[-2.1\times 10^{-32}\ {\rm cm}^{2}\leq(<r_{\nu_{e}}^{2}>)\leq 3.3\times 10^{-32} \ {\rm cm}^{2}. \tag{4}\]
Investigation of coherent elastic neutrino-nucleus scattering at the TEXONO ([36]), LSND ([37]) and BNL-E734 ([38]) experiments made it possible to obtain the following bounds on the diagonal NCR's
\[-4.2\times 10^{-32}\ {\rm cm}^{2}\leq(<r_{\nu_{e}}^{2}>)\leq 6.6\times 10^{-32} \ {\rm cm}^{2},\qquad\mbox{[TEXONO]}\]
\[-5.94\times 10^{-32}\ {\rm cm}^{2}\leq(<r_{\nu_{e}}^{2}>)\leq 8.28\times 10^{-3 2}\ {\rm cm}^{2},\qquad\mbox{[LSND]}\]
\[-5.7\times 10^{-32}\ {\rm cm}^{2}\leq(<r_{\nu_{\mu}}^{2}>)\leq 1.1\times 10^{-3 2}\ {\rm cm}^{2},\qquad\mbox{[BNL-E734]}.\]
In turn, the bounds on the transition NCR's
\[\left|<r_{\nu_{e}\nu_{\mu}}^{2}>\right|\leq 28\times 10^{-32}{\rm cm}^{2}, \qquad\left|<r_{\nu_{e}\nu_{\tau}}^{2}>\right|\leq 30\times 10^{-32}{\rm cm}^{2}, \left.\rule{0.0pt}{14.226378pt}\right\} \tag{5}\]
were obtained from the analysis of the COHERENT data on CENNS [39].
The NCR has implications both for astrophysics and for cosmology. For example, in the case when the Dirac neutrino has a charge radius, right-handed neutrino-antineutrino pairs could be produced in the \(e^{+}e^{-}\) annihilations. This process could influence primordial Big-Bang Nucleosynthesis and the energy release of a core-collapse supernova.
The AM of a spin-1/2 Dirac particle was introduced in the work [40] for a \(T\)-invariant interaction which does not conserve \(P\)-parity and \(C\)-parity individually. Later, in order to describe this kind of interaction, a more general characteristic, the toroid dipole moment (TDM) [41], was introduced. It was shown that the TDM is a generalization of the AM and on the mass shell of the particle in question the two moments coincide. The neutrino toroid interaction is manifested in the scattering of neutrinos off charged particles. In so doing, the interaction conserves the neutrino helicity and gives an extra contribution, as a part of the radiative corrections. In this regard, the AM is similar to the NCR. Both quantities preserve the helicity in coherent neutrino collisions, but have different natures. They define the axial-vector (AM) and the vector (NCR) contact interactions with an external electromagnetic field, respectively. From the viewpoint of determining the NCR and AM the low-energy scattering processes are of special interest (see, for example, Refs. [42, 43]).
Both neutrino interactions may have very interesting consequences in various media. The possible role of the AM in studying the neutrino oscillations was first specified in Ref. [44]. A point that should also be mentioned is Ref. [45], where the behavior of neutrinos endowed with the AM in a vortex magnetic field was considered upon discussing the correlation between the electron neutrino flux and the solar flare events.
Since the phenomenology of the AM is analogous to that of the NCR, a linkage between these quantities must exist. In the SM for a zero-mass neutrino, the value of the AM \(a_{\nu}\) is connected with the NCR through the simple relation (see, for example, [46])
\[a^{\prime}_{\nu}=\frac{1}{6}<r_{\nu}^{2}> \tag{6}\]
(the dimensionality of the AM in the CGS system is "\({\rm length}^{2}\times{\rm charge}\)", that is to say, \(a_{\nu}=ea^{\prime}_{\nu}\) [40]). However, even in the SM with massive neutrinos this relationship is violated [47]. It breaks down in the case of the SM extensions as well [47]. Mention should also be made of the relation
\[a_{\nu_{e}}\simeq e\frac{\sqrt{2}G_{F}}{\pi^{2}}=8.5\times 10^{-13}\mu_{B} \lambda_{e}, \tag{7}\]
(\(\lambda_{e}\) is the electron Compton wavelength), which is widely met in the literature (see, for example, [48]). It appears to be very convenient for comparing the interactions with an external magnetic field caused by nonzero values of the AM and of the DMM.
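A quick numerical cross-check of this conversion (our own estimate in natural units, assuming the reduced Compton wavelength \(\lambda_{e}=\hbar/m_{e}c\)) reproduces the quoted coefficient up to rounding:

```python
import math

G_F = 1.1664e-5          # Fermi constant, GeV^-2
m_e = 0.5110e-3          # electron mass, GeV

# a_nu = e * sqrt(2) * G_F / pi^2 : an area multiplying the charge e (natural units).
a_nu_over_e = math.sqrt(2) * G_F / math.pi**2          # GeV^-2

# mu_B * lambda_e = (e / 2 m_e) * (1 / m_e) = e / (2 m_e^2)
muB_lambda_e_over_e = 1.0 / (2.0 * m_e**2)             # GeV^-2

print(a_nu_over_e / muB_lambda_e_over_e)   # ~8.7e-13, close to the 8.5e-13 quoted in the relation above
```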
## 3 Two-flavor approximation
In the SM the dipole magnetic moment (DMM) of the neutrino appears to be proportional to the neutrino mass [49]
\[\mu_{\nu}=10^{-19}\mu_{B}\Big{(}\frac{m_{\nu}}{{\rm eV}}\Big{)}, \tag{8}\]
and, as a result, cannot lead to any observable effects in real fields. Therefore, when one employs values of the neutrino DMM's which are close to the upper experimental bounds, one should go beyond the scope of the SM. To obtain a large value of the neutrino DMM, the SM extension must involve right-handed charged currents and/or singly-charged Higgs bosons. As an example of such an SM extension we shall utilize the left-right symmetric model (LRM) based on the \(SU(2)_{R}\times SU(2)_{L}\times U(1)_{B-L}\) gauge group [50, 51, 52]. In the LRM the Higgs sector content defines the neutrino nature. If it contains the bi-doublet \(\Phi(1/2,1/2,0)\) and two triplets \(\Delta_{L}(1,0,2)\), \(\Delta_{R}(0,1,2)\)[53] (in brackets the values of \(S_{L}^{W},S_{R}^{W}\) and \(B-L\) are given, where \(S_{L}^{W}\) (\(S_{R}^{W}\)) is the weak left (right) isospin while \(B\) and \(L\) are the baryon and lepton numbers, respectively), then the neutrino represents a Majorana particle. For the neutrino to have a Dirac nature, the Higgs sector must consist of the bi-doublet \(\Phi(1/2,1/2,0)\) and two doublets \(\chi_{L}(1/2,0,1)\), \(\chi_{R}(0,1/2,1)\)[54].
In the LRM the contributions to the neutrino DMM are caused both by the gauge bosons \(W^{\pm}\), \(W^{\prime\pm}\) and by the singly charged Higgs bosons \(h^{(\pm)}\), \(\tilde{\delta}^{(\pm)}\)[55, 56, 57]. Since the masses of \(W^{\prime\pm}\) and \(h^{(\pm)}\) are at the TeV scale [58], one may neglect their contributions to the neutrino DMM. On the other hand, the \(\tilde{\delta}^{(\pm)}\) boson does not interact with quarks, and as a consequence, the most reliable data for obtaining the bounds on \(m_{\tilde{\delta}}\) come from investigation of the electroweak processes. For example, data of the LEP experiments (ALEPH, DELPHI,
L3, and OPAL) yield the bound \(m_{H^{+}}>80\) GeV [58]. The interaction between the neutrino and the \(\tilde{\delta}^{(\pm)}\) boson is determined by the Lagrangian [57]
\[{\cal L}_{\tilde{\delta}}=\frac{f_{ll^{\prime}}}{\sqrt{2}}\overline{l}^{c}(x)(1 -\gamma_{5})\nu_{l^{\prime}}(x)\tilde{\delta}^{+}(x), \tag{9}\]
where \(f_{ll^{\prime}}\) are the triplet Yukawa coupling constants (TYCC), \(l,l^{\prime}=e,\mu,\tau\), and the upper index \(c\) denotes the charge conjugation operation. This interaction changes the matter potential by the amount
\[V^{\tilde{\delta}}_{ll^{\prime}}=-\frac{f_{el}f_{el^{\prime}}}{m_{\tilde{ \delta}}}n_{e}, \tag{10}\]
(\(n_{e}\) is the electron density of the matter under consideration). The analysis shows that \(V^{\tilde{\delta}}_{ll^{\prime}}\) could change the SM prediction by a \({\rm few}\times 10\%\)[12]. In what follows, for the sake of simplicity, we shall assume that only the diagonal TYCC are different from zero.
As for the magnetic field of the Sun, we reason that it is nonhomogeneous and vortex. We also assume that it exhibits the geometrical phase \(\Phi(z)\) (twisting)
\[B_{x}\pm iB_{y}=B_{\perp}e^{\pm i\Phi(z)}. \tag{11}\]
We notice that both for the Sun and for Sun-like stars the origin of the twisting is the differential rotation rates of their components and the global convection of the plasma fluid. It should be noted that configurations of the solar magnetic field with a twisting nature have been discussed in the astrophysical literature for a long time (see, for example, [59]). In Ref. [60] the phase \(\Phi\) was introduced for the description of solar neutrinos for the first time. Subsequently, in Ref. [61] an account of this phase was demonstrated. The works [62, 63, 20], devoted to the effects of twisting magnetic fields on neutrino behavior, should also be mentioned. For example, in Ref. [20] a neutrino beam traveling in the twisting magnetic fields of the solar convective zone was considered and some new effects (a change of the energy level scheme, a change of the resonance locations, the appearance of new resonances, the merging of resonances, and so on) were predicted. Assuming that the magnitude of the twist frequency \(\dot{\Phi}\) is characterized by the curvature radius \(r_{0}\) of the magnetic field lines, \(\dot{\Phi}\sim 1/r_{0}\), with \(r_{0}\) of the order of \(10\%\) of the solar radius, the authors came to the following conclusion: for these new effects to be observable, the value of \(\dot{\Phi}\) in the convective zone should be of the order of \(10^{-15}\) eV.
Since we are going to take into account the interaction of the neutrinos with electromagnetic fields, the neutrino system under consideration must include both the left-handed and the right-handed neutrinos. By virtue of the fact that the right-handed Majorana neutrinos are not sterile and interact as right-handed Dirac antineutrinos, we shall denote them by \(\overline{\nu}_{lR}\). In order to stress the sterility of the right-handed Dirac neutrinos we shall use for them the notation \(\nu_{lR}\). So, in the two-flavor approximation the Majorana neutrino system is described by the function \(\psi^{MT}=(\nu_{eL},\nu_{\kappa L},\overline{\nu}_{eR},\overline{\nu}_{\kappa R})\), while in the Dirac neutrino case we deal with the function \(\psi^{DT}=(\nu_{eL},\nu_{\kappa L},\nu_{eR},\nu_{\kappa R})\). In what follows, to be specific, we shall take \(\kappa=\mu\).
In order to facilitate the evolution equation for the solar neutrinos we pass into the reference frame rotating with the same angular velocity as the transverse magnetic field. The matrix of the transition to the new reference frame has the form
\[S=\left(\begin{array}{cccc}e^{i\Phi/2}&0&0&0\\ 0&e^{i\Phi/2}&0&0\\ 0&0&e^{-i\Phi/2}&0\\ 0&0&0&e^{-i\Phi/2}\end{array}\right). \tag{12}\]
In this reference frame for the Majorana neutrino the evolution equation will look like
\[i\frac{d}{dz}\left(\begin{array}{c}\nu_{eL}\\ \nu_{\mu L}\\ \overline{\nu}_{eR}\\ \overline{\nu}_{\mu R}\end{array}\right)=\left(\mathcal{H}_{0}^{M}+\mathcal{ H}_{int}^{M}\right)\left(\begin{array}{c}\nu_{eL}\\ \nu_{\mu L}\\ \overline{\nu}_{eR}\\ \overline{\nu}_{\mu R}\end{array}\right), \tag{13}\]
where
\[\mathcal{H}_{0}^{M}=\left(\begin{array}{cccc}-\Delta^{12}c_{2\theta}&\Delta ^{12}s_{2\theta}&0&0\\ \Delta^{12}s_{2\theta}&\Delta^{12}c_{2\theta}&0&0\\ 0&0&-\Delta^{12}c_{2\theta}&\Delta^{12}s_{2\theta}\\ 0&0&\Delta^{12}s_{2\theta}&\Delta^{12}c_{2\theta}\end{array}\right)\]
\[\mathcal{H}_{int}^{M}=\left(\begin{array}{cccc}V_{eL}^{\prime}+\mathcal{A}_{ ee}^{L}-\dot{\Phi}/2&\mathcal{A}_{e\mu}^{L}&0&\mu_{e\mu}B_{\perp}\\ \mathcal{A}_{\mu e}^{L}&V_{\mu L}+\mathcal{A}_{\mu\mu}^{L}-\dot{\Phi}/2&-\mu_{ e\mu}B_{\perp}&0\\ 0&-\mu_{e\mu}B_{\perp}&-V_{eL}^{\prime}+\mathcal{A}_{ee}^{R}+\dot{\Phi}/2& \mathcal{A}_{e\mu}^{R}\\ \mu_{e\mu}B_{\perp}&0&\mathcal{A}_{\mu e}^{R}&-V_{\mu L}+\mathcal{A}_{\mu\mu}^ {R}+\dot{\Phi}/2\end{array}\right), \tag{14}\]
the free Hamiltonian \(\mathcal{H}_{0}^{M}\) describes oscillations in vacuum, while the interaction Hamiltonian \(\mathcal{H}_{int}^{M}\) covers interaction with medium, \(V_{eL}^{\prime}\) (\(V_{\mu L}\)) is a matter potential describing interaction of the \(\nu_{eL}\) (\(\nu_{\mu L}\)) neutrinos with dense matter,
\[V_{eL}^{\prime}=V_{eL}+V_{ee}^{\delta},\qquad V_{eL}=\sqrt{2}G_{F}(n_{e}-n_{n }/2),\qquad V_{\mu L}=V_{\tau L}=-\sqrt{2}G_{F}n_{n}/2,\]
\[\Delta^{12}=\frac{m_{1}^{2}-m_{2}^{2}}{4E},\qquad\mathcal{A}_{ll^{\prime}}^{L }=\left\{e[1-\delta_{ll^{\prime}}]\frac{<r_{\nu_{lL}\nu_{lL}}^{2}>}{6}+a_{\nu _{lL}\nu_{lL}}\right\}[\text{rot }\mathbf{H}(z)]_{z},\]
\[\cos 2\theta=c_{2\theta},\ \ \sin 2\theta=s_{2\theta},\qquad\mathcal{A}_{ll^{ \prime}}^{R}=\left\{e[1-\delta_{ll^{\prime}}]\frac{<r_{\overline{\nu}_{lR} \overline{\nu}_{lR}}^{2}>}{6}-a_{\overline{\nu}_{lR}\overline{\nu}_{lR}} \right\}[\text{rot }\mathbf{H}(z)]_{z},\]
\[m_{1}=m_{e}\cos\theta-m_{\mu}\sin\theta,\qquad m_{2}=-m_{e}\sin\theta+m_{\mu} \cos\theta,\]
\(\theta\) is the neutrino mixing angle in vacuum, \(m_{1}\) and \(m_{2}\) are the mass eigenstates, \(\dot{\Phi}\) is the twisting frequency, and \(n_{n}\) is the neutron density. When writing \(\mathcal{H}_{int}^{M}\) we have taken into account that the toroid interaction is different from zero in the presence of an inhomogeneous vortex magnetic field. In a concrete experimental situation this field could be realized, owing to Maxwell's equations, through the displacement and conduction currents. We can consider the situation with solar flares (SF's) as an example. The commonly accepted model of
this solar phenomenon is the magnetic reconnection model [3]. According to this model, a variable electric field induced by the magnetic field variations of the coupled sunspots (CS's) appears at the initial phase of the SF. This field drives a conduction current which takes the form of a current layer directed along the limiting field line common to the CS's. So, in this case the neutrinos are influenced by both the displacement current and the conduction current.
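As a minimal numerical sketch (ours, not part of the original derivation), the evolution equation (13) can be propagated for a constant, \(z\)-independent Hamiltonian with a single matrix exponential. All numerical entries below -- the vacuum splitting, matter potentials, \(\mu_{e\mu}B_{\perp}\) and twist frequency -- are assumed, illustrative values, and the anapole/charge-radius terms \(\mathcal{A}\) are set to zero for simplicity.

```python
import numpy as np
from scipy.linalg import expm

HBAR_C = 1.973e-5            # eV*cm; converts energies in eV to inverse lengths

# Assumed, illustrative entries of the 4x4 Hamiltonian of Eqs. (13)-(14), in eV
delta12 = 4.6e-11            # Delta^{12} for E ~ 0.4 MeV (illustrative)
c2t, s2t = 0.406, 0.914      # cos(2*theta), sin(2*theta)
V_e, V_mu = 1.0e-29, -5.0e-30    # matter potentials in a tenuous medium (assumed)
muB = 1.7e-14                # mu_{e mu} * B_perp in eV (assumed)
phidot = -7.5e-11            # twist frequency in eV (assumed)

H0 = delta12 * np.kron(np.eye(2), np.array([[-c2t, s2t], [s2t, c2t]]))
Hint = np.array([
    [V_e - phidot / 2, 0.0,               0.0,                muB],
    [0.0,              V_mu - phidot / 2, -muB,               0.0],
    [0.0,              -muB,              -V_e + phidot / 2,  0.0],
    [muB,              0.0,               0.0,               -V_mu + phidot / 2],
])
H = H0 + Hint                # Hermitian, so the propagation below is unitary

psi0 = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)     # pure nu_eL at z = 0
for z_cm in (1e6, 1e7, 1e8):                              # assumed path lengths
    psi = expm(-1j * H * z_cm / HBAR_C) @ psi0
    print(f"z = {z_cm:.0e} cm : P = {np.round(np.abs(psi)**2, 3)}")
```

For a realistic solar profile the Hamiltonian depends on \(z\) and the propagation has to be carried out stepwise, but the structure of the calculation is the same.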
For the Dirac neutrino flux traveling through the solar medium we have
\[{\cal H}_{0}^{D}={\cal H}_{0}^{M},\qquad{\cal H}_{int}^{D}=\pmatrix{V_{eL}+{ \cal A}_{ee}^{DL}-\dot{\Phi}/2&{\cal A}_{e\mu}^{DL}&\mu_{ee}B_{\perp}&\mu_{e\mu} B_{\perp}\cr{\cal A}_{\mu e}^{DL}&V_{\mu L}+{\cal A}_{\mu\mu}^{DL}-\dot{\Phi}/2&\mu_{e \mu}B_{\perp}&\mu_{\mu\mu}B_{\perp}\cr\mu_{ee}B_{\perp}&\mu_{e\mu}B_{\perp}&\dot {\Phi}/2&0\cr\mu_{e\mu}B_{\perp}&\mu_{\mu\mu}B_{\perp}&0&\dot{\Phi}/2\cr}, \tag{15}\]
where
\[{\cal A}_{ll^{\prime}}^{DL}=\Big{\{}\frac{e<r_{\nu_{lL}\nu_{l^{\prime}L}}^{2 }>}{6}+a_{\nu_{lL}\nu_{l^{\prime}L}}\Big{\}}[{\rm rot}\ {\bf H}(z)]_{z},\]
and we have neglected contribution to the matter potential coming from the singly charged Higgs boson.
Our next task is to investigate the resonance conversions of a neutrino beam traveling in the region of the CS's, which are the source of solar flares. Recall that for a resonance conversion to take place, the following requirements must be met: (i) the resonance condition must be fulfilled; (ii) the resonance width must be nonzero; (iii) the neutrino beam must pass a distance comparable with the oscillation length.
In order to find exact expressions for the resonance conversion probabilities we must choose definite coordinate functions describing the quantities \(V_{eL}\), \(B_{\perp}\), \(\dot{\Phi}\) and solve the evolution equation (13). Then, with the help of the functions \(\nu_{l}(z)\) so found, we could determine all resonance conversion probabilities. Of course, we would then be dealing with a numerical solution and, as a result, the physical implications would be far from transparent. Moreover, in the most general case some of the resonance transitions may be forbidden. Therefore, we first must establish which of these transitions are allowed and which are forbidden. Further, we shall follow the generally accepted scheme (see, for example, [21, 64]); namely, we assume that all resonance regions are well separated, which allows us to treat them as independent. As far as the twisting is concerned, among the existing twisting models (see, for example, [65]) we choose the simple model proposed in Ref. [66]
\[\Phi(z)=\frac{\alpha}{L_{mf}}z, \tag{16}\]
where \(\alpha\) is a constant and \(L_{mf}\) is a distance on which the magnetic field exists.
We start with the resonant conversions of the electron neutrinos in the Majorana neutrino case. Here the \(\nu_{eL}\) may exhibit two resonance conversions. The \(\nu_{eL}\rightarrow\nu_{\mu L}\) (Mikheev-Smirnov-Wolfenstein -- MSW [67, 68]) resonance is the first one. The corresponding
resonance condition, the transition width and the oscillation length are defined by the expressions
\[\Sigma_{\nu_{eL}\nu_{\mu L}}=-2\Delta^{12}c_{2\theta}+V^{\prime}_{eL}-V_{\mu L}+ \mathcal{A}^{L}_{ee}-\mathcal{A}^{L}_{\mu\mu}=0, \tag{17}\]
\[\Gamma_{\nu_{eL}\nu_{\mu L}}\simeq\frac{\sqrt{2}(\Delta^{12}s_{2\theta}+ \mathcal{A}^{L}_{e\mu})}{G_{F}}, \tag{18}\]
\[L_{\nu_{eL}\nu_{\mu L}}=\frac{2\pi}{\sqrt{\Sigma^{2}_{\nu_{eL}\nu_{\mu L}}+( \Delta^{12}s_{2\theta}+\mathcal{A}^{L}_{e\mu})^{2}}}. \tag{19}\]
From Eqs.(18) and (19) it follows that the oscillation length achieves maximum value at the resonance and the relation
\[\Gamma_{\nu_{eL}\nu_{\mu L}}=\frac{2\sqrt{2}\pi}{G_{F}[L_{\nu_{eL}\nu_{\mu L}} ]_{max}} \tag{20}\]
takes place. With the help of relations (17)-(19) one can obtain the probability of the \(\nu_{eL}\rightarrow\nu_{\mu L}\) resonance transition. In the simplest case, when the neutrino system consists only of \(\nu_{eL}\) and \(\nu_{\mu L}\) and the Hamiltonian does not depend on distance, this quantity is given by the expression
\[P_{\nu_{eL}\nu_{\mu L}}(z)=\sin^{2}2\theta_{m}\sin^{2}\Bigg{(}\frac{z}{L_{\nu_ {eL}\nu_{\mu L}}}\Bigg{)}, \tag{21}\]
where
\[\sin^{2}2\theta_{m}=\frac{(\Delta^{12}s_{2\theta}+\mathcal{A}^{L}_{e\mu})^{2} }{\Sigma^{2}_{\nu_{eL}\nu_{\mu L}}+(\Delta^{12}s_{2\theta}+\mathcal{A}^{L}_{e \mu})^{2}} \tag{22}\]
and \(\theta_{m}\) is a mixing angle in a medium. In order to include contributions from the lowest-energy but most numerous \(pp\)-neutrino flux, we put \(E_{\nu}=0.4\) MeV. Next, taking into account
\[\Delta m^{2}_{12}=7.37\times 10^{-5}\ {\rm eV}^{2},\hskip 28.452756pt\sin^{2} \theta=\sin^{2}\theta_{12}=0.297 \tag{23}\]
we get \(2\Delta^{12}c_{2\theta}\simeq 8\times 10^{-11}\ {\rm eV}\). Then the neutrino flux passing through the region of this resonance must be reduced by about a factor of two, as verified by experiments [69]. Since the maximum value of the oscillation length is of the order of \(\sim 3.5\times 10^{7}\ {\rm cm}\), this resonance transition is completed before the convective zone. Consequently, it has no bearing on the SF's, which take place in the solar atmosphere. To put it another way, in the case of the MSW resonance the quantities \(\mathcal{A}^{L}_{ee}\), \(\mathcal{A}^{L}_{\mu\mu}\) and \(\mathcal{A}^{L}_{e\mu}\) do not play any role.
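For readers who wish to experiment with Eqs. (21)-(22), a direct transcription into code is given below; it applies only to the idealized constant-Hamiltonian, two-state case described above, and the sample inputs are assumed values.

```python
import numpy as np

HBAR_C = 1.973e-5   # eV*cm

def msw_probability(z_cm, Sigma, coupling):
    """Transcription of Eqs. (21)-(22) for the idealized two-state case.

    Sigma    -- detuning Sigma_{nu_eL nu_muL} of Eq. (17), in eV
    coupling -- off-diagonal term Delta^{12} s_{2theta} + A^L_{e mu}, in eV
    """
    sin2_2theta_m = coupling**2 / (Sigma**2 + coupling**2)          # Eq. (22)
    L_cm = 2.0 * np.pi * HBAR_C / np.sqrt(Sigma**2 + coupling**2)   # Eq. (19)
    return sin2_2theta_m * np.sin(z_cm / L_cm)**2                   # Eq. (21)

# At resonance (Sigma = 0) the mixing in matter is maximal; off resonance the
# conversion amplitude is suppressed.  All inputs here are illustrative.
print(msw_probability(z_cm=1.0e6, Sigma=0.0,     coupling=4.2e-11))
print(msw_probability(z_cm=1.0e6, Sigma=4.2e-10, coupling=4.2e-11))
```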
Further we shall consider the \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) resonance. The relations being pertinent to this resonance are as follows
\[\Sigma_{\nu_{eL}\overline{\nu}_{\mu R}}=-2\Delta^{12}c_{2\theta}+V^{\prime}_{eL }+V_{\mu L}+\mathcal{A}^{L}_{ee}-\mathcal{A}^{R}_{\mu\mu}-\dot{\Phi}=0 \tag{24}\]
\[\Gamma_{\nu_{eL}\overline{\nu}_{\mu R}}\simeq\frac{\sqrt{2}(\mu_{e\mu}B_{\perp })}{G_{F}}, \tag{25}\]
\[L_{\nu_{eL}\overline{\nu}_{\mu R}}\simeq\frac{2\pi}{\sqrt{\Sigma_{\nu_{eL} \overline{\nu}_{\mu R}}^{2}+(\mu_{e\mu}B_{\perp})^{2}}}. \tag{26}\]
In the solar atmosphere, the terms \(V^{\prime}_{eL}\) and \(V_{\mu L}\) in Eq. (24) are much smaller than \(\Delta^{12}c_{2\theta}\) and do not play any part. Analogously, the quantity \((a_{\nu_{eL}\nu_{eL}}+a_{\overline{\nu}_{\mu R}\overline{\nu}_{\mu R}})[{\rm rot}\ {\bf H}(z)]_{z}\) is also small compared with \(\Delta^{12}c_{2\theta}\). For example, in the best case, when the currents producing the inhomogeneous vortex magnetic field reach values of \(10^{-1}\) A/cm\({}^{2}\), for the CS's the quantity \((a_{\nu_{eL}\nu_{eL}}+a_{\overline{\nu}_{\mu R}\overline{\nu}_{\mu R}})[{\rm rot}\ {\bf H}(z)]_{z}\) is of the order of \(10^{-30}\) eV. Therefore, the resonance \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) may occur only at the cost of magnetic field twisting, that is, when the relation
\[2\Delta^{12}c_{2\theta}+\dot{\Phi}\simeq 0. \tag{27}\]
will be fulfilled.
We determine the values of the parameter \(\alpha\) which provide the fulfilment of Eq.(27) for different solar neutrinos. Assuming \(\mu_{e\mu}=\mu_{ee}\), \(B_{\perp}=10^{5}\) G and using for \(\mu_{ee}\) its upper bound \(2.9\times 10^{-11}\mu_{B}\) we obtain
\[-\alpha=\left\{\begin{array}{ll}10^{4},&\quad\mbox{for $E_{\nu}=0.1$ MeV ($pp-$ neutrinos)},\\ 10^{2},&\quad\mbox{for $E_{\nu}=10$ MeV (${}^{8}B-$ neutrinos)}.\end{array}\right. \tag{28}\]
When the magnetic field over the CS's reaches the value of \(10^{8}\) G (as it could take place for the super flare case [6]), the above mentioned values of \(|\alpha|\) decrease by a factor of \(10^{3}\). So, we see that under the specific conditions the \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) resonance may exist in the Sun's conditions. Because the resonance condition (27) does not depend on \(n_{e}\) and \(n_{n}\), then the resonance \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) may take place both in the chromosphere and in the corona.
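The orders of magnitude in Eq. (28) can be reproduced with a few lines of code. The extent \(L_{mf}\) of the twisted-field region is not specified in this section, so the value used below (comparable to the size of a sunspot system) is an assumption introduced only for this estimate.

```python
HBAR_C = 1.973e-5                  # eV*cm
DM2 = 7.37e-5                      # eV^2, from Eq. (23)
C2THETA = 1.0 - 2.0 * 0.297        # cos(2*theta_12)
L_MF_CM = 1.0e9                    # assumed extent of the twisted field, in cm

for label, E_eV in (("pp,  E = 0.1 MeV", 0.1e6), ("8B,  E = 10  MeV", 10.0e6)):
    delta12 = DM2 / (4.0 * E_eV)               # eV
    phidot = -2.0 * delta12 * C2THETA          # eV, as required by Eq. (27)
    alpha = phidot * L_MF_CM / HBAR_C          # Eq. (16): Phi(z) = alpha*z/L_mf
    print(f"{label}:  alpha ~ {alpha:.1e}")
```

With this assumed \(L_{mf}\) one obtains \(\alpha\sim-10^{4}\) and \(\sim-10^{2}\), in line with Eq. (28); a different choice of \(L_{mf}\) simply rescales \(\alpha\) proportionally.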
Let us introduce the quantity which characterizes the weakening of the electron neutrino beam
\[\eta_{\nu_{eL}\overline{\nu}_{\mu R}}=\frac{N_{i}-N_{f}}{N_{i}},\]
where \(N_{i}\) and \(N_{f}\) are the numbers of \(\nu_{eL}\) neutrinos before and after the passage of the \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) resonance, respectively. In order to estimate the value of \(\eta_{\nu_{eL}\overline{\nu}_{\mu R}}\) we should specify the distance dependence of the quantities \(n_{e}(z),n_{n}(z)\), \(B_{\perp}(z)\) and solve the evolution equation (13). However, for a rough estimate of this quantity it suffices to compare the resonance widths \(\Gamma_{\nu_{eL}\nu_{\mu L}}\) and \(\Gamma_{\nu_{eL}\overline{\nu}_{\mu R}}\), while taking into account the value of \(\eta_{\nu_{eL}\nu_{\mu L}}\). Calculations result in
\[\eta_{\nu_{eL}\overline{\nu}_{\mu R}}\simeq\left\{\begin{array}{ll}2\times 1 0^{-4},&\quad\mbox{when $\mu_{e\mu}=(\mu_{ee})_{upper}=2.9\times 10^{-11} \mu_{B}$, \ \ $B_{\perp}=10^{5}$ $G$},\\ 0.12,&\quad\mbox{when $\mu_{e\mu}=(\mu_{\mu\mu})_{upper}=6.8\times 10^{-10} \mu_{B}$, \ $B_{\perp}=10^{7}$ $G$}.\end{array}\right. \tag{29}\]
It should be noted that all the magnetic-induced resonances have the resonance widths which are completely determined by the quantity \(\mu_{\nu_{l}\nu_{l^{\prime}}}B_{\perp}\). So, the foregoing estimations remain valid for such resonance conversions.
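Since every magnetic-induced width is governed by \(\mu_{ll^{\prime}}B_{\perp}\), a small unit-conversion helper (ours) is convenient; the two sample calls simply reuse the moments and fields quoted above.

```python
MU_BOHR_EV_PER_GAUSS = 5.788e-9    # Bohr magneton expressed in eV per Gauss

def mu_B_perp_eV(mu_in_bohr_magnetons, B_perp_gauss):
    """Interaction energy mu_{ll'} * B_perp expressed in eV."""
    return mu_in_bohr_magnetons * MU_BOHR_EV_PER_GAUSS * B_perp_gauss

print(mu_B_perp_eV(2.9e-11, 1.0e5))    # ~1.7e-14 eV
print(mu_B_perp_eV(6.8e-10, 1.0e7))    # ~3.9e-11 eV
```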
It is clear that, observing the electron neutrino beam which goes through the CS's during the initial stage of the SF, we have a chance to detect its weakening. Such observations could be made with neutrino detectors based on coherent elastic (anti)neutrino-nucleus scattering. This type of low-energy (anti)neutrino interaction was predicted in 1974 [70, 71] and was recently discovered by the COHERENT Collaboration [72]. It was shown that neutrinos and antineutrinos of all types can elastically and coherently interact with all nucleons of the nucleus by means of a neutral current, provided that the momentum transferred to the nucleus is small. The cross section of such a process is relatively large: for heavy nuclei it is more than two orders of magnitude larger than the cross sections of the other known interaction processes of low-energy neutrinos. Such detectors are already being used for monitoring the operation of a nuclear reactor in the on-line regime. An example is the Russian Emission Detector-100 (RED-100) at the Kalininskaya nuclear power plant [73]. Installed at a distance of 19 meters from a nuclear reactor, where the reactor antineutrino flux reaches the value \(1.35\times 10^{13}\) cm\({}^{-2}\) s\({}^{-1}\), RED-100 should record 3300 antineutrino events per day. Moreover, in the future it is planned to scale the detector up by a factor of 10, to a sensitive-volume mass of the order of 1 ton (RED-1000) [74]. This will make it possible to register 33,000 events per day. Therefore, for example, when RED-1000 is used for the detection of solar \(pp\)-neutrinos, it could detect about 2000 neutrino events per day.
In order to determine the survival probability of the electron neutrinos we should find all neutrino functions and calculate the probabilities of the permitted resonance transitions. In that case we can also conveniently eliminate the MSW resonance from consideration. Let us assume that the electron neutrino beam has undergone the MSW resonance before it enters the magnetic field of the CS's. To put it another way, we deal with a beam which has already been weakened by the MSW resonance. Therefore, the survival probability for the Majorana electron neutrinos is given by the expression
\[{\cal P}_{\nu_{eL}\nu_{eL}}=1-{\cal P}_{\nu_{eL}\overline{\nu}_{\mu R}}. \tag{30}\]
The oscillation picture examined above would be incomplete if we did not take into consideration the oscillation transitions of the \(\nu_{\mu L}\) neutrinos which were produced in the convective zone due to the MSW resonance. In the magnetic field of the CS's they could undergo one more resonance conversion, namely, the \(\nu_{\mu L}\rightarrow\overline{\nu}_{eR}\) resonance. The resonance condition and the maximal value of the oscillation length for this resonance are determined by the expressions
\[\Sigma_{\nu_{\mu L}\overline{\nu}_{eR}}=2\Delta^{12}c_{2\theta}+V^{\prime}_{eL }+V_{\mu L}+{\cal A}^{L}_{\mu\mu}-{\cal A}^{R}_{ee}-\dot{\Phi}=0 \tag{31}\]
\[(L_{\nu_{\mu L}\overline{\nu}_{eR}})_{max}\simeq\frac{2\pi}{\mu_{ee}B_{\perp}}. \tag{32}\]
It is clear that this resonance could take place only when the value of \(2\Delta^{12}c_{2\theta}\) is compensated by the magnetic field twisting. Comparing the expression (31) with the analogous one for the \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) resonance, we see that they are mutually exclusive.
Really, the fulfilment of the \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) resonance condition will take place at negative values of \(\dot{\Phi}\), while the \(\nu_{\mu L}\rightarrow\overline{\nu}_{eR}\) resonance condition demands positive values of \(\dot{\Phi}\).
From the obtained equations for the resonance conditions, the oscillation lengths and the resonance widths we see that the contributions coming from the AM and the NCR can be safely neglected when the neutrino has the Majorana nature.
Further we proceed to the Dirac neutrino case. Here the electron neutrinos could undergo three following resonance conversions
\[\nu_{eL}\rightarrow\nu_{\mu L},\qquad\nu_{eL}\rightarrow\nu_{eR},\qquad\nu_{eL }\rightarrow\nu_{\mu R}.\]
The \(\nu_{eL}\rightarrow\nu_{\mu L}\) resonance is of little interest. As in the Majorana neutrino case it occurs before the convective zone.
The resonance condition and the maximal value of the oscillation length for the \(\nu_{eL}\rightarrow\nu_{eR}\) resonance are given by the expressions
\[\Sigma^{D}_{\nu_{eL}\nu_{eR}}=V_{eL}+{\cal A}^{DL}_{ee}-\dot{\Phi}=0. \tag{33}\]
\[(L_{\nu_{eL}\nu_{eR}})_{max}\simeq\frac{2\pi}{\mu_{ee}B_{\perp}}. \tag{34}\]
The situation, when the term proportional to [rot \({\bf H}(z)]_{z}\) is negligibly small compared to \(\dot{\Phi}\) and the resonance condition reduces to
\[V_{eL}\simeq\dot{\Phi}, \tag{35}\]
is not realistic. Indeed, in order to satisfy the reduced condition (35) it would be necessary for the twisting magnetic field to exist over a distance much bigger than the solar radius. On the other hand, as we have already seen, the quantity proportional to [rot \({\bf H}(z)]_{z}\) can reach values of \(10^{-30}\) eV and, being negative, it could compensate the term \(V_{eL}\) in Eq. (33). In this case the \(\nu_{eL}\rightarrow\nu_{eR}\) resonance may take place only in the corona.
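The smallness of the matter potentials in the outer solar atmosphere can be illustrated numerically. The electron densities below are assumed, representative values (they are not taken from the text), and the neutron contribution is neglected.

```python
import numpy as np

G_F = 1.166e-23          # Fermi constant in eV^-2
HBAR_C = 1.973e-5        # eV*cm

def V_eL_eV(n_e_per_cm3):
    """Charged-current matter potential sqrt(2)*G_F*n_e in eV (neutrons neglected)."""
    n_e_natural = n_e_per_cm3 * HBAR_C**3     # convert cm^-3 to eV^3
    return np.sqrt(2.0) * G_F * n_e_natural

for region, n_e in (("photosphere", 1e13), ("chromosphere", 1e11),
                    ("corona", 1e8), ("outer corona", 1e7)):
    print(f"{region:13s}: V_eL ~ {V_eL_eV(n_e):.1e} eV")
```

All of these values lie many orders of magnitude below \(\Delta^{12}c_{2\theta}\sim 10^{-11}\) eV, which is why the resonances above must be driven by the twisting or by the [rot \({\bf H}\)]-type terms rather than by ordinary matter effects.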
We are coming now to the \(\nu_{eL}\rightarrow\nu_{\mu R}\) resonance. In this case the pertinent expressions are as follows
\[\Sigma^{D}_{\nu_{eL}\nu_{\mu R}}=-2\Delta^{12}c_{2\theta}+V_{eL}+{\cal A}^{DL }_{ee}-\dot{\Phi}=0, \tag{36}\]
\[(L_{\nu_{eL}\nu_{\mu R}})_{max}\simeq\frac{2\pi}{\mu_{e\mu}B_{\perp}}. \tag{37}\]
Comparing the foregoing expressions with (24)-(26), one may conclude that the conditions for observing the \(\nu_{eL}\rightarrow\nu_{\mu R}\) resonance differ only slightly between the Dirac and Majorana cases. Then, considering this resonance in the region of the CS's, we may argue, as in the Majorana case, that the \(\nu_{eL}\rightarrow\nu_{\mu R}\) resonance may also occur only at the cost of magnetic field twisting. The value of \(\Delta^{12}c_{2\theta}\simeq 10^{-12}\) eV entering the resonance condition (36) could be compensated by the twisting frequency \(\dot{\Phi}\) only.
As for the \(\nu_{\mu L}\) neutrinos produced in the MSW resonance, they could undergo the following resonance conversions \(\nu_{\mu L}\rightarrow\nu_{eR}\) and \(\nu_{\mu L}\rightarrow\nu_{\mu R}\). Their resonance conditions coincide and will look like
\[\Sigma^{D}_{\nu_{\mu L}\nu_{eR}}=\Sigma^{D}_{\nu_{\mu L}\nu_{\mu R}}=2\Delta^{ 12}c_{2\theta}+V_{\mu L}+{\cal A}^{L}_{\mu\mu}-\dot{\Phi}=0. \tag{38}\]
We see that the obtained expressions have practically the same form as the \(\nu_{\mu L}\rightarrow\overline{\nu}_{eR}\) resonance condition. Therefore, one may state that the \(\nu_{\mu L}\rightarrow\nu_{eR}\) and \(\nu_{\mu L}\rightarrow\nu_{\mu R}\) resonances exhibit the identical behavior with the \(\nu_{\mu L}\rightarrow\overline{\nu}_{eR}\) resonance. As a result, if the \(\nu_{\mu L}\rightarrow\nu_{eR}\) and \(\nu_{\mu L}\rightarrow\nu_{\mu R}\) resonances are allowed then the \(\nu_{eL}\rightarrow\nu_{\mu R}\) resonance will be forbidden, and conversely.
From the foregoing equations it follows that the AM and the NCR should be taken into account in the Dirac neutrino case. Now the survival probability of the electron neutrinos is defined by the expression
\[{\cal P}_{\nu_{eL}\nu_{eL}}=1-({\cal P}_{\nu_{eL}\nu_{eR}}+{\cal P}_{\nu_{eL}\nu _{\mu R}}), \tag{39}\]
where the contribution of the MSW-resonance has been eliminated for reasons expounded above.
## 4 Three-neutrino generations
Let us consider the manner in which the inclusion of the third neutrino generation will influence the oscillations picture. For the Majorana neutrinos in the flavor basis the evolution equation will look like
\[i\frac{d}{dz}\begin{pmatrix}\nu_{eL}\\ \nu_{\mu L}\\ \nu_{\tau L}\\ \overline{\nu}_{eR}\\ \overline{\nu}_{\mu R}\\ \overline{\nu}_{\tau R}\end{pmatrix}=\left(\mathcal{H}_{0}^{M}+\mathcal{H}_{int}^{M}\right)\begin{pmatrix}\nu_{eL}\\ \nu_{\mu L}\\ \nu_{\tau L}\\ \overline{\nu}_{eR}\\ \overline{\nu}_{\mu R}\\ \overline{\nu}_{\tau R}\end{pmatrix}, \tag{40}\]

where the Hamiltonians are now \(6\times 6\) matrices and the mixing matrix connecting the flavor and mass eigenstates is
\[{\cal U}=\pmatrix{{\cal D}&0\cr 0&{\cal D}},\qquad{\cal D}=\exp(i\lambda_{7}\psi) \exp(i\lambda_{5}\phi)\exp(i\lambda_{2}\omega),\]
the \(\lambda\)'s are Gell-Mann matrices corresponding to the spin-one matrices of the \(SO(3)\) group, \(\psi=\theta_{23},\ \phi=\theta_{13},\ \omega=\theta_{12},\,s_{\psi}=\sin\psi,\ c_{ \psi}=\cos\psi\), and so on. We remind that the current values on the oscillation angles are [75]
\[\sin^{2}\theta_{12}\simeq 0.297,\qquad\sin^{2}\theta_{13}\simeq 0.0215,\qquad \sin^{2}\theta_{23}\simeq 0.425.\]
Even when one works with the three-component neutrino wave function \(\Psi^{T}=(\nu_{eL},\nu_{\mu L},\nu_{\tau L})\), the analysis of the neutrino system behavior is a cumbersome process [76]. On the other hand, one can simplify the problem by changing over to a new basis. Let us demand that in the new basis
\[\Psi^{\prime M}=\pmatrix{\nu_{1L}^{M}\cr\nu_{2L}^{M}\cr\nu_{3L}^{M}\cr\overline {\nu}_{1R}^{M}\cr\overline{\nu}_{2R}^{M}\cr\overline{\nu}_{3R}^{M}},\]
which we call a "hatched" basis, the Hamiltonian \({\cal H}_{0}^{M}\) will depend on the angle \(\omega\) only, while the Hamiltonian \({\cal H}_{int}^{M}\) depends on the angles \(\phi\) and \(\psi\). In so doing, when in this basis the angles \(\psi\) and \(\phi\) tend to zero, our results must be converted into those obtained within the two flavor approximation (FA). The hatched basis is connected with the flavor one in the following manner
\[\Psi^{\prime M}={\cal U}^{\prime}\pmatrix{\nu_{eL}\cr\nu_{\mu L}\cr\nu_{\tau L}\cr\overline{\nu}_{eR}\cr\overline{\nu}_{\mu R}\cr\overline{\nu}_{\tau R}\cr}, \tag{41}\]
where
\[{\cal U}^{\prime}=\pmatrix{{\cal D}^{\prime}&0\cr 0&{\cal D}^{\prime}},\qquad{ \cal D}^{\prime}=\exp(-i\lambda_{5}\phi)\exp(-i\lambda_{7}\psi)=\pmatrix{c_{ \phi}&-s_{\phi}s_{\psi}&-s_{\phi}c_{\psi}\cr 0&c_{\psi}&-s_{\psi}\cr s_{\phi}&c_{\phi}s_{ \psi}&c_{\phi}c_{\psi}\cr}.\]
Since the angle \(\phi\) is much less than the angles \(\psi\) and \(\omega\) then in the new basis the \(\nu_{1L}^{M}\) (\(\overline{\nu}_{1R}^{M}\)) state is predominantly the \(\nu_{eL}\) (\(\overline{\nu}_{eR}\)) one. Moreover, the relations
\[\nu_{1L}^{M}\ \Big{|}_{\phi=0}=\nu_{eL},\qquad\overline{\nu}_{1R}^{M}\ \Big{|}_{\phi=0}=\overline{\nu}_{eR} \tag{42}\]
take place.
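The explicit form of \({\cal D}^{\prime}\) and the statement that \(\nu^{M}_{1L}\) is predominantly the electron flavor can be checked with a short script (ours); the only inputs are the oscillation angles quoted above.

```python
import numpy as np
from scipy.linalg import expm

psi = np.arcsin(np.sqrt(0.425))     # theta_23
phi = np.arcsin(np.sqrt(0.0215))    # theta_13
cps, sps = np.cos(psi), np.sin(psi)
cph, sph = np.cos(phi), np.sin(phi)

lam5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])   # Gell-Mann lambda_5
lam7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])   # Gell-Mann lambda_7

D_from_exponentials = expm(-1j * lam5 * phi) @ expm(-1j * lam7 * psi)
D_closed_form = np.array([
    [cph, -sph * sps, -sph * cps],
    [0.0,  cps,       -sps      ],
    [sph,  cph * sps,  cph * cps],
])

print(np.allclose(D_from_exponentials, D_closed_form))   # True
print(np.round(np.abs(D_closed_form[0]), 3))             # flavor content of nu'_1
```

The first row is approximately \((0.99,\,0.10,\,0.11)\); that is, the \(\nu_{1L}\) state is indeed dominated by \(\nu_{eL}\), as used in the arguments below.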
Inasmuch as the experimental bounds on the charge radii and the anapole moments of all neutrino types are of the same order, for the sake of simplicity we shall assume
\[{\cal A}_{ee}^{L,R}={\cal A}_{\mu\mu}^{L,R}={\cal A}_{\tau\tau}^{L,R}={\cal A}_{ ll}^{L,R},\qquad{\cal A}_{e\mu}^{L,R}={\cal A}_{e\tau}^{L,R}={\cal A}_{\mu\tau}^{L,R}={ \cal A}_{ll^{\prime}}^{L,R}. \tag{43}\]
Then in the hatched basis after the passage to the reference frame which rotates with the same velocity as the magnetic field the Hamiltonians will look like
\[{\cal H}_{0}^{\prime M}=\pmatrix{-\Delta^{12}c_{2\omega}&\Delta^{12}s_{2\omega} &0&0&0&0\cr\Delta^{12}s_{2\omega}&\Delta^{12}c_{2\omega}&0&0&0&0\cr 0&0&\Delta^{31}+ \Delta^{32}&0&0&0\cr 0&0&0&-\Delta^{12}c_{2\omega}&\Delta^{12}s_{2\omega}&0\cr 0 &0&0&\Delta^{12}s_{2\omega}&\Delta^{12}c_{2\omega}&0\cr 0&0&0&0&0&\Delta^{31}+ \Delta^{32}}, \tag{44}\]
\[{\cal H}_{int}^{\prime M}=\pmatrix{\Lambda_{11}-\dot{\Phi}/2&\Lambda_{12}& \Lambda_{13}&0&\mu_{12}B_{\perp}&\mu_{13}B_{\perp}\cr\Lambda_{12}&\Lambda_{2 2}-\dot{\Phi}/2&\Lambda_{23}&-\mu_{12}B_{\perp}&0&-\mu_{23}B_{\perp}\cr \Lambda_{13}&\Lambda_{23}&\Lambda_{33}-\dot{\Phi}/2&-\mu_{13}B_{\perp}&\mu_{ 23}B_{\perp}&0\cr 0&-\mu_{12}B_{\perp}&-\mu_{13}B_{\perp}&\overline{ \Lambda}_{11}+\dot{\Phi}/2&\overline{\Lambda}_{12}&\overline{\Lambda}_{13}\cr \mu_{12}B_{\perp}&0&\mu_{23}B_{\perp}&\overline{\Lambda}_{12}&\overline{ \Lambda}_{22}+\dot{\Phi}/2&\overline{\Lambda}_{23}\cr\mu_{13}B_{\perp}&-\mu_{ 23}B_{\perp}&0&\overline{\Lambda}_{13}&\overline{\Lambda}_{23}&\overline{ \Lambda}_{33}+\dot{\Phi}/2}, \tag{45}\]
where
\[\mu_{12}=\mu_{e\mu}c_{\psi}c_{\phi}+\mu_{e\tau}s_{\psi}c_{\phi}+\mu_{\mu\tau} s_{\phi},\qquad\mu_{13}=\mu_{e\mu}s_{\psi}-\mu_{e\tau}c_{\psi},\]
\[\mu_{23}=-\mu_{e\mu}c_{\psi}s_{\phi}-\mu_{e\tau}s_{\psi}s_{\phi}+\mu_{\mu\tau} c_{\phi},\]
\[\Lambda_{11}=(V_{eL}^{\prime}-V_{\mu L})c_{\phi}^{2}+V_{\mu L}+{\cal A}_{ll}^{L}-2{\cal A}_{ll^{\prime}}^{L}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-s_{\phi}^{2}c_{\psi}s_{\psi}],\]
\[\Lambda_{22}=V_{\mu L}+{\cal A}_{ll}^{L}-2{\cal A}_{ll^{\prime}}^{L}s_{\psi}c_{\psi},\]
\[\Lambda_{33}=V_{\mu L}+{\cal A}_{ll}^{L}+2{\cal A}_{ll^{\prime}}^{L}[c_{\phi}^{2}s_{\psi}c_{\psi}+c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})],\]
\[\Lambda_{12}=\Lambda_{21}={\cal A}_{ll^{\prime}}^{L}(c_{\psi}-s_{\psi})[c_{\phi}+s_{\phi}(c_{\psi}+s_{\psi})],\]
\[\Lambda_{13}=\Lambda_{31}=(V_{eL}^{\prime}-V_{\mu L})c_{\phi}s_{\phi}-2{\cal A}_{ll^{\prime}}^{L}[c_{\phi}s_{\phi}c_{\psi}-(c_{\psi}+s_{\psi})(c_{\phi}^{2}-s_{\phi}^{2})],\]
\[\Lambda_{23}=\Lambda_{32}={\cal A}_{ll^{\prime}}^{L}(c_{\psi}-s_{\psi})[s_{\phi}+c_{\phi}(c_{\psi}+s_{\psi})],\]
\[\overline{\Lambda}_{ik}=\Lambda_{ik}\Big{(}V_{eL}^{\prime}\rightarrow-V_{eL}^{ \prime},V_{\mu L}\rightarrow-V_{\mu L},{\cal A}_{ll^{\prime}}^{L}\rightarrow{ \cal A}_{ll^{\prime}}^{R},{\cal A}_{ll}^{L}\rightarrow{\cal A}_{ll}^{R} \Big{)},\qquad i,k=1,2,3.\]
For the Dirac neutrinos in the basis \(\Psi^{\prime DT}=(\nu_{1L}^{D},\nu_{2L}^{D},\nu_{3L}^{D},\nu_{1R}^{D},\nu_{2R} ^{D},\nu_{3R}^{D})\) the free Hamiltonian \({\cal H}_{0}^{\prime D}\) coincides with \({\cal H}_{0}^{\prime M}\) while the interaction Hamiltonian takes the form
\[{\cal H}_{int}^{\prime D}=\pmatrix{\Lambda_{11}^{D}-\dot{\Phi}/2&\Lambda_{12}^{D}&\Lambda_{13}^{D}&\mu_{ee}^{\prime}B_{\perp}&\mu_{e\mu}^{\prime}B_{\perp}&\mu_{e\tau}^{\prime}B_{\perp}\cr\Lambda_{12}^{D}&\Lambda_{22}^{D}-\dot{\Phi}/2&\Lambda_{23}^{D}&\mu_{\mu e}^{\prime}B_{\perp}&\mu_{\mu\mu}^{\prime}B_{\perp}&\mu_{\mu\tau}^{\prime}B_{\perp}\cr\Lambda_{13}^{D}&\Lambda_{23}^{D}&\Lambda_{33}^{D}-\dot{\Phi}/2&\mu_{\tau e}^{\prime}B_{\perp}&\mu_{\tau\mu}^{\prime}B_{\perp}&\mu_{\tau\tau}^{\prime}B_{\perp}\cr\mu_{ee}^{\prime}B_{\perp}&\mu_{e\mu}^{\prime}B_{\perp}&\mu_{e\tau}^{\prime}B_{\perp}&\dot{\Phi}/2&0&0\cr\mu_{\mu e}^{\prime}B_{\perp}&\mu_{\mu\mu}^{\prime}B_{\perp}&\mu_{\mu\tau}^{\prime}B_{\perp}&0&\dot{\Phi}/2&0\cr\mu_{\tau e}^{\prime}B_{\perp}&\mu_{\tau\mu}^{\prime}B_{\perp}&\mu_{\tau\tau}^{\prime}B_{\perp}&0&0&\dot{\Phi}/2\cr}, \tag{46}\]
where
\[\Lambda^{D}_{ik}=\Lambda_{ik}(V^{\tilde{\delta}}_{ee}\to 0,{\cal A}^{L}_{ll^{ \prime}}\rightarrow{\cal A}^{DL}_{ll^{\prime}}),\]
\[\pmatrix{\mu^{\prime}_{ee}B_{\perp}&\mu^{\prime}_{e\mu}B_{\perp}&\mu^{\prime}_{ e\tau}B_{\perp}\cr\mu^{\prime}_{e\mu}B_{\perp}&\mu^{\prime}_{\mu\mu}B_{\perp}&\mu^{ \prime}_{\mu\tau}B_{\perp}\cr\mu^{\prime}_{e\tau}B_{\perp}&\mu^{\prime}_{\mu \tau}B_{\perp}&\mu^{\prime}_{\tau\tau}B_{\perp}}={\cal D}^{\prime}\pmatrix{\mu _{ee}B_{\perp}&\mu_{e\mu}B_{\perp}&\mu_{e\tau}B_{\perp}\cr\mu_{e\mu}B_{\perp}& \mu_{\mu\mu}B_{\perp}&\mu_{\mu\tau}B_{\perp}\cr\mu_{e\tau}B_{\perp}&\mu_{\mu \tau}B_{\perp}&\mu_{\tau\tau}B_{\perp}}{\cal D}^{\prime-1}.\]
Since in the Dirac neutrino case the masses of all singly charged Higgs bosons lie at the TeV scale, we have neglected their contributions in the expression for \({\cal H}^{\prime D}_{int}\). We should also stress that in the solar atmosphere all elements of \({\cal H}^{\prime M,D}_{int}\) are much smaller than those of \({\cal H}^{\prime M,D}_{0}\). So, perturbation theory may be applied.
Now we proceed to the investigation of the resonance transitions in the neutrino system under study. Assuming the Majorana neutrino nature we start our discussion from the \(\nu^{M}_{1L}\rightarrow\nu^{M}_{2L}\) transition. The resonance condition and the maximal value of the oscillation length are as follows
\[\Sigma_{\nu_{1L}\nu_{2L}}=-2\Delta^{12}c_{2\omega}+(V^{\prime}_{eL}-V_{\mu L})c^{2}_{\phi}-2{\cal A}^{L}_{ll^{\prime}}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-(1+s^{2}_{\phi})c_{\psi}s_{\psi}]=0, \tag{47}\]
\[(L_{\nu^{M}_{1L}\nu^{M}_{2L}})_{max}=\frac{2\pi}{\Delta^{12}s_{2\omega}+{\cal A }^{L}_{ll^{\prime}}(c_{\psi}-s_{\psi})[c_{\phi}+s_{\phi}(c_{\psi}+s_{\psi})]}. \tag{48}\]
Corresponding expressions for the \(\nu^{D}_{1L}\rightarrow\nu^{D}_{2L}\) transition follow from Eqs.(47),(48) under the replacement \(V^{\tilde{\delta}}_{ee}\to 0\) and \({\cal A}^{L}_{ll^{\prime}}\rightarrow{\cal A}^{DL}_{ll^{\prime}}\). When \(\psi=\phi=0\) the expressions (47) and (48) convert to the resonance condition and the oscillation length for the MSW resonance in two FA (recall that we have set \({\cal A}^{L}_{ee}={\cal A}^{L}_{\mu\mu}\)). That allows us to believe the \(\nu^{M}_{1L}\rightarrow\nu^{M}_{2L}\)-resonance as an analog of the \(\nu^{M}_{eL}\rightarrow\nu^{M}_{\mu L}\) resonance in the two FA. Moreover, by virtue of the fact
\[\frac{|\Sigma_{\nu_{1L}\nu_{2L}}-\Sigma_{\nu_{eL}\nu_{\mu L}}|}{2\Delta^{12}c_ {2\omega}}\ll 1,\]
both resonances are characterized by the identical formulas. However, for the reasons stated above, this resonance is of no interest for us and we pass to considering the \(\nu^{M}_{1L}\rightarrow\nu^{M}_{3L}\) and \(\nu^{D}_{1L}\rightarrow\nu^{D}_{3L}\) resonances. In the Hamiltonians \({\cal H}^{\prime M}\) and \({\cal H}^{\prime D}\) the quantity \(\Sigma=\Delta^{31}+\Delta^{32}\) is present. Since it offers the dominant term then the \(\nu^{M}_{3L}\) and \(\nu^{D}_{3L}\) states are decoupled from the remaining ones (except the \(\overline{\nu}^{M}_{3R}\) and \(\nu^{D}_{3R}\) states). As a result the \(\nu^{M}_{1L}\rightarrow\nu^{M}_{3L}\) and \(\nu^{D}_{1L}\rightarrow\nu^{D}_{3L}\) oscillations controlled by the \(\Sigma\)-term could be simply averaged out in the final survival probability for neutrinos of any flavor.
Further we pass to discussion of the magnetic-induced resonances which could take place in the regions of the CS's. Let us begin with the \(\nu^{M}_{1L}\rightarrow\overline{\nu}^{M}_{3R}\) and \(\nu^{D}_{1L}\rightarrow\nu^{D}_{3R}\) resonances. They are also controlled by the \(\Sigma\)-term and, as a result, these resonances appear to be forbidden.
The \(\nu^{M}_{1L}\rightarrow\overline{\nu}^{M}_{1R}\) and \(\nu^{D}_{1L}\rightarrow\nu^{D}_{1R}\) resonances are the next subject of our investigation. In the Majorana neutrino case the resonance width is equal to zero and, as a result, the
\(\nu_{1L}^{M}\rightarrow\overline{\nu}_{1R}^{M}\) resonance is not observed. For the Dirac neutrino the resonance condition and the maximum value of the oscillation length are defined by the following expressions
\[(V_{eL}-V_{\mu L})c_{\phi}^{2}+V_{\mu L}+{\cal A}_{ll}^{DL}-2{\cal A}_{ll^{\prime}}^{DL}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-s_{\phi}^{2}c_{\psi}s_{\psi}]-\dot{\Phi}=0, \tag{49}\]
\[(L_{\nu_{1L}^{D}\nu_{1R}^{D}})_{max}\simeq\frac{2\pi}{\mu_{ee}^{\prime}B_{\perp}}. \tag{50}\]
When \(\phi=0\) the obtained expressions convert into the corresponding ones for the \(\nu_{eL}^{D}\rightarrow\nu_{eR}^{D}\) in two FA (see Eqs.(33) and (34)). That allows us to consider the \(\nu_{1L}^{D}\rightarrow\nu_{1R}^{D}\) resonance as an analog of the \(\nu_{eL}^{D}\rightarrow\nu_{eR}^{D}\) resonance in two FA. Comparing the resonance condition (49) with the analogous expression (33) obtained in two FA, we see that they differ from one another by the quantity being proportional to \(\sin\phi\). Therefore, when the condition (33) is fulfilled, then the same is true for the condition (49). So, the \(\nu_{1L}^{D}\rightarrow\nu_{1R}^{D}\) resonance may be in existence in the solar corona.
In what follows we shall deal with the \(\nu_{1L}^{M}\rightarrow\overline{\nu}_{2R}^{M}\) and \(\nu_{1L}^{D}\rightarrow\nu_{2R}^{D}\) resonances. For the former the resonance condition and the maximum value of the oscillation length are as follows
\[-2\Delta^{12}c_{2\omega}+(V_{eL}^{\prime}-V_{\mu L})c_{\phi}^{2}+2V_{\mu L}+{\cal A}_{ll}^{L}-{\cal A}_{ll}^{R}+2{\cal A}_{ll^{\prime}}^{R}s_{\psi}c_{\psi}-2{\cal A}_{ll^{\prime}}^{L}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-s_{\phi}^{2}c_{\psi}s_{\psi}]-\dot{\Phi}=0, \tag{51}\]
\[(L_{\nu_{1L}^{M}\overline{\nu}_{2R}^{M}})_{max}\simeq\frac{2\pi}{\mu_{12}B_{\perp}}, \tag{52}\]
while for the latter the corresponding expressions will look like
\[-2\Delta^{12}c_{2\omega}+(V_{eL}-V_{\mu L})c_{\phi}^{2}+V_{\mu L}+{\cal A}_{ll}^{DL}-2{\cal A}_{ll^{\prime}}^{DL}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-s_{\phi}^{2}c_{\psi}s_{\psi}]-\dot{\Phi}=0, \tag{53}\]
\[(L_{\nu_{1L}^{D}\nu_{2R}^{D}})_{max}\simeq\frac{2\pi}{\mu_{e\mu}^{\prime}B_{\perp}}. \tag{54}\]
Since at \(\phi=\psi=0\) the foregoing expressions convert into the corresponding ones obtained in the two FA, one may conclude that the investigated resonances should be considered as the analogs of the \(\nu_{eL}^{M}\rightarrow\overline{\nu}_{\mu R}^{M}\) and \(\nu_{eL}^{D}\rightarrow\nu_{\mu R}^{D}\) resonances. The resonance condition (51) differs from the analogous one arrived at in the two FA by quantities proportional to \({\cal A}_{ll^{\prime}}^{L}\) and \({\cal A}_{ll^{\prime}}^{R}\). These quantities are so much smaller than the others that we can neglect them. Therefore, the resonance condition (51) reduces to Eq. (24). As for the expression (53), it reduces to the analogous expression (36) of the two FA when \(\phi=0\). So, the \(\nu_{1L}^{M}\rightarrow\overline{\nu}_{2R}^{M}\) and \(\nu_{1L}^{D}\rightarrow\nu_{2R}^{D}\) resonances may also occur only at the cost of magnetic field twisting.
Further we also consider all possible resonance transitions of the \(\nu_{2L}^{M}\) and \(\nu_{2L}^{D}\) states. The resonance condition and the maximal oscillation length for the \(\nu_{2L}^{M}\rightarrow\overline{\nu}_{1R}^{M}\) transition will look like
\[2\Delta^{12}c_{2\omega}+(V_{eL}^{\prime}-V_{\mu L})c_{\phi}^{2}+2V_{\mu L}+{\cal A}_{ll}^{L}-{\cal A}_{ll}^{R}-2{\cal A}_{ll^{\prime}}^{L}s_{\psi}c_{\psi}+2{\cal A}_{ll^{\prime}}^{R}[c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})-s_{\phi}^{2}c_{\psi}s_{\psi}]-\dot{\Phi}=0, \tag{55}\]
\[(L_{\nu_{2L}^{M}\overline{\nu}^{M}_{1R}})_{max}\simeq\frac{2\pi}{\mu_{12}B_{\perp}}. \tag{56}\]
The corresponding expressions for the \(\nu_{2L}^{D}\rightarrow\nu_{1R}^{D}\) resonance are determined by the following way
\[2\Delta^{12}c_{2\omega}+V_{\mu L}+{\cal A}_{ll}^{DL}-2{\cal A}_{ll^{\prime}}^{ DL}s_{\psi}c_{\psi}-\dot{\Phi}=0, \tag{57}\]
\[(L_{\nu_{2L}^{D}\nu_{1R}^{D}})_{max}\simeq\frac{2\pi}{\mu^{\prime}_{e\mu}B_{ \perp}}. \tag{58}\]
Setting \(\psi=\phi=0\) in the expressions obtained, we arrive at the resonance conditions and the maximal oscillation lengths for the \(\nu_{\mu L}^{M}\rightarrow\overline{\nu}_{eR}^{M}\) and \(\nu_{\mu L}^{D}\rightarrow\nu_{eR}^{D}\) transitions, which permits us to consider these resonances as the analogs of the \(\nu_{\mu L}^{M}\rightarrow\overline{\nu}_{eR}^{M}\) and \(\nu_{\mu L}^{D}\rightarrow\nu_{eR}^{D}\) resonances in the two FA. It is clear that the fulfilment of (55) and (57) may take place only at
\[2\Delta^{12}c_{2\omega}\simeq\dot{\Phi}. \tag{59}\]
Now we proceed to the treatment of the \(\nu_{2L}^{M}\rightarrow\overline{\nu}_{2R}^{M}\) and \(\nu_{2L}^{D}\rightarrow\nu_{2R}^{D}\) transitions. As far as the \(\nu_{2L}^{M}\rightarrow\overline{\nu}_{2R}^{M}\) transition is concerned, that appears to be forbidden. The resonance condition and the maximal oscillation length for the \(\nu_{2L}^{D}\rightarrow\nu_{2R}^{D}\) transition are defined by the expressions
\[V_{\mu L}+{\cal A}_{ll}^{DL}-2{\cal A}_{ll^{\prime}}^{DL}s_{\psi}c_{\psi}-\dot {\Phi}=0, \tag{60}\]
\[(L_{\nu_{2L}^{D}\nu_{2R}^{D}})_{max}\simeq\frac{2\pi}{\mu^{\prime}_{\mu\mu}B_ {\perp}}. \tag{61}\]
When \(\psi=\phi=0\) the obtained formulae convert into the resonance condition and the maximal oscillation length for the \(\nu_{\mu L}^{D}\rightarrow\nu_{\mu R}^{D}\) resonance, that is, \(\nu_{2L}^{D}\rightarrow\nu_{2R}^{D}\) resonance is the analog of \(\nu_{\mu L}^{D}\rightarrow\nu_{\mu R}^{D}\) resonance in two FA. The fulfilment of (60) may be provided only when
\[\dot{\Phi}\approx 0,\qquad V_{\mu L}\simeq-{\cal A}_{ll}^{DL}+2{\cal A}_{ll^{ \prime}}^{DL}s_{\psi}c_{\psi} \tag{62}\]
which is realistic under the Sun's conditions.
The contribution to the considered oscillation picture from the third neutrino generation may come only from the \(\nu_{3L}^{D}\rightarrow\nu_{3R}^{D}\) resonance. The expressions being pertinent to this resonance will look like
\[V_{\mu L}+{\cal A}_{ll}^{DL}+2{\cal A}_{ll^{\prime}}^{DL}[c_{\phi}^{2}s_{\psi}c_{\psi}+c_{\phi}s_{\phi}(c_{\psi}+s_{\psi})]-\dot{\Phi}=0, \tag{63}\]
\[(L_{\nu_{3L}^{D}\nu_{3R}^{D}})_{max}\simeq\frac{2\pi}{\mu^{\prime}_{\tau\tau} B_{\perp}}. \tag{64}\]
It is clear that in the resonance condition the matter potential \(V_{\mu L}\) may be compensated only by the terms which are proportional to rot \({\bf B}\).
Again we see that the influence of the AM and the NCR on the oscillation picture appears to be significant for the Dirac neutrino case only.
In neutrino physics, quantities measured in experiments should be represented in the flavor basis. Therefore, in the expressions for the resonance transition
probabilities we should pass from the hatched basis to the flavor one. Let us assume that, solving the evolution equation both for the Majorana and for the Dirac neutrinos, we have determined all the transition probabilities \({\cal P}^{M,D}(\nu_{iL}\rightarrow\nu_{kR})\) (\(i,k=1,2,3\)). Then, taking into consideration the flavor content of the \(\psi^{\prime M,D}\) states, we can find the probabilities of the transitions between any flavor states. For example, the \(\nu_{\mu L}^{D}\rightarrow\nu_{\tau R}^{D}\) transition probability is as follows
\[{\cal P}^{D}(\nu_{\mu L}\rightarrow\nu_{\tau R})=s_{\phi}^{2}\Big{[}s_{\phi}^{ 2}c_{\psi}^{2}s_{\psi}^{2}{\cal P}^{D}(\nu_{1L}\rightarrow\nu_{1R})+s_{\psi}^{ 4}{\cal P}^{D}(\nu_{1L}\rightarrow\nu_{2R})+c_{\psi}^{4}{\cal P}^{D}(\nu_{2L} \rightarrow\nu_{1R})\Big{]}+\]
\[+c_{\psi}^{2}s_{\psi}^{2}\Big{[}{\cal P}^{D}(\nu_{2L}\rightarrow\nu_{2R})+c_{ \phi}^{4}{\cal P}^{D}(\nu_{3L}\rightarrow\nu_{3R})\Big{]}. \tag{65}\]
As far as the electron neutrino survival probabilities are concerned, they are given by the expressions
\[{\cal P}^{M}(\nu_{eL}\rightarrow\nu_{eL})=1-[{\cal P}^{M}(\nu_{eL}\rightarrow\overline{\nu}_{eR})+{\cal P}^{M}(\nu_{eL}\rightarrow\overline{\nu}_{\mu R})+{\cal P}^{M}(\nu_{eL}\rightarrow\overline{\nu}_{\tau R})]\]
\[=1-c_{\phi}^{2}{\cal P}^{M}(\nu_{1L}\rightarrow\overline{\nu}_{2R}) \tag{66}\]
in the Majorana neutrino case, and
\[{\cal P}^{D}(\nu_{eL}\rightarrow\nu_{eL})=1-\{c_{\phi}^{2}[{\cal P}^{D}(\nu_{1L}\rightarrow\nu_{1R})+{\cal P}^{D}(\nu_{1L}\rightarrow\nu_{2R})]+s_{\phi}^{2}{\cal P}^{D}(\nu_{3L}\rightarrow\nu_{3R})\} \tag{67}\]
in the Dirac neutrino case. It is easy to check that when \(\psi=\phi=0\) the expressions (66) and (67) convert into the corresponding ones obtained in the two FA while \({\cal P}^{D}(\nu_{\mu L}\rightarrow\nu_{\tau R})\) becomes equal to zero.
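A direct transcription of Eqs. (65)-(67) makes the quoted \(\psi=\phi=0\) limit easy to verify. The hatched-basis probabilities fed in below are arbitrary placeholders (obtaining real values requires solving the evolution equation); only the algebraic structure is being checked.

```python
import numpy as np

def P_mu_to_tau(P, psi, phi):
    """Eq. (65): P^D(nu_muL -> nu_tauR) from hatched-basis probabilities P[(i, k)]."""
    cps, sps = np.cos(psi), np.sin(psi)
    cph, sph = np.cos(phi), np.sin(phi)
    return (sph**2 * (sph**2 * cps**2 * sps**2 * P[(1, 1)]
                      + sps**4 * P[(1, 2)]
                      + cps**4 * P[(2, 1)])
            + cps**2 * sps**2 * (P[(2, 2)] + cph**4 * P[(3, 3)]))

def P_e_survival_dirac(P, phi):
    """Eq. (67): electron neutrino survival probability in the Dirac case."""
    cph, sph = np.cos(phi), np.sin(phi)
    return 1.0 - (cph**2 * (P[(1, 1)] + P[(1, 2)]) + sph**2 * P[(3, 3)])

# Placeholder hatched-basis transition probabilities (illustrative only)
P = {(1, 1): 0.10, (1, 2): 0.05, (2, 1): 0.07, (2, 2): 0.02, (3, 3): 0.01}

print(P_mu_to_tau(P, psi=0.0, phi=0.0))                              # 0.0 in the two-FA limit
print(P_e_survival_dirac(P, phi=0.0), 1 - (P[(1, 1)] + P[(1, 2)]))   # both equal 0.85
```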
## 5 Conclusions
The behavior of neutrinos endowed with such multipole moments as the charge radius, the magnetic and anapole moments, in intense magnetic fields has been explored within the left-right symmetric model. It was assumed that the magnetic fields are vortical, inhomogeneous and of a twisting nature. For the geometrical phase \(\Phi(z)=\arctan(B_{y}/B_{x})\) connected with the magnetic field twisting \(\dot{\Phi}(z)\), the simple model \(\Phi=\alpha z/L_{mf}\) (\(L_{mf}\) is the distance over which the magnetic field exists) has been used. As examples of such magnetic fields we have considered the fields of the coupled sunspots (CS's), which are the sources of future solar flares. The investigations have been carried out both for the Majorana and for the Dirac neutrinos. In the first stage we have discussed the behavior of the neutrino beam in the two-flavor approximation (FA). The evolution equation has been written in the Schrodinger-like form and all the possible magnetic-induced resonance conversions have been found. Further, the problem has been investigated in the three FA. In order to lighten the analysis of the resonance conversions and make the results physically more transparent, we have passed from the flavor basis to a new one (the hatched basis). In the new basis the free Hamiltonian \({\cal H}_{0}\) depends on the \(\theta_{12}\) angle while the interaction Hamiltonian \({\cal H}_{int}\) depends on the \(\theta_{23}\) and \(\theta_{13}\) angles. The resonance conditions, the transition
widths and the oscillation lengths of all magnetic-induced resonances have been found. The obtained expressions differ only slightly from the corresponding ones obtained in the two FA. This is a consequence of choosing the hatched basis in such a way that one state is predominantly the \(\nu_{eL}\)-state while the remaining two are mixtures of the \(\nu_{\mu L}\)- and \(\nu_{\tau L}\)-states. Taking into account the flavor content of the hatched states, we have expressed the electron neutrino survival probability in terms of the probabilities of the transitions between hatched states.
In the description of the neutrino oscillations, the phenomenology of the NCR is analogous to that of the AM. In the Majorana neutrino case only the nondiagonal elements of the NCR are different from zero, while the AM has both nonzero diagonal and nonzero nondiagonal elements. However, under the Sun's conditions these MM's do not exert a marked influence on the values of the oscillation parameters. On the other hand, when the neutrinos have the Dirac nature, the nonzero diagonal elements of the NCR and the AM could lead to the appearance of new resonances. Using the upper bounds on the NCR
\[|<r^{2}>|={\rm few}\times 10^{-32}\ {\rm cm}^{2}\]
and the values of the current producing the CS's magnetic field
\[j=10^{-1}\ {\rm A/cm}^{2},\]
one finds that the contribution connected with these MM's is of the same order as the corona matter potential, \(\sim 10^{-30}\) eV. Therefore, the resonances initiated by the AM and the NCR may take place in the solar corona. In this case, introducing the NCR changes the resonance position and in specific cases could cause the resonance to vanish.
For all the magnetic-induced resonances the resonance width depends on the quantity \(\mu_{ll^{\prime}}B_{\perp}\) which, in its turn, determines the weakening of the electron neutrino beams \(\eta_{\nu_{eL}\nu_{xR}}\). For example, when \(\mu_{ll^{\prime}}=6.8\times 10^{-10}\mu_{B}\) and \(B_{\perp}=10^{8}\) G we have \(\eta_{\nu_{eL}\nu_{xR}}\simeq 1.2\). So, in the case of super solar flares we have a good chance to detect the weakening of the electron neutrino beam caused by the resonance conversions \(\nu_{eL}\rightarrow\overline{\nu}_{\mu R}\) and \(\nu_{eL}\rightarrow\overline{\nu}_{\tau R}\) (Majorana neutrino case) or \(\nu_{eL}\rightarrow\nu_{eR}\), \(\nu_{eL}\rightarrow\nu_{\mu R}\) and \(\nu_{eL}\rightarrow\nu_{\tau R}\) (Dirac neutrino case). It should be stressed that in the Dirac neutrino case all magnetic-induced resonances transfer active neutrinos into sterile ones, while in the Majorana neutrino case we deal with active neutrinos only. A decrease of the electron neutrino flux passing through the magnetic field region during the initial solar flare stage could be detected at next-generation neutrino detectors whose operation is based on coherent elastic neutrino-nucleus scattering.
It should be stressed that flares could take place in Sun-like stars as well. In that case super-flares present a severe hazard to astronauts. Therefore, the problem of flare forecasting is relevant for space flights as well. Obviously, terrestrial neutrino detectors will be useless when flying outside the solar system. The problem can be solved with the help of a detector similar in design to RED-100 installed on a spacecraft. This detector could operate in the "disappearance" mode for electron neutrinos of a certain wavelength.
It might be worth pointing out the connection between our results and the observations of decreasing \(\beta\)-decay rates of some elements during the initial stage of a solar flare [78, 79]. According to Refs. [80, 81] this phenomenon is caused by the depletion of the solar electron neutrinos (the hypothesis of the \(\nu_{eL}\)-induced \(\beta\) decays). Then one may state that the decrease of the \(\beta\)-decay rates is an experimental confirmation of the resonance conversions of the \(\nu_{eL}\) neutrinos when they pass through the CS's magnetic fields.
## Acknowledgments
This work is partially supported by the grant of Belorussian Ministry of Education No 20211660
|
2302.11153 | Remarks on the Daugavet Property for Complex Banach Spaces | In this article, we study the Daugavet property and the diametral diameter
two properties in complex Banach spaces. The characterizations for both
Daugavet and $\Delta$-points are revisited in the context of complex Banach
spaces. We also provide relationships between some variants of alternative
convexity and smoothness, nonsquareness, and the Daugavet property. As a
consequence, every strongly locally uniformly alternatively convex or smooth
(sluacs) Banach space does not contain $\Delta$-points from the fact that such
spaces are locally uniformly nonsquare. We also study the convex diametral
local diameter two property (convex-DLD2P) and the polynomial Daugavet property
in the vector-valued function space $A(K, X)$. From an explicit computation of
the polynomial Daugavetian index of $A(K, X)$, we show that the space $A(K, X)$
has the polynomial Daugavet property if and only if either the base algebra $A$
or the range space $X$ has the polynomial Daugavet property. Consequently, we
obtain that the polynomial Daugavet property, the Daugavet property, the
diametral diameter two properties, and the property ($\mathcal{D}$) are
equivalent for infinite-dimensional uniform algebras. | Han Ju Lee, Hyung-Joon Tag | 2023-02-22T05:30:03Z | http://arxiv.org/abs/2302.11153v4 | # Remark on the Daugavet property for complex Banach spaces
###### Abstract.
In this article, we study the Daugavet property and the diametral diameter two properties in complex Banach spaces. The characterizations for both Daugavet and \(\Delta\)-points are revisited in the context of complex Banach spaces. We also provide relationships between some variants of alternative convexity and smoothness, nonsquareness, and the Daugavet property. As a consequence, every strongly locally uniformly alternatively convex or smooth (sluacs) Banach space does not contain \(\Delta\)-points from the fact that such spaces are locally uniformly nonsquare. We also study the convex diametral local diameter two property (convex-DLD2P) and the polynomial Daugavet property in the vector-valued function space \(A(K,X)\). From an explicit computation of the polynomial Daugavetian index of \(A(K,X)\), we show that the space \(A(K,X)\) has the polynomial Daugavet property if and only if either the base algebra \(A\) or the range space \(X\) has the polynomial Daugavet property. Consequently, we obtain that the polynomial Daugavet property, the Daugavet property, the diametral diameter two properties, and the property (\(\mathcal{D}\)) are equivalent for infinite-dimensional uniform algebras.
Key words and phrases:Daugavet points, \(\Delta\)-points, alternative convexity or smoothness, nonsquareness, polynomial Daugavet property 2010 Mathematics Subject Classification: Primary 46B20; Secondary 46B04, 46E40, 46J10 The first author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377]. The second author was supported by Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education, Science and Technology [NRF-2020R1A2C1A01010377].
A Banach space \(X\) is said to have the _Daugavet property_ if every rank-one bounded linear operator \(T:X\to X\) satisfies \(\|I+T\|=1+\|T\|\), where \(I\) denotes the identity operator on \(X\). We call this equation the _Daugavet equation_. The space \(C(K)\), where \(K\) does not have isolated points, \(L_{1}(\mu)\), and \(L_{\infty}(\mu)\) with a nonatomic measure \(\mu\) are classical examples with the Daugavet property. The infinite-dimensional uniform algebras also have the Daugavet property if and only if their Shilov boundaries do not have isolated points [27, 38]. Moreover, the Daugavet property in Musielak-Orlicz spaces [24], in Lipschitz-free spaces [17], and in rearrangement-invariant Banach function lattices [3, 21] has been examined. It is well-known that every slice of \(B_{X}\) has diameter two if \(X\) has the Daugavet property, which tells us that Banach spaces with this property lie at the opposite end of the spectrum from the Radon-Nikodym property.
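For readers meeting the Daugavet equation for the first time, the following standard illustrative computation (ours, not taken from this paper) may be helpful. On \(C[0,1]\) fix \(t_{0}\in[0,1]\) and consider the rank-one operator \(Tf=f(t_{0})\mathbf{1}\), where \(\mathbf{1}\) is the constant-one function. Clearly \(\|T\|=1\), and

\[\|I+T\|=\sup_{\|f\|_{\infty}\leq 1}\,\sup_{t\in[0,1]}|f(t)+f(t_{0})|\leq 2,\]

while the choice \(f=\mathbf{1}\) gives \(\|(I+T)\mathbf{1}\|_{\infty}=2\). Hence \(\|I+T\|=2=1+\|T\|\), so this particular rank-one operator satisfies the Daugavet equation; the Daugavet property of \(C[0,1]\) asserts that the same identity holds for every rank-one operator.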
The following characterization allows us to observe the Daugavet property with slices.
**Lemma 1.1**.: _[_22_, Lemma 2.2]_ _The following are equivalent._
1. _A Banach space_ \((X,\|\cdot\|)\) _has the Daugavet property,_
2. _For every slice_ \(S=S(x^{*},\epsilon)\) _where_ \(x^{*}\in S_{X^{*}}\)_, every_ \(x\in S_{X}\) _and every_ \(\epsilon>0\)_, there exists_ \(y\in S_{X}\cap S\) _such that_ \(\|x+y\|>2-\epsilon\)_,_
3. _For every weak_\({}^{*}\)_-slice_ \(S^{*}=S(x,\epsilon)\) _where_ \(x\in S_{X}\)_, every_ \(x^{*}\in S_{X^{*}}\) _and every_ \(\epsilon>0\)_, there exists_ \(y^{*}\in S_{X^{*}}\cap S^{*}\) _such that_ \(\|x^{*}+y^{*}\|>2-\epsilon\)_,_
Later, the diametral diameter two properties (diametral D2Ps), the property (\(\mathcal{D}\)), and the convex diametral local diameter two property have gained attention from many researchers [1, 37]. They are known to be weaker than the Daugavet property.
**Definition 1.2**.:
1. _A Banach space_ \(X\) _has the property (_\(\mathcal{D}\)_) if every rank-one, norm-one projection_ \(P:X\to X\) _satisfies_ \(\|I-P\|=2\)_._
2. _A Banach space_ \(X\) _has the diametral local diameter two property (DLD2P) if for every slice_ \(S\) _of the unit ball, every_ \(x\in S\cap S_{X}\)_, and every_ \(\epsilon>0\) _there exists_ \(y\in S\) _such that_ \(\|x-y\|\geq 2-\epsilon\)_._
3. _A Banach space_ \(X\) _has the diametral diameter two property (DD2P) if for every nonempty weakly open subset_ \(W\) _of the unit ball, every_ \(x\in W\cap S_{X}\)_, and every_ \(\epsilon>0\)_, there exists_ \(y\in W\) _such that_ \(\|x-y\|\geq 2-\epsilon\)_._
4. _A Banach space_ \(X\) _has the convex diametral local diameter two property (convex-DLD2P) if_ \(\overline{conv}\Delta_{X}=B_{X}\)_._
The first known example that possesses the property (\(\mathcal{D}\)) is a certain subspace of \(L_{1}\) constructed with martingales [4]. Later on, this space was shown to have the Daugavet property [23]. In view of [15], every rank-one projection \(P\) on a Banach space \(X\) with the DLD2P satisfies \(\|I-P\|\geq 2\), and so the DLD2P implies the property (\(\mathcal{D}\)). As a matter of fact, the property (\(\mathcal{D}\)) was thought to be equivalent to the DLD2P. However, since a scalar multiple of a projection is not a projection [1], the validity of the equivalence remains unclear up to now. The implication (iii) \(\implies\) (ii) holds because every slice is a weakly open subset of the unit ball.
The DLD2P and the Daugavet property can be also considered from a local perspective by using \(\Delta\)-points and Daugavet points. Let \(\Delta_{\epsilon}(x)=\{y\in B_{X}:\|x-y\|\geq 2-\epsilon\}\).
**Definition 1.3**.:
1. _A point_ \(x\in S_{X}\) _is a_ \(\Delta\)_-point if_ \(x\in\overline{conv}\Delta_{\epsilon}(x)\) _for every_ \(\epsilon>0\)__
2. _A point_ \(x\in S_{X}\) _is a Daugavet point if_ \(B_{X}=\overline{conv}\Delta_{\epsilon}(x)\) _for every_ \(\epsilon>0\)_._
Notice that the set \(\Delta_{\epsilon}(x)\) is defined independently of the scalar field \(\mathbb{F}=\mathbb{R}\) or \(\mathbb{C}\) of the Banach space. Hence we may use the same definitions of \(\Delta\)-points and Daugavet points for complex Banach spaces. It is well-known that a real Banach space \(X\) has the Daugavet property (resp. the DLD2P) if and only if every point on the unit sphere is a Daugavet point (resp. a \(\Delta\)-point).
We mention that many recent results on the Daugavet property, the diametral D2Ps, Daugavet points, and \(\Delta\)-points have mostly revolved around _real_ Banach spaces. But there
are several results concerning these concepts in complex Banach spaces; see [18, 32]. For a real Banach space \(X\), it is well-known that \(\Delta\)-points are connected to certain behaviors of slices of the unit ball and of rank-one projections.
**Theorem 1.4**.: _[_1_]_ _Let \(X\) be a real Banach space. Then the following statements are equivalent._
1. \(x\in S_{X}\) _is a_ \(\Delta\)_-point._
2. _For every slice_ \(S\) _of_ \(B_{X}\) _with_ \(x\in S\cap S_{X}\) _and_ \(\epsilon>0\)_, there exists_ \(y\in S\) _such that_ \(\|x-y\|\geq 2-\epsilon\)_._
3. _For every rank-1 projection_ \(P=x^{*}\otimes x\) _with_ \(x^{*}x=1\)_, we have_ \(\|I-P\|\geq 2\)_._
Even though the complex analogue of this relationship may be well-known to specialists, we state and prove it here for completeness. In addition, while the Daugavet property for complex Banach spaces can be examined through rank-one real-linear operators [19], it has not been known whether we can examine the DLD2P in a similar spirit with rank-one real projections. We also study this here.
Since a denting point is always contained in slices of arbitrarily small diameter, such a point can be neither a \(\Delta\)-point nor a Daugavet point. This implies that a (locally) uniformly rotund real Banach space cannot have \(\Delta\)-points. Recently, identifying the Banach spaces that do not contain these points has been an active research topic. For example, it is shown in [2] that every uniformly nonsquare real Banach space does not have \(\Delta\)-points. Furthermore, a locally uniformly nonsquare real Banach space does not have \(\Delta\)-points [25]. We will examine strongly locally uniformly alternatively convex or smooth (sluacs) Banach spaces in this article. We mention that alternative convexity or smoothness is related to the anti-Daugavet property and the nonsquareness property [16, 36].
Banach spaces that satisfy the Daugavet equation for weakly compact polynomials are also studied in [7, 8, 30]. For Banach spaces \(X,Y\), let \(\mathcal{L}(^{k}X;Y)\) be the space of bounded \(k\)-linear mappings from \(X\) to \(Y\) and let \(\Delta_{k}:X\to X^{k}\) be a diagonal mapping defined by
\[\Delta_{k}(x)=\underbrace{(x,x,\ldots,x)}_{\text{$k$ times}}.\]
A mapping is called a bounded \(k\)_-homogeneous polynomial_ from \(X\) to \(Y\) if it is the composition of \(\Delta_{k}\) with an element of \(\mathcal{L}(^{k}X;Y)\). We denote the set of all bounded \(k\)-homogeneous polynomials from \(X\) to \(Y\) by \(\mathcal{P}(^{k}X;Y)\). A _polynomial_ is a finite sum of bounded homogeneous polynomials from \(X\) to \(Y\). We also denote the set of all polynomials from \(X\) to \(Y\) by \(\mathcal{P}(X;Y)\) and the set of all scalar-valued continuous polynomials by \(\mathcal{P}(X)\). We endow the space \(\mathcal{P}(X;X)\) (resp. \(\mathcal{P}(X)\)) with the norm \(\|P\|=\sup_{x\in B_{X}}\|Px\|_{X}\) (resp. \(\|P\|=\sup_{x\in B_{X}}|Px|\)).
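As a simple illustration of these definitions (a standard example, not specific to this article): for \(x^{*}\in X^{*}\) and \(x_{0}\in Y\), the map

\[P(x)=x^{*}(x)^{2}\,x_{0}\]

is a bounded \(2\)-homogeneous polynomial from \(X\) to \(Y\). Indeed, \(P=B\circ\Delta_{2}\), where \(B\in\mathcal{L}(^{2}X;Y)\) is the bounded bilinear map \(B(u,v)=x^{*}(u)x^{*}(v)\,x_{0}\), and \(\|P\|=\|x^{*}\|^{2}\|x_{0}\|_{Y}\). A general polynomial is then a finite sum of such homogeneous pieces, for instance \(x\mapsto x^{*}(x)\,y_{1}+x^{*}(x)^{2}\,y_{2}\) with \(y_{1},y_{2}\in Y\).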
We say that a polynomial \(P\in\mathcal{P}(X;Y)\) is weakly compact if \(P(B_{X})\) is a relatively weakly compact subset of \(Y\). A Banach space \(X\) is said to have the _polynomial Daugavet property_ if every weakly compact polynomial \(P\in\mathcal{P}(X;X)\) satisfies
\[\|I+P\|=1+\|P\|.\]
If \(X\) has the polynomial Daugavet property, then the space also has the Daugavet property. It is also well-known that the polynomial Daugavet property can be described in terms of scalar-valued polynomials.
**Theorem 1.5**.: _[_7_, Corollary 2.2]_ _Let \(X\) be a real or complex Banach space. Then the following statements are equivalent:_
1. \(X\) _has the polynomial Daugavet property._
2. _For every_ \(p\in\mathcal{P}(X)\) _with_ \(\|p\|=1\)_, every_ \(x_{0}\in S_{X}\)_, and every_ \(\epsilon>0\)_, there exist_ \(\omega\in S_{\mathbb{C}}\) _and_ \(y\in B_{X}\) _such that_ \(\text{Re}\,\omega p(y)>1-\epsilon\) _and_ \(\|x_{0}+\omega y\|>2-\epsilon\)_._
3. _For every_ \(p\in\mathcal{P}(X)\) _and every_ \(x_{0}\in X\)_, the polynomial_ \(p\otimes x_{0}\) _satisfies the Daugavet equation._
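Regarding statement (iii), we record a standard observation (not a new claim): for \(p\in\mathcal{P}(X)\) and \(x_{0}\in X\), the polynomial \(p\otimes x_{0}\in\mathcal{P}(X;X)\) is given by \((p\otimes x_{0})(x)=p(x)\,x_{0}\), so its range is contained in the one-dimensional subspace spanned by \(x_{0}\). Hence \((p\otimes x_{0})(B_{X})\) is a bounded subset of a finite-dimensional space and is therefore relatively weakly compact; every such rank-one polynomial is weakly compact. The content of the theorem is that the Daugavet equation for these special polynomials already forces it for all weakly compact polynomials.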
In this article, we will look at the polynomial Daugavet property in a function space \(A(K,X)\) over the base algebra \(A\), which will be defined later. This class of function spaces includes uniform algebras and the space of Banach space-valued continuous functions on a compact Hausdorff space. The Daugavet property and the diametral D2Ps of the vector-valued function spaces \(A(K,X)\) are studied in [27]. From the same article, assuming the uniform convexity of the range space \(X\) and \(A\otimes X\subset A(K,X)\), it is shown that the space \(A(K,X)\) has the Daugavet property if and only if its base algebra also has the Daugavet property. It is also shown in [6] that if \(X\) has the Daugavet property then \(A(K,X)\) has the Daugavet property. Here we attempt to find a necessary and sufficient condition for \(A(K,X)\) to have the polynomial Daugavet property.
The article consists of three parts. In Section 2, we revisit well-known facts about \(\Delta\)-points and Daugavet points in the context of complex Banach spaces. Like the Daugavet property, the DLD2P can also be analyzed by using rank-one real-projections (Theorem 2.2). In Section 3, we examine the relationship between alternative convexity or smoothness and nonsquareness. From the fact that strongly locally uniformly alternatively convex or smooth (sluacs) Banach spaces are locally uniformly nonsquare (Proposition 3.8(i)), every sluacs Banach space does not have a \(\Delta\)-point (Corollary 3.9). In Section 4, we study the polynomial Daugavet property of the space \(A(K,X)\). Here we explicitly compute the polynomial Daugavetian index of the space \(A(K,X)\) (Theorem 4.11). The space \(A(K,X)\) has a bicontractive projection if the Shilov boundary of the base algebra \(A\) has isolated points (Proposition 4.9). As a consequence, we will show that \(A(K,X)\) has the polynomial Daugavet property if and only if either the base algebra \(A\) or the range space \(X\) has the polynomial Daugavet property (Corollary 4.12).
## 2. Delta-points and Daugavet points in complex Banach spaces
In this section, we study \(\Delta\)-points and Daugavet points for complex Banach spaces. Although portions of the proofs are similar to the real case, we include them in this article for completeness. However, we mention that the complex scalar field \(\mathbb{C}\) provides something more, namely a tool to analyze the Daugavet property and the DLD2P for complex Banach spaces through rank-one real-linear operators and rank-one real-projections, respectively. We recall the following useful lemma:
**Lemma 2.1**.: _[_15_, Lemma 1.4]_ _Let \(x^{*}\in S_{X^{*}}\), \(\epsilon>0\). Then for every \(x\in S(x^{*},\epsilon)\) and every \(\delta\in(0,\epsilon)\) there exists \(y^{*}\in S_{X^{*}}\) such that \(x\in S(y^{*},\delta)\) and \(S(y^{*},\delta)\subset S(x^{*},\epsilon)\)._
**Theorem 2.2**.: _Let \(X\) be a complex Banach space. Then the following statements are equivalent._
1. \(x\in S_{X}\) _is a_ \(\Delta\)_-point._
2. _For every slice_ \(S\) _of_ \(B_{X}\) _with_ \(x\in S\cap S_{X}\) _and_ \(\epsilon>0\)_, there exists_ \(y\in S\) _such that_ \(\|x-y\|\geq 2-\epsilon\)_._
3. _For every rank-1 projection_ \(P=x^{*}\otimes x\) _with_ \(x^{*}x=1\)_, we have_ \(\|I-P\|\geq 2\)_._
4. _For every rank-1 real-projection_ \(P=\text{Re}\,x^{*}\otimes x\) _with_ \(x^{*}x=1\)_, we have_ \(\|I-P\|\geq 2\)_._
Proof.: The implications (i) \(\iff\) (ii) and (ii) \(\iff\) (iv) come from modifying the proofs given in [33, Proposition 1.4.5] for Daugavet points and [15].
(i) \(\implies\) (ii): Assume to the contrary that there exist a slice \(S=S(x^{*},\alpha)\) containing \(x\) and \(\alpha>0\) such that \(\|x-y\|<2-\alpha\) for every \(y\in S\). This implies that \(S\cap\Delta_{\alpha}(x)=\emptyset\). Since \(x\) is a \(\Delta\)-point and \(x\in S\), we see that \(S\cap\overline{conv}\Delta_{\alpha}(x)\neq\emptyset\). Choose \(y\in S\) such that \(\operatorname{Re}x^{*}y>1-\alpha+\delta\) for sufficiently small \(\delta>0\). Then there exist \(y_{1},y_{2},\ldots,y_{n}\in\Delta_{\alpha}(x)\) such that
\[\operatorname{Re}x^{*}y-\frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i} \leq\left\|y-\frac{1}{n}\sum_{i=1}^{n}y_{i}\right\|<\delta.\]
From the fact that \(y_{i}\)'s are not in the slice \(S\), we have
\[1-\alpha<\operatorname{Re}x^{*}y-\delta=\operatorname{Re}x^{*}y -\frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i}+\frac{1}{n}\sum_{i=1}^{ n}\operatorname{Re}x^{*}y_{i}-\delta < \frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i}\] \[< 1-\alpha,\]
which leads to contradiction.
(ii) \(\implies\) (i): Suppose that \(x\notin\overline{conv}\Delta_{\epsilon}(x)\) for some \(\epsilon>0\). Notice that a singleton \(\{x\}\) is convex as well as \(\overline{conv}\Delta_{\epsilon}(x)\). Moreover, the set \(\{x\}\) is compact. So by Hahn-Banach separation theorem, there exist \(x^{*}\in S_{X^{*}}\) and \(\alpha>0\) such that for every \(z\in\overline{conv}\Delta_{\epsilon}(x)\), we have \(\operatorname{Re}x^{*}z<\alpha<\operatorname{Re}x^{*}x\leq 1\). Hence, we see that \(z\notin S(x^{*},1-\alpha)\) for every \(z\in\overline{conv}\Delta_{\epsilon}(x)\), in particular, for every \(z\in\Delta_{\epsilon}(x)\). This leads to a contradiction from our assumption (ii).
(iii) \(\implies\) (ii): Consider a slice \(S(x^{*},\delta)\), an element \(x\in S(x^{*},\delta)\) and \(\epsilon>0\). Then there exist \(\delta_{1}>0\) such that \(\frac{\sqrt{2\delta_{1}}}{1-\delta_{1}}<\frac{\epsilon}{4}\) and a bounded linear functional \(y^{*}\in S_{X^{*}}\) such that \(x\in S(y^{*},\delta_{1})\subset S(x^{*},\delta)\).
Consider a rank-one projection \(P:y\mapsto y^{*}y\frac{x}{y^{*}x}\). Then by the assumption (iii), for every \(\beta<\frac{\epsilon}{2}\), there exists \(y\in S_{X}\) such that
\[\|y-Py\|=\left\|y-y^{*}y\frac{x}{y^{*}x}\right\|=\left\|\gamma y-\operatorname {Re}y^{*}(\gamma y)\frac{x}{y^{*}x}\right\|>2-\beta, \tag{1}\]
where \(\gamma\in\mathbb{T}\) such that \(|y^{*}y|=\gamma y^{*}y=y^{*}(\gamma y)=\operatorname{Re}y^{*}(\gamma y)\).
Moreover, we also see that \(\gamma y\in S(y^{*},\delta_{1})\). Let \(\tilde{y}=\gamma y\). Then by (1), we obtain
\[\|\tilde{y}-x\| = \left\|\gamma y-\operatorname{Re}y^{*}(\gamma y)\frac{x}{y^{*}x} +\operatorname{Re}y^{*}(\gamma y)\frac{x}{y^{*}x}-x\right\|\] \[\geq \left\|\gamma y-\operatorname{Re}y^{*}(\gamma y)\frac{x}{y^{*}x} \right\|-\left\|x-\operatorname{Re}y^{*}(\gamma y)\frac{x}{y^{*}x}\right\|\] \[> 2-\beta-\left|1-\frac{\operatorname{Re}y^{*}(\gamma y)}{y^{*}x} \right|.\]
Since \(|y^{*}x|\geq\operatorname{Re}y^{*}x>1-\delta_{1}\), we can see that
\[\left|1-\frac{\operatorname{Re}y^{*}(\gamma y)}{y^{*}x}\right|=\frac{|y^{*}x- \operatorname{Re}y^{*}(\gamma y)|}{|y^{*}x|}\leq\frac{|1-y^{*}x|+(1- \operatorname{Re}y^{*}(\gamma y))}{1-\delta_{1}}\]
and
\[(\operatorname{Im}y^{*}x)^{2}=|y^{*}x|^{2}-(\operatorname{Re}y^{*}x)^{2}<1-(1 -\delta_{1})^{2}<\delta_{1},\]
Hence, we have
\[|1-y^{*}x|=\sqrt{(1-\operatorname{Re}y^{*}x)^{2}+(\operatorname{ Im}y^{*}x)^{2}} < \sqrt{(1-\operatorname{Re}y^{*}x)^{2}+\delta_{1}}\] \[< \sqrt{\delta_{1}^{2}+\delta_{1}}<\sqrt{2\delta_{1}}.\]
These consequently show that
\[\|\tilde{y}-x\|>2-\beta-\frac{2\sqrt{2\delta_{1}}}{1-\delta_{1}}>2-\epsilon.\]
(ii) \(\implies\) (iii): Every rank-one projection is of the form \(P=x^{*}\otimes x\), where \(\|x^{*}\|\geq 1,\|x\|=1\), and \(x^{*}x=1\). Define an operator \(T\in L(X)\) by \(T(y)=\frac{x^{*}y}{\|x^{*}\|}\cdot x\) and consider a slice \(S=\{y\in B_{X}:\text{Re}\;\frac{x^{*}}{\|x^{*}\|}y\geq 1-\frac{\epsilon}{2}\}\) containing \(x\). Since \(\left|\frac{x^{*}}{\|x^{*}\|}y\right|\geq\text{Re}\;\frac{x^{*}}{\|x^{*}\|}y>1 -\frac{\epsilon}{2}\), we know that \(\left(\text{Im}\;\frac{x^{*}}{\|x^{*}\|}y\right)^{2}=\left|\frac{x^{*}}{\|x^ {*}\|}y\right|^{2}-\left(\text{Re}\frac{x^{*}}{\|x^{*}\|}y\right)^{2}<1-(1- \frac{\epsilon}{2})^{2}\). Then
\[\left|1-\frac{x^{*}}{\|x^{*}\|}y\right|=\sqrt{\left(1-\text{Re}\;\frac{x^{*}} {\|x^{*}\|}y\right)^{2}+\left(\text{Im}\;\frac{x^{*}}{\|x^{*}\|}y\right)^{2} }<\sqrt{\frac{\epsilon^{2}}{4}+1-\left(1-\frac{\epsilon}{2}\right)^{2}}<\sqrt {\epsilon}.\]
Moreover, we see that
\[\|(I-T)y\|\geq\|y-x\|-\left\|x-\frac{x^{*}}{\|x^{*}\|}y\cdot x\right\| > 2-\frac{\epsilon}{2}-\left|1-\frac{x^{*}}{\|x^{*}\|}y\right|\] \[> 2-\frac{\epsilon}{2}-\sqrt{\epsilon}.\]
Hence, \(\|I-T\|\geq 2\).
Now define a function \(\varphi(\lambda)=\|I-\lambda T\|\) where \(\lambda\in[0,\infty)\). It is easy to show that the function \(\varphi\) is a convex function on \([0,\infty)\). Also, \(\varphi(0)=1\) and \(\varphi(1)=\|I-T\|\geq 2\). Furthermore,
\[0<\frac{\varphi(1)-\varphi(0)}{1-0}\leq\frac{\varphi(s)-\varphi(1)}{s-1}\; \;\text{for all}\;\;s\geq 1.\]
So, for every \(s\geq 1\) we see that \(\varphi(s)>\varphi(1)\geq 2\). In particular, if \(s=\|x^{*}\|\geq 1\) we obtain that \(\varphi(\|x^{*}\|)=\|I-\|x^{*}\|\cdot T\|=\|I-P\|\geq 2\), which proves (ii) \(\implies\) (iii).
Hence we can verify the relationship between \(\Delta\)-points, the DLD2P, and spaces with bad projections [15] for complex Banach spaces.
**Corollary 2.3**.: _Let \(X\) be a complex Banach space. The following statements are equivalent:_
1. _The space_ \(X\) _has the DLD2P._
2. _Every point on the unit sphere_ \(S_{X}\) _is a_ \(\Delta\)_-point._
3. _Every rank-one projection_ \(P\) _has_ \(\|I-P\|\geq 2\)_, i.e._ \(X\) _is the space with bad projections._
4. _Every rank-one real-projection_ \(P\) _has_ \(\|I-P\|\geq 2\)_._
The following statement about Daugavet points is a compilation of well-known results, but we include them for completeness.
**Theorem 2.4**.: _Let \(X\) be a complex Banach space. Then the following statements are equivalent._
1. \(x\in S_{X}\) _is a Daugavet point._
2. _For every slice_ \(S\) _of_ \(B_{X}\) _and_ \(\epsilon>0\)_, there exists_ \(y\in S\) _such that_ \(\|x-y\|\geq 2-\epsilon\)_._
3. _Every rank-one operator of the form_ \(T=x^{*}\otimes x\) _satisfies_ \(\|I-T\|=1+\|T\|\)_._
4. _Every rank-one operator of the form_ \(T=x^{*}\otimes x\) _with norm one satisfies_ \(\|I-T\|=2\)_._
5. _Every rank-one real-linear operator of the form_ \(T=\text{Re}\,x^{*}\otimes x\) _satisfies_ \(\|I-T\|=1+\|T\|\)_._
6. _Every rank-one real-linear operator of the form_ \(T=\text{Re}\,x^{*}\otimes x\) _with norm one satisfies_ \(\|I-T\|=2\)_._
Proof.: One may see the proof for the real case in [33, Proposition 1.4.5].
(i) \(\implies\) (ii): Assume to the contrary that there exist a slice \(S=S(x^{*},\alpha)\) and \(\epsilon>0\) such that \(\|x-y\|<2-\epsilon\) for every \(y\in S\). This implies that \(S\cap\Delta_{\epsilon}(x)=\emptyset\). Since \(x\) is a Daugavet point, we have \(B_{X}=\overline{conv}\Delta_{\epsilon}(x)\), and in particular \(S\subset\overline{conv}\Delta_{\epsilon}(x)\). Choose \(y\in S\) such that \(\operatorname{Re}x^{*}y>1-\alpha+\delta\) for sufficiently small \(\delta>0\). Then there exist \(y_{1},y_{2},\ldots,y_{n}\in\Delta_{\epsilon}(x)\) such that
\[\operatorname{Re}x^{*}y-\frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i} \leq\left\|y-\frac{1}{n}\sum_{i=1}^{n}y_{i}\right\|<\delta.\]
From the fact that \(y_{i}\)'s are not in the slice \(S\), we have
\[1-\alpha<\operatorname{Re}x^{*}y-\delta=\operatorname{Re}x^{*}y- \frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i}+\frac{1}{n}\sum_{i=1}^{n }\operatorname{Re}x^{*}y_{i}-\delta < \frac{1}{n}\sum_{i=1}^{n}\operatorname{Re}x^{*}y_{i}\] \[< 1-\alpha,\]
which leads to contradiction.
(ii) \(\implies\) (i): Suppose that \(B_{X}\neq\overline{conv}\Delta_{\epsilon}(x)\) for some \(\epsilon>0\), and choose \(y\in B_{X}\setminus\overline{conv}\Delta_{\epsilon}(x)\). The singleton \(\{y\}\) is convex and compact, and \(\overline{conv}\Delta_{\epsilon}(x)\) is convex and closed. So by the Hahn-Banach separation theorem, there exist \(x^{*}\in S_{X^{*}}\) and \(\alpha>0\) such that for every \(z\in\overline{conv}\Delta_{\epsilon}(x)\), we have \(\operatorname{Re}x^{*}(z)<\alpha<\operatorname{Re}x^{*}y\leq 1\). Hence, we see that \(z\notin S(x^{*},1-\alpha)\) for every \(z\in\overline{conv}\Delta_{\epsilon}(x)\), and in particular no point of \(\Delta_{\epsilon}(x)\) lies in the slice \(S(x^{*},1-\alpha)\). This leads to a contradiction with our assumption (ii).
(iii) \(\implies\) (iv) is clear, and (iv) \(\implies\) (iii) comes from the fact that for the Daugavet equation it is enough to consider rank-one operators of norm one [37]. The equivalence (ii) \(\iff\) (v) \(\iff\) (vi) is a well-known result from [1].
(iv) \(\implies\) (ii): Let \(x^{*}\in S_{X^{*}}\), \(\epsilon>0\), and let \(S=\{y\in B_{X}:\operatorname{Re}x^{*}y>1-\frac{\epsilon}{2}\}\) be a slice of \(B_{X}\). Let \(T\in L(X)\) be the rank-one operator of norm one of the form \(T(y)=x^{*}y\cdot x\), where \(\|x^{*}\|=\|x\|=1\). By the assumption (iv), there exists \(y\in S_{X}\) such that \(\|y-Ty\|=\|y-x^{*}y\cdot x\|>2-\frac{\epsilon}{2}\). Notice that \(|x^{*}y|=\operatorname{Re}x^{*}(\gamma y)\) for some \(\gamma\in\mathbb{T}\), and so
\[1+\operatorname{Re}x^{*}(\gamma y)\geq\|y-x^{*}y\cdot x\|>2-\frac{\epsilon}{2}.\]
Hence \(\gamma y\in S\). Let \(\tilde{y}=\gamma y\). Then we have
\[\|\tilde{y}-x\|\geq\|y-\operatorname{Re}x^{*}(\gamma y)\cdot x\|-1+ \operatorname{Re}x^{*}(\gamma y)>2-\epsilon.\]
Therefore, we see that (ii) holds.
(ii) \(\implies\) (iv): Let \(T=x^{*}\otimes x\) be a rank-one operator of norm one, so that \(\|x^{*}\|=\|x\|=1\). Let \(S=\{y\in B_{X}:\operatorname{Re}x^{*}y>1-\frac{\epsilon}{2}\}\) be a slice of \(B_{X}\). By the assumption (ii), there exists \(y\in S\) such that \(\|x-y\|>2-\frac{\epsilon}{2}\). Notice that \(1\geq|x^{*}y|^{2}=(\operatorname{Re}x^{*}y)^{2}+(\operatorname{Im}x^{*}y)^{2}\). Hence we have
\[|1-x^{*}y|=\sqrt{(1-\operatorname{Re}x^{*}y)^{2}+(\operatorname{Im}x^{*}y)^{2} }<\sqrt{\frac{\epsilon^{2}}{4}+1-\left(1-\frac{\epsilon}{2}\right)^{2}}< \sqrt{\epsilon}.\]
Moreover,
\[\|(I-T)y\|=\|y-x^{*}yx\|\geq\|y-x\|-\|x-x^{*}y\cdot x\| > 2-\frac{\epsilon}{2}-|1-x^{*}y|\] \[> 2-\frac{\epsilon}{2}-\sqrt{\epsilon}.\]
Since \(\epsilon>0\) is arbitrary, we obtain \(\|I-T\|\geq 2\). Then (iv) holds immediately from the fact that \(\|I-T\|\leq 1+\|T\|=2\)
**Corollary 2.5**.: _Let \(X\) be a complex Banach space. Then the following statements are equivalent:_
1. _The space_ \(X\) _has the Daugavet property._
2. _Every point on the unit sphere_ \(S_{X}\) _is a Daugavet point._
Now, we make a similar observation on rank-one, norm-one projections.
**Proposition 2.6**.: _Let \(X\) be a complex Banach space. The following statements are equivalent:_
1. _Every rank-one projection_ \(P=x^{*}\otimes x\) _of norm-one on_ \(X\) _satisfies_ \(\|I-P\|=2\)_._
2. _Every rank-one real-projection_ \(P=\operatorname{Re}x^{*}\otimes x\) _of norm-one on_ \(X\) _satisfies_ \(\|I-P\|=2\)_._
Proof.: (i) \(\implies\) (ii): Let \(P=\operatorname{Re}x^{*}\otimes x\), where \(x^{*}\in S_{X^{*}}\) and \(x\in S_{X}\) satisfying \(\operatorname{Re}x^{*}x=x^{*}x=1\). Then for every \(\epsilon>0\), there exists \(y\in S_{X}\) such that \(\|y-x^{*}y\cdot x\|\geq 2-\epsilon\) in view of (i). Now take \(\gamma\in\mathbb{T}\) such that \(|x^{*}y|=\gamma x^{*}y\). This implies that \(x^{*}(\gamma y)=\operatorname{Re}x^{*}(\gamma y)\). Hence we see that
\[\|I-P\|\geq\|\gamma y-\operatorname{Re}x^{*}(\gamma y)\cdot x\|=\|\gamma(y-x^ {*}y\cdot x)\|=\|y-x^{*}y\cdot x\|\geq 2-\epsilon.\]
Since \(\epsilon>0\) is arbitrary, we obtain \(\|I-P\|=2\).
(ii) \(\implies\) (i): Let \(P=x^{*}\otimes x\), where \(x^{*}\in S_{X^{*}}\) and \(x\in S_{X}\) satisfying \(x^{*}x=1\). From (ii), we see that for every \(\epsilon>0\), there exists \(y\in S_{X}\) such that \(\|y-\operatorname{Re}x^{*}y\cdot x\|\geq 2-\frac{\epsilon}{2}\). Then we have \(|\operatorname{Re}x^{*}y|\geq 1-\frac{\epsilon}{2}\). Notice that
\[(\operatorname{Im}x^{*}y)^{2}=|x^{*}y|^{2}-(\operatorname{Re}x^{*}y)^{2}<1- \left(1-\frac{\epsilon}{2}\right)^{2}<\epsilon.\]
Hence, for \(y\in S_{X}\), we obtain
\[\|I-P\|\geq\|y-x^{*}y\cdot x\|\geq\|y-\operatorname{Re}x^{*}y\cdot x\|-|\operatorname{Im}x^{*}y|\geq 2-\frac{\epsilon}{2}-\sqrt{\epsilon}.\]
Therefore, since \(\epsilon>0\) is arbitrary, we obtain \(\|I-P\|=2\).
## 3. Alternative convexity or smoothness, nonsquareness, and the Daugavet property
In this section, we study the relationship between alternative convexity or smoothness, nonsquareness, and the Daugavet property. First, we recall various nonsquareness properties in the sense of James [36, 16]. Uniform nonsquareness has been examined for both real and complex Banach spaces via Jordan-von Neumann constants [26].
**Definition 3.1**.:
1. _A Banach space_ \(X\) _is uniformly nonsquare (UNSQ) if there exists_ \(\delta>0\) _such that for every_ \(x,y\in S_{X}\)_,_ \(\min\{\|x\pm y\|\}\leq 2-\delta\)_._
2. _A Banach space_ \(X\) _is locally uniformly nonsquare (LUNSQ) if for every_ \(x\in S_{X}\)_, there exists_ \(\delta>0\) _such that_ \(\min\{\|x\pm y\|\}\leq 2-\delta\) _for every_ \(y\in S_{X}\)_._
3. _A Banach space_ \(X\) _is nonsquare (NSQ) if for every_ \(x,y\in S_{X}\)_,_ \(\min\{\|x\pm y\|\}<2\)_._
Here we call each point \(x\in S_{X}\) in (ii) a _locally uniformly nonsquare point (or uniformly non-\(\ell_{1}^{2}\) point)_. We have the following implication for these classes:
\[\text{UNSQ}\ \ \implies\ \ \text{LUNSQ}\ \ \implies\ \ \text{NSQ}.\]
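Two standard examples may help fix the terminology (classical facts, not part of the original argument). Every Hilbert space \(H\) is UNSQ: for \(x,y\in S_{H}\) the parallelogram law gives

\[\|x+y\|^{2}+\|x-y\|^{2}=4,\]

so \(\min\{\|x+y\|,\|x-y\|\}\leq\sqrt{2}=2-(2-\sqrt{2})\). On the other hand, the two-dimensional space \(\ell_{1}^{2}\) is not even NSQ, since \(x=(1,0)\) and \(y=(0,1)\) satisfy \(\|x+y\|_{1}=\|x-y\|_{1}=2\); the same phenomenon occurs in \(\ell_{\infty}^{2}\) with \(x=(1,1)\) and \(y=(1,-1)\).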
It has been recently shown that UNSQ real Banach spaces do not have \(\Delta\)-points at all [2]. We extend this result to the class of LUNSQ Banach spaces over the complex scalar field. Let us start with an improvement of Theorem 2.2. For the proof in the real case, we refer to [17].
**Lemma 3.2**.: _Let \(X\) be a complex Banach space and \(x\in S_{X}\) be a \(\Delta\)-point. For every \(\epsilon>0\), every \(\alpha>0\) with \(\frac{\alpha}{1-\alpha}<\epsilon\), and every slice \(S=S(x^{*},\alpha)\) containing \(x\), there exists a slice \(S(z^{*},\alpha_{1})\) of \(B_{X}\) such that \(S(z^{*},\alpha_{1})\subset S(x^{*},\alpha)\) and \(\|x-y\|>2-\epsilon\) for all \(y\in S(z^{*},\alpha_{1})\)._
Proof.: Let \(x^{*}\in S_{X^{*}}\) and \(S=S(x^{*},\alpha)\) be a slice containing \(x\). First choose \(\eta>0\) such that \(\eta<\min\left\{1-\frac{1-\alpha}{\operatorname{Re}x^{*}(x)},\epsilon-\frac{ \alpha}{1-\alpha}\right\}\). Since \(x\in S_{X}\) is a \(\Delta\)-point, for a projection \(P(y)=\frac{\operatorname{Re}x^{*}y}{\operatorname{Re}x^{*}x}\cdot x\) we have \(\|I-P\|\geq 2\) by Theorem 2.2. Then there exists \(y^{*}\in S_{X^{*}}\) such that \(\|y^{*}-P^{*}y^{*}\|\geq 2-\eta\). Now define \(z^{*}=\frac{P^{*}y^{*}-y^{*}}{\|P^{*}y^{*}-y^{*}\|}\in S_{X^{*}}\) and \(\alpha_{1}=1-\frac{2-\eta}{\|P^{*}y^{*}-y^{*}\|}\) where \(P^{*}y^{*}=\frac{y^{*}x}{\operatorname{Re}x^{*}x}\cdot\operatorname{Re}x^{*}\). For every \(y\in S(z^{*},\alpha_{1})\) notice that
\[\frac{\operatorname{Re}x^{*}y}{\operatorname{Re}x^{*}x}\cdot\operatorname{Re} y^{*}x-\operatorname{Re}y^{*}y=\operatorname{Re}z^{*}y\cdot\|P^{*}y^{*}-y^{*} \|>2-\eta.\]
Hence we see that \(\frac{\operatorname{Re}x^{*}y}{\operatorname{Re}x^{*}x}\cdot\operatorname{Re} y^{*}x>1-\eta\). Since \(\operatorname{Re}y^{*}x\) cannot be zero, without loss of generality, assume that \(\operatorname{Re}y^{*}x>0\). Then we have \(\operatorname{Re}x^{*}y>(1-\eta)\cdot\operatorname{Re}x^{*}x>1-\alpha\), which shows that \(S(z^{*},\alpha_{1})\subset S(x^{*},\alpha)\). Furthermore, notice that \(\operatorname{Re}x^{*}y\leq|x^{*}y|\leq 1\), and so
\[\Big{\|}\frac{x}{\operatorname{Re}x^{*}x}-y\Big{\|}\geq\frac{\operatorname{ Re}y^{*}x}{\operatorname{Re}x^{*}x}-\operatorname{Re}y^{*}y\geq\frac{ \operatorname{Re}x^{*}y}{\operatorname{Re}x^{*}x}\cdot\operatorname{Re}y^{*}x -\operatorname{Re}y^{*}y>2-\eta.\]
Therefore, we obtain
\[\|x-y\|\geq\Big{\|}\frac{x}{\operatorname{Re}x^{*}x}-y\Big{\|}-\left(\frac{1} {\operatorname{Re}x^{*}x}-1\right)>(2-\eta)-\left(\frac{\alpha}{1-\alpha} \right)>2-\epsilon.\]
Applying Lemma 3.2 again, we can also show the converse; the same proof as in [17, Lemma 2.2] transfers to complex Banach spaces.
**Corollary 3.3**.: _Let \(X\) be a complex Banach space. Then \(x\in S_{X}\) is a \(\Delta\)-point if and only if for every \(\epsilon>0\) and every slice \(S=S(x^{*},\alpha)\) containing \(x\in S\), there exists a slice \(S(z^{*},\alpha_{1})\) such that \(S(z^{*},\alpha_{1})\subset S(x^{*},\alpha)\) and \(\|x-y\|>2-\epsilon\) for all \(y\in S(z^{*},\alpha_{1})\)._
As a consequence, we obtain the relationship between the locally uniformly nonsquare points and the \(\Delta\)-points on both real and complex Banach spaces.
**Proposition 3.4**.: _Let \(X\) be a complex Banach space. A locally uniformly nonsquare point \(x\in S_{X}\) is not a \(\Delta\)-point of \(X\)._
Proof.: We show that a \(\Delta\)-point \(x\in S_{X}\) cannot be a locally uniformly nonsquare point. Let \(\epsilon>0\) and \(\eta\in(0,\frac{\epsilon}{2})\). By Lemma 3.2, for every \(\alpha>0\) where \(\frac{\alpha}{1-\alpha}<\eta\) and every slice \(S=S(x^{*},\alpha)\) containing \(x\), there exists a slice \(S(z^{*},\alpha_{1})\subset S\) such that \(\|x-y\|>2-\eta>2-\epsilon\) for all \(y\in S(z^{*},\alpha_{1})\). In particular, we have \(\frac{\operatorname{Re}z^{*}y}{\|y\|}>\frac{1-\alpha_{1}}{\|y\|}\geq 1-\alpha_{1}\) for every \(y\in S(z^{*},\alpha_{1})\). Hence \(y^{\prime}=\frac{y}{\|y\|}\in S(z^{*},\alpha_{1})\) and \(\|x-y^{\prime}\|>2-\epsilon\) for \(y\in S(z^{*},\alpha_{1})\).
Moreover, by the fact that \(\alpha<\frac{\alpha}{1-\alpha}<\eta\) and \(x,y^{\prime}\in S\), we have \(\|x+y^{\prime}\|\geq\operatorname{Re}x^{*}x+\operatorname{Re}x^{*}y^{\prime}>2-2\alpha>2-2\eta>2-\epsilon\). Thus, for every \(\epsilon>0\) there exists \(y^{\prime}\in S_{X}\) such that \(\min\{\|x+y^{\prime}\|,\|x-y^{\prime}\|\}>2-\epsilon\). This shows that \(x\in S_{X}\) is not a locally uniformly nonsquare point.
**Corollary 3.5**.: _Let \(X\) be a complex Banach space. If \(X\) is LUNSQ, then \(X\) does not admit \(\Delta\)-points. As a consequence, no LUNSQ space has the Daugavet property, the DD2P, or the DLD2P._
A Banach space \(X\) is said to have the _anti-Daugavet property_ for a class of operators \(\mathcal{M}\) if the following equivalence holds:
\[\|I+T\|=1+\|T\|\iff\|T\|\in\sigma(T),\]
where \(\sigma(T)\) is the spectrum of \(T\in\mathcal{M}\). If \(\mathcal{M}=L(X)\), we simply say that the space \(X\) satisfies the anti-Daugavet property. We mention that the implication \(\|T\|\in\sigma(T)\implies\|I+T\|=1+\|T\|\) always holds for any bounded linear operator.
It is well-known that any uniformly rotund or uniformly smooth Banach spaces have the anti-Daugavet property. Moreover, this property is connected to the alternative convexity or smoothness properties that are introduced in [22]:
**Definition 3.6**.:
1. _A Banach space_ \(X\) _is uniformly alternatively convex or smooth (uacs) if for all sequences_ \((x_{n}),(y_{n})\subset S_{X}\) _and_ \((x_{n}^{*})\subset S_{X^{*}}\)_,_ \(\|x_{n}+y_{n}\|\to 2\) _and_ \(x_{n}^{*}(x_{n})\to 1\) _implies_ \(x_{n}^{*}(y_{n})\to 1\)_._
2. _A Banach space_ \(X\) _is strongly locally uniformly alternatively convex or smooth (sluacs) if for every_ \(x\in S_{X}\)_,_ \((x_{n})\subset S_{X}\) _and_ \((x_{n}^{*})\subset S_{X^{*}}\)_,_ \(\|x_{n}+x\|\to 2\) _and_ \(x_{n}^{*}(x_{n})\to 1\) _implies_ \(x_{n}^{*}(x)\to 1\)_._
3. _A Banach space_ \(X\) _is alternatively convex or smooth (acs) if for all_ \(x,y\in S_{X}\) _and_ \(x^{*}\subset S_{X^{*}}\)_,_ \(\|x+y\|=2\) _and_ \(x^{*}(x)=1\) _implies_ \(x^{*}(y)=1\)_._
Any uniformly convex (resp. locally uniformly rotund) Banach space and any uniformly smooth (resp. uniformly Gateaux-smooth, smooth) Banach space is known to be uacs (resp. sluacs, acs) [14].
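For a concrete pair of examples (standard facts, recorded only as an illustration): every Hilbert space is uacs, being both uniformly convex and uniformly smooth, whereas \(\ell_{1}^{2}\) is not even acs. Indeed, take \(x=(1,0)\), \(y=(0,1)\), and \(x^{*}=(1,-1)\in S_{\ell_{\infty}^{2}}=S_{(\ell_{1}^{2})^{*}}\); then

\[\|x+y\|_{1}=2,\qquad x^{*}(x)=1,\qquad x^{*}(y)=-1\neq 1.\]

This is consistent with Proposition 3.8 below, since \(\ell_{1}^{2}\) is not NSQ.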
Even though it is mentioned in [22] that alternative convexity or smoothness for complex Banach spaces can be defined in a similar fashion, recent investigations of this property also assume the scalar field to be \(\mathbb{R}\). Hence, we provide equivalent definitions that involve only the real parts of bounded linear functionals, which enables us to consider complex Banach spaces.
**Proposition 3.7**.:
1. _A Banach space_ \(X\) _is uacs if and only if for all sequence_ \((x_{n}),(y_{n})\subset S_{X}\) _and_ \((x_{n}^{*})\subset S_{X^{*}}\)_,_ \(\|x_{n}+y_{n}\|\to 2\) _and_ \(\mbox{Re}\,x_{n}^{*}x_{n}\to 1\) _implies_ \(\mbox{Re}\,x_{n}^{*}y_{n}\to 1\)_._
2. _A Banach space_ \(X\) _is sluacs if and only if for every_ \(x\in S_{X}\)_,_ \((x_{n})\subset S_{X}\) _and_ \((x_{n}^{*})\subset S_{X^{*}}\)_,_ \(\|x_{n}+x\|\to 2\) _and_ \(\mbox{Re}\,x_{n}^{*}x_{n}\to 1\) _implies_ \(\mbox{Re}\,x_{n}^{*}x\to 1\)_._
3. _A Banach space_ \(X\) _is acs if and only if for all_ \(x,y\in S_{X}\) _and_ \(x^{*}\in S_{X^{*}}\)_,_ \(\|x+y\|=2\) _and_ \(\mbox{Re}\,x^{*}(x)=1\) _implies_ \(\mbox{Re}\,x^{*}(y)=1\)_._
Proof.: We assume that \(X\) is a complex Banach space. Since the proofs for (i) and (ii) are similar, we only prove (i).
First assume that \(X\) is uacs, and let \((x_{n}),(y_{n})\subset S_{X}\) and \((x_{n}^{*})\subset S_{X^{*}}\) satisfy \(\operatorname{Re}x_{n}^{*}x_{n}\to 1\) and \(\|x_{n}+y_{n}\|\to 2\). Then for every \(\epsilon>0\), there exists \(N_{1}\in\mathbb{N}\) such that \(\operatorname{Re}x_{n}^{*}x_{n}>1-\frac{\epsilon}{2}\) for every \(n\geq N_{1}\). We see that
\[1\geq(\mbox{Re}\,x_{n}^{*}x_{n})^{2}+(\mbox{Im}\,x_{n}^{*}x_{n})^{2}=|x_{n}^{*}x_{n}|^{2}>\left(1-\frac{\epsilon}{2}\right)^{2}+(\mbox{Im}\,x_{n}^{*}x_{n})^{2}.\]
Hence \(|\mbox{Im}\,x_{n}^{*}x_{n}|<\sqrt{\epsilon}\), which together with \(\operatorname{Re}x_{n}^{*}x_{n}\to 1\) implies that \(x_{n}^{*}x_{n}\to 1\). Since \(X\) is uacs, we obtain \(x_{n}^{*}y_{n}\to 1\), and in particular \(\operatorname{Re}x_{n}^{*}y_{n}\to 1\). Conversely, assume that the real-part condition in (i) holds, and let \((x_{n}),(y_{n})\subset S_{X}\) and \((x_{n}^{*})\subset S_{X^{*}}\) satisfy \(x_{n}^{*}x_{n}\to 1\) and \(\|x_{n}+y_{n}\|\to 2\). Then \(\operatorname{Re}x_{n}^{*}x_{n}\to 1\), so by the assumption we obtain \(\operatorname{Re}x_{n}^{*}y_{n}\to 1\). Again, for every \(\epsilon>0\), there exists \(N_{2}\in\mathbb{N}\) such that \(\mbox{Re}\,x_{n}^{*}y_{n}>1-\frac{\epsilon}{2}\) for every \(n\geq N_{2}\). This implies that
\[1\geq|x_{n}^{*}y_{n}|^{2}=(\mbox{Re}\,x_{n}^{*}y_{n})^{2}+(\mbox{Im}\,x_{n}^{*}y_{n})^{2}\geq(1-\epsilon)^{2}+(\mbox{Im}\,x_{n}^{*}y_{n})^{2},\]
and so \(\mbox{Im}\,x_{n}^{*}y_{n}\to 0\) as \(n\to\infty\). Therefore, \(x_{n}^{*}y_{n}=\mbox{Re}\,x_{n}^{*}y_{n}+i\mbox{Im}\,x_{n}^{*}y_{n}\to 1\), which shows that \(X\) is uacs.
For (iii), suppose first that the real-part condition holds, and let \(x,y\in S_{X}\) and \(x^{*}\in S_{X^{*}}\) be such that \(\|x+y\|=2\) and \(x^{*}x=\operatorname{Re}x^{*}x=1\). Then \(\operatorname{Re}x^{*}y=1\) by the assumption. Hence, we see that

\[1\geq|x^{*}y|^{2}=(\operatorname{Re}x^{*}y)^{2}+(\operatorname{Im}x^{*}y)^{2}=1+(\operatorname{Im}x^{*}y)^{2}.\]

Therefore, \(\operatorname{Im}x^{*}y=0\) and \(x^{*}y=\operatorname{Re}x^{*}y=1\), so \(X\) is acs. The converse implication follows in the same way, since \(\operatorname{Re}x^{*}x=1\) and \(|x^{*}x|\leq 1\) force \(x^{*}x=1\).
Even though every uacs Banach space is UNSQ [14, 22], there has been no explicit description of the relationship between sluacs and LUNSQ (resp. acs and NSQ) Banach spaces. As a matter of fact, the analogous statements also hold for sluacs and acs Banach spaces.
**Proposition 3.8**.:
1. _Every sluacs space is LUNSQ._
2. _Every acs space is NSQ._
Proof.: (i) Suppose that a sluacs space \(X\) is not LUNSQ. Then there exists \(x\in S_{X}\) such that for every \(\delta>0\), there exists \(y\in S_{X}\) such that \(\|x+y\|>2-\delta\) and \(\|x-y\|>2-\delta\). So choose a sequence \((x_{n})_{n=1}^{\infty}\subset S_{X}\) such that \(\|x+x_{n}\|>2-\frac{1}{2^{n}}\) and \(\|x-x_{n}\|>2-\frac{1}{2^{n}}\). In view of the Hahn-Banach theorem, we can also find a sequence \((x_{n}^{*})_{n=1}^{\infty}\subset S_{X^{*}}\) such that

\[\operatorname{Re}x_{n}^{*}x_{n}-\operatorname{Re}x_{n}^{*}x=\operatorname{Re}x_{n}^{*}(x_{n}-x)=\|x-x_{n}\|.\]

Then we see that

\[2-\frac{1}{2^{n}}<\operatorname{Re}x_{n}^{*}x_{n}-\operatorname{Re}x_{n}^{*}x\leq\operatorname{Re}x_{n}^{*}x_{n}+\|x\|=1+\operatorname{Re}x_{n}^{*}x_{n},\]

and so \(\operatorname{Re}x_{n}^{*}x_{n}\to 1\) as \(n\to\infty\). Moreover, \(\operatorname{Re}x_{n}^{*}x=\operatorname{Re}x_{n}^{*}x_{n}-\|x-x_{n}\|<1-\left(2-\frac{1}{2^{n}}\right)\), so \(\operatorname{Re}x_{n}^{*}x\to-1\). On the other hand, since \(\|x+x_{n}\|\to 2\), \(\operatorname{Re}x_{n}^{*}x_{n}\to 1\), and the space \(X\) is assumed to be sluacs, we obtain \(\operatorname{Re}x_{n}^{*}x\to 1\). This leads to a contradiction.
(ii) Suppose that an acs space \(X\) is not nonsquare. Then there exist \(x,y\in S_{X}\) such that \(\|x+y\|=\|x-y\|=2\). Let \(x^{*}\in S_{X^{*}}\) such that \(\operatorname{Re}x^{*}(x)+\operatorname{Re}x^{*}(y)=\|x+y\|\). From the fact that
\[2=\operatorname{Re}x^{*}(x)+\operatorname{Re}x^{*}(y)\leq\operatorname{Re}x^{ *}(x)+\|y\|=\operatorname{Re}x^{*}(x)+1,\]
we have \(\operatorname{Re}x^{*}x=1\). This also shows that \(\operatorname{Re}x^{*}y=1\). However, since \(\|x-y\|=2\) and the space \(X\) is acs, we have \(-\operatorname{Re}x^{*}x=1\), which is a contradiction. Therefore, the space \(X\) must be nonsquare.
We mention that locally uniformly rotund (LUR) Banach spaces do not have \(\Delta\)-points. As a matter of fact, based on our observations we can show further that every sluacs Banach space does not have \(\Delta\)-points.
**Corollary 3.9**.: _Every sluacs Banach space does not contain \(\Delta\)-points._
Proof.: Every sluacs Banach space is LUNSQ by Proposition 3.8.(i). Then by Proposition 3.4, we see that the space does not contain \(\Delta\)-points.
There has been a long-standing open problem of whether a Banach space with the Daugavet property can be rotund. While there is a rotund normed space (not complete) with the Daugavet property [20], the existence has not been verified for Banach spaces. Since every rotund Banach space is nonsquare, it would be interesting to know the answer to the following question, which may help to prove or disprove the open problem.
**Problem 3.10**.: _Does a NSQ Banach space \(X\) contain \(\Delta\)-points?_
## 4. Remarks on the Daugavet property of \(A(K,X)\)
Let \(K\) be a compact Hausdorff space. The space \(C(K)\) is the set of all complex-valued continuous functions over \(K\) endowed with the supremum norm \(\|\cdot\|_{\infty}\). A _uniform algebra_\(A\) is a closed subalgebra of \(C(K)\) that separates points and contains constant functions. For a compact subset \(K\subset\mathbb{C}\), the space \(P(K)\) (resp. \(R(K)\)) of continuous functions that can be approximated uniformly on \(K\) by polynomials in \(z\) (resp. by rational functions with poles off \(K\)) and the space \(A(K)\) of continuous functions that are analytic on the interior of \(K\) are well-known examples of uniform algebras. When \(K=\overline{\mathbb{D}}\), the corresponding uniform algebra \(A(K)=A(\overline{\mathbb{D}})\) is the disk algebra. We refer to [9, 28] for more details on uniform algebras.
For a complex Banach space \(X\), let \(C(K,X)\) be the set of all vector-valued continuous functions over \(K\) equipped with the supremum norm. We recall the definition of the vector-valued function space \(A(K,X)\).
**Definition 4.1**.: _Let \(K\) be a compact Hausdorff space and \(X\) be a Banach space. The space \(A(K,X)\) is called a function space over the base algebra \(A\) if it is a subspace of \(C(K,X)\) that satisfies:_
1. _The base algebra_ \(A:=\{x^{*}\circ f:x^{*}\in X^{*},f\in A(K,X)\}\) _is a uniform algebra._
2. \(A\otimes X\subset A(K,X)\)_._
3. _For every_ \(g\in A\) _and every_ \(f\in A(K,X)\)_, we have_ \(g\cdot f\in A(K,X)\)_._
If \(X=\mathbb{F}\), then the space \(A(K,X)\) becomes the uniform algebra \(A\) on a compact Hausdorff space \(K\). It is clear that \(C(K,X)\) is a function space over a base algebra \(C(K)\). As a nontrivial example, for given Banach spaces \(X\) and \(Y\), let \(A_{w^{*}}(B_{X^{*}},Y)\) be the space of all weak\({}^{*}\)-to-norm continuous functions on the closed unit ball \(B_{X^{*}}\) that are holomorphic on the interior of \(B_{X^{*}}\). It is a closed subspace of \(C(B_{X^{*}};Y)\), where \(B_{X^{*}}\) is equipped with the weak\({}^{*}\) topology, in which it is compact. Then \(A_{w^{*}}(B_{X^{*}};Y)\) is a function space over the base algebra \(A_{w^{*}}(B_{X^{*}})\).
A subset \(L\subset K\) is said to be a _boundary_ for \(A\) if for every \(f\in A\) there exists \(t\in L\) such that \(|f(t)|=\|f\|_{\infty}\). The smallest closed boundary for \(A\) is called the _Shilov boundary_ and is denoted by \(\Gamma\). A point \(t_{0}\in K\) is a _strong boundary point_ for a uniform algebra \(A\) if for every open subset \(U\subset K\) containing \(t_{0}\), there exists \(f\in A\) such that \(\|f\|_{\infty}=|f(t_{0})|=1\) and \(\sup_{t\in K\setminus U}|f(t)|<1\). For a compact Hausdorff space \(K\), the set of all strong boundary points of \(A\) coincides with the Choquet boundary \(\Gamma_{0}\), that is, the set of points whose evaluation functionals are extreme points of the set \(K_{A}=\{\lambda\in A^{*}:\|\lambda\|=\lambda(1_{A})=1\}\) [9, Theorem 4.3.5]. Moreover, the closure of \(\Gamma_{0}\) is \(\Gamma\) in this case [9, Corollary 4.3.7.a]. For instance, the Shilov boundary of the disk algebra \(A(\overline{\mathbb{D}})\) is the unit circle \(\partial\overline{\mathbb{D}}\).
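The disk algebra case can be verified directly (a standard computation, recorded only for the reader's convenience): by the maximum modulus principle every \(f\in A(\overline{\mathbb{D}})\) attains \(\|f\|_{\infty}\) on the circle, so \(\partial\overline{\mathbb{D}}\) is a closed boundary; moreover every \(z_{0}\in\partial\overline{\mathbb{D}}\) is a strong boundary point, as witnessed by

\[\phi(z)=\tfrac{1}{2}\left(1+\overline{z_{0}}z\right),\]

which satisfies \(\phi(z_{0})=1=\|\phi\|_{\infty}\) and \(|\phi(z)|<1\) for every \(z\in\overline{\mathbb{D}}\setminus\{z_{0}\}\). Consequently no proper closed subset of the circle can be a boundary, and \(\Gamma=\partial\overline{\mathbb{D}}\).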
To study various geometric properties of \(A(K,X)\) and the Bishop-Phelps-Bollobás property for Asplund operators whose range space is a uniform algebra, a Urysohn-type lemma has played an important role. Here we use a stronger version of the lemma provided in [5].
**Lemma 4.2**.: _[_6_, Lemma 3.10]_ _Let \(K\) be a compact Hausdorff space. If \(t_{0}\) is a strong boundary point for a uniform algebra \(A\subset C(K)\), then for every open subset \(U\subset K\) containing \(t_{0}\) and \(\epsilon>0\), there exists \(\phi=\phi_{U}\in A\) such that \(\phi(t_{0})=\|\phi\|_{\infty}=1\), \(\sup_{K\setminus U}|\phi(t)|<\epsilon\) and_
\[|\phi(t)|+(1-\epsilon)|1-\phi(t)|\leq 1\]
_for every \(t\in K\)._
We can also construct a Urysohn-type function at an isolated point in the Shilov boundary.
**Lemma 4.3**.: _[_27_, Lemma 2.5]_ _Let \(A\) be a uniform algebra on a compact Hausdorff space \(K\) and let \(t_{0}\) be an isolated point of the Shilov boundary \(\Gamma\) of \(A\). Then there exists a function \(\phi\in A\) such that \(\phi(t_{0})=\|\phi\|=1\) and \(\phi(t)=0\) for \(t\in\Gamma\setminus\{t_{0}\}\)._
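As a simple illustration (the general case is the content of the lemma): if \(A=C(K)\), whose Shilov boundary is all of \(K\), and \(t_{0}\) is an isolated point of \(K\), then \(\{t_{0}\}\) is clopen and the indicator function \(\phi=\mathbb{1}_{\{t_{0}\}}\) is continuous, with \(\phi(t_{0})=\|\phi\|_{\infty}=1\) and \(\phi\equiv 0\) on \(K\setminus\{t_{0}\}\). For a general uniform algebra such a \(\phi\) need not be available in this explicit form, which is why Lemma 4.3 is needed.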
The next statement is in the proof of the case (iii) for [27, Theorem 4.2], but we state it explicitly here.
**Lemma 4.4**.: _Let \(K\) be a compact Hausdorff space and \(\Gamma\) be the Shilov boundary of the base algebra for the space \(A(K,X)\). Suppose that \(\Gamma\) has an isolated point \(t_{0}\). Then \(A(K,X)\) is isometrically isomorphic to \(X\oplus_{\infty}Y\), where \(Y\) is the restriction of \(A(K,X)\) to \(K\setminus\{t_{0}\}\)._
Proof.: Let \(t_{0}\in\Gamma\) be an isolated point. In view of Lemma 4.5 below, we may assume that \(K=\Gamma\). By Lemma 4.3, there exists \(\phi\in A\) such that \(\phi(t_{0})=\|\phi\|_{\infty}=1\) and \(\phi(t)=0\) for \(t\in\Gamma\setminus\{t_{0}\}=K\setminus\{t_{0}\}\). Let \(\tilde{K}=K\setminus\{t_{0}\}\). Define a norm-one projection \(P:A(K,X)\to A(K,X)\) by \(Pf=\phi\cdot f\) and denote by \(Y\) the restriction of \(A(K,X)\) to \(\tilde{K}\). As a matter of fact, the image \(P(A(K,X))\) is isometrically isomorphic to \(X\). Indeed, define a linear operator \(\Psi:P(A(K,X))\to X\) by \(\Psi(Pf)=f(t_{0})\). Then \(\|\Psi(Pf)\|_{X}=\|f(t_{0})\|_{X}=\|Pf\|\). Moreover, we see that for every \(x\in X\) there exists \(f\in A(K,X)\) such that \(f(t_{0})=x\). Hence, \(\Psi\) is surjective, which in turn implies that the operator \(\Psi\) is an isometric isomorphism on \(P(A(K,X))\).
Now, we claim that the space \(A(K,X)\) is isometrically isomorphic to \(X\oplus_{\infty}Y\). For \(f\in A(K,X)\), define a bounded linear operator \(\Phi:A(K,X)\to X\oplus_{\infty}Y\) by \(\Phi f=(Pf,f_{|\tilde{K}})\). Then we see that
\[\|\Phi f\|=\max\{\|Pf\|,\|f_{|\tilde{K}}\|\}=\max\left\{\|f(t_{0})\|_{X},\sup_ {t\in\tilde{K}}\|f(t)\|_{X}\right\}=\|f\|.\]
Notice that for a given \((f,g)\in X\oplus_{\infty}Y\), there exist \(f_{1},f_{2}\in A(K,X)\) such that \(f=Pf_{1}\) and \(g=f_{2|\tilde{K}}\). Let \(h=Pf_{1}+f_{2}-Pf_{2}\in A(K,X)\). Then we have \(\Phi(h)=(Pf_{1},f_{2|\tilde{K}})\). Hence, the operator \(\Phi\) is also surjective, and so it is an isometric isomorphism between \(A(K,X)\) and \(X\oplus_{\infty}Y\).
The following lemma will be useful later.
**Lemma 4.5**.: _Let \(X\) be a Banach space. Suppose that \(L\) is a closed boundary for \(A\). The space of restrictions of elements of \(A(K,X)\) to \(L\) is denoted by \(A(L,X)\) and the restrictions of elements of \(A\) to \(L\) is denoted by \(A(L)\). Then \(A(L,X)\) is isometrically isomorphic to \(A(K,X)\)._
### The polynomial Daugavet property in \(A(K,X)\)
First we provide a sufficient condition for \(A(K,X)\) to have the polynomial Daugavet property. We mention that the proof method is inspired by [8, Theorem 2.7].
**Theorem 4.6**.: _Let \(K\) be a compact Hausdorff space and let \(\Gamma\) be the Shilov boundary of the base algebra \(A\) of \(A(K,X)\). If \(\Gamma\) does not have isolated points, then \(A(K,X)\) has the polynomial Daugavet property._
Proof.: In view of [7, Corollary 2.2], it suffices to show that for every \(p\in\mathcal{P}(X)\) with \(\|p\|=1\), every \(x_{0}\in S_{X}\), and every \(\epsilon>0\), there exist \(\alpha\in S_{\mathbb{C}}\) and \(y\in B_{X}\) such that
\[\operatorname{Re}\alpha p(y)>1-\epsilon\;\;\text{and}\;\;\|x_{0}+\alpha y\|>2-\epsilon.\]
Let \(0<\epsilon<1\), let \(P\in\mathcal{P}(A(K,X))\) with \(\|P\|=1\), and let \(f_{0}\in S_{A(K,X)}\). Choose \(h\in S_{A(K,X)}\) and \(\alpha\in\mathbb{T}\) such that \(|P(h)|>1-\frac{\epsilon}{2}\) and \(\operatorname{Re}\alpha P(h)>1-\frac{\epsilon}{2}\). Also choose \(t_{0}\in\Gamma_{0}\) such that \(\|f_{0}(t_{0})\|_{X}>1-\frac{\epsilon}{8}\). Let \(U=\{t\in K:\|f_{0}(t)-f_{0}(t_{0})\|_{X}<\frac{\epsilon}{8}\;\;\text{and}\;\;\|h(t)-h(t_{0})\|_{X}<\frac{\epsilon}{8}\}\) be a nonempty open subset of \(K\). We consider two cases.
Case 1: Suppose that there exists \((t_{i})_{i=1}^{\infty}\subset U\) such that \(\|\alpha^{-1}f_{0}(t_{i})-h(t_{i})\|\to 0\). Then we have
\[\|f_{0}+\alpha h\| \geq \sup_{i}\|f_{0}(t_{i})+\alpha h(t_{i})\|\] \[\geq \sup_{i}\left(2\|f_{0}(t_{0})\|_{X}-2\|f_{0}(t_{0})-f_{0}(t_{i})\|_{X}-\|f_{0}(t_{i})-\alpha h(t_{i})\|_{X}\right)\] \[\geq 2-\frac{\epsilon}{4}-\frac{\epsilon}{4}-\frac{\epsilon}{4}>2-\epsilon.\]
Case 2: Now suppose that there exists \(\eta>0\) such that \(\|\alpha^{-1}f_{0}(t)-h(t)\|>\eta\) for every \(t\in U\). Since \(\Gamma\) is perfect, we see that the strong boundary point \(t_{0}\in U\) is not an isolated point. Let \(\{U_{i}\}_{i=1}^{\infty}\) be a collection of pairwise disjoint open subsets of \(U\) such that \(\cup_{i=1}^{\infty}U_{i}\subset U\). From the fact that the Choquet boundary \(\Gamma_{0}\) is dense in \(\Gamma\), there exist strong boundary points \(t_{i}\in U_{i}\) for each \(i\in\mathbb{N}\). Then by Lemma 4.2, there exists \(\phi_{i}\in A\) such that
\[\phi_{i}(t_{i})=1,\ \,\sup_{K\setminus U_{i}}|\phi_{i}(t)|<\frac{\epsilon}{2^ {i+3}},\ \ \mbox{and}\ \ |\phi_{i}(t)|+\left(1-\frac{\epsilon}{2^{i+3}}\right)|1-\phi_{i}(t)|\leq 1 \ \mbox{for every}\ \ t\in K. \tag{2}\]
Let \(h_{i}=h+\phi_{i}(\alpha^{-1}f_{0}(t_{i})-h(t_{i}))\in A(K,X)\). Then for every \(t\in\cup_{i=1}^{\infty}U_{i}\), by (2), we have
\[\|h_{i}(t)\|_{X} = \|h(t)+\phi_{i}(t)\alpha^{-1}f_{0}(t_{i})-\phi_{i}(t)h(t_{i})\|_{X}\] \[\leq \|h(t)-h(t_{i})\|_{X}+\|h(t_{i})-\phi_{i}(t)h(t_{i})\|_{X}+\|\phi_ {i}(t)\alpha^{-1}f_{0}(t_{i})\|_{X}\] \[\leq \|h(t)-h(t_{0})\|_{X}+\|h(t_{0})-h(t_{i})\|_{X}+|1-\phi_{i}(t)|+| \phi_{i}(t)|\] \[\leq \frac{\epsilon}{4}+\left(1-\frac{\epsilon}{2^{i+3}}\right)|1- \phi(t)|+\frac{\epsilon}{2^{i+3}}|1-\phi_{i}(t)|+|\phi_{i}(t)|\] \[\leq \frac{\epsilon}{4}+1+\frac{\epsilon}{2^{i+2}}<1+\frac{\epsilon}{ 2}.\]
On the other hand, for every \(t\in K\setminus\cup_{i=1}^{\infty}U_{i}\),
\[\|h_{i}(t)\|_{X}\leq\|h(t)\|_{X}+|\phi_{i}(t)|\|\alpha^{-1}f_{0}(t_{i})-h(t_{i })\|_{X}\leq 1+\frac{\epsilon}{2^{i+3}}\cdot 2<1+\frac{\epsilon}{2}. \tag{3}\]
Moreover, we see that
\[\|h_{i}\|\geq\|h_{i}(t_{i})\|_{X}=\|f_{0}(t_{i})\|_{X}\geq\|f_{0}(t_{0})\|-\|f_ {0}(t_{0})-f_{0}(t_{i})\|\geq 1-\frac{\epsilon}{2}. \tag{4}\]
Now, let \(g_{i}=\frac{h_{i}}{\|h_{i}\|}\). By (4) we obtain
\[\|h_{i}-g_{i}\|=\left|1-\|h_{i}\|\right|<\frac{\epsilon}{2}.\]
For every \((\beta_{i})\in\ell_{\infty}\), notice that
\[\sup_{n}\left\|\sum_{i=1}^{n}\beta_{i}\phi_{i}(\alpha^{-1}f_{0}( t_{i})-h(t_{i}))\right\| \leq \sup_{n}\sup_{t\in K}\sum_{i=1}^{n}|\beta_{i}||\phi_{i}(t)|\| \alpha^{-1}f_{0}(t_{i})-h(t_{i})\|_{X}\] \[\leq \sup_{n}\sup_{t\in K}\sum_{i=1}^{n}2|\beta_{i}||\phi_{i}(t)|\] \[\leq 2\sup_{i}|\beta_{i}|\left(1+\frac{\epsilon}{2^{4}}+\frac{ \epsilon}{2^{5}}\cdots\right)=2\left(1+\frac{\epsilon}{8}\right)\sup_{i}|\beta _{i}|\]
Hence by [10, Theorem V.6], the series \(\sum_{i=1}^{\infty}\beta_{i}\phi_{i}(\alpha^{-1}f_{0}(t_{i})-h(t_{i}))\) is weakly unconditionally Cauchy. Since we assumed that \(\|\alpha^{-1}f_{0}(t)-h(t)\|_{X}>\eta\) for every \(t\in U\), there exists a basic subsequence \(\left(\phi_{\sigma(i)}(\alpha^{-1}f_{0}(t_{\sigma(i)})-h(t_{\sigma(i)}))\right)_{i=1}^{\infty}\) that is equivalent to the canonical basis \((e_{i})\) of \(c_{0}\) by the Bessaga-Pełczyński selection principle [10, pg. 45]. From the fact that a polynomial on a bounded subset of \(c_{0}\) is weakly continuous [12, Proposition 1.59], we have \(\operatorname{Re}\alpha P(h_{\sigma(i)})\to\operatorname{Re}\alpha P(h)\) as \(i\to\infty\).
Choose \(k\in\mathbb{N}\) such that \(\operatorname{Re}\alpha P(h_{k})>1-\frac{\epsilon}{2}\). Then we have
\[\operatorname{Re}\alpha P(g_{k})=\frac{\operatorname{Re}\alpha P(h_{k})}{\|h_ {k}\|}>\frac{1-\epsilon/2}{1+\epsilon/2}\geq 1-\epsilon.\]
Therefore, we finally obtain
\[\|f_{0}+\alpha g_{k}\|\geq\|f_{0}+\alpha h_{k}\|-\|g_{k}-h_{k}\| \geq \|f_{0}(t_{k})+\alpha h_{k}(t_{k})\|-\frac{\epsilon}{2}\] \[= 2\|f_{0}(t_{k})\|-\frac{\epsilon}{2}\] \[\geq 2\|f_{0}(t_{0})\|-2\|f_{0}(t_{k})-f_{0}(t_{0})\|-\frac{\epsilon}{2}\] \[\geq 2\|f_{0}(t_{0})\|-\frac{3\epsilon}{4}\] \[\geq 2-\epsilon.\]
Let \(\mathcal{P}_{K}(X,X)\) be the set of all compact polynomials from \(X\) to itself. For \(P\in\mathcal{P}_{K}(X,X)\), the numerical range \(V(P)\) is defined by

\[V(P)=\{x^{*}(Px):x^{*}\in S_{X^{*}}\,\,\,\text{and}\,\,\,x\in S_{X}\,\,\,\text{with}\,\,\,x^{*}(x)=1\}.\]
Now, we recall the polynomial Daugavetian index.
**Definition 4.7**.: _[_35_]_ _For a Banach space \(X\), the polynomial Daugavetian index \(\text{Daug}_{p}\,(X)\) is defined by_

\[\text{Daug}_{p}\,(X) = \max\{m\geq 0:\|I+P\|\geq 1+m\|P\|,\,\,\,\text{for every}\,\,\,P\in\mathcal{P}_{K}(X,X)\}\] \[= \inf\{\omega(P):P\in\mathcal{P}_{K}(X,X),\|P\|=1\},\]

_where \(\omega(P)=\sup\text{Re}\,V(P)\)._
It is well-known that \(\text{Daug}_{p}\,(X)\in[0,1]\) and \(\text{Daug}_{p}(X)\leq\text{Daug}(X)\), where \(\text{Daug}(X)\) is the Daugavetian index introduced in [29]. A Banach space \(X\) has the polynomial Daugavet property if and only if \(\text{Daug}_{p}(X)=1\). This comes from the fact that a Banach space \(X\) satisfies the Daugavet equation for every rank-one polynomial if and only if \(X\) satisfies the same equation for every weakly compact polynomial (see Theorem 1.5). We recall the following lemma that will be useful later.
**Lemma 4.8**.: _[_35_, Proposition 2.2, 2.3]_ _Let \(\{X_{\lambda}\}_{\lambda\in\Lambda}\) be a family of infinite-dimensional Banach spaces and let \(Z\) be the \(c_{0}\)- or \(\ell_{\infty}\)-sum of the family. Then_
\[\text{Daug}_{p}\,(Z)=\inf\{\text{Daug}_{p}\,(X_{\lambda}):\lambda\in\Lambda\}.\]
If there exists a finite-rank projection \(P\) on \(X\) such that \(\|P\|=\|I-P\|=1\), then \(\text{Daug}\,(X)=0\) [29]. Hence \(\text{Daug}_{p}\,(X)=0\) in this case. Examples of such spaces are \(C(K)\) where \(K\) has isolated points and Banach spaces \(X\) with a \(1\)-unconditional basis [29, pp. 635]. Similarly to the space \(C(K)\), we can also construct such a projection for uniform algebras.
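For instance (a routine verification, not part of the original argument): in \(c_{0}\), the coordinate projection \(P(x)=x_{1}e_{1}\) is a rank-one projection with \(\|P\|=1\), and

\[(I-P)x=(0,x_{2},x_{3},\ldots),\qquad\|(I-P)x\|_{\infty}\leq\|x\|_{\infty},\]

with equality attained at \(x=e_{2}\), so \(\|I-P\|=1\). Hence \(\text{Daug}(c_{0})=0\) and, a fortiori, \(\text{Daug}_{p}(c_{0})=0\).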
**Proposition 4.9**.: _Let \(A\) be a uniform algebra on a compact Hausdorff space \(K\) and let \(t_{0}\) be an isolated point of the Shilov boundary \(\Gamma\) of \(A\). Then, there exists \(P:A\to A\) defined by \(P=\delta_{t_{0}}\otimes\phi\), where \(\delta_{t_{0}}\in S_{A^{*}}\) and \(\phi\in S_{A}\), such that \(P\) is a projection and \(\|P\|=\|I-P\|=1\)._
Proof.: Let \(t_{0}\in K\) be an isolated point of \(\Gamma\). Then, by Lemma 4.3, there exists a function \(\phi\in A\) such that \(\phi(t_{0})=\|\phi\|=1\) and \(\phi(t)=0\) for \(t\in\Gamma\setminus\{t_{0}\}\). Now, define \(P=\delta_{t_{0}}\otimes\phi\) where \(\delta_{t_{0}}\) is the pointwise evaluation at \(t_{0}\). Since \(P^{2}f=f(t_{0})\phi(t_{0})\cdot\phi=f(t_{0})\phi=Pf\), the rank-one operator \(P\) is a projection on \(A\).
Let \(f\in S_{A}\). Then \(|Pf(t)|=|f(t_{0})||\phi(t)|\leq|f(t_{0})|\) for every \(t\in K\), and \(|Pf(t_{0})|=|f(t_{0})|\). Hence, \(\|Pf\|_{\infty}=|f(t_{0})|\leq 1\). However, we see that \(\|P\phi\|_{\infty}=1\). So \(\|P\|=1\). Similarly, \(|[(I-P)f](t)|=|f(t)-f(t_{0})\phi(t)|=|f(t)|\) for \(t\in\Gamma\setminus\{t_{0}\}\) and \([(I-P)f](t_{0})=0\); since \((I-P)f\in A\) attains its norm on the boundary \(\Gamma\), it follows that \(\|(I-P)f\|_{\infty}\leq 1\), and so \(\|I-P\|\leq 1\). Let \(U\) be an open set in \(K\) that contains some \(t_{1}\in\Gamma_{0}\setminus\{t_{0}\}\). Then there exists \(\tilde{\phi}\in A\) such that \(\tilde{\phi}(t_{1})=1\) and \(\sup_{K\setminus U}|\tilde{\phi}|<1\). We see that \(\|(I-P)\tilde{\phi}\|_{\infty}\geq|[(I-P)\tilde{\phi}](t_{1})|=|\tilde{\phi}(t_{1})|=1\). Therefore, we obtain \(\|I-P\|=1\).
**Corollary 4.10**.: _Let \(K\) be a compact Hausdorff space and let \(A\) be a uniform algebra on \(K\). If the Shilov boundary of \(A\) contains an isolated point, then \(\text{Daug}_{p}(A)=0\)._
Proof.: This is an immediate consequence of Proposition 4.9.
**Theorem 4.11**.: _Let \(X\) be a complex Banach space and let \(K\) be a compact Hausdorff space. Then_
\[\text{Daug}_{p}\,(A(K,X))=\max\{\text{Daug}_{p}(A),\text{Daug}_{p}(X)\}.\]
Proof.: Let \(P\in\mathcal{P}_{K}(A(K,X))\). We first show that
\[\|I+P\|\geq 1+\text{Daug}_{p}(X)\|P\|.\]
For a given \(\epsilon>0\) there exists \(f_{0}\in S_{A(K,X)}\) and \(t_{0}\in\Gamma_{0}\) such that \(\|P(f_{0})(t_{0})\|_{X}\geq\|P\|-\frac{\epsilon}{2}\). Since \(P\) is continuous at \(f_{0}\), there exists \(\delta>0\) such that
\[\text{If }\,\|f_{0}-g\|<\delta,\,\,\,\text{then }\,\|P(f_{0})-P(g)\|\leq \frac{\epsilon}{2}. \tag{5}\]
Now, consider \(U=\{t\in K:\|f_{0}(t)-f_{0}(t_{0})\|_{X}<\frac{\delta}{4}\}\). Since the set \(U\) is a nonempty open subset of \(K\) that contains the strong boundary point \(t_{0}\), by Lemma 4.2, there exists \(\phi\in A\) such that
\[\phi(t_{0})=1,\,\,\sup_{K\setminus U}|\phi(t)|<\frac{\delta}{8},\,\,\,\text{ and }\,\,|\phi(t)|+\left(1-\frac{\delta}{8}\right)|1-\phi(t)|\leq 1\,\,\,\text{for every }\,\,t\in K. \tag{6}\]
Fix \(x_{0}\in S_{X}\) such that \(f_{0}(t_{0})=\|f_{0}(t_{0})\|\cdot x_{0}\) and define \(\Psi:\mathbb{C}\to A(K,X)\) by
\[\Psi(z)=\left(1-\frac{\delta}{8}\right)(1-\phi)f_{0}+\phi\cdot x_{0}\cdot z.\]
Then we have
\[\Psi(\|f_{0}(t_{0})\|_{X})(t)-f_{0}(t) = \left(1-\frac{\delta}{8}\right)(1-\phi(t))f_{0}(t)+\phi(t)f_{0}(t_{0})-f_{0}(t)\] \[= \phi(t)(f_{0}(t_{0})-f_{0}(t))-\frac{\delta(1-\phi(t))f_{0}(t)}{8}.\]
In view of (6), notice that
\[\left\|\phi(t)(f_{0}(t_{0})-f_{0}(t))-\frac{\delta(1-\phi(t))f_{0}(t)}{8}\right\|_{X}\leq\|\phi\|_{\infty}\cdot\|f_{0}(t)-f_{0}(t_{0})\|_{X}+\frac{\delta}{4}<\frac{\delta}{2}\]
for every \(t\in U\) and that
\[\left\|\phi(t)(f_{0}(t_{0})-f_{0}(t))-\frac{\delta(1-\phi(t))f_{0}(t)}{8}\right\|_{X}<2\cdot\frac{\delta}{4}=\frac{\delta}{2}\]
for every \(t\in K\setminus U\). Hence we can see that \(\|\Psi(\|f_{0}(t_{0})\|_{X})-f_{0}\|<\delta\), and so
\[\|P(\Psi(\|f_{0}(t_{0})\|_{X}))(t_{0})-P(f_{0})(t_{0})\|_{X}\leq\|P(\Psi(\|f_{0} (t_{0})\|_{X}))-P(f_{0})\|<\frac{\epsilon}{2}\]
by (5). This implies that
\[\|P(\Psi(\|f_{0}(t_{0})\|_{X}))(t_{0})\|_{X}>\|P(f_{0})(t_{0})\|_{X}-\frac{ \epsilon}{2}>\|P\|-2\cdot\frac{\epsilon}{2}=\|P\|-\epsilon.\]
In view of Hahn-Banach theorem, there exists \(x_{0}^{*}\in S_{X^{*}}\) such that
\[x_{0}^{*}\left(P(\Psi(\|f_{0}(t_{0})\|_{X}))(t_{0})\right)=\|P(\Psi(\|f_{0}(t_ {0})\|_{X}))(t_{0})\|_{X}>\|P\|-\epsilon.\]
Notice that the function \(f(z)=x_{0}^{*}\left(P(\Psi(z))(t_{0})\right)\) is holomorphic. Hence, by the maximum modulus theorem, there exists \(z_{0}\in\mathbb{T}\) such that
\[\|P(\Psi(z_{0}))(t_{0})\|_{X}\geq x_{0}^{*}\left(P(\Psi(\|f_{0}(t_{0})\|_{X})) (t_{0})\right)>\|P\|-\epsilon.\]
Take \(x_{1}=z_{0}x_{0}\in S_{X}\) and let \(x_{1}^{*}\in S_{X^{*}}\) such that \(x_{1}^{*}x_{1}=1\). Define a function \(\Phi:X\to A(K,X)\) by
\[\Phi(x)=x_{1}^{*}x\left(1-\frac{\delta}{8}\right)(1-\phi)f_{0}+\phi\cdot x.\]
We see that \(\|\Phi(x)\|\leq 1\) for every \(x\in B_{X}\) from (6). In particular, \(\Phi(x_{1})=\Psi(z_{0})\). Hence \(\|P(\Phi(x_{1}))(t_{0})\|>\|P\|-\epsilon\). Consider \(Q\in\mathcal{P}_{K}(X)\) defined by \(Q(x)=P(\Phi(x))(t_{0})\). Notice that
\[\|Q\|\geq\|Qx_{1}\|_{X}=\|(P(\Phi(x_{1}))(t_{0})\|_{X}>\|P\|-\epsilon.\]
This implies that \(\|I+Q\|\geq 1+\text{Daug}_{p}(X)\|Q\|>1+\text{Daug}_{p}(X)(\|P\|-\epsilon)\). Now choose \(x_{2}\in B_{X}\) such that \(\|x_{2}+Qx_{2}\|>1+\text{Daug}_{p}(X)(\|P\|-\epsilon)\) and let \(g=\Phi(x_{2})\). Then we obtain
\[\|I+P\|\geq\|g+Pg\| \geq \|g(t_{0})+P(g)(t_{0})\|_{X}\] \[= \left\|x_{1}^{*}x_{2}\left(1-\frac{\delta}{8}\right)(1-\phi(t_{0}))f_{0}(t_{0})+\phi(t_{0})x_{2}+Q(x_{2})\right\|_{X}\] \[= \|x_{2}+Q(x_{2})\|_{X}>1+\text{Daug}_{p}(X)(\|P\|-\epsilon).\]
As \(\epsilon\to 0\), we have \(\|I+P\|\geq 1+\text{Daug}_{p}(X)\|P\|\). This consequently shows that \(\text{Daug}_{p}(A(K,X))\geq\text{Daug}_{p}(X)\).
If \(\Gamma\) does not have isolated points, then \(A(K,X)\) has the polynomial Daugavet property by Theorem 4.6. This implies that
\[\text{Daug}_{p}(A(K,X))=\text{Daug}_{p}(A)=1,\]
and so we have \(\text{Daug}_{p}(A(K,X))=\max\{\text{Daug}_{p}(A),\text{Daug}_{p}(X)\}\).
If \(\Gamma\) has an isolated point, then \(A(K,X)\) is isometrically isomorphic to \(X\oplus_{\infty}Y\) by Lemma 4.4 and \(\text{Daug}_{p}(A)=0\) by Corollary 4.10. From Lemma 4.8, we see that \(\text{Daug}_{p}(A(K,X))\leq\text{Daug}_{p}(X)\). Therefore, we also obtain \(\text{Daug}_{p}(A(K,X))=\max\{\text{Daug}_{p}(A),\text{Daug}_{p}(X)\}\).
**Corollary 4.12**.: _Let \(K\) be a compact Hausdorff space. Then the space \(A(K,X)\) has the polynomial Daugavet property if and only if either the base algebra \(A\) or \(X\) has the polynomial Daugavet property._
### Remarks on the property \((\mathcal{D})\) and the convex diametral local diameter two property in \(A(K,X)\)
Since the equivalence between the property \((\mathcal{D})\) and the DLD2P is not clear, it is natural to explore various Banach spaces that could potentially distinguish these properties. However, we show that \(A(K,X)\) does not provide such an example. Under the additional assumption that \(X\) is uniformly convex, the space \(A(K,X)\) has the Daugavet property if and only if the Shilov boundary of the base algebra does not have isolated points [27, Theorem 5.6]. Moreover, the Daugavet property of \(A(K,X)\) is equivalent to all diametral D2Ps under the same assumption. In fact, carefully inspecting the proof of [27, Theorem 5.4], we see that the rank-one projection constructed there has norm one. With the aid of our previous observations, we can see that the DLD2P is also equivalent to the property \((\mathcal{D})\) for \(A(K,X)\).
**Proposition 4.13**.: _[_27_, Theorem 5.4]_ _Let \(X\) be a uniformly convex Banach space, \(K\) be a compact Hausdorff space, \(\Gamma\) be the Shilov boundary of the base algebra \(A\) of \(A(K,X)\), and \(f\in S_{A(K,X)}\). Then the following statements are equivalent:_
1. \(f\) _is a Daugavet point._
2. \(f\) _is a_ \(\Delta\)_-point._
3. _Every rank-one, norm-one projection_ \(P=\psi\otimes f\)_, where_ \(\psi\in A(K,X)^{*}\) _with_ \(\psi(f)=1\)_, satisfies_ \(\|I-P\|=2\)_._
4. _there is a limit point_ \(t_{0}\) _of_ \(\Gamma\) _such that_ \(\|f\|=\|f(t_{0})\|_{X}\)_._
Proof.: (i) \(\implies\) (ii) is clear. The implication (ii) \(\implies\) (iii) comes from Theorem 2.2. Indeed, for a Banach space \(Y\), a point \(f\in S_{Y}\) is a \(\Delta\)-point if and only if every rank-one projection of the form \(P=\psi\otimes f\), where \(\psi\in Y^{*}\) with \(\psi(f)=1\), satisfies \(\|I-P\|\geq 2\). Hence, we immediately have (iii) whenever such a projection \(P\) has norm one. The implications (iii) \(\implies\) (iv) and (iv) \(\implies\) (i) are identical to the proofs of (ii) \(\implies\) (iii) and (iii) \(\implies\) (i) in [27, Theorem 5.4], respectively.
**Corollary 4.14**.: _[_27_, Corollary 5.5]_ _Let \(K\) be a compact Hausdorff space, \(\Gamma\) be the Shilov boundary of \(A(K)\), and \(f\in S_{A(K)}\). Then the following statements are equivalent:_
1. \(f\) _is a Daugavet point._
2. \(f\) _is a_ \(\Delta\)_-point._
3. _Every rank-one, norm-one projection_ \(P=\psi\otimes f\)_, where_ \(\psi\in A(K)^{*}\) _with_ \(\psi(f)=1\)_, satisfies_ \(\|I-P\|=2\)_._
4. _there is a limit point_ \(t_{0}\) _of_ \(\Gamma\) _such that_ \(\|f\|_{\infty}=|f(t_{0})|\)_._
As a consequence, we obtain the following characterizations for the space \(A(K,X)\) and infinite-dimensional uniform algebras.
**Proposition 4.15**.: _[_27_, Theorem 5.6]_ _Let \(X\) be a uniformly convex Banach space, let \(K\) be a compact Hausdorff space, and let \(\Gamma\) be the Shilov boundary of the base algebra \(A\) of \(A(K,X)\). Then the following statements are equivalent:_
1. \(A(K,X)\) _has the polynomial Daugavet property._
2. \(A(K,X)\) _has the Daugavet property._
3. \(A(K,X)\) _has the DD2P._
4. \(A(K,X)\) _has the DLD2P._
5. \(A(K,X)\) _has the property (_\(\mathcal{D}\)_)._
6. _The Shilov boundary_ \(\Gamma\) _does not have isolated points._
Proof.: (i) \(\implies\) (ii) \(\implies\) (iii) \(\implies\) (iv) \(\implies\) (v) is clear from the definitions. The implication (vi) \(\implies\) (i) follows from Theorem 4.6. Showing (v) \(\implies\) (vi) is identical to the proof of [27, Theorem 5.6], using Proposition 4.13.
**Corollary 4.16**.: _[_27_, Corollary 5.7]_ _Let \(K\) be a compact Hausdorff space and let \(\Gamma\) be the Shilov boundary of a uniform algebra \(A(K)\). Then the following are equivalent:_
1. \(A(K)\) _has the polynomial Daugavet property._
2. \(A(K)\) _has the Daugavet property._
3. \(A(K)\) _has the DD2P._
4. \(A(K)\) _has the DLD2P._
5. \(A(K)\) _has the property (_\(\mathcal{D}\)_)._
6. _The Shilov boundary_ \(\Gamma\) _does not have isolated points._
In view of Lemma 4.2, we can also show that the sufficient condition for the convex-DLD2P in [27, Theorem 5.9] can be described with strong boundary points.
**Theorem 4.17**.: _Let \(K\) be a compact Hausdorff space, \(X\) be a uniformly convex Banach space, and let \(\Gamma\) be the Shilov boundary of the base algebra of \(A(K,X)\). Denote by \(\Gamma^{\prime}\) the set of limit points of the Shilov boundary. If \(\Gamma^{\prime}\cap\Gamma_{0}\neq\emptyset\), then \(A(K,X)\) has the convex-DLD2P._
Proof.: In view of Lemma 4.5, we assume that \(K=\Gamma\). Denote the set of all \(\Delta\)-points of \(A(K,X)\) by \(\Delta\) and the base algebra of \(A(K,X)\) by \(A\). We claim that \(S_{A(K,X)}\subset\overline{\operatorname{conv}}\Delta\).
Let \(f\in S_{A(K,X)}\). Choose a point \(t_{0}\in\Gamma^{\prime}\cap\Gamma_{0}\) and let \(\lambda=\frac{1+\|f(t_{0})\|_{X}}{2}\). For \(\epsilon>0\), let \(U\) be an open neighborhood of \(t_{0}\) in \(K\) such that \(\|f(t)-f(t_{0})\|_{X}<\epsilon\) for every \(t\in U\). Then by Lemma 4.2, there exists \(\phi\in A\) such that \(\|\phi\|_{\infty}=\phi(t_{0})=1\), \(\sup_{t\in K\setminus U}|\phi(t)|<\epsilon\), and
\[|\phi(t)|+(1-\epsilon)|1-\phi(t)|\leq 1\]
for every \(t\in K\).
Choose a norm-one vector \(v_{0}\in X\) and let
\[x_{0}=\begin{cases}\frac{f(t_{0})}{\|f(t_{0})\|_{X}}&\text{if }f(t_{0})\neq 0\\ v_{0}&\text{if }f(t_{0})=0.\end{cases}\]
Now, define
\[f_{1}(t) =(1-\epsilon)(1-\phi(t))f(t)+\phi(t)x_{0}\] \[f_{2}(t) =(1-\epsilon)(1-\phi(t))f(t)-\phi(t)x_{0},\quad t\in K.\]
Notice that \(f_{1},f_{2}\in A(K,X)\) because \(A\otimes X\subset A(K,X)\). Moreover,
\[\|f_{1}(t)\|_{X} =\|(1-\epsilon)(1-\phi(t))f(t)+\phi(t)x_{0}\|_{X}\] \[\leq(1-\epsilon)|1-\phi(t)|+|\phi(t)|\leq 1,\]
for every \(t\in K\). In particular, we have \(\|f_{1}(t_{0})\|_{X}=1\), and so \(\|f_{1}(t_{0})\|_{X}=\|f_{1}\|=1\). By the same argument, we also have \(\|f_{2}(t_{0})\|_{X}=\|f_{2}\|=1\). Thus, \(f_{1},f_{2}\in\Delta\) by Proposition 4.13. Let \(g(t)=\lambda f_{1}(t)+(1-\lambda)f_{2}(t)\). We need to consider two cases.
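For clarity, we record the elementary computation behind both cases: since \(2\lambda-1=\|f(t_{0})\|_{X}\),

\[g(t)=\lambda f_{1}(t)+(1-\lambda)f_{2}(t)=(1-\epsilon)(1-\phi(t))f(t)+(2\lambda-1)\phi(t)x_{0}=(1-\epsilon)(1-\phi(t))f(t)+\|f(t_{0})\|_{X}\,\phi(t)x_{0}.\]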
Case 1: Suppose \(f(t_{0})\neq 0\). Then \(g(t)=(1-\epsilon)(1-\phi(t))f(t)+\phi(t)f(t_{0})\). We see that
\[\|g(t)-f(t)\|_{X} = \|(1-\epsilon)(1-\phi(t))f(t)+\phi(t)f(t_{0})-f(t)\|_{X}\] \[= \|(1-\epsilon)(1-\phi(t))f(t)+\phi(t)f(t_{0})-(1-\epsilon)f(t)- \epsilon f(t)\|_{X}\] \[= \|(1-\epsilon)(-\phi(t))f(t)+(1-\epsilon)\phi(t)f(t_{0})+\epsilon \phi(t)f(t_{0})-\epsilon f(t)\|_{X}\] \[= \|(1-\epsilon)\phi(t)(f(t_{0})-f(t))+\epsilon\phi(t)f(t_{0})- \epsilon f(t)\|_{X}\] \[\leq (1-\epsilon)|\phi(t)|\cdot\|f(t)-f(t_{0})\|_{X}+\epsilon|\phi(t) |\cdot\|f(t_{0})\|_{X}+\epsilon\|f(t)\|_{X}\] \[\leq (1-\epsilon)|\phi(t)|\cdot\|f(t)-f(t_{0})\|_{X}+2\epsilon.\]
For \(t\in U\), we see that \((1-\epsilon)|\phi(t)|\cdot\|f(t)-f(t_{0})\|_{X}\leq(1-\epsilon)\epsilon<\epsilon\). On the other hand, for \(t\in K\setminus U\), we have \((1-\epsilon)|\phi(t)|\cdot\|f(t)-f(t_{0})\|_{X}\leq 2(1-\epsilon)\epsilon<2\epsilon\). Hence, \(\|g-f\|<4\epsilon\), and so \(f\in\overline{\operatorname{conv}}\Delta\).
Case 2: Now, suppose \(f(t_{0})=0\). Then we have \(\|f(t)\|_{X}<\epsilon\) for every \(t\in U\). Moreover, notice that \(\lambda=\frac{1}{2}\) and \(g(t)=(1-\epsilon)(1-\phi(t))f(t)\). This implies that
\[\|g(t)-f(t)\|_{X} = \|(1-\epsilon)(1-\phi(t))f(t)-(1-\epsilon)f(t)-\epsilon f(t)\|_{X}\] \[\leq (1-\epsilon)|\phi(t)|\cdot\|f(t)\|_{X}+\epsilon\|f(t)\|_{X}\leq( 1-\epsilon)|\phi(t)|\cdot\|f(t)\|_{X}+\epsilon.\]
Notice that \((1-\epsilon)|\phi(t)|\cdot\|f(t)\|_{X}\leq(1-\epsilon)\epsilon<\epsilon\) for every \(t\in U\). From the fact that \(\sup_{t\in K\setminus U}|\phi(t)|<\epsilon\), we also have \((1-\epsilon)|\phi(t)|\cdot\|f(t)\|_{X}\leq(1-\epsilon)\epsilon<\epsilon\) for every \(t\in K\setminus U\). This shows that \(\|g-f\|<2\epsilon\), and so \(f\in\overline{\operatorname{conv}}\Delta\).
Since \(f\in S_{A(K,X)}\) is arbitrary, we see that \(S_{A(K,X)}\subset\overline{\operatorname{conv}}\Delta\). Therefore, the space \(A(K,X)\) has the convex-DLD2P.
**Corollary 4.18**.: _Let \(K\) be a compact Hausdorff space and \(\Gamma^{\prime}\) be the set of limit points in the Shilov boundary of a uniform algebra. If \(\Gamma^{\prime}\cap\Gamma_{0}\neq\emptyset\), then the uniform algebra has the convex-DLD2P._
|
2302.11635 | Quantifying Magnetic Fields Using Deformed Diamagnetic Liquid Profiles | Measuring the magnetic field of permanent magnets can be challenging, but
recent research has demonstrated the potential of using deformed diamagnetic
liquids to estimate the magnetic field. In this paper, we explore two methods
for measuring the magnetic field from the response of the diamagnetic liquid.
The first method involves measuring the profile of the deformed liquid with a
laser and then calculating the square of the magnetic field using an
appropriate equation. The second method involves measuring the maximum slope of
the liquid and numerically calculating the magnetic field distribution using
the model of an ideal solenoid. We present experimental results using these
methods and compare them with other established methods for measuring magnetic
fields. The results show that the proposed methods are effective and have
potential for use in a variety of applications. The proposed methods can help
address the challenge of measuring magnetic fields in situations where other
methods are not suitable or practical. | David Shulman | 2023-02-22T20:22:00Z | http://arxiv.org/abs/2302.11635v1 | # Quantifying Magnetic Fields Using Deformed Diamagnetic Liquid Profiles
###### Abstract
Measuring the magnetic field of permanent magnets can be challenging, but recent research has demonstrated the potential of using deformed diamagnetic liquids to estimate the magnetic field. In this paper, we explore two methods for measuring the magnetic field from the response of the diamagnetic liquid. The first method involves measuring the profile of the deformed liquid with a laser and then calculating the square of the magnetic field using an appropriate equation. The second method involves measuring the maximum slope of the liquid and numerically calculating the magnetic field distribution using the model of an ideal solenoid. We present experimental results using these methods and compare them with other established methods for measuring magnetic fields. The results show that the proposed methods are effective and have potential for use in a variety of applications. The proposed methods can help address the challenge of measuring magnetic fields in situations where other methods are not suitable or practical.
Permanent magnets, Magnetic fields, Analytical models, Numerical models
## I Introduction
Magnetic field measurement is a fundamental task in various fields, such as materials science, engineering, and medicine. Traditional methods for magnetic field measurement include Hall probes [1], fluxgate magnetometers [2], and superconducting quantum interference devices (SQUIDs) [3]. These methods are highly accurate and sensitive, but they can be expensive, require sophisticated instrumentation, and may not be suitable for non-destructive testing or in situ measurements.
In this paper, we propose a novel method for measuring magnetic fields based on the response of a deformed diamagnetic liquid. Diamagnetic materials are those that exhibit a weak, negative response to magnetic fields and are repelled by the poles of a magnet. When a diamagnetic liquid is subjected to a magnetic field, it deforms into a characteristic shape that depends on the strength and direction of the field. By measuring the shape of the deformed liquid, it is possible to infer the distribution of the magnetic field that caused the deformation.
Various studies have explored the deformation of diamagnetic liquids under magnetic fields, and the accuracy and reliability of this method have been evaluated [4, 5, 6, 7, 8]. However, to the best of our knowledge, there has been no previous work on measuring the magnetic field from the response of a deformed diamagnetic liquid.
In this paper, we propose a method for measuring the magnetic field of a permanent magnet based on the response of a deformed diamagnetic liquid. We present the theoretical background of the method, experimental setup and procedures, and the results of our experiments. We also compare our results with those obtained from traditional magnetic field measurement methods and discuss the advantages and limitations of our method. Our method has the potential to be a low-cost, non-destructive, and non-invasive alternative for magnetic field measurement, particularly for large or irregularly-shaped magnets.
The rest of the paper is organized as follows. In Section II, we provide an overview of the theoretical background of the method. In Section III, we describe the experimental setup and procedures. In Section IV, we present the results of our experiments. In Section V, we compare our results with those obtained from traditional magnetic field measurement methods. Finally, in Section VI, we discuss the advantages and limitations of our method and provide some concluding remarks.
## II Theoretical Background of the Methods
### _Theoretical background of the first method_
The first method for measuring magnetic fields involves using the response of a deformed diamagnetic liquid to infer the distribution of the magnetic field that caused the deformation. Diamagnetic materials, such as certain types of liquids, exhibit a weak negative response to magnetic fields and are repelled by the poles of a magnet. When a diamagnetic liquid is subjected to a magnetic field, it deforms into a characteristic shape that depends on the strength and direction of the field. If the surface tension of the liquid is negligible, the height of the deformed surface is given by
\[z\left(r,h\right)=\frac{\chi B^{2}\left(r,h\right)}{2\mu_{0}\rho g}, \tag{1}\]
and, solving for the square of the magnetic field:
\[B^{2}=z\left(r,h\right)\frac{2\mu_{0}\rho g}{\chi}, \tag{2}\]
where \(z\) is the height of the deformed liquid, \(r\) is the radial distance from the center of the deformed liquid, \(h\) is the separation between the magnet and liquid/vapor interface, \(\chi\) is the magnetic susceptibility of the liquid, \(B\) is the magnetic field strength, \(\mu_{0}\) is the magnetic permeability of free space, \(\rho\) is the density of the liquid, and \(g\) is the acceleration due to gravity.
If the surface tension of the liquid can be ignored, the shape of the deformed liquid can be described by Equation (1). This equation relates the height of the liquid surface at any point to the strength of the magnetic field at that point. The square of the magnetic field can then be calculated using Equation (2). These equations allow the magnetic field distribution to be inferred from the shape of the liquid surface, which can be measured with a laser, see Fig. (1).
This method has the advantage of being non-invasive and contactless, and it can be used to measure the magnetic field distribution in a wide range of applications. However, if the surface tension of the liquid cannot be ignored, the equations become more complex, and the method may require additional corrections. Additionally, the accuracy of the method may be affected by factors such as the surface cleanliness of the liquid and the presence of other nearby magnetic objects.
### _Theoretical Background of the Second Method_
The second technique for measuring the magnetic field involves examining the shape of a liquid surface that has been distorted by the magnetic field. The method consists of measuring the maximum slope of the distorted profile, and then comparing the experimental measurement to the analytical solution for the maximum slope using numerical methods.
In order to calculate the magnetic field using this method, a model of the magnetic field must be chosen, e.g., the field of an ideal solenoid derived from the Biot-Savart law [9]. The Biot-Savart law states that the magnetic field at a point in space, due to a current-carrying wire, is proportional to the current and the length of the wire, and inversely proportional to the distance from the wire. The field of an ideal solenoid, which is a coil of wire wound in a helix with a uniform current density, can be derived by integrating the Biot-Savart law over the entire length of the solenoid.
Once the model of the magnetic field has been chosen, it can be compared to the experimental measurement of the maximum slope of the distorted liquid surface. By using numerical methods to fit the model to the experimental data, the magnetic field can be quantified more precisely. The complete mathematical formulation of this method is given in the Appendix.
## III Apparatus and Measurement Technique
### _Apparatus_
The experimental setup used to measure the magnetic field strength consists of the following components:
* A sample liquid placed in a Petri dish with a diameter of 90 mm.
* A helium-neon laser (4mW 1107p) with a wavelength of 633 nm, supplied by JDS Uniphase Corporation, to enable the measurement of the shape of the liquid/vapor interface.
* Stacks of Neodymium permanent magnets, supplied by MAGSY, Czech.
* An XYZ actuator with an accuracy of 10 \(\mu m\) for precise positioning of the permanent magnet. The actuator was assembled from components supplied by CCM Automation Technology. In the experiments, an Arduino controller for stepper motors was used.
* A digital camera (8.0-megapixel digital bridge camera Sony Cyber-shot DSC-F828).
* A Gauss meter, GM2 Gauss Meter, manufactured by AlphaLab Inc., USA, with an accuracy of \(\pm\)0.01 T.
The experiments were carried out under ambient conditions (P = 1 atm; T = 25 \({}^{\circ}\)C). A photograph of the experimental unit is shown in Fig. (2). The angle of incidence of the laser beam on the liquid surface is about 5\({}^{\circ}\).
### _Measurement Technique_
#### Iii-B1 Introduction
The measurement technique used in this study is based on the observation of the shape of the liquid/vapor interface using a laser displacement sensor. The shape of the interface is related to the magnetic field strength in the region above the permanent magnet. As the magnet is moved closer to the liquid surface, the magnetic field strength increases, which leads to a change in the shape of the interface. This change in shape can be captured using the laser and analyzed to determine the magnetic field strength.
The experimental setup consists of a Petri dish filled with the liquid sample, which is placed on a stable surface. The Neodymium permanent magnet is then placed at various distances above the surface of the liquid, and the resulting shape of the liquid/vapor interface is observed using a helium neon laser with a wavelength of 633 nm. The laser beam is directed at the liquid surface at an angle of incidence of about 5 degrees, and the reflected beam on the screen is captured by a digital camera.
When the fluid surface is tilted by the magnetic field by an angle \(\Delta\theta\), the reflection angle shifts by \(2\Delta\theta\), which causes a corresponding shift in height \(\Delta y\) on the screen, as shown in Fig. (1). Hence we have the following relationship for the change in the angle of reflection:
\[2\Delta\theta=\Theta-\arctan\left(\frac{y-\Delta y}{L}\right) \tag{3}\]
where \(\Theta\) is the reflection angle for the undisturbed surface, \(y\) is the corresponding position of the reflected spot on the screen, and \(L\) is the distance from the point of reflection to the screen. The angular displacement is directly related to the slope of the liquid/vapor interface, which can be used to calculate the magnetic field strength using the Young-Laplace equation. The accuracy of the measurements depends on the accuracy of the laser displacement sensor, as well as the accuracy of the positioning of the magnet.
To ensure accurate measurements, the position of the magnet was fixed with a laboratory-built XYZ actuator with an accuracy of 10 micrometers. The accuracy of the magnet position was verified using a Gauss meter with an accuracy of \(\pm\)0.01 T. The experiments were carried out under ambient conditions of temperature and pressure (T=25 C, P=1 atm).
#### Iii-B2 Measurement Technique for the First Method
To calculate the surface profile from the angle of reflection off the curved surface, we must obtain the displacement of the water as a function of position. The small-angle approximation allows us to treat the change in the angle of the water surface as the local slope of the water surface. This makes it possible to obtain the displacement by performing a Riemann sum on the measured data. In other words, we can write:
\[\Delta z(r)=\sum_{i=1}^{n}\left[\tan\left(2\Delta\theta(r_{i})\right)\Delta r_{i}\right] \tag{4}\]
where \(\Delta z(r)\) is the displacement of the water surface at position \(r\), \(\Delta\theta(r_{i})\) is the change in reflection angle at the \(i\)th measurement point, and \(\Delta r_{i}\) is the distance between consecutive measurement points. The summation is taken over \(n\) measurement points.
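As an illustration only, the following Python sketch chains Eqs. (3), (4), and (2) to turn a set of spot shifts measured on the screen into an estimate of \(B^{2}(r)\). The liquid properties, screen geometry, and spot-shift values are hypothetical placeholders rather than data from the experiment, and the reconstructed profile is referenced to the outermost point, which is assumed to lie far enough from the magnet to be undisturbed.

```python
import numpy as np

# Liquid properties and constants (water at room temperature).
mu0 = 4e-7 * np.pi    # permeability of free space [T m/A]
rho = 997.0           # density [kg/m^3]
g   = 9.81            # gravitational acceleration [m/s^2]
chi = -9.0e-6         # volume magnetic susceptibility of water (diamagnetic)

# Hypothetical screen geometry and spot-shift data (illustrative values only).
L_screen = 1.0                           # reflection point -> screen distance [m]
y0 = 0.10                                # undisturbed spot height on the screen [m]
Theta = np.arctan(y0 / L_screen)         # undisturbed reflection angle
r  = np.linspace(0.0, 0.03, 200)         # radial positions of the laser spot [m]
dy = 2.0e-3 * np.exp(-(r / 0.008) ** 2)  # measured spot shifts on the screen [m]

# Eq. (3): change of the reflection angle at each radial position.
two_dtheta = Theta - np.arctan((y0 - dy) / L_screen)

# Eq. (4): Riemann sum of the local slope gives the surface displacement,
# referenced to the outermost point (assumed undisturbed).
dz = np.cumsum(np.tan(two_dtheta) * np.gradient(r))
dz -= dz[-1]

# Eq. (2): square of the magnetic field from the displacement profile.
B_squared = 2.0 * mu0 * rho * g * dz / chi
print(f"peak |B| ~ {np.sqrt(B_squared.max()):.2f} T")
```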
#### Iii-A3 Measurement Technique for the Second Method
The measurement technique for the second method involves measuring the maximum slope of the liquid surface distorted by the magnetic field. This maximum slope can be obtained by analyzing the shift in the reflection angle on the screen caused by the curved surface.
To perform this measurement, the magnet is moved above the surface of the liquid, and the resulting shift in the reflection angle on the screen is recorded. The maximum shift in the reflection angle corresponds to the point where the slope of the liquid surface is at its maximum. This method provides a simpler measurement of the magnetic field compared to the first method, as it directly measures the maximum slope of the liquid surface.
Once the maximum shift in the reflection angle is recorded, it can be used to calculate the maximum slope of the liquid surface using Eq. (3).
## IV Results and Discussion
### _First Method_
Fig. (3) compares the surface profile of the diamagnetic liquid, calculated using Eq. (1), to the exact analytical solution given by Eq. (6), in the case of water. The magnet was positioned 3 mm above the liquid surface during the measurement. As can be seen from the figure, there is a good agreement between the two solutions. However, it should be noted that the comparison could be even better with a liquid having a smaller surface tension. For example, when using ethanol or water with added surfactants such as dish soap or detergent, the comparison is expected to be even more accurate, as demonstrated in Fig. (4), which shows the comparison of the surface profile of ethanol obtained using this method. In Fig. (5), we compare the exact square of the magnetic field in ethanol to the square of the magnetic field calculated using Eq. (2).
The comparison of the calculated and exact solutions for the square of the magnetic field in the case of ethanol shows a high degree of agreement, with an R-squared value of 0.99 indicating very good correspondence. However, it should be noted that there is a maximum error of approximately \(9\%\) for the \(B^{2}\) values, which should be taken into consideration for specific applications.
Fig. (6) depicts a comparison of the surface profile of water obtained from experimental data and the profile calculated using Eq. (1). As seen from the figure, there is some noise in the experimental data, but the calculated profile from the analytical solution is very close to the experimental data. Based on this observation, we can conclude that the main source of error in this method is due to the neglect of surface tension, which is a critical factor in determining the shape of the liquid surface. Therefore, future research in this area should focus on developing a more accurate model that accounts for the effect of surface tension, which would improve the accuracy of the method and extend its applicability to a wider range of liquids. Nevertheless, the current results show that the proposed method is effective for measuring the magnetic field and has potential for use in various applications where other methods may not be suitable or practical.
### _Second Method_
The second method for measuring the magnetic field involved measuring the maximum slope of the curved surface, followed by fitting the model of the magnetic field to the data. To verify the accuracy of this method, we also measured the distribution of the magnetic field using a gaussmeter (see Ref. [9]) and compared the results to those obtained from this method, see Fig. (7).
In our framework, we use a model for the magnetic field derived from the Biot-Savart law, and the only parameter required for this model is the magnetization \(B_{0}\), which is obtained experimentally. In Fig. (7), we show the experimentally obtained values for \(B_{0}\) as a function of vertical distance from the magnet. The error bars provide an estimate of the average measurement error associated with the instrument used. It can be observed from the figure that there are several outlier points that deviate significantly from the overall trend. Therefore, to obtain more accurate results, it is necessary to take the mean of multiple measurements and perform statistical analysis to identify and remove outliers. The plot shows that the magnetization \(B_{0}\) changes as a function of distance from the magnet. This is consistent with the presence of inhomogeneity, as previously discussed in Ref. [9]. The experimental data in Fig. (7) is fitted with a linear function, which is also shown in the figure. The good comparison between the fitted function and the experimental data confirms the accuracy of the measurements.
Upon comparing the experimental values of the magnetic field obtained from the second method with those obtained from the Gaussmeter, we observed that the maximum difference between the two sets of values was less than 1%. This result indicates that the second method is a reliable and accurate way of measuring the magnetic field of permanent magnets. Furthermore, the error bars in the experimental data represent the average error of the measuring apparatus, which is relatively small. However, we also observed that the magnetization \(B_{0}\) obtained experimentally varies with the distance h from the magnet, as described in Ref. [9]. Fig. (8) provides a visualization of the difference in the distribution of the square of the magnetic field in the case of a deviation of \(B_{0}\) of 1%. As can be seen from the figure, the maximum error in \(B^{2}\) in this case is about 2%. These results indicate that the
method is relatively robust to small variations in \(B_{0}\), which is an important consideration in practical applications where accurate measurement of the magnetic field is critical.
### _Pros and Cons of Both Methods_
There are two methods for measuring the magnetic field using deformed diamagnetic liquids: the first method involves measuring the profile of the deformed liquid, while the second method involves measuring the maximum slope of the liquid. Each method has its own advantages and disadvantages.
#### Iv-C1 Method 1
Pros:
1. The mathematical calculation is simple.
2. The method does not require knowledge of the surface tension of the liquid used for measurement.
Cons:
1. The measurement technique can be difficult.
2. The method may be less accurate compared to the second method.
#### Iv-C2 Method 2
Pros:
1. The measurement technique is simple.
2. The method is very accurate.
Cons:
1. The mathematical calculation can be difficult.
2. The method requires knowledge of the surface tension of the liquid used for measurement.
By considering the pros and cons of both methods, it is possible to choose the appropriate method based on the requirements of the specific application. For example, if accuracy is the primary concern and the surface tension of the liquid is known, then the second method may be preferred. Conversely, if simplicity is more important and accuracy is less of a concern, then the first method may be more suitable.
## V Conclusion
In this study, we explored two methods for measuring the magnetic field of permanent magnets using deformed diamagnetic liquids. The first method involved measuring the profile of the deformed liquid with a laser and calculating the square of the magnetic field using an appropriate equation. The second method involved measuring the maximum slope of the liquid and numerically calculating the magnetic field distribution using the model of an ideal solenoid.
We presented experimental results using these methods and compared them with other established methods for measuring magnetic fields. The results showed that both methods were effective and had potential for use in a variety of applications.
The first method, despite its simplicity in mathematical calculation, requires a difficult measurement technique and is less accurate compared to the second method. On the other hand, the second method, while requiring more difficult mathematical calculations, provides very accurate results and has a simpler measurement technique. However, it requires knowledge of the surface tension of the measuring liquid.
Overall, these methods can help address the challenge of measuring magnetic fields in situations where other methods are not suitable or practical. Future work could focus on exploring other possible applications and further improving the accuracy of these methods.
## Appendix A Theoretical background of the second method
The profile of the surface is governed by the Young-Laplace equation, which describes the equilibrium shape of a liquid surface in response to external forces, including magnetic forces. The left-hand side of the equation represents the gravitational and surface tension forces, while the right-hand side represents the magnetic force, see Ref. [4]:
\[\frac{\partial^{2}z}{\partial r^{2}}+\frac{1}{r}\frac{\partial z}{\partial r}- \frac{\rho g}{\gamma}z=\frac{\chi B^{2}\left(r,h\right)}{2\mu_{0}\gamma} \tag{5}\]
The solution to this equation gives the profile of the deformed liquid surface:
\[z\left(r,h\right)=-\left[\int_{0}^{r}\frac{\chi B^{2}\left(r^{\prime},h\right) }{2\mu_{0}\gamma}I_{0}\left(\lambda_{c}^{-1}r^{\prime}\right)r^{\prime}dr^{ \prime}\right]K_{0}\left(\lambda_{c}^{-1}r\right)-\left[\int_{r}^{\infty}\frac {\chi B^{2}\left(r^{\prime},h\right)}{2\mu_{0}\gamma}K_{0}\left(\lambda_{c}^{- 1}r^{\prime}\right)r^{\prime}dr^{\prime}\right]I_{0}\left(\lambda_{c}^{-1}r\right) \tag{6}\]
where \(\gamma\) is the surface tension of the liquid, and \(I_{0}\) and \(K_{0}\) are modified Bessel functions of the first and second kind, respectively. The interplay between gravity and surface tension is quantified by the capillary length \(\lambda_{c}=\sqrt{\gamma/\rho g}\). The derivative of Eq. 6 is given by:
\[\theta\approx\frac{dz}{dr}=\left[\int_{0}^{r}\frac{\chi B^{2}\left(r^{\prime},h\right)}{2\mu_{0}\gamma}I_{0}\left(\lambda_{c}^{-1}r^{\prime}\right)r^{ \prime}dr^{\prime}\right]\lambda_{c}^{-1}K_{1}\left(\lambda_{c}^{-1}r\right)- \left[\int_{r}^{\infty}\frac{\chi B^{2}\left(r^{\prime},h\right)}{2\mu_{0} \gamma}K_{0}\left(\lambda_{c}^{-1}r^{\prime}\right)r^{\prime}dr^{\prime}\right] \lambda_{c}^{-1}I_{1}\left(\lambda_{c}^{-1}r\right) \tag{7}\]
To find the maximum slope of the deformed liquid surface, we differentiate the slope (Eq. 7) once more, set the resulting second derivative equal to zero, and then solve numerically and fit to the model of the magnetic field derived from the Biot-Savart law for the ideal solenoid. This allows us to calculate the magnetic field strength at any point, and is the basis for the second method of measuring the magnetic field.
\[\frac{d^{2}z}{dr^{2}}=\frac{\chi B^{2}\left(r,h\right)}{2\mu_{0} \gamma}-\left[\int_{0}^{r}\frac{\chi B^{2}\left(r,h\right)}{2\mu_{0}\gamma}I _{0}\left(\lambda_{c}^{-1}r\right)rdr\right]\lambda_{c}^{-2}\left(K_{0}\left( \lambda_{c}^{-1}r\right)+\frac{1}{\lambda_{c}^{-1}r}K_{1}\left(\lambda_{c}^{-1 }r\right)\right)-\] \[-\left[\int_{r}^{\infty}\frac{\chi B^{2}\left(r,h\right)}{2\mu_{0} \gamma}K_{0}\left(\lambda_{c}^{-1}r\right)rdr\right]\lambda_{c}^{-2}\left(I_{ 0}\left(\lambda_{c}^{-1}r\right)-\frac{1}{\lambda_{c}^{-1}r}I_{1}\left(\lambda _{c}^{-1}r\right)\right)\]
and hence
\[B^{2}\left(r_{m},h\right)=\left[\int_{0}^{r_{m}}B^{2}\left(r,h \right)I_{0}\left(\lambda_{c}^{-1}r\right)rdr\right]\lambda_{c}^{-2}\left(K_{ 0}\left(\lambda_{c}^{-1}r_{m}\right)+\frac{1}{\lambda_{c}^{-1}r_{m}}K_{1} \left(\lambda_{c}^{-1}r_{m}\right)\right)-\] \[-\left[\int_{r_{m}}^{\infty}B^{2}\left(r,h\right)K_{0}\left( \lambda_{c}^{-1}r\right)rdr\right]\lambda_{c}^{-2}\left(I_{0}\left(\lambda_{c }^{-1}r_{m}\right)-\frac{1}{\lambda_{c}^{-1}r_{m}}I_{1}\left(\lambda_{c}^{-1}r _{m}\right)\right)\]
where \(r_{m}\) is the radial distance from the magnet axis (the \(z\) axis) to the point of maximum slope of the curved liquid surface.
The model of the magnetic field used in this investigation is the solution of the Biot-Savart law for the ideal solenoid:
\[B_{r}\left(r,h\right)=B_{0}\int_{0}^{\pi/2}d\psi\left(\cos^{2}\psi-\sin^{2} \psi\right)\left\{\frac{\alpha_{+}}{\sqrt{\cos^{2}\psi+k_{+}^{2}\sin^{2}\psi}}- \frac{\alpha_{-}}{\sqrt{\cos^{2}\psi+k_{-}^{2}\sin^{2}\psi}}\right\} \tag{8}\]
\[B_{z}\left(r,h\right)=\frac{B_{0}a}{r+a}\int_{0}^{\pi/2}d\psi\left(\frac{\cos^ {2}\psi+\tau\sin^{2}\psi}{\cos^{2}\psi+\tau^{2}\sin^{2}\psi}\right)\left\{ \frac{\beta_{+}}{\sqrt{\cos^{2}\psi+k_{+}^{2}\sin^{2}\psi}}-\frac{\beta_{-}}{ \sqrt{\cos^{2}\psi+k_{-}^{2}\sin^{2}\psi}}\right\} \tag{9}\]
\[\alpha_{\pm}=\frac{a}{\sqrt{h_{\pm}^{2}+\left(r+a\right)^{2}}},\ \ \ \ \ \ \ \ \ \ \ \ \beta_{\pm}=\frac{h_{\pm}}{\sqrt{h_{\pm}^{2}+\left(r+a\right)^{2}}}\]
\[h_{+}=h,\ \ \ h_{-}=h-2b,\ \ \ \ \tau=\frac{a-r}{a+r}\]
\[k_{\pm}=\sqrt{\frac{h_{\pm}^{2}+\left(a-r\right)^{2}}{h_{\pm}^{2}+\left(a+r\right)^{2}}}\]
where \(a\) is the radius and \(2b\) is the length of the solenoid; \(\left(r,\ \varphi,\ h\right)\) are the cylindrical coordinates with the origin at the center of the solenoid; \(n\) is the number of turns per unit length. To obtain the equations in the current form, we have also introduced the following integration variable change: \(2\psi\equiv\pi-\varphi\). To compare the calculation results with the results of the measurements, the radius and the length of the solenoid for the calculations were chosen equal to the radius and the length of the permanent magnet investigated experimentally.
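To illustrate how Eqs. (8) and (9) can be evaluated in practice, the sketch below performs the \(\psi\) integrals by direct numerical quadrature. The magnet radius, length, magnetization \(B_{0}\), and evaluation point are arbitrary illustrative values (not those of the magnets used here), and the routine assumes an off-axis point with \(r\neq a\).

```python
import numpy as np
from scipy.integrate import quad

def solenoid_field(r, h, a, b, B0):
    """B_r and B_z of the ideal solenoid model, Eqs. (8)-(9), via quadrature."""
    h_p, h_m = h, h - 2.0 * b                       # h_+ and h_- of the text
    alpha_p = a / np.sqrt(h_p**2 + (r + a)**2)
    alpha_m = a / np.sqrt(h_m**2 + (r + a)**2)
    beta_p = h_p / np.sqrt(h_p**2 + (r + a)**2)
    beta_m = h_m / np.sqrt(h_m**2 + (r + a)**2)
    tau = (a - r) / (a + r)
    k_p = np.sqrt((h_p**2 + (a - r)**2) / (h_p**2 + (a + r)**2))
    k_m = np.sqrt((h_m**2 + (a - r)**2) / (h_m**2 + (a + r)**2))

    def integrand_r(psi):
        c2, s2 = np.cos(psi)**2, np.sin(psi)**2
        return (c2 - s2) * (alpha_p / np.sqrt(c2 + k_p**2 * s2)
                            - alpha_m / np.sqrt(c2 + k_m**2 * s2))

    def integrand_z(psi):
        c2, s2 = np.cos(psi)**2, np.sin(psi)**2
        weight = (c2 + tau * s2) / (c2 + tau**2 * s2)
        return weight * (beta_p / np.sqrt(c2 + k_p**2 * s2)
                         - beta_m / np.sqrt(c2 + k_m**2 * s2))

    B_r = B0 * quad(integrand_r, 0.0, np.pi / 2)[0]
    B_z = B0 * a / (r + a) * quad(integrand_z, 0.0, np.pi / 2)[0]
    return B_r, B_z

# Illustrative magnet: radius a = 10 mm, length 2b = 20 mm, B0 = 0.6 T,
# evaluated 1 mm off axis at h = 23 mm (so that h - 2b = 3 mm).
print(solenoid_field(r=1e-3, h=23e-3, a=10e-3, b=10e-3, B0=0.6))
```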
Our problem involves two unknowns: the magnetization of the permanent magnet, \(B_{0}\), and the radial distance of the maximum slope from the magnet axis, \(r_{m}\). We have two equations that describe the problem:
\[\begin{cases}\theta\approx\frac{dz}{dr}=\left[\int_{0}^{r_{m}}\frac{\chi B^{2} \left(r^{\prime},h\right)}{2\mu_{0}\gamma}I_{0}\left(\lambda_{c}^{-1}r^{\prime }\right)r^{\prime}dr^{\prime}\right]\lambda_{c}^{-1}K_{1}\left(\lambda_{c}^{-1 }r_{m}\right)-\left[\int_{r_{m}}^{\infty}\frac{\chi B^{2}\left(r^{\prime},h \right)}{2\mu_{0}\gamma}K_{0}\left(\lambda_{c}^{-1}r^{\prime}\right)r^{\prime }dr^{\prime}\right]\lambda_{c}^{-1}I_{1}\left(\lambda_{c}^{-1}r_{m}\right)\\ \\ B^{2}\left(r_{m},h\right)=\left[\int_{0}^{r_{m}}B^{2}\left(r,h\right)I_{0} \left(\lambda_{c}^{-1}r\right)rdr\right]\lambda_{c}^{-2}\left(K_{0}\left( \lambda_{c}^{-1}r_{m}\right)+\frac{1}{\lambda_{c}^{-1}r_{m}}K_{1}\left( \lambda_{c}^{-1}r_{m}\right)\right)-\\ -\left[\int_{r_{m}}^{\infty}B^{2}\left(r,h\right)K_{0}\left(\lambda_{c}^{-1}r \right)rdr\right]\lambda_{c}^{-2}\left(I_{0}\left(\lambda_{c}^{-1}r_{m}\right) -\frac{1}{\lambda_{c}^{-1}r_{m}}I_{1}\left(\lambda_{c}^{-1}r_{m}\right)\right) \end{cases}\]
To solve this problem numerically, we use Brent's method [10], which involves substituting the experimentally known value of \(\theta\) into the equations and finding the root of the resulting equation. While we use Brent's method in our work, other suitable numerical methods can also be used.
## Acknowledgment
I would like to express my deep gratitude to Professor Meir Lewkowicz and Professor Edward Bormashenko, my research supervisors, for their patient guidance, enthusiastic encouragement, and useful critiques.
## Author Declarations
### Conflict of Interest
The author has no conflicts to disclose.
|
2306.12813 | An Enhanced Massive Black Hole Occupation Fraction Predicted in Cluster
Dwarf Galaxies | The occupation fraction of massive black holes (MBHs) in dwarf galaxies
offers interesting insights into initial black hole seeding mechanisms and
their mass assembly history, though disentangling these two effects remains
challenging. Using the {\sc Romulus} cosmological simulations we examine the
impact of environment on the occupation fraction of MBHs in low mass galaxies.
Unlike most modern cosmological simulations, {\sc Romulus} seeds MBHs based on
local gas properties, selecting dense ($n>3$ cm$^{-3}$), pristine
($Z<3e-4Z_{\odot}$), and rapidly collapsing regions in the early Universe as
sites to host MBHs without assuming anything about MBH occupation as a function
of galaxy stellar mass, or halo mass, {\it a priori}. The simulations predict
that dwarf galaxies with M$_{\star}<10^9$ M$_{\odot}$ in cluster environments
are $\sim2$ times more likely to host a MBH compared to those in the field. The
predicted occupation fractions are remarkably consistent with those of nuclear
star clusters. Across cluster and field environments, dwarf galaxies with
earlier formation times are more likely to host a MBH. While the MBH occupation
function is similar between cluster and field environments at high redshift
($z>3$), a difference arises as late-forming dwarfs -- which do not exist in
the cluster environment -- begin to dominate in the field and pull the MBH
occupation fraction down for low mass galaxies. Additionally, prior to in-fall
some cluster dwarfs are similar to progenitors of massive, isolated galaxies,
indicating that they might have grown to higher masses had they not been
impeded by the cluster environment. While the population of MBHs in dwarf
galaxies is already widely understood to be important for understanding MBH
formation, this work demonstrates that environmental dependence is important to
consider as future observations search for low mass black holes in dwarf
galaxies. | Michael Tremmel, Angelo Ricarte, Priyamvada Natarajan, Jillian Bellovary, Ramon Sharma, Thomas R. Quinn | 2023-06-22T11:24:12Z | http://arxiv.org/abs/2306.12813v2 | # An Enhanced Massive Black Hole Occupant Predicted in Cluster Dwarf Galaxies
###### Abstract
The occupation fraction of massive black holes (MBHs) in low mass galaxies offers interesting insights into initial black hole seeding mechanisms and their mass assembly history, though disentangling these two effects remains challenging. Using the Romulus cosmological simulations we examine the impact of environment on the occupation fraction of MBHs in low mass galaxies. Unlike most modern cosmological simulations, Romulus seeds MBHs based on local gas properties, selecting very dense, pristine, and rapidly collapsing regions in the early Universe as sites to host MBHs without assuming anything about MBH occupation as a function of galaxy stellar mass, or halo mass, _a priori_. The simulations predict that dwarf galaxies with \(\rm M_{\bullet}<10^{9}\)\(\rm M_{\odot}\) in cluster environments are \(\sim 2\) times more likely to host a MBH compared to those in the field. The predicted occupation fractions are remarkably consistent with those of nuclear star clusters. Across cluster and field environments, dwarf galaxies with earlier formation times are more likely to host a MBH. Thus, while the MBH occupation function is similar between cluster and field environments at high redshift (\(z>3\)), a difference arises as late-forming dwarfs - which do not exist in the cluster environment - begin to dominate in the field and pull the MBH occupation fraction down for low mass galaxies. Additionally, prior to in-fall some cluster dwarfs are similar to progenitors of massive, isolated galaxies, indicating that they might have grown to higher masses had they not been impeded by the cluster environment. While the population of MBHs in dwarf galaxies is already widely understood to be important for understanding MBH formation, this work demonstrates that environmental dependence is important to consider as future observations search for low mass black holes in dwarf galaxies.
Subject headings: galaxies: dwarf - black hole physics - galaxies: clusters
## 1. Introduction
The origin and evolution of massive black holes (MBHs) of mass \(>10^{5}\)\(\rm M_{\odot}\), which are ubiquitously found in the centers of massive galaxies (Tremaine et al., 2002; Kormendy & Richstone, 1995; Kormendy & Ho, 2013), remains an important open question. There are different models for how the seeds of MBHs form in the early Universe (Volonteri, 2010; Natarajan, 2014) which predict different formation rates and initial masses. Seeds formed through the stellar evolution of population III stars would have initial masses on the order of \(\sim 100\)\(\rm M_{\odot}\) and would exist in virtually every galaxy, with a fraction capable of attaining much larger masses through, e.g. super-Eddington accretion events (Volonteri & Rees, 2005; Alexander & Natarajan, 2014; Inayoshi et al., 2016; Sassano et al., 2023). Other formation models require more specific conditions at high redshift, such as very high densities and a lack of both metals and molecular gas, but can produce seed black holes of mass \(10^{4}-10^{6}\)\(\rm M_{\odot}\)(Natarajan, 2011; Alexander & Natarajan, 2014). The collapse of a dense star cluster can produce a very massive star through runaway stellar mergers that then forms a massive black hole (Devecchi & Volonteri, 2009; Davies et al., 2011). Alternatively, if fragmentation into a star cluster is prevented, gas can collapse directly into a very massive (quasi-) star type object, which then forms a massive black hole (Lodato & Natarajan, 2006, 2007). The origins of MBHs are notoriously difficult to constrain observationally, as the early seeding epochs are currently inaccessible (though JWST is expected to change that soon); and typical detection methods are best only at finding very massive and/or rapidly growing MBHs, which effectively have their initial conditions mostly erased (Volonteri et al., 2008; Volonteri & Gnedin, 2009). JWST and next-generation telescopes, such as Euclid, have the potential to observe some of the earliest phases of MBH growth (Sesana & Khan, 2015; Natarajan et al., 2017; Pacucci et al., 2019) and could shine new light into their formation physics and environments. Electromagnetic observations will be complemented by new gravitational wave detectors like LISA (Laser Interferometer Space Antenna), with the ability to detect merging, low-mass black holes out to \(z>20\)(Colpi et al., 2019). Constraining the MBH population at high redshift will help to constrain the relative efficiency of different seeding mechanisms (Sesana, 2013; Sesana & Khan, 2015; Ricarte & Natarajan, 2018) while providing new insight into the mechanisms driving their growth and co-evolution with their host (proto-)galaxies.
However, clues to MBH formation exist more locally as well. Dwarf galaxies (with stellar mass M\({}_{\star}<10^{10}\) M\({}_{\odot}\)) may host black holes that have not grown significantly, neither through accretion nor mergers, and therefore may maintain their connection to their initial conditions (Volonteri & Natarajan, 2009; Greene, 2012; Ricarte & Natarajan, 2018). There is a growing sample of MBHs detected in dwarf galaxies using a variety of methods and wavelengths, typically observing MBHs as low luminosity active galactic nuclei (AGN; e.g. Reines et al., 2011; Reines & Deller, 2012; Reines et al., 2020; Baldassare et al., 2015, 2016; Mezcua et al., 2018; Nguyen et al., 2019; Woo et al., 2019; Birchall et al., 2020; Molina et al., 2021; Latimer et al., 2021; Cann et al., 2021; Burke et al., 2022). Observations like these have been used to estimate the true occupation fraction - the fraction of galaxies hosting any MBH, regardless of whether it is detected as an AGN (Greene, 2012; Miller et al., 2015; Nguyen et al., 2019; Burke et al., 2022). While there is evidence that the occupation fraction does not dramatically change down to stellar masses as low as \(10^{9}\) M\({}_{\odot}\)(Miller et al., 2015; Baldassare et al., 2020; Burke et al., 2022), mapping detected AGN fractions to the underlying population of MBHs in low mass galaxies remains difficult and highly uncertain. JWST and next-generation telescopes like Vera Rubin will also prove useful in expanding our view of MBHs in dwarf galaxies (Baldassare et al., 2018; Cann et al., 2021; Burke et al., 2022).
Cosmological simulations, which self-consistently model the collapse and merger history of dark matter halos with the baryonic evolution (gas accretion, star formation, MBH growth) of galaxies, have proven to be an invaluable tool to study the co-evolution of MBHs and galaxies (e.g. Di Matteo et al., 2008, 2017; Okamoto et al., 2008; Sijacki et al., 2015; Rosas-Guevara et al., 2016; Dubois et al., 2016; Nelson et al., 2019; Blank et al., 2019; Habouzit et al., 2021; Koudmani et al., 2021, 2022; Ni et al., 2022). However, studying MBHs in low mass galaxies in these simulations also remains a challenge. Even the most modern large-scale simulations lack the resolution to correctly model galaxies much lower than \(10^{9}\) M\({}_{\odot}\) in stellar mass. Even smaller-scale simulations that can attain very high resolution, such as TNG50 (Nelson et al., 2019) or FABLE (Henden et al., 2018), utilize simplistic prescriptions for seeding MBHs whereby all galaxies residing in dark matter halos above a certain mass are seeded with a MBH at their centers. While this may allow for predictions of the accretion rates and luminosities of MBHs in low mass galaxies (i.e the AGN occupation fraction), the underlying occupation fraction of MBHs is an explicitly assumed prior. This type of prescription will mean that for many low mass galaxies black holes are seeded at rather late times instead of at high redshift like most theoretical models (although recent works by Natarajan (2021) and Mayer et al. (2023) have noted that MBH formation could continue throughout cosmic time and may not be limited to metal poor regions). In addition, when considering satellite or backsplash galaxies, simplistic schemes to force MBHs to the centers of galaxies can generate unrealistic numerical effects and artificially impact the occupation fraction in some environments (Borrow et al., 2022).
Recent high-resolution simulations that resolve dwarf galaxies while also incorporating more predictive models for MBH formation, such as the Romulus simulations (Tremmel et al., 2017, 2019), the Obelisk Simulations (Trebitsch et al., 2021), the New Horizon simulations (Dubois et al., 2021; Beckmann et al., 2023), and the MARVELous and DC Justice League simulations (Bellovary et al., 2019; Munshi et al., 2021; Applebaum et al., 2021) are valuable tools in predicting MBH evolution and occupation fraction within low mass galaxies. In this Paper we utilize the Romulus suite of simulations to examine the environmental dependence of MBH occupation in dwarf galaxies. Romulus forms MBHs from dense, pristine gas in the early Universe (\(z>5\)) without any _a priori_ assumptions about which halos should or should not host a MBH. The dynamics of MBHs is followed realistically down to sub-kpc separations, so they are not always forced to the centers of galaxies (Tremmel et al., 2015). The physics models governing MBH growth, star formation, and supernovae feedback that have been incorporated into Romulus have been optimized to broadly reproduce observed scaling relations across nearly five orders of magnitude in halo mass (Tremmel et al., 2017, 2019; Ricarte et al., 2019; Sharma et al., 2020).
MBHs in field dwarf galaxies have been extensively studied in Romulus, which produces a realistic population of dwarf galaxy AGN consistent with observations (Sharma et al., 2022) that can also affect their evolution (Sharma et al., 202, 2022), something which is seen in other simulations as well (Koudmani et al., 2021, 2022). We expand on previous analysis in this work by examining how the dwarf galaxy MBH occupation fraction evolves with both time and environment. To do this we compare results from Romulus25, a 25 Mpc-per-side uniform volume simulation, with RomulusC, a zoom-in simulation of a \(10^{14}\) M\({}_{\odot}\) galaxy cluster. Between these two simulations, we have several hundred simulated dwarf galaxies with stellar masses in the range \(10^{7}-10^{10}\) M\({}_{\odot}\) that are resolved with at least 200 baryonic resolution elements (and \(10^{4}\) for dark matter).
In Section 2 we provide a brief overview of the Romulus simulations and the relevant physics models incorporated in them. In Section 3 we present our results of the dependence of occupation fraction on environment and its evolution with time. In Section 4 we explore the origins of the enhanced MBH occupation fraction in \(z=0\) cluster dwarf galaxies that we find. Section 5 discusses our results and Section 6 provides a summary of our conclusions.
## 2. The Romulus Simulations
In this section we briefly describe the relevant properties of the simulations. For a more detailed discussion, including how the parameters were chosen, we point the reader to Tremmel et al. (2017, 2019).
### Overview of the Romulus Simulations
The Romulus Simulations (Tremmel et al., 2017, 2019) are a suite of cosmological hydrodynamic simulations that includes a 25 Mpc-per-side uniform volume simulation (Romulus25) and a zoom-in simulation of a \(10^{14}\) M\({}_{\odot}\) galaxy cluster (RomulusC). With a dark matter mass resolution of \(3.4\times 10^{5}\) M\({}_{\odot}\), both Romulus simulations are able to resolve halos as small as \(\sim 3\times 10^{9}\) M\({}_{\odot}\) with more than 10,000 particles. With typical gas and star particle masses of \(2.1\times 10^{5}\) and \(6\times 10^{4}\) M\({}_{\odot}\) respectively, and a spline softening of 350 pc (equivalent to 250 pc plummer softening), the two simulations are also able to resolve the baryonic structure of dwarf galaxies as small as \(\sim 10^{7}\) M\({}_{\odot}\) in stellar mass with hundreds of resolution elements. Both simulations are run with the same cosmology (\(\Omega_{0}=0.3086,\Lambda=0.6914,h=0.6777,\sigma_{8}=0.8288\); Planck Collaboration et al., 2016) and physics.
The Romulus simulations were run using the N-body+Smoothed particle hydrodynamics code, ChaNGa (Menon et al., 2015), which incorporates standard physics modules previously used in GASOLINE (Wadsley et al., 2004, 2008, 2017). This includes a cosmic UV background (Haardt & Madau, 2012) with self-shielding (Pontzen et al., 2008), star formation with 'blastwave' supernovae feedback (Stinson et al., 2006), low temperature metal cooling (Guedes et al., 2011), and thermal and metal diffusion (Shen et al., 2010; Governato et al., 2015). It also includes recent improvements, such as an SPH force calculation that uses the geometric mean density (Ritchie & Thomas, 2001; Menon et al., 2015; Governato et al., 2015), an updated turbulent diffusion implementation (Wadsley et al., 2017), and a time-dependent artificial viscosity and time-step adjustment system (Saitoh & Makino, 2009; Wadsley et al., 2017).
### Star Formation, Supernovae Feedback and Gas Cooling
Star formation occurs within dense (n\(>0.2\) cm\({}^{-3}\)), cold (T\(<10^{4}\) K) gas. Each gas particle that meets these criteria is allowed to form star particles on a characteristic timescale of \(10^{6}\) years with the following probability:
\[p=\frac{m_{gas}}{m_{star}}\left(1-e^{-c_{\star}(10^{6}/t_{dyn})}\right), \tag{1}\]
where \(m_{\rm star}=0.3m_{\rm gas}\), \(c_{\star}=0.15\), and \(t_{\rm dyn}\) is the dynamical time of the gas particle. Energy from supernovae couples thermally to nearby gas with an efficiency of 75%. Supernova feedback uses the 'blastwave' implementation (Stinson et al., 2006), where gas cooling is shutoff for a period of time to avoid numerical overcooling. This implementation of feedback and star formation produces dwarf galaxies that lie on the observed stellar mass-halo mass relations (Tremmel et al., 2017) and includes a realistic population of 'ultra-diffuse' galaxies (Tremmel et al., 2020; Wright et al., 2021; Van Nest et al., 2022).
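As a schematic illustration (not the actual ChaNGa implementation), Eq. (1) can be applied stochastically to each eligible gas particle as in the Python sketch below; the particle mass and dynamical time are placeholder values.

```python
import numpy as np

C_STAR  = 0.15     # star formation efficiency c_star
DT_FORM = 1.0e6    # characteristic star formation timescale [yr]

def maybe_form_star(m_gas, t_dyn, rng):
    """Decide whether an eligible gas particle spawns a star particle, Eq. (1)."""
    m_star = 0.3 * m_gas                                             # new star particle mass
    p = (m_gas / m_star) * (1.0 - np.exp(-C_STAR * DT_FORM / t_dyn))
    return rng.random() < p, m_star

# Example: a cold, dense 2.1e5 Msun gas particle with a 5 Myr dynamical time.
rng = np.random.default_rng(0)
formed, m_star = maybe_form_star(m_gas=2.1e5, t_dyn=5.0e6, rng=rng)
print(formed, m_star)
```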
An important limitation of Romulus is the lack of high temperature metal-line cooling. This can affect the accretion history of gas onto massive galaxies (van de Voort et al., 2011). This choice, discussed in more detail in Tremmel et al. (2019), is motivated by a variety of previous simulations showing that the inclusion of metal-line cooling in simulations without adding molecular hydrogen physics and more detailed star formation prescriptions results in unrealistic dwarf galaxies (Christensen et al., 2014). Despite being among the highest resolution simulations of its class, even Romulus is unable to resolve the multiphase ISM so it is not possible to include these more detailed physical processes while maintaining realistic low mass galaxies. It has been shown that RomulusC, our most massive halo using these input physics, maintains a realistic intracluster medium (Tremmel et al., 2019). Because this paper is focused on dwarf galaxies and on the formation of MBHs at high redshift from very metal-poor gas (see below), this choice does not impact our results.
### Black Hole Accretion and Feedback
Massive black holes are allowed to grow by accreting nearby gas via a modified Bondi-Hoyle formalism (Bondi & Hoyle, 1944) that accounts for angular momentum support and includes a density-dependent boost factor (Booth & Schaye, 2009) that is meant to account for unresolved, multiphase gas:
\[\dot{M}_{\bullet}=\left(\frac{n}{n_{*}}\right)^{\beta}\begin{cases}\frac{(GM_{\bullet})^{2}\rho}{(v_{\rm bulk}^{2}+c_{s}^{2})^{3/2}}&\text{if $v_{\rm bulk}>v_{\theta}$}\\ \frac{(GM_{\bullet})^{2}\rho c_{s}}{(v_{\theta}^{2}+c_{s}^{2})^{2}}&\text{if $v_{\rm bulk}<v_{\theta}$},\end{cases} \tag{2}\]
where \(G\) is the gravitational constant; \(n_{*}\) is the star formation threshold (0.2 cm\({}^{-3}\); see previous section); \(v_{\rm bulk}\) is the bulk velocity of the gas; \(v_{\theta}\) is its rotational velocity; \(c_{s}\) is its sound speed; \(\rho\) is its mass density; and \(\beta\) is set to 2. The radiative efficiency (\(\epsilon_{r}\)) of accreting MBHs is assumed to be 0.1 and all accretion is capped at the Eddington rate.
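A minimal Python sketch of Eq. (2), together with the Eddington cap described above, is given below. The normalization follows Eq. (2) as written, and the black hole mass and gas properties are hypothetical example values in cgs units.

```python
import numpy as np

# Constants in cgs units.
G, M_P   = 6.674e-8, 1.6726e-24       # gravitational constant, proton mass
SIGMA_T  = 6.6524e-25                 # Thomson cross section [cm^2]
C_LIGHT  = 2.9979e10                  # speed of light [cm/s]
MSUN, YR = 1.989e33, 3.156e7          # solar mass [g], one year [s]

N_STAR = 0.2    # star formation threshold density n_* [cm^-3]
BETA   = 2.0    # exponent of the density-dependent boost factor
EPS_R  = 0.1    # radiative efficiency

def mbh_accretion_rate(m_bh, n, rho, c_s, v_bulk, v_theta):
    """Boosted Bondi-Hoyle rate of Eq. (2), capped at the Eddington rate."""
    boost = (n / N_STAR) ** BETA
    if v_bulk > v_theta:
        mdot = boost * (G * m_bh) ** 2 * rho / (v_bulk**2 + c_s**2) ** 1.5
    else:
        mdot = boost * (G * m_bh) ** 2 * rho * c_s / (v_theta**2 + c_s**2) ** 2
    mdot_edd = 4.0 * np.pi * G * m_bh * M_P / (EPS_R * SIGMA_T * C_LIGHT)
    return min(mdot, mdot_edd)

# Example: a 1e6 Msun MBH in gas with n = 10 cm^-3, c_s = 10 km/s,
# a 20 km/s bulk flow, and a 10 km/s rotational velocity.
mdot = mbh_accretion_rate(m_bh=1e6 * MSUN, n=10.0, rho=10.0 * M_P,
                          c_s=1.0e6, v_bulk=2.0e6, v_theta=1.0e6)
print(f"{mdot * YR / MSUN:.2e} Msun/yr (Eddington-limited in this example)")
```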
Growing MBHs produce feedback at a rate
\[\dot{E}_{BH}=\epsilon_{r}\epsilon_{f}\dot{M}_{\bullet}c^{2}, \tag{3}\]
where \(\epsilon_{f}\) is set to 0.02 and \(c\) is the speed of light. This energy is distributed instantaneously as thermal energy to the surrounding 32 nearest gas particles. Feedback from MBHs has been shown to successfully regulate the star formation in simulated massive galaxies in Romulus(Tremmel et al., 2017, 2019; Chadayamuri et al., 2021). However, too many massive galaxies remain star forming and disk-dominated in Romulus at \(z=0\)(Jung et al., 2022), indicating that stronger modes of feedback are still needed for higher mass galaxies. On the other hand, at the lower mass end, feedback from MBHs has also been shown to influence the evolution of dwarf galaxies in Romulus(Sharma et al., 2020, 2022a, 20) and has been shown to over-quench star formation at low masses, indicating that feedback may be too efficient at low masses. Because this paper focuses on the formation of black holes and less on the detailed evolution of their host galaxies, this should not affect our results. The majority of low mass galaxies in RomulusC are quenched by the cluster environment and not by internal processes (Tremmel et al., 2019).
### Black Hole Dynamics
Unlike most cosmological simulations that include MBHs, Romulus does not force black holes to the centers of galaxies. Rather, they are allowed to move realistically within their host galaxies. This is done through the implementation of a sub-grid routine to account for unresolved dynamical friction (Tremmel et al., 2015). This estimates and applies the necessary force each MBH would feel due to dynamical friction by integrating the Chandrasekhar formula (Chandrasekhar, 1943) out to the gravitational softening length, \(\epsilon_{g}\), during each black hole timestep.
\[\mathbf{a}_{\rm DF}=-4\pi G^{2}M_{\bullet}\rho(<v_{\bullet})\ln\Lambda\frac{\mathbf{v}_{\bullet}}{v_{\bullet}^{3}}, \tag{4}\]
Here \(v_{\bullet}\) is the velocity of the black hole relative to nearby star and dark matter particles, \(\rho(<v_{\bullet})\) is the local density of star and dark matter particles that are moving slower than the black hole relative to the local background, and \(\ln\Lambda\) is the Coulomb logarithm, equal to \(\ln(\epsilon_{g}/r_{90})\). Here \(r_{90}\) is the 90\({}^{\circ}\) deflection radius based on the black hole's mass and local relative velocity. As shown in Tremmel et al. (2015), this method is able to produce realistically decaying orbits. The result of this model is that MBHs take non-negligible time to reach the halo center after galaxy mergers (Tremmel et al., 2018), and sometimes their orbits fail to decay and they end up as 'wandering', off-center MBHs (Tremmel et al., 2018; Bellovary et al., 2021; Ricarte et al., 2021, 2021, 2021). Two MBHs are
allowed to merge when they are within 700 pc (\(2\times\epsilon_{\rm g}\)) and mutually bound to one another.
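The per-timestep acceleration of Eq. (4) can be sketched in Python as follows. The background density, velocities, and softening are hypothetical example values, and the 90\({}^{\circ}\) deflection radius is estimated here as \(GM_{\bullet}/v_{\bullet}^{2}\) purely for illustration; the actual implementation evaluates the local quantities from the particles within the softening length, as described above.

```python
import numpy as np

G = 6.674e-8                          # gravitational constant [cgs]
MSUN, PC, KMS = 1.989e33, 3.086e18, 1.0e5

def df_acceleration(m_bh, v_bh, rho_slower, eps_g, r90):
    """Sub-grid dynamical-friction acceleration on a MBH, following Eq. (4).

    v_bh is the MBH velocity relative to the local background [cm/s];
    rho_slower is the local density of stars and dark matter moving
    slower than the MBH [g/cm^3]."""
    v_mag = np.linalg.norm(v_bh)
    ln_lambda = np.log(eps_g / r90)   # Coulomb logarithm ln(eps_g / r90)
    return -4.0 * np.pi * G**2 * m_bh * rho_slower * ln_lambda * v_bh / v_mag**3

# Example: a 1e6 Msun MBH moving at 50 km/s through a sparse stellar/DM
# background; r90 is approximated as G M / v^2 (an illustrative assumption).
m_bh, v = 1e6 * MSUN, np.array([50.0 * KMS, 0.0, 0.0])
a_df = df_acceleration(m_bh, v,
                       rho_slower=0.5 * MSUN / PC**3,
                       eps_g=350.0 * PC,
                       r90=G * m_bh / np.dot(v, v))
print(a_df, "cm/s^2")
```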
The technique used here to model dynamical friction is similar to that employed in a growing number of other simulations, including Magneticum (Hirschmann et al., 2014), Obelisk (Trebitsch et al., 2021; Pfister et al., 2019) and Astrid (Chen et al., 2022; Ni et al., 2022). The high resolution of Romulus allows for the dynamical evolution of MBHs to be accurately tracked down to sub-kpc scales with dynamical friction applied only locally, sampling densities of stars and dark matter near each MBH (distances \(<350\) pc). Other simulations like Horizon-AGN (Dubois et al., 2016) have models that account only for dynamical friction from gas (Ostriker, 1999) while Romulus only accounts for dynamical friction from stars and dark matter (i.e. collision-less particles). While it is possible that gas may dominate the density of a galaxy at times, it is difficult to fully account for the effects of torques due to unresolved structure and turbulence within the ISM which can be at least as important (Roskar et al., 2015; Bordolas et al., 2022; Lescaudron et al., 2022).
### Black Hole Seeding
A critical aspect of the Romulus simulations for this work is the prescription for MBH seeding. As in most modern, large-scale cosmological simulations, the actual formation sites of MBH seeds are unresolved in Romulus. To approximate the regions most likely to create an early, massive black hole, we form MBH seeds in gas that would otherwise be forming stars (see above) but with the following additional criteria:
* Metallicity less than \(3\times 10^{-4}\) Z\({}_{\odot}\).
* Particle density is greater than \(3\ cm^{-3}=15n_{\star}\).
* Temperature is between 9500 K and 10000 K.
This prescription will naturally pick out regions in the very early Universe that have yet to be polluted with metals from star formation, yet are still able to collapse to high densities (15 times greater than the threshold for star formation) on timescales shorter than that for star formation (i.e. the gas is able to reach densities well beyond the star formation threshold before forming stars) and without cooling much below \(10^{4}\) K (densities are growing faster than the gas can effectively cool). While these criteria are most directly reminiscent of the direct collapse formation scenario (gas collapsing into a single massive object; Lodato & Natarajan, 2006, 2007; Regan et al., 2017; Wise et al., 2019; Regan et al., 2020), or the collapse of a dense star cluster (runaway stellar collisions produce a single massive object; Devecchi & Volonteri, 2009; Davies et al., 2011), they are also consistent with regions where a low mass black hole seed (e.g. from a population III star) may be able to grow rapidly in a very short amount of time thanks to the presence of of dense, rapidly collapsing gas (e.g. Volonteri & Rees, 2005; Alexander & Natarajan, 2014; Inayoshi et al., 2016; Sassano et al., 2023).
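The seeding criteria listed above amount to a simple predicate applied to gas particles that already satisfy the star formation conditions. The Python sketch below is illustrative only; the solar metallicity normalization and the example particle values are assumptions, and this is not the actual ChaNGa routine.

```python
Z_SUN = 0.0134    # assumed solar metal mass fraction used for normalization

def eligible_for_mbh_seed(metallicity, density, temperature):
    """True if a star-forming gas particle also meets the MBH seeding criteria."""
    return (metallicity < 3.0e-4 * Z_SUN         # essentially pristine gas
            and density > 3.0                    # [cm^-3], 15x the SF threshold
            and 9500.0 < temperature < 10000.0)  # [K]

# Example: a dense, nearly metal-free particle at roughly 1e4 K.
print(eligible_for_mbh_seed(metallicity=1.0e-6, density=5.0, temperature=9800.0))
```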
Importantly, these adopted criteria result in an epoch of MBH seeding that is over 90% complete by \(z=5\) and makes no assumptions as to which halos should (or should not) host a MBH (Tremmel et al., 2017). This is in stark contrast to the methods more typically utilized in cosmological simulations, where MBHs are seeded based on a halo mass threshold (typically a few \(\times 10^{10}\) M\({}_{\odot}\)), which preferentially prevent MBHs from populating low mass galaxies early on, and will result in a prolonged period of MBH seeding. By predicting the existence of MBHs based only on local gas properties, Romulus can therefore _predict_ the occupation of MBHs in galaxies in a way that even simulations of similar resolution often cannot. Some cosmological simulations have started to employ a similar MBH formation prescription. For example, the New Horizon simulations have shown the importance of both resolving low mass galaxies and allowing them to be seeded with MBHs in studying MBH mergers and potential gravitational wave sources (Volonteri et al., 2020). However, it is important to note that this method would not capture formation processes that occur at later times in more metal-enriched gas (e.g. Regan et al., 2020; Natarajan, 2021; Mayer et al., 2023).
In Figure 1 we compare the distribution of black hole formation times in the field ( Romulus25) with that of the galaxy cluster (RomulusC), confirming that they are virtually indistinguishable in our simulations. This means that the initial seeding of MBHs at high redshift, which only depends on local gas properties, is not impacted directly by the environment. This will be an important fact to keep in mind as we delve deeper into the differences inferred in the MBH population between the two environments.
### Halo Selection and Galaxy Properties
Halo finding is performed with the Amiga Halo Finder (Knollmann & Knebe, 2009), which uses a spherical top-hat collapse technique to define the virial radius (R\({}_{vir}\)) and mass (M\({}_{vir}\)) of each halo and sub-halo. It also assigns all of the baryonic content belonging to each dark matter halo. In our analysis we use R\({}_{200}\), which is the radius enclosing a mean density 200 times the critical density of the Universe at a given redshift. We only include halos with M\({}_{vir}>3\times 10^{9}\) M\({}_{\odot}\), such that each halo included in our analysis has at least \(\sim 10,000\) particles. An exception is made for our analysis of RomulusC, where we include halos down to \(3\times 10^{8}\) M\({}_{\odot}\) to account for the fact that many halos have experienced tidal stripping of the outer regions of their dark matter halos in the dense cluster environment. As discussed in Tremmel et al. (2020), while the total halo mass is affected by the cluster environment, dwarf galaxies often keep most of their stellar mass intact. This choice to lower the halo mass threshold is to avoid discounting galaxies that would otherwise have been considered well-resolved prior to falling into the cluster. We have confirmed that our results are not sensitive to this choice.

Figure 1.— Distribution of Black Hole Formation Times. The distribution of formation times for MBHs in Romulus25 and RomulusC. Despite sampling different environments, both simulations seed MBHs at very similar times.
The position of each galaxy/halo is calculated using the shrinking spheres approach (Power et al., 2003), which reliably extracts the centers of the central galaxies within each halo. As RomulusC is a zoom-in simulation, it is surrounded by a region of low resolution dark matter. Galaxies on the outskirts may be contaminated by high mass (low resolution) dark matter particles. We avoid including such galaxies in our analysis by requiring each galaxy to have less than 5% of its dark matter particles be contaminated by low resolution elements. This cut removes only 3% of galaxies within 2 Mpc (\(\sim 2\)R\({}_{200}\)) of the cluster center, the vast majority of which are beyond the virial radius.
When quoting the stellar mass of each galaxy, we estimate what the observed stellar mass would be from typical techniques. Munshi et al. (2013) found that typical observational techniques result in a systematic underestimate of galaxy stellar mass. Based on those results, we apply a correction factor of 0.6 to the 'raw' stellar masses (the summed mass of all star particles associated with a given halo), a conservative estimate given the results of Munshi et al. (2013). Leja et al. (2019) also find a similar discrepancy when comparing advanced spectroscopic techniques to estimating stellar masses with common photometric techniques. This choice affects only the stellar masses we present, but since all galaxies are treated equally in this analysis this has no effect on the substance of our results comparing different simulation regions.
Throughout this work we also classify some galaxies as 'isolated'. We base our definition of isolated on the results of Geha et al. (2012) that show the onset of environmental effects for dwarf galaxies (M\({}_{\star}<10^{10}\) M\({}_{\odot}\)) occur when they are approximately 1.5 Mpc away from a massive galaxy. The same environmental effects are seen on the quenched fraction of galaxies in Romulus(Sharma et al., 2022). Isolated galaxies in Romulus25 are therefore defined so that they are not within R\({}_{200}\) of any more massive halo. Additionally, for low mass galaxies with M\({}_{\star}<10^{10}\) M\({}_{\odot}\), they must be further than 1.5 Mpc from any galaxy with stellar mass greater than \(2.5\times 10^{10}\) M\({}_{\odot}\).
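The isolation criteria can be summarised in a short routine. The sketch below assumes simple per-galaxy arrays of positions, stellar masses, virial masses, and R\({}_{200}\) values; the array layout is an assumption and the brute-force pair loop is only meant to illustrate the logic.

```python
import numpy as np

def isolated_mask(pos_mpc, mstar, mvir, r200_mpc):
    """Flag 'isolated' galaxies following the criteria described above.

    pos_mpc  : (N, 3) galaxy positions [Mpc]
    mstar    : (N,)   stellar masses [Msun]
    mvir     : (N,)   virial masses [Msun]
    r200_mpc : (N,)   R200 of each halo [Mpc]
    """
    n = len(mstar)
    iso = np.ones(n, dtype=bool)
    for i in range(n):
        d = np.linalg.norm(pos_mpc - pos_mpc[i], axis=1)
        d[i] = np.inf
        # not within R200 of any more massive halo
        if np.any((mvir > mvir[i]) & (d < r200_mpc)):
            iso[i] = False
            continue
        # dwarfs must additionally be > 1.5 Mpc from any massive galaxy
        if mstar[i] < 1e10 and np.any((mstar > 2.5e10) & (d < 1.5)):
            iso[i] = False
    return iso
```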
## 3. The Predicted Occupation Fraction of MBHs in Romulus
The top panel of Figure 2 shows the fraction of galaxies hosting at least one MBH in both the galaxy cluster (RomulusC) and in the field (Romulus25). For Romulus25, only galaxies within halos of mass \(>3\times 10^{9}\) M\({}_{\odot}\) are included. For RomulusC, only galaxies in halos of mass \(>3\times 10^{8}\) M\({}_{\odot}\) and within \(2R_{200}\) of cluster center are included (see previous section for more justification of this choice, though it does not affect our overall results). The results are compared with multiple observationally derived estimates for the underlying occupation fraction (Greene, 2012; Miller et al., 2015) and the observed occupation fraction of nuclear star clusters (NSCs) in dwarf galaxies within the Virgo and Fornax clusters (Sanchez-Janssen et al., 2019; Munoz et al., 2015), as well as the Local Group (Hoyer et al., 2021). Note that observational estimates for the underlying MBH occupation fraction are highly uncertain and rely on inference from active galactic nuclei. In the context of this work, these are useful benchmarks to compare our results while more detailed comparisons are not very useful. The error bars in occupation fraction in this and all following figures represent 95% binomial confidence intervals (Cameron, 2011).
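The occupation fractions and their error bars can be computed as follows; this is a minimal sketch of the beta-distribution confidence intervals of Cameron (2011), assuming boolean host flags and stellar masses stored as plain arrays.

```python
import numpy as np
from scipy.stats import beta

def occupation_fraction(host_flags, mstar, bins, c=0.95):
    """Occupation fraction per stellar-mass bin with binomial confidence intervals.

    host_flags : (N,) bool, True if the galaxy hosts at least one MBH
    mstar      : (N,) stellar masses [Msun]
    bins       : stellar-mass bin edges [Msun]

    Confidence intervals follow the beta-distribution prescription of Cameron (2011).
    """
    frac, lo, hi = [], [], []
    for mlo, mhi in zip(bins[:-1], bins[1:]):
        sel = (mstar >= mlo) & (mstar < mhi)
        n, k = sel.sum(), host_flags[sel].sum()
        if n == 0:
            frac.append(np.nan); lo.append(np.nan); hi.append(np.nan)
            continue
        frac.append(k / n)
        lo.append(beta.ppf((1 - c) / 2, k + 1, n - k + 1))
        hi.append(beta.ppf(1 - (1 - c) / 2, k + 1, n - k + 1))
    return np.array(frac), np.array(lo), np.array(hi)
```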
The occupation fractions in the field derived from Romulus25 are consistent with estimates derived from observations and match quite well with observed NSC occupation fractions in the local volume (Hoyer et al., 2021). We find a significantly elevated occupation fraction among cluster dwarf galaxies compared to field dwarfs with M\({}_{\star}<10^{9}\) M\({}_{\odot}\). Interestingly, the occupation fraction in these low mass cluster galaxies matches remarkably well with the occupation fraction of nuclear star clusters in Fornax and Virgo, which also show enhanced occupation compared to lower density environments (Sanchez-Janssen et al., 2019; Munoz et al., 2015).
The bottom panel of Figure 2 shows the occupation fraction of central (within 1 kpc of the halo center) black holes that would be observed as active galactic nuclei at \(z=0.05\) as a function of stellar mass. We limit the spatial offset to mimic the fact that observers often search for X-ray sources associated with galactic centers specifically. We find that cluster dwarf galaxies are less likely to host X-ray luminous AGN compared to the field. As discussed in Tremmel et al. (2019) this is because the majority of cluster dwarf galaxies have had their gas supply removed due to ram pressure stripping, resulting in very low MBH growth. We note that it is possible that ram pressure can also cause gas to compress to the center and, in some cases, momentarily activate black hole growth (Poggianti et al., 2017). We see this effect in RomulusC (Ricarte et al., 2020) but this process is transient and only common among the more massive in-falling galaxy population. Low mass galaxies quickly lose their gas and both star formation and black hole accretion are shut off.
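The "observable AGN" selection used for the bottom panel of Figure 2 amounts to a simple cut. A sketch is given below, where the 1 kpc offset limit and the \(2\times 10^{38}\) erg s\({}^{-1}\) luminosity threshold are the values quoted in the text and figure caption, and the array layout is assumed.

```python
import numpy as np

def luminous_central_agn(offset_kpc, l_x_erg_s,
                         max_offset_kpc=1.0, l_x_min=2e38):
    """Mask of MBHs that would be counted as central, luminous AGN.

    offset_kpc : distance of each MBH from its host galaxy centre [kpc]
    l_x_erg_s  : estimated 0.5-10 keV X-ray luminosity of each MBH [erg/s]
    """
    return (offset_kpc < max_offset_kpc) & (l_x_erg_s > l_x_min)
```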
To estimate the X-ray luminosity of growing MBHs in the simulation, we first estimated the bolometric luminosity. To do this we followed previous work (Churazov et al., 2005; Habouzit et al., 2022; Sharma et al., 2022) and implemented a two-mode model that accounts for radiatively inefficient accretion flows during low Eddington ratio (\(f_{\rm Edd}\)) accretion:
\[L_{\rm bol}=\begin{cases}\epsilon_{r}\dot{M}_{\rm BH}c^{2},&f_{\rm Edd}\geq 0.1 \\ 10f_{\rm Edd}\epsilon_{r}\dot{M}_{\rm BH}c^{2},&f_{\rm Edd}<0.1.\end{cases} \tag{5}\]
In the above equation, \(\epsilon_{r}=0.1\), as in the simulation. This value is also used to estimate the Eddington ratio (\(f_{Edd}=\epsilon_{r}\dot{M}_{\rm BH}c^{2}/L_{\rm Edd}\)). We then apply the bolometric correction from Shen et al. (2020) to estimate the 0.5-10 keV X-ray luminosity of each MBH. The rationale and application of this approach is discussed further in Sharma et al. (2022) where it is shown that this calculation, as opposed to assuming a constant radiative efficiency, brings the simulated dwarf AGN fraction in Romulus much closer to observed values. Note that here we are not making any additional cuts based on the estimated X-ray luminosity of stars and gas in the galaxy, as is done in Sharma et al. (2022). Including this would likely bring down the luminous MBH fraction in the field and have less of an effect for the cluster galaxies, bringing the two potentially closer together. Regardless, our results would remain the same in that the enhanced occupation fraction of MBHs in the cluster is not expressed in the fraction of _luminous_ MBHs that are likely to be detectable with observations.
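For reference, Eq. (5) and the Eddington ratio can be evaluated with a few lines of numpy. The Eddington luminosity normalization of \(1.26\times 10^{38}\) erg s\({}^{-1}\) per solar mass is the standard value and is an assumption here, since it is not quoted explicitly above; the Shen et al. (2020) bolometric correction is omitted for brevity.

```python
import numpy as np

C_CGS = 2.998e10        # speed of light [cm/s]
EPS_R = 0.1             # radiative efficiency used in the simulation
L_EDD_COEFF = 1.26e38   # Eddington luminosity per solar mass [erg/s/Msun] (standard value, assumed)

def bolometric_luminosity(mdot_gs, mbh_msun):
    """Two-mode bolometric luminosity of Eq. (5).

    mdot_gs  : accretion rate [g/s]
    mbh_msun : black hole mass [Msun]
    """
    l_acc = EPS_R * mdot_gs * C_CGS**2   # erg/s, radiatively efficient case
    l_edd = L_EDD_COEFF * mbh_msun       # erg/s
    f_edd = l_acc / l_edd
    # suppress the luminosity for radiatively inefficient (low f_Edd) accretion
    return np.where(f_edd >= 0.1, l_acc, 10.0 * f_edd * l_acc)
```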
Figure 2.— An Excess of Mostly Quiescent Massive Black Holes in Cluster Dwarf Galaxies. _Top_: The fraction of galaxies hosting at least one MBH as a function of stellar mass for galaxies in Romulus25 (blue) and RomulusC (orange). Error bars represent 95% binomial confidence intervals (Cameron, 2011). The grey region and open diamonds are both estimates derived from observations in Miller et al. (2015) and Greene (2012) respectively. The three hatched regions are occupation fractions of nuclear star clusters (NSCs) observed in dwarf galaxies in Virgo (green; Sánchez-Janssen et al., 2019), Fornax (magenta; Muñoz et al., 2015), and the local volume (cyan; Hoyer et al., 2021). For the local volume data, the hatched region represents 95% binomial confidence intervals, which were calculated using the total number of observed galaxies in each bin. While the field population in Romulus25 has occupation fractions roughly consistent with observationally derived estimates, RomulusC has an enhanced occupation fraction at low masses (\(\rm M_{\star}<10^{9}M_{\odot}\)). The occupation fraction in the RomulusC simulation is remarkably consistent with that in the observed NSCs within galaxy clusters, while the occupation fraction in field galaxies (Romulus25) is very consistent with local volume NSCs. _Bottom_: The fraction of galaxies hosting a central (\(D<1\) kpc), luminous (\(\rm L_{X}>2\times 10^{38}\) erg s\({}^{-1}\)) MBH as a function of stellar mass. Also shown are two observational data sets examining AGN in cluster and early-type galaxies with a similarly low luminosity threshold (Gallo et al., 2010; Miller et al., 2015). The cluster dwarf population (orange), which consists preferentially of quenched galaxies, matches well with the observations and is noticeably lower than the occupation of luminous MBHs in field dwarfs (blue). Despite having a higher underlying MBH occupation fraction, cluster dwarfs are less likely to host low luminosity AGN compared to the field by \(z=0\).
Figure 4.— The Evolution of the MBH Occupation Fraction Stalls in the Cluster Environment. Similar to Figure 3, here we show the occupation fraction for galaxies as a function of stellar mass at six different redshifts for cluster galaxies (left) and field galaxies (right). While the occupation fraction in field galaxies evolves steadily through time, the evolution is slower in the cluster environment and stalls at \(z=2-3\).
Figure 3.— Comparing MBH Occupation Fraction Across Cosmic Time. The occupation fraction of MBHs as a function of galaxy stellar mass at different redshifts. Blue points show the results for galaxies in the field (Romulus25) and the orange points show galaxies within 2 Mpc of the (proto-)cluster center at each redshift. At \(z=5\), when the vast majority (\(>95\%\)) of MBHs have formed in both simulations, the occupation fractions look very similar. The differences in the two distributions become more dramatic with cosmic time, particularly after \(z\sim 3\).
MBHs in the Romulus simulations are allowed to be off-center and indeed often are (Tremmel et al., 2018; Tremmel et al., 2018; Ricarte et al., 2021, 2021). Such off-center black holes may even be preferentially more common in dwarf galaxies, which host lower mass black holes that experience more inefficient dynamical friction (Bellovary et al., 2021). Observations searching for MBHs in low mass galaxies often focus on their centers and this would artificially lower the occupation fraction. However, we confirm for both sides of Figure 2 that the choice to include/exclude off-center (\(D>1\) kpc) MBHs makes very little difference in our predicted occupation fractions. Placing more strict criteria for hosting a MBH will decrease the overall fraction of galaxies but the effect is minor and of similar magnitude across environments. For luminous MBHs, it is difficult for non-central MBHs to accrete enough material to become luminous so the inclusion of off-center MBHs also has little effect here. Importantly, the decision to include/exclude off-center MBHs does not affect our main prediction: an enhanced occupation fraction of MBHs in low mass cluster galaxies.
### Evolution with Redshift
In Figure 3 we show the MBH occupation fraction as a function of stellar mass at six snapshots at different redshifts. We only examine this out to \(z=5\) because before this time many MBHs are still actively being seeded while, by \(z=5\), the vast majority (\(>90\%\)) of MBHs have formed (see Figure 1). From Figure 3 we can see that the occupation fraction in the different environments begins to look very similar beyond \(z\sim 3\). At \(z=5\) the occupation of MBHs as a function of stellar mass is nearly identical between field and cluster environments. Therefore, both the formation times (Figure 1) and host halos at high redshift (Figure 3; lower right panel) are similar between RomulusC and Romulus25, indicating that the presence and location of dense, metal-free gas is very similar between environments at early epochs.
In Figure 4 we compare the occupation fraction within each environment at different redshifts. In the field (right), the occupation fraction evolves steadily with redshift, declining over time especially at lower masses. For the cluster (left) the occupation fraction ceases to evolve significantly past \(z\sim 3\). It is this difference in evolution that results in the difference seen at \(z=0.05\). The occupation fraction in the cluster at \(z=0.05\) is more similar to the field at \(z=3\), when the evolution stopped for cluster galaxies. In contrast, the decline in MBH occupation fraction continues for field dwarfs throughout cosmic time. As we will discuss further in Section 4, this decline seen in the field is driven by late-forming dwarf galaxies which do not exist in the cluster environment.
### Dependence on Cluster-centric Distance and Halo Mass
We can explore in more detail the extent to which the occupation fraction is dependent on environment. In the left-hand plot in Figure 5 we find evidence that the occupation fraction evolves with cluster-centric distance. As one looks more toward the cluster outskirts (0.75 < D < 2 R\({}_{200}\)) the MBH occupation fraction in galaxies does decline, although it remains systematically higher than that in isolated galaxies at stellar masses below \(10^{9}\) M\({}_{\odot}\). This implies that while we might expect a gradual evolution from isolated to cluster galaxies, the influence of the environment persists even in the outskirts of clusters.
We can use Romulus25 to see if there are enhancements in the occupation fraction in less dense environments, such as low mass groups. The right-hand panel of Figure 5 shows the occupation fraction of isolated galaxies in blue, cluster galaxies in orange, and galaxies within 2R\({}_{200}\) of a more massive halo with virial mass between \(10^{12}\) - \(10^{13.3}\) M\({}_{\odot}\) in red (i.e. the most massive halos in the 25 Mpc volume). We find no evidence that dwarf galaxies near these larger galaxies have any systematic enhancements to their MBH occupation. We compare again to the observed nuclear star cluster occupation in dwarfs from Hoyer et al. (2021), as well as Carlsten et al. (2022), and find that the MBH occupation in dwarf galaxies associated with lower mass halos matches well with observed NSCs in the local volume.

Figure 5.— The Occupation Fraction As A Function of Environment. _Left_: The occupation fraction of MBHs in galaxies in RomulusC at different cluster-centric distances. The orange points represent galaxies within 0.75 R\({}_{200}\) of cluster center while the red points are for galaxies in the cluster outskirts (0.75 < D/R\({}_{200}\) < 2). The blue points are for isolated galaxies in Romulus25. Although a subtle effect, there is evidence that galaxies in cluster outskirts do host fewer MBHs, though the low mass galaxies below \(10^{9}\) M\({}_{\odot}\) are still enhanced compared to isolated galaxies. _Right_: The occupation fraction of galaxies in Romulus25 in different environments. The red points show the occupation fraction of galaxies that are within 2R\({}_{200}\) of a halo of mass between \(10^{12}\) and \(10^{13.3}\) M\({}_{\odot}\). There is little difference between the occupation fraction at these intermediate masses and that for isolated dwarf galaxies (blue points). We also show observational results for local volume dwarfs from Carlsten et al. (2022) and Hoyer et al. (2021) as hatched regions.
We confirm that these results are not sensitive to the specific host halo mass range used, though we are limited by the small volume of Romulus25. These results imply that the enhancement in occupation fraction we see in the Romulus simulations is isolated to very massive halos (massive groups and above; M\({}_{vir}>10^{13.3}\) M\({}_{\odot}\)). Lower mass halos assemble their mass earlier, meaning that low mass galaxies in-fall at higher redshift when the halo has a shorter characteristic dynamical time. These halos are more likely to become disrupted before \(z=0\) and contribute to the population of wandering black holes in massive halos (Tremmel et al., 2018; Tremmel et al., 2018; Ricarte et al., 2021, 2021). It may also be that these lower density environments have a MBH occupation fraction enhancement only at galaxy masses that are currently unresolved in Romulus.
## 4. Exploring the Origin of the Environmental Dependence of Dwarf Galaxy MBH Occupation
The evolution of galaxies in dense environments like galaxy clusters is different compared with galaxies in the field. For cluster member galaxies, eventually, their ability to accrete new mass or form new stars will be shut off by the cluster environment (Mistani et al., 2016). However, even before galaxies become bound to the cluster, or in-fall to within R\({}_{200}\), they are growing within an over-dense region. This means that the galaxies that exist within a cluster environment at a given stellar (or halo) mass will have very different evolutionary histories compared with similar mass galaxies in the field (Gao et al., 2005; Gao & White, 2007; Croton et al., 2007; Boylan-Kolchin et al., 2009). Cluster dwarf galaxies are likely to have earlier formation times because they must assemble their material before the cluster environment shuts down their growth. It may also be true that cluster dwarfs would have grown to much larger masses if their mass assembly was allowed to continue unabated by their environment (Mistani et al., 2016). Finally, cluster galaxies may also have their mass removed through tidal stripping as they interact with the cluster potential. In some cases, this stripping can unbind the majority of their stars, leaving behind only the dense, central stellar component (such as a NSC), which is then detected as an ultra-compact dwarf (Drinkwater et al., 2003; Bekki et al., 2003; Seth et al., 2014; Voggel et al., 2016). In the following sections we evaluate the ability of each of these evolutionary scenarios in explaining the unique MBH occupation fraction in the simulated cluster (RomulusC) environment compared to the field (Romulus25).
### Cluster Dwarfs as Stripped Remnants of More Massive Galaxies
The enhanced dwarf galaxy MBH occupation fraction in RomulusC could be explained if low mass host galaxies in the cluster were once much more massive. If a more massive galaxy were to become tidally stripped, it could lose significant stellar mass while retaining its central MBH. If this is true for a large number of low mass galaxies in RomulusC, then it would make sense that their MBH occupation fractions would resemble that of more massive galaxies (i.e. their original mass prior to tidal stripping). However, as discussed in Tremmel et al. (2020), while significant dark matter mass is lost due to interacting with the cluster potential, the majority of galaxies have not lost much stellar mass. While much of the dark matter stripping occurs on larger scales, stellar mass is confined within the galaxies themselves. Tidal stripping of this much more compact component requires closer, more intense interactions with the center cluster potential that are likely to result in the complete disruption of the galaxy, as far as the simulation and halo finder are concerned. It is important to note that some of this disruption could be artificial due to the limited resolution of the simulation (van den Bosch & Ogiya, 2018). It is also important to note that Romulus cannot resolve extremely compact structures within galaxies that would be able to best survive significant tidal interactions, such as NSCs.
The analysis from Tremmel et al. (2020) was done by tracing halos back in time to compare their \(z=0\) stellar mass with their maximum stellar mass. This could induce a bias where the halos that pass closest to cluster center are more likely to have time-steps where they are missed by the halo finder, a well known issue (e.g. Knebe et al., 2011; Onions et al., 2012; Joshi et al., 2016). This would cause us to be unable to fully trace their evolution through time and these potentially heavily stripped galaxies would be ignored. Focusing on MBH hosts, we can instead trace the MBHs themselves through time without relying on halo finding and compare the final host stellar mass with the maximum host stellar mass, excluding any intervening steps where they are temporarily taken to be hosted by the main cluster halo (i.e. times where the halo finder fails to extract their host sub-halo).
Focusing only on MBHs hosted in cluster dwarf galaxies (M\({}_{\star}<10^{10}\)M\({}_{\odot}\)) at \(z=0.05\), we find that the median MBH host galaxy has only experienced a net loss of \(\sim 20\%\) of its stellar mass as it interacts with the cluster environment. Only one fifth of the MBH hosts have seen their stellar mass decrease by more than a factor of \(\sim 3\). Looking at figure 2, a typical stellar mass loss of a factor of \(\sim 3-5\) is needed to bring the occupation fractions in line with the field. In RomulusC such extreme mass loss is too rare to fully explain the overabundance of MBHs in low mass galaxies.
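The bookkeeping behind these numbers is simple. The sketch below assumes, for each MBH, a pair of arrays giving the host halo id and host stellar mass at every stored snapshot (a hypothetical layout), drops steps where the host was mis-assigned to the main cluster halo, and compares the final to the maximum host stellar mass.

```python
import numpy as np

def host_mass_loss_stats(mstar_history, cluster_id):
    """Summarize stellar-mass loss for MBH host galaxies.

    mstar_history : dict mapping each MBH id to a tuple of numpy arrays
                    (host_halo_id_per_step, host_mstar_per_step),
                    ordered from early to late times (hypothetical layout).
    cluster_id    : halo id of the main cluster, whose steps are excluded.
    """
    ratios = []
    for halo_ids, mstar in mstar_history.values():
        keep = halo_ids != cluster_id       # drop steps mis-assigned to the cluster halo
        if keep.sum() < 2:
            continue
        m = mstar[keep]
        ratios.append(m[-1] / m.max())      # final / maximum host stellar mass
    ratios = np.array(ratios)
    return {
        "median_retained": np.median(ratios),
        "frac_lost_factor_3": np.mean(ratios < 1.0 / 3.0),
    }
```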
### Cluster Dwarfs as Failed Massive Galaxies
Galaxies in cluster environments will eventually stop accreting new material, as both dark matter and gas will flow onto the primary halo instead. The lack of a replenishing gas supply combined with ram pressure removing the ISM will eventually slow down or completely quench new star formation in the galaxy. However, it is possible that, were these galaxies allowed to continue to grow unimpeded, they would be more massive at \(z=0\). If the progenitor galaxies to our cluster dwarfs are more similar to progenitors of massive field galaxies than they are to those of field dwarfs, this could explain the difference in MBH occupation fraction. In other words, it may be that cluster dwarfs actually represent progenitors to more massive field galaxies that were instead 'frozen' in their growth by their environment. In this scenario the galaxies are not required to lose stellar mass, just to fail to reach the same masses as their field counterparts.
In order to explore this scenario, we trace our cluster galaxies back in time, finding the redshift, stellar mass, halo mass, and concentration at the time they reach maximum virial
mass (t\({}_{max}\)). We exclude all galaxies that fail to trace backward to a time prior to falling into the cluster. These progenitors are then matched to galaxies at that same redshift in Romulus25 that are major progenitors to isolated \(z=0.05\) galaxies, requiring that the stellar and halo masses agree to within 0.2 dex and that the difference in concentration be less than 0.2. For each cluster galaxy, we require at least 4 field galaxies matching these criteria. We then recalculate the occupation fractions using, for each cluster galaxy, the median \(z=0.05\) stellar mass among the matched isolated galaxies. The idea is that this should approximate the stellar mass they would have attained were they allowed to continue to grow.
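A sketch of the matching step for a single cluster galaxy is given below. The array names are assumptions; the tolerances (0.2 dex in mass, 0.2 in concentration, at least 4 matches) are the values quoted above.

```python
import numpy as np

def corrected_stellar_mass(cl_mstar, cl_mhalo, cl_conc,
                           f_mstar, f_mhalo, f_conc, f_mstar_z0,
                           dex_tol=0.2, conc_tol=0.2, min_matches=4):
    """'Corrected' stellar mass for one cluster galaxy via field-analogue matching.

    cl_*       : progenitor properties of the cluster galaxy at t_max
    f_*        : arrays of isolated-field progenitor properties at the same redshift
    f_mstar_z0 : the z = 0.05 stellar masses of those isolated galaxies
    Returns the median z = 0.05 stellar mass of the matches, or None if
    fewer than `min_matches` field analogues are found.
    """
    match = (
        (np.abs(np.log10(f_mstar / cl_mstar)) < dex_tol)
        & (np.abs(np.log10(f_mhalo / cl_mhalo)) < dex_tol)
        & (np.abs(f_conc - cl_conc) < conc_tol)
    )
    if match.sum() < min_matches:
        return None
    return np.median(f_mstar_z0[match])
```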
Figure 6 shows the results of this analysis, plotting the occupation fraction using both the original \(z=0\) stellar masses (open orange points) and the stellar masses calculated by matching the progenitors to isolated galaxies (solid orange points). We are only able to do this analysis with galaxies with original stellar masses above \(10^{8}\) M\({}_{\odot}\). At lower masses too many galaxies become excluded because we fail to trace them back before in-fall into the cluster due to the halo finder failing to identify them at some point. We confirm that these results are not sensitive to our specific matching criteria, specifically which combination of halo mass, stellar mass, and concentration were used, as well as the number of matches required for the analysis.
If the majority of low mass MBH host galaxies are more like progenitors to massive, isolated galaxies, the 'corrected' occupation fraction technique would see many MBH hosts move to larger (corrected) stellar masses, decreasing the occupation fraction at low mass and bringing the results more in line with the field. While we see in Figure 6 that this matching technique does result in lower occupation fractions at low masses, the results are still systematically higher than the field. This implies that while some MBH host galaxies may be considered 'failed' massive galaxies (i.e. they could have grown larger had their mass accretion not been shut down by the cluster environment), this scenario fails to fully explain the discrepancy in occupation fraction between environments. Of course, because some galaxies had to be excluded due to bad tracking, the error bars are larger and the difference between the corrected and original occupation fractions for individual mass bins is often only marginally significant.
### An Overabundance of Early-forming Dwarfs in Clusters
As discussed in Sharma et al. (2020), dwarf galaxies in Romulus25 hosting black holes, especially those that are more massive, tend to have earlier formation times for both their stars and their overall halo mass. While feedback from black holes could influence the assembly history of stars, potentially quenching star formation even in low mass galaxies (Sharma et al., 2022; Koudmani et al., 2021), the fact that this trend is also seen in dark matter halo assembly indicates that it is likely something more fundamental. In cluster environments, the assembly of galaxies and the dark matter halos in which they reside is stopped by the cluster environment, such that a galaxy of a given mass in the cluster must have assembled that mass prior to in-fall. This is an expected result of assembly bias in the formation of dark matter halos (e.g. Gao et al., 2005; Gao & White, 2007; Croton et al., 2007; Boylan-Kolchin et al., 2009) and is seen in other cosmological hydrodynamic simulations (e.g. Mistani et al., 2016; Chaves-Montero et al., 2016).
Figure 7 plots the occupation fraction as a function of formation time for the same two stellar mass bins of dwarf galaxies. The top panels show the halo formation time (time to accumulate 50% of the maximum halo mass) and the bottom panels show the stellar formation time (time to form 50% of the maximum stellar mass). The orange and blue bands show the average occupation fraction for successfully tracked galaxies in each mass bin. Note that the average values are calculated only for the galaxies that have been successfully traced back through time and included in the calculation of the individual data points shown here.
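The formation times used on the horizontal axes are simply the epochs at which 50% of the maximum mass has been assembled; a minimal sketch, assuming a stored mass history per galaxy, is:

```python
import numpy as np

def formation_time(times, mass_history):
    """Time at which a growing quantity first reaches 50% of its maximum.

    times        : array of cosmic times [Gyr], increasing
    mass_history : halo (or stellar) mass at those times
    """
    target = 0.5 * np.max(mass_history)
    i = np.argmax(mass_history >= target)   # index of first step at/above 50% of the maximum
    return times[i]
```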
In both the field and cluster environments, dwarf galaxies with earlier formation times are more likely to host MBHs. This is true when examining either stars or halo mass. The MBH occupation fraction for cluster dwarfs is similar to field dwarfs when controlling for formation time, though this connection is better illustrated by halo mass when considering higher mass dwarfs. In the field, dwarfs of a given mass are allowed to form throughout cosmic time, but those that accumulate their mass later are less likely to host MBHs. This lack of late-forming dwarf galaxies in the cluster is what causes the occupation fraction to stop evolving after \(z\sim 3\) (see Figures 3 and 4). In the field, 'new' dwarf galaxies grow at later times, filling those lower mass bins with galaxies that lack MBHs. While this is a function of the specific seeding criteria we use, these results show that regions of very dense, rapidly collapsing, pristine gas are more likely to exist in the progenitors to early-forming dwarf galaxies compared to late-forming dwarfs. In late-forming dwarfs, such early phases of collapse occur too slowly, allowing for the formation of stars and metal enrichment before the required high densities (far beyond the threshold for star formation in the simulation) are reached (if they ever are). In this scenario, the role of environment is more to stop the formation of late-forming dwarf galaxies, rather than to influence the formation sites of MBHs. As can be seen in Figures 1, 3, and 4, the times and host halos of MBH seeding are very similar between the two environments.

Figure 6.— The Occupation Fraction for Galaxies Matched with Isolated Galaxies. Here we test the extent to which the difference in occupation fraction can be explained by cluster dwarf galaxies being 'failed' larger galaxies, i.e. galaxies that, were they not in a cluster environment, would have grown to be more massive. In blue we plot the occupation fraction for isolated galaxies in Romulus25 at \(z=0\). The solid orange points show the occupation fraction as a function of galaxy mass for cluster galaxies, where each galaxy's mass has been corrected to estimate what it might have attained had the galaxy not fallen into a cluster. We do this by matching each cluster galaxy with \(z=0\) isolated galaxies based on its stellar and halo masses at \(z_{0,halo}\) (the time when the host dark matter halo reaches 50% of its maximum mass; see text for details). The open, light orange points are these same galaxies but with their original final masses. Note that we do not include the lowest mass galaxies in this analysis because too many of them fail to be traced back in time successfully (see text for details). While this matching process does alleviate some of the differences, the 'corrected' occupation fraction for cluster dwarfs remains systematically higher than for isolated dwarf galaxies. This implies that saturated mass growth is only a part of the explanation for the different occupation fractions.
Once again, we face the problem discussed in the previous section whereby low mass (M\({}_{\star}<10^{8}\) M\({}_{\odot}\)) galaxies with MBHs are more likely to be excluded because they fail to be tracked back in time successfully. These galaxies form earlier and fall into the cluster sooner so they are more likely to have passed closer to the cluster center and also be missed by the halo finder. However, there is still a significant difference in the mean occupation fraction in each mass bin relative to the field (comparing the orange and blue bands).
Controlling for formation time results in a better match between the two populations of galaxies when looking at halo mass, rather than stellar mass. This makes sense, as many additional factors may play a role in the star formation history of a galaxy, including the presence of feedback from a MBH (Sharma et al., 2020, 2022). The more massive dwarfs in clusters still appear biased high relative to isolated galaxies with similar formation times. This may indicate that a combination of effects are needed to fully explain this enhanced MBH occupation population, i.e. some dwarfs could have assembled into more massive galaxies were they isolated (see previous section) combined with a lack of late-forming dwarf galaxies in clusters.
## 5. Discussion
Observational constraints on the underlying MBH occupation fraction in low mass galaxies are uncertain because it is difficult to detect the low mass, low luminosity black holes. Indeed, while the Romulus simulation has been shown to reproduce observed samples of dwarf galaxy AGN fractions and luminosities (with specific assumptions made to convert between black hole accretion rate and X-ray luminosity), the simulation predicts a large population of MBHs that would go undetected by even the most sensitive modern X-ray surveys (Sharma et al., 2022). Still, as observations improve, evidence increasingly points to a significant number of MBHs in low mass galaxies (Nguyen et al., 2018; Baldassare et al., 2020; Burke et al., 2022). Much work is still needed from the observational side, but upcoming time domain surveys from the Vera Rubin Observatory may offer hope for dramatically increasing the completeness of the observed MBH population through their intrinsic variability (Baldassare et al., 2018; Burke et al., 2022) as well as tidal disruption events (Bricman & Gomboc, 2020). JWST may also be a powerful tool for detecting AGN with low X-ray luminosities (Cann et al., 2021). As observations continue to get better at detecting MBHs in low mass galaxies, predictions like the ones made in this work will be crucial in understanding and contextualising them with respect to MBH formation models.

Figure 7.— Early Forming Dwarfs are More Likely to Host MBHs. The occupation fraction of MBHs as a function of \(z_{0,halo}\) (top) and \(z_{0,stars}\) (bottom) for dwarf galaxies in two stellar mass bins. The bands represent the total occupation fraction in each mass bin (only for galaxies that can be traced adequately far back in time; see text for details). The relationship between formation time and MBH occupation fraction is similar for cluster dwarfs (orange) and isolated dwarfs (blue), though galaxies in the higher mass bin are still biased slightly high when controlling for either formation timescale. Dwarf galaxies that form earlier are more likely to host a MBH. The important difference between the two simulations, therefore, is that dwarf galaxies within cluster environments form earlier than those that are more isolated.
The challenge for simulations is, as always, resolution, which comes at the cost of the size and statistical sample of the data. Large-scale simulations like Romulus reach a middle ground by being large enough to have many galaxies while also capable of resolving dwarf galaxies. Still, smaller volumes mean a lack of environmental diversity, with only a handful of groups and a single low-mass galaxy cluster. While newer, large-volume simulations are becoming better in terms of both resolution and the black hole physics they implement (e.g. Dubois et al., 2021; Trebitsch et al., 2021; Ni et al., 2022), it remains a challenging balance. Even at the resolution of Romulus and these other state-of-the-art simulations, the ISM remains largely unresolved, requiring relatively simple prescriptions for black hole formation (see below for further discussion). Zoom-in simulations are another viable path forward, allowing more detailed formation prescriptions (e.g. Dunn et al., 2018) and more detailed analysis of both black hole dynamics and the internal structure of dwarf galaxies (Bellovary et al., 2019, 2021). Very high resolution simulations targeting the high redshift Universe have also proven useful tools for examining the physics of MBH formation (Wise et al., 2019; Regan et al., 2017, 2020; Regan, 2023).
The seeding algorithm for MBHs implemented in Romulus is more predictive than many previous large-scale cosmological simulations because it seeds MBHs based on local gas properties (density, temperature, metallicity) without making any _a priori_ assumptions about which halos/galaxies should host a MBH. As discussed below, our model is still simplistic because of limited resolution, but the simulations still have significant predictive power. In particular, our results demonstrate that the gas properties of galaxies at \(z>5\) are connected to their formation history and, therefore, so may be their likelihood of hosting a MBH. Higher resolution simulations will be able to further test this prediction, as will future observations. The environmental dependence of MBH occupation that we predict here should be considered a potential way to differentiate between MBH formation mechanisms. For example, an observed environmental dependence of MBH occupation would support the theory that the primary formation channel of MBHs occurs in the early (\(z>5\)) Universe from metal poor gas. However, a lack of observed environmental dependence, based on our results here, would indicate that other formation channels, in which MBHs grow at later times and from more metal-polluted gas (e.g. Regan et al., 2020; Natarajan, 2021; Mayer et al., 2023), likely dominate.
### The Effect of MBH Formation Model
The most important caveat to these results is that they will naturally rely on our choice of MBH formation criteria. The main concern is whether our choices directly influence our results, which is not the case here. The criteria in Romulus are common-sense requirements given any of the leading MBH formation models and the requirement that each MBH should be able to grow to large masses in a relatively short amount of time. In practice, our criteria will pick out gas that is collapsing to very high densities on a timescale shorter than the typical star formation timescale (assumed to be \(10^{6}\) Myr) and faster than it can effectively cool. The additional criterion that this gas must be (nearly) pristine means that such locations must form prior to or simultaneously with the very first stars forming in the (proto-)galaxy. Despite the simplicity of the model, the connection between the high-redshift properties of (proto-)galaxies and their future assembly history and environment remains a prediction of the model, rather than a direct consequence of our choice in criteria. Still, we do not attempt to test different criteria and it is very possible that this would influence our results. For example, softening the metallicity or density requirements would make MBHs much more common and would likely wash out any environmental dependence.
It should be noted that this model is only capable of capturing MBH formation channels that take place in pristine (or near-pristine) gas. This is primarily what results in an early formation epoch, as most gas becomes polluted as stars form in the simulation. However, it may be possible to grow MBHs at later times in metal enriched gas, either growing a low mass seed within star clusters (Natarajan, 2021) or in massive merger events (Mayer et al., 2023). The formation model implemented in Romulus would not capture such channels.
The fact that our theoretical results produce AGN occupation fractions consistent with observations (see Figure 2 and Sharma et al., 2022) indicates that our model parameters are reasonable, at least. However, Sharma et al. (2022) find that AGN feedback in dwarf galaxies is the primary cause for over-quenching low mass galaxies. This could indicate that our occupation fractions are too high in the field, though this is just as likely an issue with overly efficient MBH accretion and/or feedback. In any case, stricter formation criteria would decrease the occupation fractions of MBHs and could potentially increase the divide between environments even further.
### Halo Finder Limitations
Our analysis has been limited by our ability to extract halos in consecutive timesteps. The difficulty of extracting substructure close to the centers of dense structures like clusters is a well-known issue with halo finding routines. While this may result in an artificial lack of low mass galaxies deep within the cluster, this should not affect our overall results on the occupation fraction of cluster dwarf galaxies. In fact, galaxies closer to the center of the cluster are likely to have fallen in earlier and are therefore more likely to have hosted a MBH, so including more central dwarfs could increase our cluster occupation fractions further.
More important is the effect on our ability to trace halos backward in time. This requires that a given halo is detected in all timesteps while it is in the cluster, which may not happen if it passes close to the center at any point. Given the wide mass range we examine and the fact that we are able to successfully trace back the majority of even the smallest galaxies, these missed galaxies should not affect our conclusions. In fact, we should preferentially miss the earliest forming dwarf galaxies that fall into the cluster first, which would only strengthen the effect that we see already.
An important numerical effect caused by limited resolution is the artificial disruption of dark matter halos (and galaxies). As discussed in detail in van den Bosch & Ogiya (2018), limited particle count and gravity resolution results in substructure becoming artificially disrupted. In the simulation, most galaxies that do experience significant tidal stripping of their stars are very soon completely disrupted while, in reality, it is possible that they should survive. This would mean that we underestimate the portion of MBHs that exist in significantly stripped galaxies. This would likely further increase the difference in occupation fraction we already see in the cluster, as we would have an additional population of dwarf galaxies hosting MBHs that we currently do not resolve. It would also mean that tidal stripping is more important than we currently predict to the overall MBH population in cluster dwarfs (see Voggel et al., 2019, for more discussion on this population of MBHs based on observations).
Similarly, limited resolution means that Romulus will not resolve dense stellar structures, such as nuclear star clusters, at the centers of galaxies. These structures are more resilient to tidal effects, so even if the disruption of subhalos were all real it is possible that some of these structures would survive around the MBHs as ultra-compact dwarf galaxies (Drinkwater et al., 2003; Bekki et al., 2003; Seth et al., 2014). Similar to artificial disruption, the effect of this would be that there is a more significant population of tidally stripped remnants that host MBHs. It would also further increase the predicted environmental dependence of the occupation fraction by, once again, adding a new population of dwarf galaxies hosting MBHs to our sample. Further, this would create a population of galaxies with significantly overmassive black holes. As discussed in Ricarte et al. (2019), the accretion histories of MBHs in RomulusC dwarf galaxies is very similar to that of field galaxies, and so galaxies in both environments exist on the same stellar mass-black hole mass relation. This might not remain the case if the number of artificially disrupted galaxies/nuclear star clusters is accounted for, but we leave this question to future work more focused on explaining observations of MBHs in ultra-compact dwarf galaxies (e.g. Seth et al., 2014; Afanasiev et al., 2018; Voggel et al., 2019).
### Connection to Nuclear Star Clusters
The MBH occupation fractions we predict with our relatively simple seed formation model match remarkably well with observations of NSCs in cluster dwarf galaxies (Munoz et al., 2015; Sanchez-Janssen et al., 2019), as well as local group dwarfs (Hoyer et al., 2021; Carlsten et al., 2022). There is reason to think that the formation of NSCs and MBHs could be connected. It may be that MBHs are seeded as a result of the evolution of a dense nuclear star cluster (Devecchi & Volonteri, 2009; Davies et al., 2011). More broadly, the environment likely to form/grow a MBH (very dense, pristine gas) is also the site of very dense, early star formation that seeds an initial NSC (note that NSCs often include stars with a variety of ages and metallicities, indicating extended star formation histories; Seth et al., 2006; Carson et al., 2015; Kacharov et al., 2018). Some work suggests that NSCs and MBHs form from entirely separate mechanisms (e.g. Scott & Graham, 2013) or that NSCs may grow mostly from mergers of other star clusters, rather than in-situ formation (Antonini et al., 2015; Fahrion et al., 2020). In reality, it is likely that a combination of mechanisms is occurring (e.g. Fahrion et al., 2021, 2022).
Many NSCs co-exist with MBHs and their masses scale with the mass and properties of their host galaxies in similar ways (Wehner & Harris, 2006; Ferrarese et al., 2006; Seth et al., 2008; Georgiev et al., 2016; Nguyen et al., 2018, 2019; Fahrion et al., 2022). The potential well of a NSC can help low mass MBHs grow (Natarajan, 2021; Askar et al., 2022). Conversely, feedback from MBHs, as well as their dynamical interactions with nearby stars, can hinder the growth of NSCs and even disrupt them completely (Antonini, 2013; Antonini et al., 2015; Sanchez-Janssen et al., 2019). The presence of MBHs within dense, nuclear regions are thought to explain the presence of supermassive black holes in ultra-compact dwarf galaxies typically found in groups and clusters (Seth et al., 2014; Afanasiev et al., 2018; Voggel et al., 2019).
The formation criteria for MBHs in Romulus are relatively agnostic regarding the exact physics, as we are far from resolving the formation process itself. Rather, they are meant to encapsulate the type of environment where one may expect a MBH to form and grow rapidly in the early Universe, i.e. where cold, low metallicity gas is collapsing on timescales much shorter than the star formation timescale. Such conditions are required for all formation channels of MBHs. Even starting as Population III remnants, the seeds would need a lot of dense gas nearby to quickly grow large enough to become \(10^{5}\) M\({}_{\odot}\) in a short amount of time (see Volonteri, 2012, for further discussion on this). While we are far from being able to resolve NSCs in the simulation, the connection to observed NSC occupation makes sense regardless of the details of NSC formation and suggests that NSCs might serve as incubators for the formation and growth of MBH seeds over cosmic time, as noted by Natarajan (2021). An in-situ formation channel would require similar properties to MBH formation (very dense gas in the early Universe), but a formation channel dominated by globular cluster mergers (e.g. Antonini & Merritt, 2012; Antonini et al., 2015; Fahrion et al., 2020) would also fit with a connection to MBHs. Romulus predicts that the environment required for MBH formation is more likely in galaxies (and dark matter halos) that form earlier. Such galaxies will be likely to form more globular clusters that will then have more time to sink and merge and form a NSC. Observations have shown an overabundance of globular clusters in dwarf ellipticals residing in galaxy cluster environments (e.g. Miller et al., 1998; Miller & Lotz, 2007; Jordan et al., 2007; Peng et al., 2008; Sanchez-Janssen & Aguerri, 2012) and the cause of this can be attributed to their earlier formation times (Mistani et al., 2016; Carleton et al., 2021).
The fact that our results match so well with observed NSC populations across environments (Munoz et al., 2015; Sanchez-Janssen et al., 2019; Hoyer et al., 2021; Carlsten et al., 2022) while producing realistic black hole occupation fractions in the field (though observational constraints are murky at best) supports the notion that MBH formation could be connected with NSC formation. While we cannot directly resolve the formation of NSCs, our results indicate that their presence, like that of MBHs, may be connected not only to the properties of gas at high redshift, but to the overall formation history of galaxies and therefore, indirectly, their environment.
## 6. Conclusions
We use the Romulus simulations to predict an enhanced MBH occupation fraction in cluster dwarf galaxies (\(M_{\star}<10^{9}\)M\({}_{\odot}\)). The Romulus simulations are unique in their ability to both resolve low mass galaxies and implement a model
for MBH seeding that relies only on local gas properties (density, temperature, metallicity) rather than requiring any _a priori_ assumptions about which halos should or should not host a MBH. Despite forming black holes at similar times and within similar galaxy masses, we find that the cosmic evolution of the MBH occupation fraction in galaxies is halted at \(z\sim 3\) in the cluster environment relative to the field. This 'freezing out' of the occupation fraction results in a factor of \(\sim 2\) enhancement to the fraction of dwarf galaxies that host MBHs in clusters relative to the field at \(z=0.05\).
We investigate the cause of the enhancement in more detail and find that it can likely be explained by a combination of two mechanisms:
1. Early formation times of dwarf galaxies in cluster environments makes them more likely to host a MBH. When controlling for formation time, cluster and field galaxies have similar MBH occupation fractions, but late-forming dwarf galaxies that do not host MBHs dilute the field population and pull down the MBH occupation fraction, while these systems do not exist in clusters.
2. Some cluster dwarf galaxies may be 'failed' massive galaxies. Were they allowed to grow in the field, unimpeded by the cluster environment, they would likely have attained much higher masses.
We do not find evidence that many MBH host galaxies experience tidal stripping of their stars. However, future work will examine whether some MBHs in the simulation may have unresolved, compact stellar structures around them.
The enhanced MBH occupation fraction in the cluster simulation appears to fall with increased cluster-centric distance, but it remains in place out to \(2R_{200}\) for galaxies with \(M_{\star}<10^{9}\rm M_{\odot}\). While we do not attempt to model the detailed physics of MBH formation in these simulations, the connection between galaxy assembly history, environment, and the properties of gas at high redshift are important predictions. The presence of quickly collapsing, high density regions of pristine gas are likely formation sites of MBH seeds. While such environments are common in the progenitors of massive galaxies, we predict that only early forming dwarf galaxies are typically able to produce such environments. Such dwarf galaxies make up a much higher fraction of the overall population in dense environments like galaxy clusters. Our findings have important consequences for the origin and evolution of host galaxy-MBH co-evolution.
Finally, we find that the predicted MBH occupation fraction in the cluster is remarkably consistent with the observed occupation fraction of nuclear star clusters in Virgo and Fornax, while the field occupation fraction is similar to that of dwarfs in the local volume. So, the dense, pristine gas at \(z>5\) that the simulation attributes to MBH formation may also be connected to the formation of nuclear star clusters, either directly (they form from similar gas at similar times) or indirectly (e.g. connected to the earlier formation times associated with the host galaxies).
These results show how MBH occupation may be not just a function of galaxy mass but also environment, and that there are many connections between the high redshift properties of galaxies and their overall assembly history. We note that RomulusC is a relatively low mass cluster and more massive ones may have even more enhancement in their occupation fractions. These results also indicate that we must be cautious in how we utilize AGN observations to constrain the underlying MBH occupation fraction, as the connection is heavily dependent on environment and, more generally, the age of the galaxy (i.e. its formation time). Further work and higher resolution simulations will be needed to better understand how the environmental dependence of the underlying MBH occupation fraction may be inferred from observations. More broadly, these results demonstrate how simulations with more predictive models for MBH physics are crucial to our understanding of MBH formation and evolution.
## Acknowledgements
This work used the pynbody (Pontzen et al., 2013) and tangos (Pontzen & Tremmel, 2018) software packages. MT was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001810. PN gratefully acknowledges support from the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation. The authors thank Frank van den Bosch, Marta Volonteri, Mark Kennedy, Paul Callanan, and Katja Fahrion for useful discussions and comments.
## Data Availability
The tangos databases for both Romulus simulations are available upon email request. Recreation of all of the figures in this paper with processed data (derived using the tangos databases) can be done using the Jupyter notebooks, code, and data files that are available at [https://github.com/mtremmel/tremmel2023_mbh_occFrac](https://github.com/mtremmel/tremmel2023_mbh_occFrac)
|
2303.09353 | A Quantum SMT Solver for Bit-Vector Theory | Given a formula $F$ of satisfiability modulo theory (SMT), the classical SMT
solver tries to (1) abstract $F$ as a Boolean formula $F_B$, (2) find a Boolean
solution to $F_B$, and (3) check whether the Boolean solution is consistent
with the theory. Steps (2) and (3) may need to be performed back and forth
until a consistent solution is found. In this work, we develop a quantum SMT
solver for the bit-vector theory. With the characteristic of superposition in
quantum system, our solver is able to consider all the inputs simultaneously
and check their consistency between Boolean and the theory domains in one shot. | Shang-Wei Lin, Si-Han Chen, Tzu-Fan Wang, Yean-Ru Chen | 2023-03-16T14:32:50Z | http://arxiv.org/abs/2303.09353v1 | # A Quantum SMT Solver for Bit-Vector Theory
###### Abstract
Given a formula \(F\) of satisfiability modulo theory (SMT), the classical SMT solver tries to (1) abstract \(F\) as a Boolean formula \(F_{B}\), (2) find a Boolean solution to \(F_{B}\), and (3) check whether the Boolean solution is consistent with the theory. Steps (2) and (3) may need to be performed back and forth until a consistent solution is found. In this work, we develop a quantum SMT solver for the bit-vector theory. With the characteristic of superposition in quantum system, our solver is able to consider all the inputs simultaneously and check their consistency between Boolean and the theory domains in one shot.
quantum computing, SMT solver, Grover's algorithm
## I Introduction
Satisfiability modulo theory (SMT) is the problem of determining the satisfiability of a first-order formula with respect to a decidable first-order theory. Compared to the Boolean satisfiability problem (SAT), the atoms in SMT can be in different forms depending on the theories they are based on. With this richer expressiveness, SMT is widely used in plenty of applications, particularly in formal verification, ranging from recent topics like neural networks [13][5] and smart contracts [4][2] to hardware circuit designs [9][1]. Consider the following formula \(F\), based on the theory of fixed-width bit-vectors, consisting of three integer variables \(a\), \(b\) and \(c\), each of which is \(2\)-bit long.
\[F:[(a>b)\vee(a<b)\vee(a=b)]\wedge[\neg(a>b)\vee\neg(a<b)\vee\neg(a=b)]\]
Formula \(F\) is _satisfiable_ as there exists an assignment \(a=2\) and \(b=2\), with binary representations \(10\) and \(10\), respectively, making \(F\) evaluate to True. Such an assignment is called a _solution_. If there does not exist any solution, the formula is called _unsatisfiable_.
To solve an SMT problem, most state-of-the-art SMT solvers adopt the classical _lazy approach_[19], which consists of three steps: (1) abstract the original formula as a Boolean formula, i.e., make an abstraction from the original theory domain to the Boolean domain, (2) find a solution to the abstract Boolean formula, and (3) check the consistency between the Boolean and the original theory domains. If the solution found in step (2) is not consistent with the theory domain, it is abandoned, and step (2) continues until a consistent solution is found or the formula is concluded unsatisfiable.
For instance, if we use Boolean variables \(x\), \(y\), and \(z\) to denote the predicate \((a>b)\), \((a<b)\), and \((a=b)\), respectively, formula \(F\) can be abstracted as the following Boolean formula \(F_{B}\):
\[F_{B}:(x\lor y\lor z)\wedge(\neg x\lor\neg y\vee\neg z)\]
A SAT solver can be used to find a solution to the Boolean formula \(F_{B}\), and a bit-vector theory solver can help to check the consistency. Fig. 1 illustrates how the lazy approach solves formula \(F\). In the first three iterations, the SAT solver finds an assignment, but it is immediately rejected by the theory solver. For example, in iteration \(1\), the SAT solver finds a solution \(x=0\wedge y=1\wedge z=1\) to formula \(F_{B}\), which is rejected by the theory solver because \((a<b)\) and \((a=b)\) cannot hold simultaneously. This back-and-forth process continues for two more iterations, and a consistent solution is found in iteration \(4\).
One can observe that each iteration of this back and forth process only checks the consistency of one Boolean solution (against the theory) at a time. If the search space is huge, this approach becomes unscalable. Indeed, there is an _online_ version of the lazy approach, which adopts heuristics, making it more efficient. However, the online version does not reduce the theoretical time complexity.
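To make the back-and-forth loop concrete, the following is a minimal Python sketch of the lazy approach on the running example; the brute-force Boolean enumeration and the 2-bit theory check are illustrative stand-ins for a real SAT solver and theory solver, and the helper names are our own.

```python
from itertools import product

def theory_consistent(x, y, z):
    """Return a 2-bit witness (a, b) realizing the abstraction (x, y, z), if any."""
    for a, b in product(range(4), repeat=2):
        if (a > b) == bool(x) and (a < b) == bool(y) and (a == b) == bool(z):
            return (a, b)
    return None

def lazy_solve():
    # Step (2): enumerate Boolean solutions of F_B = (x|y|z) & (~x|~y|~z).
    for x, y, z in product([0, 1], repeat=3):
        if (x or y or z) and not (x and y and z):
            # Step (3): ask the theory solver for a consistent assignment.
            witness = theory_consistent(x, y, z)
            if witness is not None:
                return (x, y, z), witness   # consistent solution found
    return None                             # no Boolean solution survives: unsatisfiable

print(lazy_solve())                         # e.g. ((0, 0, 1), (0, 0))
```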
In recent years, quantum technology has been widely used in many applications to solve traditionally difficult problems by taking advantage of the nature of superposition and entanglement in quantum systems. Grover's algorithm [12], one of the most famous quantum algorithms, helps to search for target objects in a huge search space. There are two essential components in Grover's algorithm: (1) an _oracle_, and (2) the _diffuser_. In a nutshell, the oracle answers the "yes/no" question of whether an object in the search space is our target, while the diffuser tries to increase or maximize the probability of the target being measured. The details of Grover's algorithm are briefly reviewed in Section II.
To use Grover's algorithm for a search problem, the key is to provide the oracle. As long as the oracle can correctly identify the targets in the search space, the diffuser, which is standard and independent of the search problem, can help to "extract" the targets. In this work, we develop a quantum SMT solver based on Grover's algorithm. More specifically, given an SMT formula, we develop a methodology to generate the oracle required by Grover's algorithm to search for the solutions to the SMT formula.
Let us take formula \(F\) as an example again. Assume the two variables \(a\) and \(b\), each of which is \(2\)-bit long, are represented by binary strings \(a_{1}a_{2}\) and \(b_{1}b_{2}\), respectively. Together with the abstract Boolean formula \(F_{B}\), we can use seven bits to represent each element \(|v\rangle\) in the search space, as a column vector \(|x,y,z,a_{1},a_{2},b_{1},b_{2}\rangle\). There are requirements for \(|v\rangle\) to be a solution of formula \(F\). Firstly, in the Boolean domain, \(x\), \(y\), \(z\) have to satisfy the following condition:
\[F_{B}(x,y,z)=(x\lor y\lor z)\wedge(\neg x\vee\neg y\vee\neg z)=1 \tag{1}\]
Secondly, in the bit-vector theory domain, \(a_{1}\), \(a_{2}\), \(b_{1}\), \(b_{2}\) need to be consistent with their Boolean abstraction \(x\), \(y\), \(z\). Thus, they have to satisfy the following three conditions:
\[(x=1 \iff a_{1}a_{2}>b_{1}b_{2})\vee(x=0 \iff a_{1}a_{2}\not>b_{1}b_{2}) \tag{2}\] \[(y=1 \iff a_{1}a_{2}<b_{1}b_{2})\vee(y=0 \iff a_{1}a_{2}\not<b_{1}b_{2})\] (3) \[(z=1 \iff a_{1}a_{2}=b_{1}b_{2})\vee(z=0 \iff a_{1}a_{2}\not=b_{1}b_{2}) \tag{4}\]
Fig. 1: SMT solving by the classical lazy approach
Here comes the interesting and critical part. If we can construct an oracle \(\Psi\), which can help us to take care of the four conditions simultaneously, then the diffuser can proceed to extract the solution for us. That is, our oracle \(\Psi\) does the following:
\[\Psi(|v\rangle)=\begin{cases}-1\cdot|v\rangle&\text{if}\qquad(1)\wedge(2)\wedge( 3)\wedge(4)\\ |v\rangle&\text{otherwise}\end{cases}\]
If an input \(|v\rangle\) is a solution to \(F\), our oracle adds a "\(-1\)" phase to it; otherwise, \(|v\rangle\) is not changed. Since the input \(|v\rangle\) can be placed in superposition, representing every possible input, our oracle is able to identify all the solutions to \(F\) in one shot. Then, the diffuser recognizes those inputs with a "\(-1\)" phase and tries to increase/maximize their probability to be measured. Fig. 2 (b) shows all the \(16\) solutions to formula \(F\), and our oracle is able to identify all of them among the \(128\) inputs in one shot.
Fig. 2 (a) shows the structure of our oracle design and summarizes our contributions. There are four components in our oracle:
1. **SAT Circuit** (c.f. Section III-A) determines the solutions for the Boolean domain, i.e., Equation (1) in our example.
2. **Theory Circuit** (c.f. Section III-B) determines the truth value of predicates (e.g., \(a<b\), \(a>b\), \(a=b\) in our example) in the theory domain with respect to different inputs.
3. **Consistency Extractor** (c.f. Section III-C) identifies those solutions that are consistent in both Boolean and the theory domains, i.e., \((1)\wedge(2)\wedge(3)\wedge(4)\) in our example.
4. **Solution Inverter** (c.f. Section III-D) inverts the solutions to the SMT formula by adding a "-1" phase to them for the diffuser's further processing.
We have also proved the correctness of each component design. The rest of this paper is organized as follows. Section II reviews preliminary concepts about quantum computing. The proposed quantum SMT solver based on Grover's algorithm for formulas of the bit-vector theory is introduced in Section III. Evaluations of our approach are given in Section IV. Related works are discussed in Section V. The conclusion and our future works are discussed in Section VI.
## II Preliminary
We assume that our readers have basic concepts in quantum computing, e.g., the _tensor product_ operation, _inner product_ operation, _outer product_ operation, and primitive quantum gates such as X, Z, H, CCNOT, etc. We use the _ket_ notation \(|\cdot\rangle\) to denote the (column) vector representing the state of a quantum system, and the _bra_ notation \(\langle\cdot|\) to denote its conjugate transpose. Given two vectors \(|v_{1}\rangle\) and \(|v_{2}\rangle\), we use \(\langle v_{1}|v_{2}\rangle\) to denote their inner product, \(|v_{1}\rangle\langle v_{2}|\) for their outer product, and \(|v_{1}\rangle\otimes|v_{2}\rangle\) for their tensor product. For simplicity, we may write \(|v_{1}\rangle\otimes|v_{2}\rangle\) as \(|v_{1},v_{2}\rangle\), or even \(|v_{1}v_{2}\rangle\).
### _Grover's algorithm_
Grover's algorithm [12] is one of the most famous quantum algorithms. It is used to solve the search problem of finding one target element in an unstructured database with \(N\) elements. Due to the characteristic of parallel computation in quantum systems, Grover's algorithm takes \(O(\sqrt{N})\) operations to find the target element, which is a quadratic speed-up compared with classical methods requiring \(O(N)\) operations. Grover's algorithm is widely used in many applications, such as cryptography [11], pattern matching [21], etc.
The overall structure of Grover's algorithm is shown in Fig. 3. Its two main operations are _phase inversion_ and _inversion about the average_, which are handled by the oracle and the diffuser, respectively. Initially, the input is placed in superposition (\(|x\rangle\)) to evaluate all elements in the database at once. Next, the oracle function \(U_{f}\) considers all the possible inputs and marks the target element by applying phase inversion, i.e., \(U_{f}|x\rangle=(-1)^{f(x)}|x\rangle\), in which \(f(x)=1\) for the target element and \(f(x)=0\) for the others. The oracle is problem-dependent, while the diffuser is not. Thus, designing the correct oracle is the key to applying Grover's algorithm. After the target element is marked, the diffuser applies the _inversion about the mean_ operation to amplify the probability of the target element, so that one can obtain the result by measurement. In order to achieve the optimal probability of measuring the target element, the two operations (together called a Grover iteration) need to be repeated \((\pi/4)\sqrt{N}\) times.
Grover's algorithm can also be deployed for problems with multiple target elements. In such a case, the number of required iterations becomes \((\pi/4)\sqrt{N/M}\), where \(M\) is the number of target elements. In addition, one can measure the correct result only when the number of the target elements is less than that of the nontarget elements (i.e. \(M/N<50\%\)) due to the natural property of the algorithm [23]. Since the number of target elements is usually unknown before the searching, there are several ways to solve this issue. The most common one is to apply quantum counting [7] to obtain the (approximate) number of target elements before using Grover's algorithm. After knowing the ratio of \(M/N\), if it is more than \(50\%\), one can double the search space by adding \(N\) nontarget elements, so the target elements will always be less than \(50\%\) of the search space. Another way is to apply the advanced algorithm proposed by Boyer et al. [6].
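For readers who wish to experiment, a minimal Qiskit sketch of the problem-independent diffuser and of the iteration-count rule is given below; this is the standard textbook construction, not circuitry taken from the paper.

```python
import math
from qiskit import QuantumCircuit

def diffuser(n):
    """Inversion about the mean on n search qubits (standard construction)."""
    qc = QuantumCircuit(n, name="diffuser")
    qc.h(range(n))
    qc.x(range(n))
    qc.h(n - 1)
    qc.mcx(list(range(n - 1)), n - 1)   # multi-controlled X realizing a multi-controlled Z
    qc.h(n - 1)
    qc.x(range(n))
    qc.h(range(n))
    return qc

def grover_iterations(n_elements, n_targets):
    """Round (pi/4) * sqrt(N / M) to a whole number of Grover iterations."""
    return max(1, round(math.pi / 4 * math.sqrt(n_elements / n_targets)))
```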
### _SAT, SMT, and Theory of Fixed-Width Bit Vector_
A Boolean formula is constructed by logical operators such as negation (\(\neg\)), conjunction (\(\wedge\)) and disjunction (\(\vee\)) among Boolean variables. Given a Boolean formula, the Boolean satisfiability (SAT) problem asks if there exists an assignment of the Boolean variables that satisfies the formula.
Satisfiability modulo theory (SMT), an extension of SAT, is the problem of determining whether a mathematical formula is satisfiable. A mathematical formula is composed of atoms (predicates) connected by Boolean logical operators (e.g., formula \(F\) in Section I). The type of atoms depends on the theory used in the mathematical formula.

Fig. 3: Grover’s algorithm

Fig. 2: Structure of our oracle \(\Psi\)
In this work, we focus on the theory of quantifier-free fixed-width bit-vectors (c.f. Section 4.2.5 in [19]), denoted by \(\mathcal{BV}\) for short. The syntax of \(\mathcal{BV}\) atoms is given in Fig. 4. An expression \(E\) could be a variable \(\tilde{a}\) with \(n\) bits, a constant \(\mathcal{C}\), or the result of an arithmetic operation between two expressions \(E_{1}\) and \(E_{2}\). Notice that once the data width (\(n\) bits) is determined, it is fixed for all expressions. An atom is composed of two expressions and one comparison operator \(\rhd\) in between, where \(\rhd\in\{<,>,=,\geq,\leq,\neq\}\). The arithmetic operators include word concatenation, modulo-sum, modulo-multiplication, bitwise operations, shift operations, etc.
## III Methodology
In this section, we introduce our quantum SMT solver for the \(\mathcal{BV}\) theory based on Grover's algorithm. Fig. 5 shows its block diagram, echoing the overall structure in Fig. 2 (a). As mentioned in Section I, the oracle is the key component for marking the solutions so that the diffuser knows which elements should have their measurement probability increased/maximized. Since the diffuser is standard and independent of the search problem, we will not discuss it further. Instead, in the following, we put the emphasis on how to construct the oracle.
### _SAT Circuit_
Fernandes et al. [10] showed an example of constructing the oracle circuit for a 3-SAT problem and solved the problem based on Grover's algorithm. However, they did not provide a systematic way to construct the oracle circuit. In this section, we propose a method to constructively generate the oracle circuit for an arbitrary \(3\)-SAT formula and prove its correctness. Consider the following grammar for \(3\)-SAT formulas in conjunctive normal form (CNF):
\[\begin{array}{rcl}F&\simeq&C\mid F_{1}\wedge F_{2}\\ C&\simeq&l_{1}\lor l_{2}\lor l_{3}\\ l&\simeq&v\mid\neg v\mid 0\end{array}\]
Fig. 6 shows how to construct a quantum circuit for a clause \(C:l_{1}\lor l_{2}\lor l_{3}\). Notice that the \(Q\) gate depends on each literal \(l_{i}\). If \(l_{i}\) is a negative literal \(\neg v_{i}\), \(Q\) would be the \(\mathtt{I}\) gate, i.e., the identity gate; otherwise, \(Q\) would be the \(\mathtt{X}\) gate, also called the NOT gate. The qubit \(q_{a}(C)\) is an ancilla bit1 for internal computation, and \(q_{o}(C)\) is the output bit for the truth value of \(C\). Theorem 1 proves that the quantum circuit construction for a clause \(C\) is correct.
Footnote 1: The notation \(q_{a}(C)\) is not a function application. It just represents that \(q_{a}\) is the ancilla bit of clause \(C\). Similarly, \(q_{o}(C)\) represents the output bit of clause \(C\), and \(q_{o}(F)\) represents the output bit of formula \(F\).
**Theorem 1**: **[Clause Correctness]** _(\(1\)) \(q^{\prime}_{o}(C)=1\iff\) clause \(C\) is true, (\(2\)) \(q^{\prime}_{a}(C)=0\), and (\(3\)) \(v^{\prime}_{i}=v_{i}\) for all \(i\in\{1,2,3\}\)._
**Proof 1**: _Given a clause \(C:l_{1}\lor l_{2}\lor l_{3}\), since \(Q\) is the \(\mathtt{I}\) gate if \(l_{i}\) is a negative literal \(\neg v_{i}\) and the \(\mathtt{X}\) gate otherwise, we have \(Q(v_{i})=\neg l_{i}\), as shown by the red notations in Fig. 6. Let \(a\) be the value of \(q_{a}(C)\) after applying the first CNOT gate on it._
To prove condition (\(1\)): \(q^{\prime}_{o}(C)=0\Leftrightarrow(\neg l_{3}=1)\wedge(a=1)\iff(\neg l_{3}=1)\wedge(\neg l_{1}=1)\wedge(\neg l_{2}=1)\). If we apply negation on both sides, we have \(\neg(q^{\prime}_{o}(C)=0)\Leftrightarrow\neg((\neg l_{1}=1)\wedge(\neg l_{2}=1)\wedge(\neg l_{3}=1))\). Thus, \(q^{\prime}_{o}(C)=1\Leftrightarrow\neg(\neg l_{1}=1)\vee\neg(\neg l_{2}=1)\vee\neg(\neg l_{3}=1)\). That is, \(q^{\prime}_{o}(C)=1\Leftrightarrow(l_{1}=1)\vee(l_{2}=1)\vee(l_{3}=1)\).
To prove condition (\(2\)): \(q^{\prime}_{a}(C)=(\neg l_{1}\wedge\neg l_{2})\oplus a\). Since \(a\) is obtained by applying CNOT gate on \(q_{a}(C)\), with \(\neg l_{1}\) and \(\neg l_{2}\) as the control bits, we have \(q^{\prime}_{a}(C)=(\neg l_{1}\wedge\neg l_{2})\oplus((\neg l_{1}\wedge\neg l_{ 2})\oplus 0)=(\neg l_{1}\wedge\neg l_{2})\oplus(\neg l_{1}\wedge\neg l_{2})=0\).
To prove condition (\(3\)): \(v^{\prime}_{i}=Q(\neg l_{i})=Q(Q(v_{i}))=v_{i}\), because \(QQ=I\), as \(Q\) is either \(\mathtt{X}\) or \(\mathtt{I}\) gate.
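A minimal Qiskit sketch of a clause oracle in the spirit of Fig. 6 follows; since the figure itself is not reproduced here, the encoding of literal signs and the preparation of the output qubit in \(|1\rangle\) are our own illustrative assumptions.

```python
from qiskit import QuantumCircuit

def clause_circuit(signs):
    """Oracle for one clause l1 | l2 | l3 in the spirit of Fig. 6.

    signs[i] is True for a positive literal and False for a negated one.
    Local qubits: 0-2 carry the three variables, 3 is the ancilla q_a(C),
    4 is the output q_o(C); starting the output in |1> is our assumption.
    """
    qc = QuantumCircuit(5, name="clause")
    anc, out = 3, 4
    for i, positive in enumerate(signs):   # Q gates: wire i now carries ~l_i
        if positive:
            qc.x(i)
    qc.x(out)                              # prepare the output in |1>
    qc.ccx(0, 1, anc)                      # ancilla = ~l1 & ~l2
    qc.ccx(2, anc, out)                    # output ^= ~l3 & ancilla, so output = l1 | l2 | l3
    qc.ccx(0, 1, anc)                      # uncompute the ancilla back to |0>
    for i, positive in enumerate(signs):   # restore the variable wires
        if positive:
            qc.x(i)
    return qc
```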
Fig. 7 shows how to construct a quantum circuit for a formula \(F:F_{1}\wedge F_{2}\). It is required to construct the quantum circuits for \(F_{1}\) and \(F_{2}\) first, whose outputs are then conjoined by a CNOT gate. The qubit \(q_{o}(F)\) is the output bit for the truth value of formula \(F\). Theorem 2 proves that the quantum circuit construction for a formula \(F\) is correct.
**Theorem 2**: **[Formula Correctness]** _(\(1\))\(q^{\prime}_{o}(F)=1\iff\) formula \(F\) is true, (\(2\))\(q^{\prime}_{a}(F)=0\), and (\(3\))\(v^{\prime}_{i}=v_{i}\) for all \(i\in\{1,2,\ldots,n\}\)._
**Proof 2**: _We prove this theorem by structural induction on the grammar to generate an arbitrary formula \(F\). The basic case, when \(F\) is a clause \(C\), has been proved in Theorem 1._
_Induction assumption : Let \(F_{1}\) and \(F_{2}\) be two formulas satisfying the three conditions: (a) \(q^{\prime}_{o}(F_{1})=1\iff\) formula \(F_{1}\) is true, and \(q^{\prime}_{o}(F_{2})=1\iff\) formula \(F_{2}\) is true. (b) \(q^{\prime}_{a}(F_{1})=0\) and \(q^{\prime}_{a}(F_{2})=0\). (c) \(v^{\prime}_{i}=v_{i}\) for \(i\in\{1,2,\ldots,n\}\)._
_Consider the induction step, the circuit for \(F:F_{1}\wedge F_{2}\) is constructed based on \(F_{1}\) and \(F_{2}\), as shown in Figure 7._
To prove condition (\(1\)): Because of the CNOT gate, \(q^{\prime}_{o}(F)=1\Leftrightarrow((q^{\prime}_{o}(F_{1})=1)\wedge(q^{\prime}_{o}(F_{2})=1))\oplus 0\Leftrightarrow(q^{\prime}_{o}(F_{1})=1)\wedge(q^{\prime}_{o}(F_{2})=1)\). Based on induction assumption (a), we can conclude that \(q^{\prime}_{o}(F)=1\iff F_{1}\) is true and \(F_{2}\) is true \(\iff F\) is true._
To prove condition (\(2\)): \(q^{\prime}_{a}(F)=q^{\prime}_{a}(F_{2})=0\), according to the induction assumption (b).
To prove condition (\(3\)): Since \(v^{\prime}_{i}\) of \(F\) is equal to \(v^{\prime}_{i}\) of \(F_{2}\), based on induction assumption (c), \(v^{\prime}_{i}=v_{i}\) for all \(i\in\{1,2,\ldots,n\}\).
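Continuing the sketch above, a CNF formula circuit can then be assembled bottom-up by composing clause blocks and conjoining their outputs; per the algebra in the proof, each conjunction is realized here as a doubly-controlled NOT whose target starts in \(|0\rangle\). The qubit layout is an illustrative choice, not the paper's.

```python
from qiskit import QuantumCircuit

def formula_circuit(n_vars, clauses):
    """Bottom-up SAT circuit for a CNF formula, following the Fig. 7 recursion.

    clauses is a list of clauses, each a list of three (variable_index, is_positive)
    pairs. Layout: variables first, then an (ancilla, output) pair per clause, then
    one fresh output qubit per conjunction; the last one holds q_o(F).
    """
    n_cl = len(clauses)
    qc = QuantumCircuit(n_vars + 2 * n_cl + max(n_cl - 1, 0))
    outs = []
    for i, clause in enumerate(clauses):
        anc, out = n_vars + 2 * i, n_vars + 2 * i + 1
        block = clause_circuit([positive for _, positive in clause])   # sketch above
        qc.compose(block, qubits=[v for v, _ in clause] + [anc, out], inplace=True)
        outs.append(out)
    acc = outs[0]
    for j, out in enumerate(outs[1:]):
        target = n_vars + 2 * n_cl + j
        qc.ccx(acc, out, target)   # conjunction of two sub-formula outputs
        acc = target
    return qc, acc                 # acc indexes q_o(F)

# Usage for F_B = (x | y | z) & (~x | ~y | ~z) on variables x, y, z = qubits 0, 1, 2:
# sat_circuit, q_o = formula_circuit(3, [[(0, True), (1, True), (2, True)],
#                                        [(0, False), (1, False), (2, False)]])
```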
### _Theory Circuit_
Now, we illustrate how to construct the theory circuit for atoms consisting of arithmetic and comparison operations. Consider the syntax grammar for an arbitrary atom, as shown in Fig. 4. An expression could be a variable \(\tilde{a}\), a constant \(\mathcal{C}\), or the result of an arithmetic operation between two expressions \(E_{1}\) and \(E_{2}\). Constructing the circuit for a variable \(\tilde{a}\) or a constant \(\mathcal{C}\) is trivial, so it is omitted here. The remaining case is \(\tilde{a}\odot\tilde{b}\). Here, we do not list all the arithmetic operations, as there are many. They can either be constructed from primitive quantum gates or be taken from related works (c.f. Section V). Instead, we abstract the circuit for arithmetic operations as the general one shown in Fig. 8 (a), for easy illustration. Depending on the arithmetic operation, one can obtain its corresponding final circuit constructively in a bottom-up manner.

Fig. 4: Syntax of \(\mathcal{BV}\) atoms

Fig. 5: Quantum circuit for our \(\mathcal{BV}\) SMT solver

Fig. 6: Quantum circuit for a clause \(C:l_{1}\lor l_{2}\lor l_{3}\)
For each atom of the form \(E_{1}\rhd E_{2}\), we adopt the comparator [18] to compare \(E_{1}\) and \(E_{2}\). The overall structure of the comparator is shown in Fig. 8 (b). Of course, the output \(E_{1}\odot E_{2}\) of the arithmetic circuit would be the input of the comparator circuit. The two output bits \(O_{1}\) and \(O_{2}\) of the comparator indicate the relation between \(E_{1}\) and \(E_{2}\), as follows. Notice that the case of \((1,1)\) does not exist in the comparator design [18].
\[(O_{1},O_{2})=\begin{cases}(0,1)&\text{if}\quad E_{1}<E_{2}\\ (1,0)&\text{if}\quad E_{1}>E_{2}\\ (0,0)&\text{if}\quad E_{1}=E_{2}\end{cases}\]
Based on the two output bits \(O_{1}\) and \(O_{2}\), one can construct the corresponding atom for each of the six different cases, as shown in Fig. 9 (a)-(f). Theorem 3 shows the correctness of our circuit construction for atoms.
**Theorem 3**: **[Atom correctness]** _For each atom \(E_{1}\rhd E_{2}\) and its corresponding atom bit, we have \(atom=1\iff(E_{1}\rhd E_{2})=1\)._
**Proof 3**: _The proof is straightforward by examining the atom output for each case based on the truth table of primitive quantum gates. We omit it here due to the page limit._
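The gate-level constructions of Fig. 9 are not reproduced here, but the underlying Boolean readouts of the six atoms from the comparator outputs \((O_{1},O_{2})\) can be tabulated directly; the exact expressions below are one consistent choice implied by the case analysis above, not necessarily the paper's gate-for-gate realization.

```python
# Comparator outputs (O1, O2): (0,1) if E1<E2, (1,0) if E1>E2, (0,0) if E1=E2;
# (1,1) never occurs by design of the comparator.
CASES = {(0, 1): "<", (1, 0): ">", (0, 0): "="}

ATOMS = {
    "<":  lambda o1, o2: o2,
    ">":  lambda o1, o2: o1,
    "=":  lambda o1, o2: int(not (o1 or o2)),
    ">=": lambda o1, o2: int(not o2),
    "<=": lambda o1, o2: int(not o1),
    "!=": lambda o1, o2: int(o1 or o2),
}

for (o1, o2), relation in CASES.items():
    readout = {op: f(o1, o2) for op, f in ATOMS.items()}
    print(f"E1 {relation} E2  ->  {readout}")
```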
### _Consistency Extractor_
The task of the _consistency extractor_ is to extract those assignments that are consistent in both the Boolean and the bit-vector domains. That is, given an \(atom_{i}\) with its Boolean abstract variable \(v_{B_{i}}\), we want to make sure that \(v^{\prime}_{B_{i}}=1\) iff \(atom_{i}\) and \(v_{B_{i}}\) are logically equivalent.
As shown in Fig. 10 (a), the consistency extractor is composed of a \(\mathtt{CNOT}\) gate and an \(\mathtt{X}\) gate. The input \(atom_{i}\) serves as the control bit of the \(\mathtt{CNOT}\) gate, while the other input \(v_{B_{i}}\) serves as the target bit, followed by an \(\mathtt{X}\) gate to flip its result. With this design, the output qubit \(v^{\prime}_{B_{i}}\) is \(1\) iff \(atom_{i}\) and \(v_{B_{i}}\) have the same truth value. Theorem 4 proves the correctness of our quantum circuit for the consistency extractor.
**Theorem 4**: **[Correctness of Consistency Extractor]**
_\((v^{\prime}_{B_{i}}=1)\iff(v_{B_{i}}\equiv atom_{i})\)._
**Proof 4**: _Let \(a\) be the result of \(v_{Bi}\) after applying the \(\mathtt{CNOT}\) gate, as marked in red in Fig. 10._
\[\begin{array}{lcl}v^{\prime}_{B_{i}}=1&\Leftrightarrow&a=0\ \Leftrightarrow\ (v_{B_{i}}\oplus atom_{i})=0\\ &\Leftrightarrow&(v_{B_{i}}=0\wedge atom_{i}=0)\vee(v_{B_{i}}=1\wedge atom_{i}=1)\\ &\Leftrightarrow&v_{B_{i}}\ \equiv\ atom_{i}\end{array}\]
### _Solution Inverter_
The _solution inverter_ is the key component of the oracle. It marks the solutions to the SMT formula by inverting them, i.e., giving them a "\(-1\)" phase such that the diffuser knows which elements should have their measurement probability increased/maximized.
The circuit for solution inverter, as shown in Fig. 10 (b), is composed of two \((m+1)\)-\(\mathtt{CNOT}\) gates and a \(\mathtt{Z}\) gate, where \(m\) is the number of Boolean abstract variables. Theorem 5 proves the correctness of solution inverter.
**Theorem 5**: **[Correctness of Solution Inverter]**
_(1) \(q^{\prime}_{\mathtt{smt}}=1\) iff \(q^{\prime}_{o}(F_{B})=1\) and \(v_{B_{i}}\equiv atom_{i}\), \(\forall i\in\{1,2,\ldots,m\}\). (2) All the solutions are added a "\(-1\)" phase._
**Proof 5**: _Let \(q^{\prime}_{\mathtt{smt}}\) be the value of the \(q_{\mathtt{smt}}\) bit after applying the first \((m+1)\)-\(\mathtt{CNOT}\) gate, as marked in red in Fig. 10 (b). Because \(q_{\mathtt{smt}}\) (initialized as 0) is the target bit of the \((m+1)\)-\(\mathtt{CNOT}\) gate, with each \(v^{\prime}_{B_{i}}\) among the control bits, we know that \(q^{\prime}_{\mathtt{smt}}=1\Leftrightarrow(v^{\prime}_{B_{1}}\wedge v^{\prime}_{B_{2}}\wedge\ldots\wedge v^{\prime}_{B_{m}}\wedge q^{\prime}_{o}(F_{B}))=1\). By Theorem 4, we can conclude that condition (1) holds._
To prove condition (2), let us consider the state of the quantum circuit. Let \(|v^{\prime}_{B_{1}},\ldots,v^{\prime}_{B_{m}},\ldots,q^{\prime}_{o}(F_{B}),q^{\prime}_{\mathtt{smt}}\rangle\) be the quantum state after applying the first \((m+1)\)-\(\mathtt{CNOT}\) gate. As the value of \(q^{\prime}_{\mathtt{smt}}\) could be \(0\) or \(1\), the system space \(S\) can be split into two disjoint sets \(S_{0}\) and \(S_{1}\). That is, \(S=S_{0}\cup S_{1}\), where \(S_{0}=\{|v^{\prime}_{B_{1}},\ldots,v^{\prime}_{B_{m}},\ldots,q^{\prime}_{o}(F_{B}),0\rangle\}\) and \(S_{1}=\{|v^{\prime}_{B_{1}},\ldots,v^{\prime}_{B_{m}},\ldots,q^{\prime}_{o}(F_{B}),1\rangle\}\). By condition (1) we just proved, \(S_{1}\) is exactly the set of all solutions to the SMT problem. After that, a \(\mathtt{Z}\) gate is applied on the \(q^{\prime}_{\mathtt{smt}}\) bit. Since \(\mathtt{Z}(|0\rangle)=|0\rangle\) and \(\mathtt{Z}(|1\rangle)=-1|1\rangle\), the \(\mathtt{Z}\) gate will give a "\(-1\)" phase for each element in \(S_{1}\), i.e., all solutions are added a "\(-1\)" phase.
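A minimal Qiskit sketch of the consistency extractor of Fig. 10 (a) followed by the solution inverter of Fig. 10 (b) is shown below; the qubit layout is our own illustrative choice.

```python
from qiskit import QuantumCircuit

def extractor_and_inverter(m):
    """Consistency extractor (Fig. 10a) followed by the solution inverter (Fig. 10b).

    Layout (our choice): qubit 2i is atom_i, qubit 2i+1 is its abstraction v_Bi
    for i = 0..m-1, then q_o(F_B), then the work qubit q_smt (initially |0>).
    """
    qc = QuantumCircuit(2 * m + 2, name="extract+invert")
    q_fb, q_smt = 2 * m, 2 * m + 1
    # Consistency extractor: v_Bi' = 1 iff v_Bi and atom_i agree (CNOT then X).
    for i in range(m):
        qc.cx(2 * i, 2 * i + 1)
        qc.x(2 * i + 1)
    # Solution inverter: flip q_smt when all v_Bi' = 1 and q_o(F_B) = 1,
    # apply Z to put a -1 phase on exactly those basis states, then uncompute q_smt.
    controls = [2 * i + 1 for i in range(m)] + [q_fb]
    qc.mcx(controls, q_smt)
    qc.z(q_smt)
    qc.mcx(controls, q_smt)
    return qc
```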
Fig. 8: Theory Circuit
Fig. 10: Consistency Extractor and Solution Inverter
Fig. 9: Quantum circuit construction for atoms
### _Reverse circuit_
After inverting the solutions, the last part of the oracle function is the _reverse circuit_, which is constructed from the reversed circuits corresponding to the four aforementioned components. The purpose of the reverse circuit is to restore the qubits to their original states, following the reversible nature of quantum computing. In addition, Grover's algorithm may take several iterations to amplify the probability of the solutions, so the ancilla bits need to be restored before they can be used in the following iterations.
## IV Evaluation
In this section, we demonstrate our quantum circuit design for solving a \(\mathcal{BV}\) SMT formula \(\mathcal{F}\), whose Boolean abstract formula is
\[\mathcal{F}_{B}:(x\lor y\lor z)\wedge(x\lor\neg y\lor z)\]
with atoms \(x:(a+b<a\oplus b)\), \(y:(a+b>a\oplus b)\), and \(z:(a+b=1)\), where variables \(a\) and \(b\) are both \(2\)-bit long; '\(+\)' is the modulo-sum operation and '\(\oplus\)' is the exclusive-or operation.
Fig. 11 (a) shows the block diagram of the quantum SMT solver for formula \(\mathcal{F}\), in which only the important qubits are shown; internal qubits of each module are omitted. Although the SAT and theory (arithmetic + comparator) circuits are drawn sequentially in the diagram, they can operate in parallel in a realistic implementation since there is no dependency between them. The main inputs are the three Boolean abstract variables \(x,y,z\) and the SMT variables \(a\), \(b\). They are initially placed in superposition by Hadamard gates. To solve formula \(\mathcal{F}\), two comparators are required. Comparator one is responsible for the relation between \((a+b)\) and \((a\oplus b)\), while comparator two is for \((a+b)\) and the constant \(1\).
The circuit was implemented and simulated in Qiskit [3]. The whole circuit requires \(32\) qubits, which already reaches the maximum number of qubits that Qiskit supports. The breakdown of the qubits required is listed in Table I. The simulation result, as shown in Fig. 11 (b), was obtained by performing five Grover iterations as one shot, repeated for \(1,024\) shots to get \(1,024\) measurements. One may wonder how we get the required number of iterations (i.e., \(\frac{\pi}{4}\sqrt{N/M}\), c.f. Section II) for performing Grover's algorithm. In principle, this number could be obtained by quantum counting [7]. However, the available \(32\) qubits supported by Qiskit are not sufficient to perform quantum counting for this circuit. Thus, we started from one iteration and increased the number until the probability distribution of the measurements started to get worse. The turning point we found for this experiment is five. We also manually calculated the value of \(\frac{\pi}{4}\sqrt{N/M}\) by enumerating the search space and the solutions, and found that it is consistent with this number, five.
As shown in Fig. 11 (b), we can observe six bit-strings with a higher probability than the others. Table II interprets the six bit-strings back to the assignments of the five main variables \(x\), \(y\), \(z\), \(a\), \(b\). They are exactly the solutions to formula \(\mathcal{F}\). The "Counts" column represents how many times the assignment was measured among the \(1,024\) shots. According to the simulation result, our circuit provides a \(99.32\%\) probability (the summation of "Counts" divided by \(1,024\)) of measuring the solutions. In addition, it is possible to obtain all the solutions within a reasonable number of measurement shots.
## V Related Works
### _SAT circuit_
SAT is useful in a wide range of applications, but it is also a famous NP-complete problem. There are many works, such as hardware implementations of SAT solvers, trying to improve the performance of SAT solving. With the advantage of superposition in quantum systems, there is a chance to speed up SAT solving by using quantum circuits. Alberto et al. [16] proposed three algorithms, including the quantum Fredkin circuit, the quantum register machine and the quantum P system, to solve the SAT problem. By placing the input in superposition, the system can obtain the results for all possible inputs in one evaluation. But the common problem of those methods is that it is difficult to measure the SAT result, since the probability of SAT results will be very low if the number of satisfiable assignments is far smaller than the number of unsatisfiable assignments. Although the authors proposed a non-unitary operator, and there are other methods, like the chaotic dynamical system [17], to extract the SAT result, they are difficult to implement for now.
Another approach to solving the measurement problem is to adopt Grover's search algorithm. Since the algorithm is used to find the target elements in a database by amplifying the measurement probability of those elements, it is suitable for searching the satisfiable assignments if those assignments can be marked properly in the oracle function. Fernandes et al. [10] provide the concept of constructing the oracle circuit of 3-SAT solving based on Grover's algorithm.
### _Quantum arithmetic and comparator circuit_
Chakrabarti et al. [8] proposed a quantum ripple-carry adder that is based on the famous VBE adder [22] with the improvement of gate-level reduction. Their proposed adder reorganizes the carry gates that generate carry bits to let some parts of the gates operate in parallel. On the other hand, they discard the series of sum and carry gates in the second part of the VBE adder, and replace those gates with CNOT gates to generate the summation result. The authors also proposed the design of a quantum carry look-ahead adder, but it has no benefit in qubit usage or gate levels compared to the ripple-carry adder. We adopt the ripple-carry adder for the modulo-sum operation in our circuit.
Kotiyal et al. [15] proposed a design of a quantum multiplier focusing on reducing the ancilla inputs and garbage outputs, since the usage of qubits is the main consideration in quantum circuit design. The advantage of their proposed multiplier is that they use a binary-tree-based design to handle partial product summation in parallel. Besides, they employ the adder without ancilla and garbage qubits proposed by Takahashi et al. [20] to reduce the ancilla and garbage qubits caused by adders. The implementation results also showed that their multiplier improves ancilla and garbage qubit usage compared to other previous works.
Kotiyal et al. [14] proposed a quantum bidirectional barrel shifter that can perform four shift operations, including logical right shift, logical left shift, arithmetic right shift and arithmetic left shift. In order to implement the 2-to-1 MUX, which is the major component of the barrel shifter, they adopt the Fredkin gate (i.e., the CSWAP gate), setting the control qubit as the select signal and one of the target qubits as the output. They also use the CNOT gate to handle signal fan-out. Compared to previous works, their shifter has a reversal control unit to achieve the bidirectional shift even though the core module of the shifter only performs the right-shift operation.
Finally, the quantum comparator is used to identify the atom's status once the two \(\mathcal{BV}\) terms are ready. The quantum comparator proposed by Oliveira et al. [18] can indicate the relation between two input quantum bit strings \(\mathbf{a}\) and \(\mathbf{b}\). Their circuit, compared to another subtraction-based comparator, NKO, requires fewer resources and has higher computational parallelism.
## VI Conclusion
In this paper, we proposed a quantum SMT solver for the \(\mathcal{BV}\) theory, based on Grover's algorithm. Our proposed approach not only provides a theoretically quadratic speedup compared to the traditional method, but also has the capability of discovering all solutions. Since solving SMT problems for other theories also consists of two parts (namely SAT and theory solving), our proposed oracle construction is general enough to be extended to other theories, which is our future work.
|
2308.13079 | Powerful Significance Testing for Unbalanced Clusters | Clustering methods are popular for revealing structure in data, particularly
in the high-dimensional setting common to contemporary data science. A central
statistical question is, "are the clusters really there?" One pioneering method
in statistical cluster validation is SigClust, but it is severely underpowered
in the important setting where the candidate clusters have unbalanced sizes,
such as in rare subtypes of disease. We show why this is the case, and propose
a remedy that is powerful in both the unbalanced and balanced settings, using a
novel generalization of k-means clustering. We illustrate the value of our
method using a high-dimensional dataset of gene expression in kidney cancer
patients. A Python implementation is available at
https://github.com/thomaskeefe/sigclust. | Thomas H. Keefe, J. S. Marron | 2023-08-24T20:50:17Z | http://arxiv.org/abs/2308.13079v1 | # Powerful significance testing for unbalanced clusters
###### Abstract
Clustering methods are popular for revealing structure in data, particularly in the high-dimensional setting common to contemporary data science. A central _statistical_ question is, "are the clusters really there?" One pioneering method in statistical cluster validation is _SigClust_, but it is severely underpowered in the important setting where the candidate clusters have unbalanced sizes, such as in rare subtypes of disease. We show why this is the case, and propose a remedy that is powerful in both the unbalanced and balanced settings, using a novel generalization of \(k\)-means clustering. We illustrate the value of our method using a high-dimensional dataset of gene expression in kidney cancer patients. A Python implementation is available at [https://github.com/thomaskeefe/sigclust](https://github.com/thomaskeefe/sigclust).
## 1 Introduction
Clustering is a rich topic that finds broad application, such as in bioinformatics, communication, and business. The ability to collect vast quantities of biological data has led to much success in using clustering methods in bioinformatics to investigate disease subtypes, see e.g., the celebrated paper by Perou et al. (2000). Finding clusters is typically an exploratory process that is followed up with validation by experts in the underlying biology. Despite a large number of available clustering algorithms, many fewer are available for statistically validating the results of clustering. Validation is typically performed using either _internal measures_, which concern cluster cohesion and separation, or _external measures_, which compare the clustering to some known classification. Halkidi et al. (2015) and Meila (2015) provide overviews of internal and external measures respectively. However, neither are sufficient for disease subtyping:
the internal measures lack statistical guarantees, and the external measures cannot be applied because typically there is not a known classification to which to compare the results. _Statistical_ procedures to validate clustering include SigClust (Liu et al., 2008), which tests whether two clusters are "really there" by determining if they produce a stronger cluster index (CI, an internal validation measure) than could be found under a hypothesis of just one cluster. SigClust has proved popular in bioinformatics and has extensions for high-dimensional data (Huang et al., 2015) and hierarchical clustering (Kimes et al., 2014). Other popular approaches include the _gap statistic_(Tibshirani et al., 2001), which is an estimator for the true number of clusters, and _consensus clustering_(Monti et al., 2003), which is based on resampling. _Bayesian mixture modeling_(Gelman et al., 2013, Chapter 20) offers a Bayesian approach. Most recently, the _RIFT_ test (Chakravarti et al., 2019) has outperformed SigClust in data with a large second eigenvalue.
This paper addresses a major limitation of SigClust, which is that its statistical power is severely limited when the clusters have very unbalanced sizes. In clinical datasets, such as the kidney cancer example in this work, this may cause SigClust to fail to validate important clusters representing rare subtypes of a disease. The underlying reasons have to do with the fact that SigClust is built on ideas from \(k\)-means clustering, and the observation, perhaps first made by MacQueen (1967), that \(k\)-means prefers to produce _balanced_ clusterings; see also Mirkin (2015). We elucidate this issue using the geometry of \(k\)-means, and discuss why it leads SigClust to fail in this setting. We then propose an improvement, _Weighted SigClust_, using a novel generalization of \(k\)-means clustering that better recovers unbalanced clusters.
The paper is organized as follows. In Section 2 we review SigClust and motivate our work using a kidney cancer dataset with a potential rare subtype that SigClust is not powered to detect. In Section 3 we present Weighted SigClust and use it to find support for the rare subtype. Section 4 discusses an algorithmic implementation. Section 5 is discussion.
### Notation
Throughout this work we will assume a dataset taking the form \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\}\subset\mathbb{R}^{d}\), alternately denoted by the matrix \(\mathbf{X}\in\mathbb{R}^{d\times n}\). When \(\mathbf{X}\) is partitioned into two clusters, we will denote them by the pair \((C_{1},C_{2})\in\mathcal{P}_{n}\), where \(\mathcal{P}_{n}\) denotes the collection of two-set partitions of the indices \(\{1,\cdots,n\}\):
\[\mathcal{P}_{n}=\{(C_{1},C_{2})\,:\,C_{1}\uplus C_{2}=\{1,\cdots,n\}\}. \tag{1}\]
The cluster means, or _centroids_, are denoted as \(\mathbf{\bar{x}}^{(1)}\) and \(\mathbf{\bar{x}}^{(2)}\), and the overall data mean vector as \(\mathbf{\bar{x}}\).
## 2 Review of the cluster index and SigClust
In this section we review the SigClust procedure, which tests the validity of a given \(k=2\) clustering. It relies on a heuristic measure of cluster strength called the cluster index, which we review in Section 2.1. We continue with a review of SigClust in Section 2.2.
### The cluster index
The _cluster index_ (CI), so named by Liu et al. (2008), is a heuristic measure of the strength of a \(k=2\) clustering. For a dataset \(\mathbf{X}\) and clusters \((C_{1},C_{2})\), (recall the notation established in Section 1.1), the CI is defined
\[\text{CI}(C_{1},C_{2})=\frac{\sum_{i\in C_{1}}||\boldsymbol{x}_{i}-\boldsymbol {\bar{x}}^{(1)}||^{2}+\sum_{i\in C_{2}}||\boldsymbol{x}_{i}-\boldsymbol{\bar{ x}}^{(2)}||^{2}}{\sum_{i=1}^{n}||\boldsymbol{x}_{i}-\boldsymbol{\bar{x}}||^{2}}. \tag{2}\]
The numerator of (2) is called the _within-cluster sum of squares_, and is the typical objective function of \(k\)-means clustering. See Steinley (2006) for a comprehensive review of \(k\)-means. The denominator of (2) is called the _total sum of squares_, and has the effect of standardizing the CI to the unit interval.
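A minimal NumPy sketch of the CI computation is given below for concreteness; here the observations are stored in the rows of X (the transpose of the \(d\times n\) convention above), and this sketch is not the reference implementation released with the paper.

```python
import numpy as np

def cluster_index(X, labels):
    """Cluster index (2): within-cluster sum of squares over total sum of squares."""
    total_ss = np.sum((X - X.mean(axis=0)) ** 2)
    within_ss = sum(
        np.sum((X[labels == k] - X[labels == k].mean(axis=0)) ** 2)
        for k in np.unique(labels)
    )
    return within_ss / total_ss
```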
The value of the CI comes from the following observation. If we call two groups of points _clusters_, then there are at least two ways to make the clusters "stronger":
1. Tighten the spread of the points around the cluster centers, e.g. by decreasing the within-cluster sums of squares, or,
2. Move the clusters farther apart, e.g., by increasing the distances between the centroids.
The CI is a useful measure of cluster strength because it decreases by doing either of the two above.
From the perspective of \(k\)-means clustering, minimizing (2) and just minimizing its numerator are equivalent, because the denominator does not depend on the choice of clusters. The purpose of including the denominator is to give the criterion the upper bound of 1. The CI achieves this upper bound iff the clusters are moved so close together that their centroids collide, reflecting the notion that there can be no "weaker" clusters than these.
On the other hand, the CI reaches its lower bound of 0 iff the clusters are so tight that the points are exactly piled on the cluster centroids. As long as the centroids are themselves separated, the CI of these clusters will be 0, and this neatly reflects the notion that there can be no "stronger" clusters than these. We design the methods in this paper to retain these properties, which we encapsulate in the following definition.
**Definition 1** (The CI property).: A clustering criterion \(f:(\mathbb{R}^{d\times n}\times\mathcal{P}_{n})\rightarrow[0,1]\) has _the CI property_ iff
\[f(\mathbf{X},C_{1},C_{2})=0\iff\sum_{i\in C_{1}}||\boldsymbol{x}_{i}- \boldsymbol{\bar{x}}^{(1)}||^{2}=\sum_{i\in C_{2}}||\boldsymbol{x}_{i}- \boldsymbol{\bar{x}}^{(2)}||^{2}=0, \tag{3}\]
and
\[f(\mathbf{X},C_{1},C_{2})=1\iff\boldsymbol{\bar{x}}^{(1)}=\boldsymbol{\bar{x} }^{(2)}\text{ and }\sum_{i=1}^{n}||\boldsymbol{x}_{i}-\boldsymbol{\bar{x}}||^{2}>0. \tag{4}\]
\(\diamond\)
The discussion above may be summarized by saying that the numerator of (2) provides (3), while the denominator provides (4).
### The SigClust procedure
We review SigClust using an example from the PanCan database of The Cancer Genome Atlas (TCGA; Weinstein et al., 2013). The particular dataset we use here is PanCan's _Kidney Cancer Data_, which contains gene expression measurements of \(n=551\) kidney tumors measured across \(d=12,478\) genes. Figure 1 shows the first two principal components of this dataset, colored by the \(k\)-means labels with both \(k=2\) (left) and \(k=3\) (right). Visually, the clustering produced by \(k=2\) is quite sensible, but there is a clear outlier group of three points marked with crosses. Could they represent a rare subtype of kidney cancer? One might think that simply increasing \(k\) to 3 would separate this outlier group as the third cluster, but as the right panel shows, \(k=3\) results in splitting the large cluster in the bottom half of the figure instead. We therefore consider two interesting clusterings of this dataset:
1. (2-means) the 2-means clustering, blue versus green in the left panel of Figure 1
2. (outlier-inlier) the outlier group versus the rest, crosses versus dots in Figure 1,
which we will test using SigClust.
Following Marron and Dryden (2021, Section 13.2), SigClust can be used in two modes:
**Definition 2** (Exploratory/confirmatory modes).: Applying SigClust to a data sample \(\mathbf{X}\) with a particular clustering of interest, \((C_{1},C_{2})\in\mathcal{P}_{n}\), is called _confirmatory mode_ SigClust. Applying SigClust to \(\mathbf{X}\)_without_ a particular clustering in mind is referred to as _exploratory mode_ SigClust. When used in exploratory mode, SigClust uses the labels from 2-means clustering as the clustering of interest. Exploratory mode is therefore equivalent to using confirmatory mode with 2-means labels. \(\diamond\)
For our kidney cancer example, 2-means refers to using SigClust in exploratory mode, and outlier-inlier to confirmatory mode.
SigClust assesses whether the CI of a candidate clustering is smaller, i.e., stronger, than could be expected if the population contained only one cluster. To do this, SigClust must define appropriate null and alternative hypotheses, and then compute or estimate the distribution of the minimal CI under the null.
SigClust defines a _cluster_ as a group of observations from a single multivariate Gaussian distribution (which may be spherical or stretched). This motivates the hypotheses
* \(H_{0}\): the distribution is a multivariate Gaussian, i.e., has only one cluster;
* \(H_{A}\): the distribution is not a single multivariate Gaussian.
It is important to note that \(H_{A}\) takes the form "not \(H_{0}\)" rather than "a mixture of two multivariate Gaussians," or even "the population has clusters." Thus, \(H_{0}\) and \(H_{A}\) cover a quite general data model. In the strictest sense, SigClust is a test of Gaussianity, but one that is powerful against \(H_{0}\) when the distribution does in fact support clusters. Its power comes from using the CI as its test statistic, which is small when the data have two clusters, and large when they constitute one cluster.
Figure 1: PC1-PC2 scatterplot of kidney cancer data, colored by \(k\)-means labels. There is a clear outlier cluster marked in crosses, but \(k\)-means clustering will not select it with either \(k=2\) (blue/green at left) or \(k=3\) (blue/green/orange at right).

SigClust estimates the null distribution of the CI using a parametric bootstrap scheme. A large number of synthetic datasets, say 100 or 1,000, each with \(n\) observations, are simulated using a \(d\)-dimensional Gaussian. Each synthetic dataset is clustered by 2-means clustering, and the resulting CI is then a parametric bootstrap sample from the null distribution.
In low-dimensional settings, these synthetic datasets can be simulated from the maximum-likelihood Gaussian, \(N_{d}(\mathbf{\bar{x}},\mathbf{\hat{\Sigma}})\). However, since the CI is invariant to translation and rotation, it is more statistically efficient to sample from \(N_{d}(\mathbf{0},\text{diag}(\hat{\lambda}_{1},\cdots,\hat{\lambda}_{d}))\), where \(\hat{\lambda}_{i}\) are estimates of the eigenvalues of the covariance matrix. Huang et al. (2015) showed that in high-dimensional settings, a soft-thresholding approach improves the estimation of these eigenvalues, which improves SigClust's power.
Finally, if the candidate CI is unusually small with respect to the simulated null distribution, SigClust concludes that the candidate clustering is stronger than would be plausible under \(H_{0}\). The significance may be assessed by computing an empirical p-value for the candidate CI with respect to the left tail of the simulated null distribution.
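The parametric bootstrap just described can be sketched as follows, reusing the cluster_index helper above; scikit-learn's KMeans with two clusters stands in for 2-means, and the raw sample eigenvalues stand in for the soft-thresholded estimates recommended in high dimensions.

```python
import numpy as np
from sklearn.cluster import KMeans

def sigclust_null(X, n_sim=100, seed=0):
    """Parametric bootstrap: null CIs from 2-means clustering of Gaussian samples."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    null_cis = []
    for _ in range(n_sim):
        Z = rng.normal(size=(n, d)) * np.sqrt(eigvals)   # sample from N(0, diag(eigenvalues))
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)
        null_cis.append(cluster_index(Z, labels))
    return np.array(null_cis)

def sigclust_summary(sample_ci, null_cis):
    """Left-tail empirical p-value and z-score of the candidate CI."""
    p_value = (1 + np.sum(null_cis <= sample_ci)) / (1 + len(null_cis))
    z_score = (sample_ci - null_cis.mean()) / null_cis.std()
    return p_value, z_score
```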
We now apply SigClust to the two clusterings of interest from the example in the left panel of Figure 1: 2-means, which is shown with blue and green colors; and outlier-inlier, which is shown using crosses and dots. The results are presented using the _SigClust diagnostic plot_ (Figure 2). The simulated null distribution is shown with a histogram, and the candidate CIs of interest are shown with vertical lines for comparison. First, we discuss the 2-means clustering. We see that the 2-means CI is lower (stronger) than any of the 100 CIs simulated under \(H_{0}\), which would produce an empirical p-value of less than \(\frac{1}{100+1}\). While this empirical p-value allows us to reject \(H_{0}\) at level 0.01, we typically also report a _z-score_, which is computed by standardizing the candidate CI with respect to the mean and standard deviation of the simulated null distribution. In this case, the 2-means CI is 11.8 standard deviations lower than the mean CI under \(H_{0}\). The z-score is helpful when comparing multiple candidate CIs, which may all have the same empirical p-value. However, using either the z-score or the empirical p-value, we can conclude that the 2-means clusters are much stronger than should occur in unclustered, Gaussian data.
The other CI in Figure 2 is the one produced by the outlier-inlier clustering, which exemplifies the major point of this paper. There are two things we want to say about it. First, it is near 1, saying that this clustering is weak with respect to the CI criterion. We will argue that this criterion is not fair to unbalanced clusters. Second, it is far outside the range of CIs produced under \(H_{0}\), but in the right tail of the null distribution instead of the left. SigClust is defined as a left-tail test because the CI is small when clusters are present; placing the rejection region in the left tail therefore maximizes power. Furthermore, although this CI is far outside the range of those produced under \(H_{0}\), there is no useful conclusion available. In fact, any dataset can be labeled to produce a CI out in the right tail, regardless of whether there is cluster structure or not. For example, simply assigning points to clusters by independent coin flips tends to produce overlapping clusters whose means are very close to each other, and to the overall mean. Hence, the resulting CI will be close to 1 and larger than
any of the CIs in the SigClust null distribution, but clearly no clustering structure has been assessed.
Although the outlier-inlier clustering is weak with respect to the CI criterion, it looks like a clustering of keen interest in the PCA scatterplot (Figure 1), suggesting the CI is not the appropriate measure for this scenario. In fact, the large CI occurs specifically because the clusters are so unbalanced. The three outlier points account for almost none of the total variance of the 551 observations, and barely influence the mean. Therefore, the CI is essentially comparing the total variance of the 548 inlier points around _their_ mean to the total variance of all 551 points around the very nearby _overall_ mean. Hence the numerator and denominator of the CI are nearly the same, giving a value near 1. In general, unbalanced clusters will have large CIs for this reason. Another example is visually explained in Figure 4 in Section 3.1. Unbalanced clusters therefore require a criterion that accounts for the relative sizes.
The outlier-inlier clustering provides just one example of unbalanced clusters that may be critically important, but that SigClust is not powered to validate. In the following section, we generalize the CI to give more weight to small clusters, so that the outlier-inlier clustering is strong. Integrating this modified clustering objective into SigClust greatly improves its power when clusters are unbalanced, and allows SigClust to recognize the outlier-inlier clustering as strongly significant.
Figure 2: SigClust diagnostic comparing the 2-means and outlier-inlier clusterings. The histogram shows the simulated distribution of the CI under \(H_{0}\); the vertical lines show the CIs of our two clusterings. Figure shows that conventional SigClust is unable to validate the outlier-inlier clustering.
## 3 The proposed methodology
SigClust lacks power in the case of strongly unbalanced clusters because such clusters produce large CIs, as discussed in Section 2.2. To address this issue, we propose a modification of the CI, called the _weighted CI_ (WCI), that can recognize unbalanced clusters as strong. Using the WCI as SigClust's test statistic then rectifies SigClust's power in the unbalanced setting. In Section 3.1 we motivate and develop the WCI using a toy dataset, and propose a clustering method around it, called _WCI clustering_. In Section 3.2 we integrate WCI clustering into SigClust, which we call _Weighted SigClust_, and present toy examples showing its improved power over conventional SigClust. Finally, we return to the kidney cancer example, where we show that Weighted SigClust finds strong support for both clusterings of interest.
### The weighted cluster index
In this section we propose the _weighted cluster index_ (WCI), which weights the sums-of-squares in (2) by a power of the cluster sizes. The WCI function is:
\[\text{WCI}_{g}(C_{1},C_{2})=\frac{\frac{1}{|C_{1}|^{g}}\sum_{i\in C_{1}}|| \boldsymbol{x}_{i}-\boldsymbol{\bar{x}}^{(1)}||^{2}+\frac{1}{|C_{2}|^{g}}\sum _{i\in C_{2}}||\boldsymbol{x}_{i}-\boldsymbol{\bar{x}}^{(2)}||^{2}}{\frac{1}{ |C_{1}|^{g}}\sum_{i\in C_{1}}||\boldsymbol{x}_{i}-\boldsymbol{\bar{x}}||^{2}+ \frac{1}{|C_{2}|^{g}}\sum_{i\in C_{2}}||\boldsymbol{x}_{i}-\boldsymbol{\bar{x} }||^{2}}, \tag{5}\]
where the exponent \(g\) is a tuning parameter that controls how much weighting is applied.
The idea of the WCI is to weight the sums of squares so as to allow small clusters to matter more in the optimization, while retaining the CI property (Definition 1). It is easy to verify that the WCI has the CI property; furthermore, when the clusters \(C_{1}\) and \(C_{2}\) have the same size, the WCI is equivalent to the CI. When \(g=0\) we recover the original CI. When using WCI clustering in SigClust, we recommend trying each of \(g=0\), \(0.25\), and \(0.5\), and using the value with the strongest z-score.
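A direct NumPy sketch of (5) is shown below, again with observations in rows; setting g=0 recovers the ordinary CI, and this is illustrative code rather than the released implementation.

```python
import numpy as np

def weighted_cluster_index(X, labels, g=0.5):
    """Weighted cluster index (5); g = 0 recovers the ordinary CI."""
    xbar = X.mean(axis=0)
    numerator = denominator = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        weight = 1.0 / len(Xk) ** g
        numerator += weight * np.sum((Xk - Xk.mean(axis=0)) ** 2)
        denominator += weight * np.sum((Xk - xbar) ** 2)
    return numerator / denominator
```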
We motivate the WCI using a toy 2-D dataset, _Hotdog-Plus-Outliers_, which is a subset of the _Four Clusters_ dataset from Marron and Dryden (2021, Chapter 12). A scatterplot is shown in Figure 3. These data are simulated from two unbalanced clusters: a 60-point stretched Gaussian "hotdog," and two outlier points. We call this labeling the _true labels_. The left panel of Figure 3 colors the points by the true labels, while the right panel colors them by the 2-means labels. The 2-means clustering prefers to break apart the large cluster rather than recover the true labels. The true labels have a CI of \(0.67\), while the 2-means labels have a much stronger CI of \(0.43\). The reason that 2-means does not recover the correct clusters is that the two outlier points are just too few to contribute much to the optimization. Figure 4 provides a geometric explanation by visualizing each squared distance as a gray square, where one edge of the square connects a point to its associated centroid. The true labels achieve two tiny squares on the outlier points by partitioning them into their own cluster, but at the expense of quite a few large squares in the hotdog cluster. The 2-means labels, however, break apart the hotdog. The semitransparent squares produce darker areas of gray when they overlap, so the total amount and darkness of gray gives a visual indication of the within-cluster sum of squares for each labeling. The greater total accumulation of gray in the left panel compared to the right illustrates that it is worth trading those two tiny squares on the blues in order to get more small and medium squares on the hotdog. The reason that \(k\)-means often does not recover unbalanced clusters is that it is just not worth getting a few tiny squares on small clusters, when large clusters can be broken apart to get a larger number of small and medium-sized squares. Therefore, our approach in defining the WCI is to use the sizes of the clusters as weights, which allows small clusters to play a larger role in the optimization.

Figure 3: Hotdog-Plus-Outliers toy dataset, colored by true class labels (left) and 2-means labels (right). 2-means clustering does not recover the true labels, because breaking apart the hotdog produces a much lower CI of 0.43. The pink crosses in the right panel indicate the 2-means centroids.

Figure 4: Hotdog-Plus-Outliers dataset colored by true labels (left) and 2-means labels (right). The semitransparent gray squares show the squared distances to the centroids (pink crosses). The amount of gray gives a visual indication of within-cluster sums of squares. The greater accumulation of gray on the left visualizes the difference between the sums-of-squares.
In Figure 5 we rotate this dataset to its principal axes and consider the clusters generated by sliding a partition line along PC1.
We then plot the original CI, \(\text{WCI}_{0.25}\), \(\text{WCI}_{0.5}\), and \(\text{WCI}_{1}\) as functions of the partition point on PC1. The dashed vertical line marks the partition that minimizes the criterion. We see that conventional CI and \(\text{WCI}_{0.25}\) are minimized by splitting the hotdog roughly in half, but \(\text{WCI}_{0.25}\) is also nearly minimized by separating the outliers from the hotdog. It therefore considers both clusterings to be good. \(\text{WCI}_{0.5}\) and \(\text{WCI}_{1}\) are both minimized by separating the outliers from the hotdog, but \(\text{WCI}_{0.5}\) also indicates that splitting the hotdog in two is reasonable. Although \(\text{WCI}_{1}\) has a nice interpretation of minimizing the combined mean squares in each cluster, we do not recommend using it. In practice, \(\text{WCI}_{1}\) tends to pluck off one or two outliers, regardless of the cluster structure.
Figure 5: Bottom row: PC1-2 scatterplots of Hotdog-Plus-Outliers dataset. The dashed lines indicate the minimum of the criterion in question. Top row: CI, \(\text{WCI}_{0.25}\), \(\text{WCI}_{0.5}\), and \(\text{WCI}_{1}\) evaluated at different partitions along PC1.
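The curves in Figure 5 can be reproduced in spirit by scanning every cut point along PC1 and evaluating the criterion at each induced split; the sketch below reuses the weighted_cluster_index helper above and is only one way to organize that scan.

```python
import numpy as np

def wci_along_pc1(X, g):
    """WCI of every two-cluster split induced by a cut point along PC1."""
    X0 = X - X.mean(axis=0)
    pc1 = np.linalg.svd(X0, full_matrices=False)[2][0]   # first principal direction
    scores = X0 @ pc1
    order = np.argsort(scores)
    wcis = []
    for m in range(1, len(scores)):                      # m points left of the cut
        labels = np.zeros(len(scores), dtype=int)
        labels[order[:m]] = 1
        wcis.append(weighted_cluster_index(X, labels, g))
    return np.sort(scores)[:-1], np.array(wcis)
```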
The key feature of the WCI is that it does not favor balanced clusters as strongly as the CI. We now show how this works directly by clustering simulated Gaussian data in several choices of dimension. For each of \(d\in\{1,4,32,100\}\), and each \(g\) in a grid on \([0,1]\), we simulate 150 iid samples of size 100 from \(N_{d}(\mathbf{0},\mathbf{I}_{d})\), cluster them with WCI clustering, and plot the size of one of the two clusters chosen at random. Figure 6 shows a plot for each choice of dimension \(d\). Each scatterplot has a "wishbone" shape:
The small values of \(g\) produce relatively balanced clusters; as \(g\) increases, a wider range of cluster sizes is produced; and eventually only _unbalanced_ clusters are produced. The wishbone contracts horizontally toward the left margin as the dimension \(d\) increases. Note that a wide distribution of cluster sizes is produced just before the fork in the wishbone. Depending on the dimension, this occurs somewhere between \(g=0\) and \(0.6\). The locations of the wishbones in Figure 6 depend both on the dimension and on the eigenvalues of Gaussian covariance. However, for fixed \(d\), equal eigenvalues produce the right-most wishbone. Therefore examples with unequal eigenvalues do not give additional insight.

Figure 6: Scatterplot of cluster size resulting from WCI clustering of standard Gaussian samples with different choices of \(g\) and \(d\). Values of \(g\) between 0 and 0.5 produce the widest range of cluster sizes.
In conclusion, the WCI is a clustering criterion that is like the CI but allows smaller clusters to play a greater role. In particular, when the data are Gaussian, the WCI with appropriate choice of \(g\) is more or less impartial to how balanced the cluster sizes are.
### Weighted SigClust
SigClust has low power when facing unbalanced clusters because its test statistic, the CI, is not sensitive to small clusters. Therefore, to increase SigClust's power in this setting, we propose _Weighted SigClust_, which replaces the test statistic with the WCI (5), a criterion that _is_ sensitive to small clusters. Any value of the tuning parameter \(g\) in the WCI may be used; we recommend trying 0, 0.25, and 0.5 and using the choice with the strongest z-score. In particular, Weighted SigClust compares the WCI of the clustering of interest to a null distribution formed by minimizing the WCI on simulated datasets drawn under \(H_{0}\), as discussed in Section 2.2. As in conventional SigClust, we reject \(H_{0}\) if the sample WCI is small enough with respect to this null distribution.
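For concreteness, the following is a minimal Python sketch (ours, not the authors' implementation) of the two quantities the test compares: the WCI of a candidate binary labeling, written in the size-weighted form made explicit in (7) of Section 4 with cluster-size exponent \(g\), and a z-score taken as the standardized position of the sample WCI among the simulated null WCIs.

```python
import numpy as np

def wci(X, labels, g=0.5):
    """Weighted cluster index of a binary labeling (g = 0 recovers the CI)."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    r = ((X - X.mean(axis=0)) ** 2).sum(axis=1)   # squared distances to the overall mean
    num = den = 0.0
    for lab in np.unique(labels):
        in_cluster = labels == lab
        C = X[in_cluster]
        wss = ((C - C.mean(axis=0)) ** 2).sum()    # within-cluster sum of squares
        num += len(C) ** (-g) * wss
        den += len(C) ** (-g) * r[in_cluster].sum()
    return num / den

def sigclust_zscore(sample_stat, null_stats):
    """Standardized position of the sample statistic within the simulated null values."""
    null_stats = np.asarray(null_stats, dtype=float)
    return (sample_stat - null_stats.mean()) / null_stats.std(ddof=1)
```

The null WCIs themselves are produced exactly as in conventional SigClust, by minimizing the same criterion on datasets simulated under \(H_{0}\).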
We now apply the proposed Weighted SigClust to the Hotdog-Plus-Outliers data introduced in Section 3.1, and compare the result to the conventional SigClust. The diagnostic plots are in Figure 7.
We see that the conventional SigClust is not able to validate this clustering: the diagnostic shows that the sample CI is in the range of the typical values under the null hypothesis. This is quantified by the z-score of 1.09. Weighted SigClust, on the other hand, finds decisive evidence of clustering with both \(g=0.25\) and 0.5. The sample WCIs are smaller than any of the simulated null statistics, and the associated z-scores are very strong: -3.98 and -7.21 respectively. In the Appendix we extend this example, providing SigClust diagnostics for more values of \(g\) along with plots of the distributions of the cluster sizes in the simulation, as in the "wishbone plots" in Figure 6. We conjecture that power is maximized when these distributions are closest to uniform.
Figure 7: Comparison of conventional and Weighted SigClust for Hotdog-Plus-Outliers data. Conventional SigClust cannot reject \(H_{0}\), while the Weighted SigClust does, with a very strong z-score, for both \(g=0.25\) and 0.5.
While Weighted SigClust is much more powerful than the conventional SigClust in the previous example, we also provide an example showing that our method does not _lose_ power in the balanced cluster setting. In Figure 8(a) we show a scatterplot of two simulated round clusters, each of thirty points in \(\mathbb{R}^{2}\).
Like Hotdog-Plus-Outliers, these data are a subset of the _Four Clusters_ dataset in Marron and Dryden (2021). In Figure 8(b) we compare the diagnostic plots of Weighted SigClust with \(g=0.5\) and conventional SigClust on this dataset. Both test statistics are far below any of the simulated null statistics, indicating that both methods are very powerful on this balanced example. To save space, we do not show the plot for \(g=0.25\); its visual impression is in between the two shown.
We now return to the Kidney Cancer Data from Section 2.2, and its two candidate labelings: 2-means and outlier-inlier. As was demonstrated in Figure 2, conventional SigClust finds strong support for 2-means but not outlier-inlier. However, using Weighted SigClust with \(g=0.5\), we do find strong support for both labelings. In Figure 9 we show the Weighted SigClust diagnostic plots for this dataset for \(g=0.25\) and \(0.5\), with the WCIs and z-scores for both labelings. For completeness, we also include the exploratory-mode WCI and z-score, associated with the labeling that minimizes WCI on the sample. Not only has outlier-inlier become strongly significant using Weighted SigClust with \(g=0.25\), it is also more significant than 2-means, and rivals the exploratory-mode WCI.
Figure 8: An example of balanced clusters. Both conventional and Weighted SigClust with \(g=0.5\) recognize this clustering as strongly significant.
## 4 Implementation details
In this section we discuss the optimization task of finding a clustering that minimizes the WCI. The key difficulty is the combinatorial search space: there are \(2^{n-1}-1\) ways to assign \(n\) points to two clusters. The \(k\)-means criterion is typically minimized using the iterative algorithms of Lloyd (1982) or MacQueen (1967). These algorithms monotonically improve the objective with each iteration, so they always find at least a local minimum of the CI. However, this does not apply to the WCI, so a different approach is needed.
Our approach is to limit the \(\mathcal{O}(2^{n})\) search space to a \(\mathcal{O}(n)\) space of partitions, where each partition is induced by a hyperplane normal to one of the top principal components of the data. The idea is to slide a hyperplane along each of the top \(P\) PCs and record the WCI for each of the resulting \(P(n-1)\) partitions. Figure 5 provides a visual example of this scheme using the first principal component of the Hotdog-Plus-Outliers data. Since a (near)-optimal partition might not be determined by PC1, the process may be repeated with PC2, 3, etc., taking the lowest overall WCI as the output. The number \(P\) of PCs with which to repeat this process is up to the user. There is certainly no guarantee that the optimal clustering will be defined by a hyperplane normal to a top PC; we therefore make two notes in defense of this approach. First, in classical 2-means clustering, the optimal clusters can always be separated by a hyperplane. We hypothesize that this is true for WCI clustering as well. Second, among approaches using hyperplanes, defining the hyperplanes using the PC directions can benefit from the variance structure of the data; this idea was also considered by Tibshirani et al. (2001) in their development of the gap statistic. The routine described above is straightforward to implement; pseudocode is found in Algorithm 1. However, in practice we recommend using an accelerated version of this
routine, which we describe next.
Figure 9: Weighted SigClust diagnostic plots for Kidney Cancer Data with \(g=0.25\) and \(0.5\). With \(g=0.5\), all three labelings provide strongly significant z-scores.
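Before turning to the accelerated version, here is a minimal Python sketch (ours; Algorithm 1 itself is not reproduced in this excerpt) of the direct sliding-hyperplane search described above, which re-evaluates the WCI from scratch at each of the \(n-1\) splits along each of the top principal components.

```python
import numpy as np

def wci_of_split(X, r, idx1, idx2, g):
    """Direct WCI evaluation for the partition (idx1, idx2); r[i] = ||x_i - xbar||^2."""
    num = den = 0.0
    for idx in (idx1, idx2):
        C = X[idx]
        num += len(C) ** (-g) * ((C - C.mean(axis=0)) ** 2).sum()
        den += len(C) ** (-g) * r[idx].sum()
    return num / den

def wci_cluster(X, g=0.5, n_pcs=2):
    """Sliding-hyperplane WCI clustering: try every split along the top PCs."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    r = (Xc ** 2).sum(axis=1)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # rows of Vt are PC directions
    best_val, best_labels = np.inf, None
    for p in range(min(n_pcs, Vt.shape[0])):
        order = np.argsort(Xc @ Vt[p])                  # indices sorted by PC-p score
        for k in range(1, len(X)):
            val = wci_of_split(X, r, order[:k], order[k:], g)
            if val < best_val:
                best_labels = np.zeros(len(X), dtype=int)
                best_labels[order[:k]] = 1
                best_val = val
    return best_val, best_labels
```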
Accelerating Algorithm 1 is accomplished by rewriting the within-cluster sum-of-squares terms as functions of the pairwise squared distances \(d_{ij}=||\mathbf{x}_{i}-\mathbf{x}_{j}||^{2}\), using the following well-known identity:
**Lemma 1**.: _Let \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\in\mathbb{R}^{d}\), let \(\mathbf{\bar{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\), and let \(d_{ij}=||\mathbf{x}_{i}-\mathbf{x}_{j}||^{2}\). Then_
\[\sum_{i=1}^{n}||\mathbf{x}_{i}-\mathbf{\bar{x}}||^{2}=\frac{1}{2n}\sum_{i,j=1}^{n}d_{ij}. \tag{6}\]
This lemma will allow us to organize the calculations such that checking the next partition can be performed via an update of previous calculations. This update step is what provides the acceleration.
To search along the \(p^{\text{th}}\) PC, we first define the following. Let \(d_{ij}\) and \(\mathbf{\bar{x}}\) be defined as in Lemma 1 for the observed data \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\}\), and let \(r_{i}=||\mathbf{x}_{i}-\mathbf{\bar{x}}||^{2}\) be the squared distances to the overall mean. Sort the data indices by PC \(p\) score, so that \(\pi(\cdot)\) represents the permutation of \(\{1,\cdots,n\}\) that sorts \(\{\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\}\) by PC \(p\) score. Then, any two clusters formed by splitting along this PC will have sizes \(k\) and \(n-k\) for some \(1\leq k<n\), and can be written as \(C_{1}=\{\pi(1),\cdots,\pi(k)\}\) and \(C_{2}=\{\pi(k+1),\cdots,\pi(n)\}\).
The WCI for these clusters would be written
\[\frac{k^{-0.5}\sum_{i=1}^{k}||\mathbf{x}_{\pi(i)}-\mathbf{\bar{x}}^{(1)}||^{2}+(n-k)^{ -0.5}\sum_{i=k+1}^{n}||\mathbf{x}_{\pi(i)}-\mathbf{\bar{x}}^{(2)}||^{2}}{k^{-0.5}\sum_ {i=1}^{k}r_{\pi(i)}+(n-k)^{-0.5}\sum_{i=k+1}^{n}r_{\pi(i)}}. \tag{7}\]
Applying Lemma 1 to the sums in the numerator, (7) becomes
\[\frac{\tfrac{1}{2}k^{-1.5}\sum_{i,j=1}^{k}d_{\pi(i),\pi(j)}+\tfrac{1}{2}(n-k)^{-1.5}\sum_{i,j=k+1}^{n}d_{\pi(i),\pi(j)}}{k^{-0.5}\sum_{i=1}^{k}r_{\pi(i)}+(n-k)^{-0.5}\sum_{i=k+1}^{n}r_{\pi(i)}}. \tag{8}\]
For our updating algorithm, we will label the four summations in (8):
\[\alpha_{k} =\sum_{i,j=1}^{k}d_{\pi(i),\pi(j)}\] \[\beta_{k} =\sum_{i,j=k+1}^{n}d_{\pi(i),\pi(j)}\] \[\gamma_{k} =\sum_{i=1}^{k}r_{\pi(i)}\] \[\delta_{k} =\sum_{i=k+1}^{n}r_{\pi(i)}.\]
Then, (8) is succinctly written
\[\frac{\tfrac{1}{2}k^{-1.5}\alpha_{k}+\tfrac{1}{2}(n-k)^{-1.5}\beta_{k}}{k^{-0.5}\gamma_{k}+(n-k)^{-0.5}\delta_{k}}. \tag{9}\]
To check the next partition on this PC, we must find the \((k+1)^{\text{th}}\) values of \(\alpha\), \(\beta\), \(\gamma\), \(\delta\). Next we will show that these values can be specified in terms of an efficient update of their \(k^{\text{th}}\) values.
We explain the update step visually using Figure 10, which shows the matrix \(\{d_{ij}\}\) with rows and columns ordered by \(\pi(\cdot)\).
The values of \(\alpha_{k}\) and \(\beta_{k}\) are given by the sums of the values in the orange and blue-bordered boxes respectively. We can see that \(\alpha_{k+1}\) and \(\beta_{k+1}\) would be found by expanding the orange-bordered box to include the orange-shaded values while shrinking the blue-bordered box to exclude the blue-shaded values. Therefore, to perform the update, we need only add the terms shaded in orange to \(\alpha_{k}\), and subtract the terms shaded in blue from \(\beta_{k}\). The values of \(\gamma_{k}\) and \(\delta_{k}\) are updated by adding \(r_{\pi(k+1)}\) to the former and subtracting it from the latter. The pseudocode to search along PC \(p\) is given in Algorithm 2.
Figure 10: Distance matrix with rows and columns ordered by PC score, illustrating the updating step of the algorithm.
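A minimal Python sketch (ours) of this update scheme for a single principal component is given below; it assumes the matrix of pairwise squared distances fits in memory and returns the WCI of every split along that component, from which the minimizer can be read off.

```python
import numpy as np

def wci_along_pc(X, pc_scores, g=0.5):
    """WCI of every split along one PC via the alpha/beta/gamma/delta updates.

    Returns an array w of length n-1 with w[k-1] = WCI of the split of sizes (k, n-k).
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    order = np.argsort(pc_scores)
    Xs = X[order]
    D = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared distances
    r = ((Xs - X.mean(axis=0)) ** 2).sum(axis=1)                # squared distances to overall mean
    # initial values for the split of sizes (1, n-1)
    alpha, beta = 0.0, D[1:, 1:].sum()
    gamma, delta = r[0], r[1:].sum()
    out = np.empty(n - 1)
    for k in range(1, n):
        num = 0.5 * (k ** (-1 - g) * alpha + (n - k) ** (-1 - g) * beta)
        den = k ** (-g) * gamma + (n - k) ** (-g) * delta
        out[k - 1] = num / den
        if k < n - 1:                       # update to the split of sizes (k+1, n-k-1)
            alpha += 2.0 * D[k, :k].sum()   # add the new row/column of the first block
            beta -= 2.0 * D[k, k + 1:].sum()  # remove them from the second block
            gamma += r[k]
            delta -= r[k]
    return out
```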
## 5 Discussion
The discussion section addresses several remaining issues. Section 5.1 points out and resolves a potential lack of statistical rigor concerning confirmatory-mode SigClust. Section 5.2 discusses additional open problems, including a recent asymptotic study of SigClust, and concludes the paper.
### Rigorous statistical inference issues in SigClust
We raise a fine statistical point about the soundness of SigClust that has not yet been addressed in the literature: SigClust is only a valid hypothesis test when both the sample and the synthetic datasets are clustered using the same procedure, that is, when SigClust is used in exploratory mode. Here we discuss why this is the case and offer a resolution.
In exploratory mode (recall Definition 2), the same clustering procedure is used to cluster both the sample _and_ the synthetic datasets used in the null distribution. In conventional SigClust this is 2-means clustering, and in Weighted SigClust it is WCI clustering. In confirmatory mode, by contrast, the procedure used to generate the candidate labeling may not be the same as the one used for generating the null CIs. Hence, the labeling may not provide the minimal achievable CI on the sample. In fact, the candidate labeling is often produced by visual inspection, as with the outlier-inlier labels, or another ad hoc procedure that cannot be specified as a pure function of the data. This issue can be better examined by rigorously defining
SigClust's test statistic. Without loss of generality, we discuss conventional SigClust; the same discussion applies to Weighted SigClust, substituting the WCI for the CI. Recall that \(\mathcal{P}_{n}\) denotes the binary partitions of \(\{1,\cdots,n\}\). In exploratory mode, we use the sample \(\mathbf{X}\) to produce the test statistic \(T(\mathbf{X}):=\min_{(C_{1},C_{2})\in\mathcal{P}_{n}}\mathrm{CI}(C_{1},C_{2})\), and the distribution of \(T\) under \(H_{0}\) can be estimated using the simulation procedure described in Section 2.2. However, in confirmatory mode, the sample statistic is \(\tilde{T}(\mathbf{X},\mathbf{f}):=\mathrm{CI}(\mathbf{f}(\mathbf{X}))\), where \(\mathbf{f}:\mathbb{R}^{n\times d}\to\mathcal{P}_{n}\) is the clustering procedure used to generate the candidate labels, but the CIs in the null distribution are still produced by \(T\), not \(\tilde{T}\). This means that, in the sense of strict mathematical statistics, SigClust is only a valid hypothesis test when used in exploratory mode, because the sample and the null use the same test statistic. A rigorous recipe for SigClust might be to specify the clustering routine \(\mathbf{f}\) in advance, and use that same routine for both the sample and the null distribution.
However, this discussion does not invalidate the use of SigClust in confirmatory mode, as we will show using the kidney cancer example. The diagnostic plot in Figure 9 shows WCIs for three labelings: the exploratory mode labeling, i.e., the labeling that minimizes the WCI statistic, the outlier-inlier labeling, and the 2-means labeling. The latter two are confirmatory-mode uses of SigClust. The diagnostic plot shows that all three WCIs are far smaller than the WCIs simulated under \(H_{0}\). Rejecting \(H_{0}\) for the exploratory-mode CI is absolutely justified, because the same clustering procedure was used for both the sample and the synthetic datasets. On the other hand, the outlier-inlier labeling came from visual inspection of the PCA scatterplot in Figure 1, and the 2-means labels came from minimizing the CI, not the WCI. The discussion above would suggest that rejecting \(H_{0}\) for these labelings would not be statistically valid because the same function was not used for both the sample and the synthetic datasets. We offer the following resolution. The WCIs simulated under the null are explicitly trying to minimize the criterion. The WCIs of our candidate labelings are smaller than all of these simulated ones, without even trying to be minimal. _They are both therefore much stronger clusterings with respect to WCI than the strongest that could be found under the null._ This is exactly the sort of conclusion that SigClust is intended to provide. We are therefore justified in using SigClust in confirmatory mode, in spite of the subtle issue discussed above.
### Additional issues
The original SigClust paper (Liu et al., 2008) described a parametric bootstrap method to estimate the null distribution of the CI, but recently, Chakravarti et al. (2019) proved an \(n\to\infty\) central limit theorem for this distribution. This raises the possibility of using asymptotic arguments in SigClust instead of simulation, which could provide a big improvement in speed, especially in high-dimensional settings. We compared their CLT to results from simulation, and found that the sample size needs to be in the tens of thousands before the asymptotic approximation begins to match the simulated distribution. Details will appear in the forthcoming dissertation (Keefe,
2023).
An important question, then, is whether the rate of convergence in this CLT can be improved, because SigClust is often used with much smaller samples. Furthermore, we conjecture that a similar CLT would apply to WCI as well. If so, it would be helpful to understand its asymptotics not only as \(n\) grows, but also as the clusters become more imbalanced, i.e., \(|C_{1}|/|C_{2}|\to 0\). Such a CLT would be useful for large sample sizes with highly imbalanced clusters: the large \(n\) would make simulation slow, and the imbalanced clusters would mean that subsampling to a more manageable \(n\) would lose too much signal from the minority cluster.
We believe there are much better ways to minimize the WCI criterion. Since it is hard to minimize the WCI globally, i.e., over all binary partitions, we have chosen instead to minimize it over a particular subset of partitions using the "sliding hyperplane" scheme in Section 4. Minimizing over this subset already provides a dramatic increase in SigClust's power in unbalanced examples, but we would prefer a more principled approach, as well as one that could readily extend to \(k>2\). The forthcoming dissertation (Keefe, 2023) will examine an approach based on semidefinite programming.
An open theoretical question involves the relationship between the power of the test and the sizes of the clusters that SigClust produces when simulating the null distribution of WCI. This is further discussed in the Appendix. In our examples, it seems that power is maximized when the distribution of cluster sizes is uniform; see, e.g., Figure 11. This supports the idea that being more impartial to cluster size is what gives our method power in unbalanced settings.
## Appendix A Extended Hotdog-Plus-Outliers SigClust example
Here we devote more discussion to the Hotdog-Plus-Outliers example, whose SigClust diagnostic plots are in Figure 7. In Figure 11 we collect Sigclust diagnostic plots for more values of \(g\in[0,0.7]\) (left), and histograms of the cluster sizes produced in the null simulation (right). Each row of the figure corresponds to a choice of \(g\). The left column contains the SigClust diagnostic plot, comparing the \(\text{WCI}_{g}\) of the sample (red line) to the null distribution of \(\text{WCI}_{g}\)s (gray histogram). As in the previous SigClust diagnostic plots, we annotate the plot with the z-score as well. The right column of the figure shows the distribution of the size of the simulated clusters produced within each of the parametric bootstrap samples during the SigClust simulation. In particular, for each bootstrap sample, one of the two clusters is chosen at random and its size is recorded in the histogram. This distribution is comparable to the
distributions visualized in the "wishbone plots" in Figure 6. This tells us whether the simulated clusters in the SigClust simulation tended to be balanced or unbalanced. For example, in the top-right plot, we see that when \(g=0\), the clusters tended to be fairly balanced, with size ratios roughly between 1:2 and 1:1. For \(g=0.4\) on the other hand, a wide range of balances was seen. For \(g\geq 0.55\) the clusters were always very unbalanced, producing bimodal histograms.
We have two takeaways from this figure. From the left column, we see that values of \(g\in[0.3,0.7]\) all produce very negative z-scores, indicating high power against \(H_{0}\) for this example. In fact, \(g=0.5\) gives the highest power, although this is not the case for all datasets. From the right column we see that for \(g=0\) the clustering method is very partial to balanced clusters, and for \(g\geq 0.6\) very partial to unbalanced clusters. We conjecture that high power tends to be produced when the cluster size distribution in the right-column histograms is close to uniform, which in this example occurs for \(g\) around 0.4.
Figure 11: Weighted SigClust results for Hotdog-Plus-Outliers dataset using different exponents \(g\). The column of SigClust diagnostic plots on the left shows that \(g\in[0.3,0.7]\) produce high power against \(H_{0}\) for this example. The histograms on the right show that the clusters in the SigClust null simulation tend to be balanced for \(g=0\), very unbalanced for \(g\geq 0.6\), and exhibit a range of balances when \(g\in[0.3,0.5]\).
Following our conjecture above, we pose the open problem of finding a _Gaussian-impartial_ clustering method, which we define as a clustering method that would produce a uniform distribution of cluster sizes given Gaussian data. The following definition makes this precise.
**Definition 3** (Gaussian impartial).: Let \(T_{n}:(\mathbb{R}^{d\times n}\times\mathcal{P}_{n})\rightarrow\mathbb{R}\) be a (binary) clustering criterion, recalling that \(\mathcal{P}_{n}\) denotes the two-set partitions of \(\{1,\cdots,n\}\). Let \(f_{n}:\mathbb{R}^{d\times n}\rightarrow\mathcal{P}_{n}\) be the clustering method that seeks to minimize \(T_{n}\), i.e.,
\[f_{n}(\mathbf{X})=\underset{(C_{1},C_{2})\in\mathcal{P}_{n}}{\operatorname{ argmin}}\ T_{n}(\mathbf{X},(C_{1},C_{2})). \tag{10}\]
Let \(X_{1},X_{2},\cdots\in\mathbb{R}^{d}\) be drawn iid from \(N(\boldsymbol{\mu},\boldsymbol{\Sigma})\) for some mean \(\boldsymbol{\mu}\) and covariance \(\boldsymbol{\Sigma}\), and let \(\mathbf{X}_{n}=[X_{1},\cdots,X_{n}]^{T}\). Let \(C\) be a cluster chosen uniformly at random from the two in \(f_{n}(\mathbf{X}_{n})\), and independently of \(\mathbf{X}_{n}\). Then we say \(f_{n}\) is _Gaussian impartial with respect to \(d,n,\boldsymbol{\mu},\boldsymbol{\Sigma}\)_ if the distribution of \(|C|\) is the uniform distribution on \(\{1,\cdots,n-1\}\). \(\diamond\)
A Gaussian-impartial clustering method would be even more impartial about where to split the Gaussian measure than the criteria in this work, and therefore should exhibit _no_ bias toward balanced or unbalanced clusters. Such a method could be even more powerful against alternatives of small clusters than the WCI method presented in this work.
## Funding
The authors gratefully acknowledge funding from NSF DMS-2113404 and NIAMS P30AR072580.
## Disclosure Statement
The authors report there are no competing interests to declare.
|
2302.01426 | Whitham modulation theory for the defocusing nonlinear Schrodinger
equation in two and three spatial dimensions | The Whitham modulation equations for the defocusing nonlinear Schrodinger
(NLS) equation in two, three and higher spatial dimensions are derived using a
two-phase ansatz for the periodic traveling wave solutions and by
period-averaging the conservation laws of the NLS equation. The resulting
Whitham modulation equations are written in vector form, which allows one to
show that they preserve the rotational invariance of the NLS equation, as well
as the invariance with respect to scaling and Galilean transformations, and to
immediately generalize the calculations from two spatial dimensions to three.
The transformation to Riemann-type variables is described in detail; the
harmonic and soliton limits of the Whitham modulation equations are explicitly
written down; and the reduction of the Whitham equations to those for the
radial NLS equation is explicitly carried out. Finally, the extension of the
theory to higher spatial dimensions is briefly outlined. The multidimensional
NLS-Whitham equations obtained here may be used to study large amplitude
wavetrains in a variety of applications including nonlinear photonics and
matter waves. | Asela Abeya, Gino Biondini, Mark A. Hoefer | 2023-02-02T21:33:23Z | http://arxiv.org/abs/2302.01426v1 | Whitham modulation theory for the defocusing nonlinear Schrodinger equation in two and three spatial dimensions
###### Abstract
The Whitham modulation equations for the defocusing nonlinear Schrodinger (NLS) equation in two, three and higher spatial dimensions are derived using a two-phase ansatz for the periodic traveling wave solutions and by period-averaging the conservation laws of the NLS equation. The resulting Whitham modulation equations are written in vector form, which allows one to show that they preserve the rotational invariance of the NLS equation, as well as the invariance with respect to scaling and Galilean transformations, and to immediately generalize the calculations from two spatial dimensions to three. The transformation to Riemann-type variables is described in detail; the harmonic and soliton limits of the Whitham modulation equations are explicitly written down; and the reduction of the Whitham equations to those for the radial NLS equation is explicitly carried out. Finally, the extension of the theory to higher spatial dimensions is briefly outlined. The multidimensional NLS-Whitham equations obtained here may be used to study large amplitude wavetrains in a variety of applications including nonlinear photonics and matter waves.
6 February 2023
## 1 Introduction
The nonlinear Schrodinger (NLS) equation in one, two and three spatial dimensions is a ubiquitous model in nonlinear science. One reason is its universality as a model for the evolution of weakly nonlinear dispersive wave trains [9, 18, 53]. The NLS equation arises as the governing equation in a broad variety of physical contexts, ranging from water waves to optics, acoustics, Bose-Einstein condensates and beyond [6, 36, 39, 44]. As a result, enormous attention has been devoted over the last half century to the study of its solutions. It is also the case that in many physical situations, dispersive effects are much weaker than nonlinear ones and these scenarios, which can be formulated as small dispersion limits of the governing equations, give rise to a variety of interesting physical phenomena [25]. In particular, the small dispersion limits often lead to the formation of dispersive shock waves, a coherent, slowly modulated and expanding train of nonlinear oscillations.
A powerful tool in the study of small dispersion limits is Whitham modulation theory (also simply called Whitham theory) [56, 57]. Whitham theory is an asymptotic framework within which one can derive the Whitham modulation equations or Whitham equations for brevity. The Whitham equations are a system of first-order, quasi-linear partial differential equations (PDEs) that govern the evolution of the periodic traveling wave solutions of the original PDE over spatial and temporal scales that are larger than the traveling wave solution's wavelength and period, respectively. Whitham theory does not require integrability of the original PDE, and therefore it can also be applied to non-integrable PDEs. Thanks to Whitham theory and, when applicable, the inverse scattering transform (IST), much is known about small dispersion limits for (1+1)-dimensional nonlinear wave equations (e.g., see [13, 21, 25, 30, 38, 45] and references therein). On the other hand, small dispersion limits for (2+1)-dimensional systems have been much less studied and (3+1)-dimensional systems apparently have
not been studied at all. Recently, the Whitham modulation equations for the Kadomtsev-Petviashvili (KP) and two-dimensional Benjamin-Ono equations and, more generally, a class of (2+1)-dimensional equations of KP type were derived [2; 3; 1]. The properties of the resulting KP-Whitham equations were then studied in [14] and the soliton limit of these equations was used in [48; 50; 49] to study the time evolution of a variety of piecewise-constant initial conditions in the modulation equations and, in the process, characterize the resulting dynamics of the solutions of the KP equation. Recently, the Whitham equations for the radial NLS equation [4] and those for focusing and defocusing two-dimensional nonlinear Schrodinger (2DNLS) equations [5] were also derived using a multiple scales approach.
The goal of this work is to derive and study the Whitham modulation equations for the defocusing multi-dimensional nonlinear Schrodinger equation, which we write in the semiclassical scaling as
\[i\varepsilon\psi_{t}+\varepsilon^{2}\nabla^{2}\psi-2|\psi|^{2}\psi=0 \tag{1.1}\]
for a complex-valued field \(\psi(\mathbf{x},t)\), where \(\mathbf{x}=(x_{1},\ldots,x_{N})^{T}\) and \(\nabla^{2}\psi=\psi_{x_{1}x_{1}}+\cdots+\psi_{x_{N}x_{N}}\) is the spatial Laplacian, and subscripts \(x_{j}\) and \(t\) denote partial differentiation throughout. Equation (1.1) arises as a governing equation in water waves [6], optics [44], plasmas [36], Bose-Einstein condensates [39], magnetic materials [59] and beyond. The small parameter \(0<\varepsilon\ll 1\) quantifies the relative strength of dispersive effects compared to nonlinear ones and sets a spatial and temporal scale for oscillatory solutions. In the (1+1)-dimensional case, the Whitham modulation equations have been shown to provide quantitative predictions for experiments in ultracold quantum fluids [34; 35] and nonlinear optics [55; 58; 10; 8].
While the Whitham equations for the two-dimensional version of (1.1) (hereafter referred to as the 2DNLS equation) were obtained in [5], this work differs from [5] in several important respects. First, our derivation employs a two-phase ansatz for the periodic solutions of the 2DNLS equation, which has several practical advantages. For one thing, it immediately yields a second conservation of waves equation in vector form that was missed in [5]. It is well known that several methods can be used to derive the Whitham equations: averaged conservation laws, averaged Lagrangian, and multiple scales perturbation theory. Our derivation employs averaged conservation laws which are directly tied to the physical symmetries of the NLS equation, rather than secularity conditions as used in [5]. Moreover, the ability to take advantage of the second conservation of waves equation also simplifies the calculations. In contrast, one of the secularity conditions obtained in [5] is equivalent to the averaged energy equation, which is more complicated and requires more significant manipulation than the second conservation of waves equation. Our approach dramatically simplifies the calculations and enables us to carry out the whole derivation in vector form. Consequently, the resulting NLS-Whitham equations are obtained in a simpler way, which lays the groundwork for generalizations to other NLS-type equations and higher dimensions.
In this work, we also show how our approach allows one to easily generalize the derivation of the Whitham equations to the NLS equation in an arbitrary number of spatial dimensions. We primarily concentrate on the two and three dimensional cases, though some of our results apply to an arbitrary number of spatial dimensions. This generalization to higher dimensions is particularly relevant because the NLS equation in three spatial dimensions is the zero-potential version of the Gross-Pitaevski equation, and is therefore of fundamental importance in describing the dynamics of Bose-Einstein condensates [39], so we expect our results to be directly applicable in that context.
We use our representation of the NLS-Whitham equations to identify several symmetries and reductions of the Whitham equations. For example, we verify that the Whitham equations preserve the invariance of the (N+1)-dimensional NLS equation with respect to scaling and Galilean transformations, and we take advantage of the vector formulation of the modulation equations, which we use to show that they preserve the rotation symmetry of the multidimensional NLS equation. We also explicitly write down both the harmonic and soliton limits of the Whitham equations in a mathematically convenient set of independent variables (which we refer to as Riemann-type variables)
and in physical variables. We identify the self-consistent reduction of the 2DNLS-Whitham equations to the Whitham equations for the radial NLS equation.
The outline of this work is as follows. In section 2 we write the NLS equation in hydrodynamic form, write down its conservation laws, and obtain a representation for the periodic solutions. In section 3 we average the conservation laws to obtain the Whitham equations in physical variables. In section 4 we begin to study the reductions of the Whitham equations in physical variables, including one-dimensional reductions as well as the harmonic and soliton limits. In section 5 we discuss two different transformations to Riemann-type variables. In section 6 we derive further symmetries and reductions of the Whitham equations, including the reduction to the Whitham equations of the radial NLS equation and the harmonic and soliton limits of the Whitham equations in Riemann-type variables. In section 7 we present the generalization of the results to the NLS equation in three spatial dimensions, and in section 8 we end this work with a discussion of the results and some final remarks. The details of various calculations are relegated to the Appendix.
## 2 Hydrodynamic form, conservation laws and periodic solutions of the NLS equation
### Madelung form of the NLS equation and its conservation laws
We begin by writing down the first few conservation laws of the NLS equation (1.1) in an arbitrary number of dimensions. It is convenient to introduce the Madelung transformation
\[\psi({\bf x},t)=\sqrt{\rho({\bf x},t)}\,e^{i\Phi({\bf x},t)}\,,\]
\[{\bf u}({\bf x},t)=\varepsilon{\bf\nabla}\!\Phi({\bf x},t)\,.\]
where \({\bf u}=(u_{1},\ldots,u_{N})^{T}\), \({\bf x}=(x_{1},\ldots,x_{N})^{T}\) and \({\bf\nabla}=(\partial_{x_{1}},\ldots,\partial_{x_{N}})^{T}\). Substituting (2.1) into the NLS equation (1.1), separating into real and imaginary parts, and differentiating the real part with respect to each of the spatial variables yields the following dispersive hydrodynamic system of PDEs:
\[\rho_{t}+2{\bf\nabla}\!\cdot\!(\rho{\bf u})=0\,,\]
\[{\bf u}_{t}+2({\bf u}\!\cdot\!{\bf\nabla}){\bf u}+2{\bf\nabla}\!\rho-{{1\over 4}}\,\varepsilon^{2}{\bf\nabla}\!\left(\nabla^{2}\ln\rho+{1\over\rho}\nabla^{2}\rho\right)=0\,.\]
The conservation laws for (1.1) for the mass \(E\), momentum \({\bf P}\) and energy \(H\) in integrated form are:
\[{dE\over dt}=0\,,\qquad\quad{d{\bf P}\over dt}=0\,,\qquad\quad{dH\over dt}=0\,,\]
where
\[E=\int_{{\mathbb{R}}^{N}}|\psi|^{2}\,({\rm d}{\bf x})\,,\quad{\bf P}={{i\over 2 }}\varepsilon\int_{{\mathbb{R}}^{N}}(\psi{\bf\nabla}\psi^{*}-\psi^{*}{\bf \nabla}\psi)\,({\rm d}{\bf x})\,,\quad H=\int_{{\mathbb{R}}^{N}}\left( \varepsilon^{2}\|{\bf\nabla}\psi\|^{2}+|\psi|^{4}\right)({\rm d}{\bf x})\,,\]
\(\|{\bf v}\|^{2}=|v_{1}|^{2}+\cdots+|v_{N}|^{2}\) is the Euclidean vector norm and \(({\rm d}{\bf x})={\rm d}x_{1}\cdots{\rm d}x_{N}\) is the volume element in \({\mathbb{R}}^{N}\). These conservation laws correspond, via Noether's theorem, to the invariance of the NLS equation (1.1) with respect to phase rotations, space and time translations, respectively [53]. In differential form, and in terms of the Madelung variables, these conservation laws become [37]
\[\rho_{t}+2{\bf\nabla}\!\cdot\!(\rho{\bf u})=0\,,\]
\[(\rho{\bf u})_{t}+2{\bf\nabla}\!\cdot\!(\rho{\bf u}\otimes{\bf u})+{\bf\nabla}\!(\rho^{2})={{1\over 2}}\varepsilon^{2}\!\left({\bf\nabla}\nabla^{2}\rho-{\bf\nabla}\!\cdot\!\left({1\over\rho}{\bf\nabla}\rho\otimes{\bf\nabla}\rho\right)\right),\]
\[h_{t}+2{\bf\nabla}\!\cdot\!\left((h+\rho^{2}){\bf u}\right)=\varepsilon^{2}{ \bf\nabla}\!\cdot\!\left({\bf u}\nabla^{2}\rho-{1\over\rho}({\bf\nabla}\!\cdot \!\rho{\bf u})\nabla\rho\right),\]
where \(\otimes\) denotes the dyadic [namely, \({\bf v}\otimes{\bf w}={\bf v}{\bf w}^{T}\), so that \(({\bf v}\otimes{\bf w})_{i,j}=v_{i}\,w_{j}\)] and the mass density, momentum density and energy density of (1.1) are, respectively
\[\rho=|\psi|^{2},\qquad\rho{\bf u}={{i\over 2}}\varepsilon(\psi{\bf\nabla}\psi^{*}-\psi^{*}{\bf\nabla}\psi),\qquad\quad h=\varepsilon^{2}\|{\bf\nabla}\psi\|^{2}+|\psi|^{4}=\rho\|{\bf u}\|^{2}+\rho^{2}+{\varepsilon^{2}\over 4\rho}\|{\bf\nabla}\rho\|^{2}\,.\]
The first two of the conservation laws (2.4) are equivalent to the real and imaginary parts of the NLS equation in hydrodynamic form (2.2), but only up to an extra differentiation, an issue that we will return to later.
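As a quick numerical illustration (ours, not part of the original text), the sketch below evaluates the integrated conserved quantities \(E\), \(\mathbf{P}\) and \(H\) defined above for a field sampled on a periodic square grid in two dimensions, using FFT-based derivatives; the plane-wave test field and all function names are our own.

```python
import numpy as np

def conserved_quantities(psi, dx, eps):
    """Mass E, momentum P and energy H for a 2D field on a periodic square grid."""
    ny, nx = psi.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    psi_hat = np.fft.fft2(psi)
    psi_x = np.fft.ifft2(1j * KX * psi_hat)     # spectral derivative along x
    psi_y = np.fft.ifft2(1j * KY * psi_hat)     # spectral derivative along y
    dA = dx * dx
    E = np.sum(np.abs(psi) ** 2) * dA
    P = 0.5j * eps * np.array([
        np.sum(psi * np.conj(psi_x) - np.conj(psi) * psi_x),
        np.sum(psi * np.conj(psi_y) - np.conj(psi) * psi_y),
    ])
    H = np.sum(eps ** 2 * (np.abs(psi_x) ** 2 + np.abs(psi_y) ** 2)
               + np.abs(psi) ** 4) * dA
    return E, P.real * dA, H

# plane-wave check: for psi = sqrt(rho) exp(i u.x / eps), P / E recovers u
eps, L, n = 0.1, 2 * np.pi, 128
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
psi = np.sqrt(2.0) * np.exp(1j * (1.0 * X + 2.0 * Y) / eps)   # rho = 2, u = (1, 2)
E, P, H = conserved_quantities(psi, L / n, eps)
print(P / E)    # approximately (1, 2), up to roundoff
```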
### Periodic solutions of the NLS equation via a two-phase ansatz
The Whitham modulation equations govern the slow dynamics of the parameters of the periodic solutions of the PDE of interest. Next, we therefore write down the periodic solutions of the hydrodynamic system (2.2) in arbitrary dimensions. We begin by looking for solutions in the form of the following two-phase ansatz:
\[\rho({\bf x},t)=\rho(Z)\,,\qquad\quad\Phi({\bf x},t)=\phi(Z)+S\,,\]
where \(\rho(Z)\) and \(\phi(Z)\) are periodic functions of \(Z\) with period one, and the "fast phases" \(Z\) and \(S\) are
\[Z({\bf x},t)=({\bf k}\cdot{\bf x}-\omega t)/\varepsilon\,,\qquad\quad S({\bf x},t)=({\bf v}\cdot{\bf x}-\mu t)/\varepsilon\,.\]
where \({\bf k}=(k_{1},\ldots,k_{N})^{T}\) and \({\bf v}=(v_{1},\ldots,v_{N})^{T}\). The reason for using a two-phase ansatz is the fact that the solution \(\psi({\bf x},t)\) of the NLS equation (1.1) is complex-valued, unlike that of the Korteweg-deVries (KdV) equation (of which the KP equation mentioned earlier is a two-dimensional generalization), which is real-valued. Therefore, a one-phase ansatz (e.g., as in [5]) leads only to a subclass of all periodic solutions, and one would need to apply a Galilean boost a posteriori in order to capture the most general family of periodic solutions of the NLS equation. Two-phase ansatzes are standard when deriving the Whitham equations using Lagrangian averaging (e.g., see [57]); the novelty here is that such a two-phase ansatz is combined with the use of averaged conservation laws. A key benefit of this approach is the immediate deduction of an additional conservation law compared to [5].
In light of (2.6), the definition (2.1) yields
\[{\bf u}(Z)={\bf k}\phi^{\prime}(Z)+{\bf v}\,,\]
using primes to denote derivatives with respect to \(Z\) for brevity. The fact that \(\phi(Z)\) is periodic implies
\[\overline{{\bf u}}={\bf v}\,,\]
where throughout this work the overbar will denote the integral of a quantity with respect to \(Z\) over the unit period. Moreover, the definition (2.1) implies the irrotationality condition
\[\nabla\wedge{\bf u}=0\,.\]
Hereafter, \({\bf v}\wedge{\bf w}\) is the \(N\)-dimensional wedge product, which in two and three spatial dimensions can be replaced by the standard cross product [17, 29]. We substitute the two-phase ansatz (2.6) into the hydrodynamic equations (2.2a) and (2.2b) and collect the leading-order terms, obtaining:
\[-\omega\rho^{\prime}+2{\bf k}\cdot(\rho{\bf u})^{\prime}=0\,,\]
\[-\omega{\bf u}^{\prime}+2({\bf k}\cdot{\bf u})\,{\bf u}^{\prime}+2{\bf k}\rho^{\prime}-{{1\over 4}}\|{\bf k}\|^{2}\,{\bf k}\left((\ln\rho)^{\prime\prime}+{\rho^{\prime\prime}\over\rho}\right)^{\prime}=0\,.\]
Integrating (2.10a) and using (2.7) yields
\[\phi^{\prime}(Z)=\frac{1}{\|{\bf k}\|}\left(U+\frac{J}{\rho}-\hat{\bf k}\cdot\bar{\bf u}\right),\]
where \(U=\omega/(2\|{\bf k}\|)\) is the phase speed, \(\hat{\bf k}={\bf k}/\|{\bf k}\|\), and the integration constant \(J\) will be determined later. Using (2.11), we can rewrite (2.7) as:
\[{\bf u}(Z)=\left(\frac{J}{\rho}+U\right)\hat{\bf k}+\bar{\bf u}_{\perp}\,,\]
where \(\bar{\mathbf{u}}_{\perp}=\bar{\mathbf{u}}-(\hat{\mathbf{k}}\cdot\bar{\mathbf{u}})\,\hat{\mathbf{k}}\). Importantly, the requirement that \(\phi(Z)\) is periodic implies that \(\phi^{\prime}(Z)\) must have zero mean. Taking the inner product of (2.12) with \(\hat{\mathbf{k}}\) and averaging the result over the wave period yields a relation between \(\bar{\mathbf{u}}\) and \(U\), and therefore determines \(\omega=2\|\mathbf{k}\|U\):
\[U=\hat{\mathbf{k}}\cdot\bar{\mathbf{u}}-J\overline{\rho^{-1}}\,. \tag{2.13}\]
Next, substituting (2.11) into (2.10) and simplifying yields two ODEs for \(\rho\). Note that the two ODEs are consistent thanks to the constraint (2.9), which becomes, to leading order,
\[\mathbf{k}\wedge\mathbf{u}^{\prime}=0\,. \tag{2.14}\]
Integrating the resulting ODE for \(\rho\) one obtains [see Appendix A.2 for details]
\[(\rho^{\prime})^{2}=P_{3}(\rho)\,, \tag{2.15}\]
with
\[P_{3}(\rho)=\frac{4}{\|\mathbf{k}\|^{2}}(\rho-\lambda_{1})(\rho-\lambda_{2}) (\rho-\lambda_{3})\,, \tag{2.16}\]
whose solution is
\[\rho(Z)=A+4m\|\mathbf{k}\|^{2}K_{m}^{2}\,\mathrm{sn}^{2}(2K_{m}z|m)\,, \tag{2.17}\]
where \(A\) is a free parameter, \(B=4m\|\mathbf{k}\|^{2}K_{m}^{2}\) is the coefficient of the oscillatory term in (2.17), and with
\[J^{2}=A\,\big{(}A+4\|\mathbf{k}\|^{2}\,K_{m}^{2}\big{)}\big{(}A+4m\|\mathbf{ k}\|^{2}\,K_{m}^{2}\big{)}\,, \tag{2.18}\]
The roots \(\lambda_{1},\ldots,\lambda_{3}\) are related to the coefficients in the solution (2.17) as
\[\lambda_{1}=A,\qquad\lambda_{2}=A+4mK_{m}^{2}\,\|\mathbf{k}\|^{2},\qquad \lambda_{3}=A+4K_{m}^{2}\,\|\mathbf{k}\|^{2}. \tag{2.19}\]
Conversely, when \(\lambda_{1},\lambda_{2},\lambda_{3}\) are known, \(A\), \(\|\mathbf{k}\|\) and \(m\) are given by
\[A=\lambda_{1}\qquad\quad\|\mathbf{k}\|^{2}=(\lambda_{3}-\lambda_{1})/4K_{m}^{ 2}\,,\qquad\quad m=(\lambda_{2}-\lambda_{1})/(\lambda_{3}-\lambda_{1})\,. \tag{2.20}\]
The amplitude of the periodic oscillations of the density is \(\lambda_{2}-\lambda_{1}\). The requirements \(\rho\geqslant 0\), \(\|\mathbf{k}\|\geqslant 0\) and \(0\leqslant m\leqslant 1\) immediately yield the constraints \(A\geqslant 0\) as well as
\[0\leqslant\lambda_{1}\leqslant\lambda_{2}\leqslant\lambda_{3}\,. \tag{2.21}\]
The symmetric polynomials \(e_{1},\ldots,e_{3}\) defined by the roots \(\lambda_{1},\ldots,\lambda_{3}\) will also be useful later:
\[e_{1}=\lambda_{1}+\lambda_{2}+\lambda_{3}\,,\qquad\quad e_{2}=\lambda_{1} \lambda_{2}+\lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{1}\,,\qquad\quad e_{3} =\lambda_{1}\lambda_{2}\lambda_{3}=J^{2}\,. \tag{2.22}\]
Note that (2.22) only determines \(J\) up to a sign, i.e., \(J=\sigma\sqrt{\lambda_{1}\lambda_{2}\lambda_{3}}\), with \(\sigma=\pm 1\). Both sign choices lead to valid solutions of the NLS equation (1.1). Some care is deserved when determining the value of \(\sigma\) in the presence of modulations of the periodic solutions, as discussed in section 3.2.
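As a numerical sanity check (ours), the following sketch builds \(\rho(Z)\) from a choice of roots \(0\leq\lambda_{1}\leq\lambda_{2}\leq\lambda_{3}\) via (2.17)–(2.20) and verifies the quadrature (2.15)–(2.16) on a grid; SciPy's `ellipj` supplies the Jacobi \(\mathrm{sn}\) function.

```python
import numpy as np
from scipy.special import ellipk, ellipj

lam1, lam2, lam3 = 0.4, 1.0, 2.5          # roots satisfying 0 <= lam1 <= lam2 <= lam3
m = (lam2 - lam1) / (lam3 - lam1)          # elliptic parameter, eq. (2.20)
Km = ellipk(m)
knorm2 = (lam3 - lam1) / (4 * Km**2)       # ||k||^2, eq. (2.20)
A = lam1

Z = np.linspace(0, 1, 2001)
sn = ellipj(2 * Km * Z, m)[0]              # Jacobi sn(2 K_m Z | m)
rho = A + 4 * m * knorm2 * Km**2 * sn**2   # eq. (2.17): rho has unit period in Z

# check (2.15)-(2.16): (rho')^2 = (4/||k||^2)(rho-lam1)(rho-lam2)(rho-lam3)
drho = np.gradient(rho, Z)
P3 = (4 / knorm2) * (rho - lam1) * (rho - lam2) * (rho - lam3)
print(np.max(np.abs(drho**2 - P3)))        # small, up to finite-difference error
```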
The leading-order periodic solution of the hydrodynamic system (2.2) defined by (2.11) and (2.17) contains the following independent parameters: \(A\), \(m\), \(\mathbf{k}\), \(\bar{\mathbf{u}}\) and \(\mu\). However, recall that, to derive the hydrodynamic equation (2.2) from the NLS equation (1.1), one differentiates the real part with respect to the spatial variables. Imposing that the solution of the dispersive hydrodynamic system (2.2) also solves the NLS equation (by substituting into the undifferentiated imaginary part of the NLS equation (1.1)) yields a constraint that determines \(\mu\) in terms of the other constants. Deriving this relation directly from the above expressions is a bit cumbersome, but seeking a periodic solution of (1.1) without writing it in hydrodynamic form [cf. Appendix A.1], one obtains
\[\mu=4(1+m)\|\mathbf{k}\|^{2}K_{m}^{2}+3A+\|\bar{\mathbf{u}}\|^{2}-\big{(}J\overline{\rho^{-1}}\big{)}^{2}\,. \tag{2.23}\]
One can now verify that adding this relation to the above solution of the hydrodynamic system does indeed yield a solution of the NLS equation (1.1). Alternatively, one can obtain (2.23) using the undifferentiated version of (2.10); see Appendix A.2. Thus, the periodic solutions of the NLS equation (1.1) in \(N\) spatial dimensions contain \(2N+2\) scalar independent parameters: \(A\), \(m\), \(\mathbf{k}\) and \(\mathbf{v}=\bar{\mathbf{u}}\), as one would expect based on the invariances of the PDE (cf. [53]).
### Harmonic and soliton limits of the periodic solutions
Recall that the harmonic (\(m=0\)) and soliton (\(m=1\)) limits of the Whitham equations for the one-dimensional NLS (1DNLS) equation have special significance [25]. The same will be true for the multi-dimensional NLS equation. It is therefore useful to evaluate the corresponding limits of the above periodic solutions.
In the limit \(m\to 0\) (i.e., \(\lambda_{2}\to\lambda_{1}^{+}\)), the solution (2.1) reduces to a plane wave. Indeed, in this limit, we have
\[\rho(Z)=A,\qquad\quad B=0\,,\qquad\quad\mu=2A+\|\bar{\bf u}\|^{2}\,,\qquad\quad J^{2}=A^{2}(\pi^{2}\|{\bf k}\|^{2}+A)\,,\]
and
\[\psi({\bf x},t)=\sqrt{A}\,{\rm e}^{i(\bar{\bf u}\cdot{\bf x}-(\|\bar{\bf u}\|^{2}+2A)t)}\,.\]
Therefore, the only independent parameters in this case are \(A\) and \(\bar{\bf u}\).
In the opposite limit (\(m\to 1\), i.e., \(\lambda_{2}\to\lambda_{3}^{-}\)), the solution (2.1) reduces to the soliton solution of the NLS equation. Indeed, in this limit, (2.17) and (2.20) yield
\[\rho(Z)=\lambda_{1}+(\lambda_{3}-\lambda_{1})\tanh^{2}\big{[} \sqrt{\lambda_{3}-\lambda_{1}}\big{(}\hat{\bf k}\cdot{\bf x}-\omega\,t/\|{\bf k }\|\big{)}\big{]},\]
\[B=\lambda_{3}-\lambda_{1}\,,\quad J^{2}=\lambda_{1}\lambda_{3}^{2},\quad U=\hat{\bf k}\cdot\bar{\bf u}-\sigma\sqrt{\lambda_{1}},\quad\mu=2\lambda_{3}+\|\bar{\bf u}\|^{2}\,.\]
Note that \(\|{\bf k}\|\to 0\) as \(m\to 1\), but \(K_{m}\to\infty\) in such a way that their product remains finite: \(\|{\bf k}\|K_{m}\to\sqrt{\lambda_{3}-\lambda_{1}}/2\). Using (2.11) we then obtain
\[\phi(Z)=\arctan\Big{[}\sqrt{\lambda_{3}-\lambda_{1}}\tanh\Big{(}\sqrt{ \lambda_{3}-\lambda_{1}}\big{(}\hat{\bf k}\cdot{\bf x}-\omega\,t/\|{\bf k}\| \big{)}\Big{)}/\sqrt{\lambda_{1}}\big{]}\,,\]
implying
\[e^{i\phi+iS}=e^{iS}\Big{[}\sqrt{\lambda_{1}}+i\sqrt{\lambda_{3}-\lambda_{1}} \tanh\Big{(}\sqrt{\lambda_{3}-\lambda_{1}}\big{(}\hat{\bf k}\cdot{\bf x}- \omega\,t/\|{\bf k}\|\big{)}\Big{)}\Big{]}\Big{/}\sqrt{\rho(Z)}\,,\]
with \(S=\bar{\bf u}\cdot{\bf x}-\mu\,t\) as before. Putting everything together, we obtain
\[\psi({\bf x},t)=A_{o}{\rm e}^{-2iA_{o}^{2}t}{\rm e}^{i(\bar{\bf u}\cdot{\bf x}-\|\bar{\bf u}\|^{2}t)}\big{\{}\cos\theta+i\sin\theta\tanh[A_{o}\sin\theta\,[\hat{\bf k}\cdot{\bf x}-2(\hat{\bf k}\cdot\bar{\bf u}-A_{o}\cos\theta)t]]\big{\}}\,,\]
with \(\bar{\bf u}\) as in (2.6), \(A_{o}=\sqrt{\lambda_{3}}\) and \(\theta=\arctan\big{[}\sqrt{(\lambda_{3}-\lambda_{1})/\lambda_{1}}\big{]}\). The independent parameters of the solution in this case are \(\lambda_{1}\), \(\lambda_{3}\) (or equivalently \(A_{o}\) or \(\theta\)), \(\hat{\bf k}\) and \(\bar{\bf u}\). One can further reduce (2.29) to the more familiar form of the dark soliton solutions of the defocusing NLS equation by choosing \(\bar{\bf u}={\bf 0}\).
## 3 Derivation of the NLS-Whitham equations by averaged conservation laws
We are now ready to study slow modulations of the periodic solutions described above and derive the Whitham modulation equations that govern them.
### Nonlinear modulations and averaged conservation laws
We begin by introducing the following multiple scales ansatz for the solution of the NLS equation (1.1):
\[\rho({\bf x},t)=\rho(Z,{\bf X},T)\,,\qquad\Phi({\bf x},t)=\phi(Z,{\bf X},T)+S\,,\]
where \({\bf X}={\bf x}\) and \(T=t\), with \(\rho\) and \(\phi\) periodic in \(Z\) with period one and
\[\boldsymbol{\nabla}Z=\frac{{\bf k}({\bf X},T)}{\varepsilon}\,, \qquad\quad Z_{t}=-\frac{\omega({\bf X},T)}{\varepsilon}\,,\] \[\boldsymbol{\nabla}S=\frac{{\bf v}({\bf X},T)}{\varepsilon}\,, \qquad\quad S_{t}=-\frac{\mu({\bf X},T)}{\varepsilon}\,,\]
where, as per the results of section 2.2, \(\mathbf{v}=\bar{\mathbf{u}}\). The above multiple scales ansatz implies
\[\boldsymbol{\nabla}_{\mathbf{x}}\!\rightharpoonup\!\frac{\mathbf{k}}{\varepsilon} \partial_{Z}+\frac{\mathbf{v}}{\varepsilon}\partial_{S}+\nabla_{\mathbf{X}}, \qquad\partial_{t}\rightharpoonup\!\!-\frac{\omega}{\varepsilon}\partial_{Z} -\frac{\mu}{\varepsilon}\partial_{S}+\partial_{T}\,, \tag{3.3}\]
Substituting (3.1) into (1.1), to leading order we recover the periodic solution (2.1), but where all \(2N+2\) parameters \(A\), \(m\), \(\mathbf{k}\) and \(\bar{\mathbf{u}}\) are now slowly varying functions of \(\mathbf{X}\) and \(T\). We then seek modulation equations to determine the space-time dependence of these parameters. To avoid complicating the notation unnecessarily, below we will write derivatives in \(\mathbf{X}\) and \(T\) as derivatives in \(\mathbf{x}\) and \(t\). Equations (3.2) immediately yield the equations of conservation of waves:
\[\mathbf{k}_{t}+\boldsymbol{\nabla}\omega=\mathbf{0}\,, \tag{3.4a}\] \[\boldsymbol{\nabla}\wedge\mathbf{k}=0\,,\] (3.4b) \[\bar{\mathbf{u}}_{t}+\boldsymbol{\nabla}\mu=\mathbf{0}\,,\] (3.4c) \[\boldsymbol{\nabla}\wedge\bar{\mathbf{u}}=0\,. \tag{3.4d}\]
Of course only \(N\) equations among (3.4a) and (3.4b) are independent, and similarly for (3.4c) and (3.4d). Equations (3.4a) and (3.4c) form the first two vectorial Whitham modulation equations, whereas (3.4b) and (3.4d) are compatibility constraints.
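As a quick symbolic check (ours), the conservation-of-waves relations follow identically from the definitions (3.2) by equality of mixed partial derivatives; the sketch below verifies this in two spatial dimensions with sympy.

```python
import sympy as sp

x, y, t, eps = sp.symbols('x y t epsilon')
Z = sp.Function('Z')(x, y, t)                  # a single smooth fast phase, as in (3.2)

k = sp.Matrix([eps * sp.diff(Z, x), eps * sp.diff(Z, y)])   # k = eps * grad Z
omega = -eps * sp.diff(Z, t)                                # omega = -eps * Z_t

waves = sp.simplify(sp.diff(k, t) + sp.Matrix([sp.diff(omega, x), sp.diff(omega, y)]))
curl = sp.simplify(sp.diff(k[1], x) - sp.diff(k[0], y))
print(waves.T, curl)   # both vanish identically: (3.4a) and (3.4b) are compatibility conditions
# the same computation with S, v and mu yields (3.4c) and (3.4d)
```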
Next, we obtain the remaining Whitham modulation equations by averaging the conservation laws (2.4) over the fast variable \(Z\). Using (3.3) to replace all spatial and temporal derivatives in (2.2) and (2.4), expanding all terms in powers of \(\varepsilon\), and averaging, we obtain at order \(\mathcal{O}(\varepsilon^{0})\)
\[(\bar{\rho})_{t}+2\boldsymbol{\nabla}\cdot(\overline{\rho\mathbf{u}})=0\,, \tag{3.4e}\] \[(\overline{\rho\mathbf{u}})_{t}+2\boldsymbol{\nabla}\cdot(\overline{\rho\,\mathbf{u}\otimes\mathbf{u}})+\boldsymbol{\nabla}(\overline{\rho^{2}})+\boldsymbol{\nabla}\cdot\left(\overline{\frac{(\rho^{\prime})^{2}}{2\rho}}\,\mathbf{k}\otimes\mathbf{k}\right)=\mathbf{0}\,, \tag{3.4f}\] \[\bar{h}_{t}+\boldsymbol{\nabla}\cdot\left(2\overline{h\mathbf{u}}+2\overline{\rho^{2}\mathbf{u}}+\left(\mathbf{k}\cdot\overline{\frac{\rho^{\prime}}{\rho}(\rho\mathbf{u})^{\prime}}\right)\mathbf{k}-\|\mathbf{k}\|^{2}\,\overline{\rho^{\prime\prime}\mathbf{u}}\right)=0\,, \tag{3.4g}\]
where \(\bar{h}\) denotes the averaged energy density:
\[\bar{h}=\overline{\rho\|\mathbf{u}\|^{2}}+\overline{\rho^{2}}+\frac{1}{4}\| \mathbf{k}\|^{2}\overline{(\rho^{\prime})^{2}/\rho}\,. \tag{3.5}\]
Together with (3.4a) and (3.4c), equations (3.4e)-(3.4g) are \(3N+2\) scalar PDEs for the \(2N+2\) dependent variables \(A\), \(m\), \(\mathbf{k}\) and \(\mathbf{v}=\bar{\mathbf{u}}\) subject to the \(2N\) spatial constraints (3.4b), (3.4d), and are the desired Whitham modulation equations in physical variables in any number of spatial dimensions. Of course, not all of these equations are independent. We will see later that choosing different subsets of equations still leads to equivalent results, and in the end the number of independent modulation equations is \(2N+2\). At the same time, however, we emphasize the simplicity and directness of this approach compared to [5] in deriving the Whitham equations in multiple spatial dimensions.
### Modified form of the modulation equations
In preparation for further simplification of the above system of Whitham equations, it is convenient to express the periodic solutions in terms of the roots \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), thereby replacing \(A\), \(m\) and \(\|\mathbf{k}\|^{2}\) as dependent variables. Explicitly, (2.12) and (2.17) become:
\[\rho(Z) =\lambda_{1}+(\lambda_{2}-\lambda_{1})\operatorname{sn}^{2}(2K_{ m}z|m)\,, \tag{3.6a}\] \[\mathbf{u}(Z) =\mathbf{U}+\frac{J}{\rho(Z)}\,\hat{\mathbf{k}}\,, \tag{3.6b}\]
with
\[\mathbf{U}=\bar{\mathbf{u}}-J\,\overline{\rho^{-1}}\,\hat{\mathbf{k}}\,, \tag{3.6c}\]
which also implies
\[\omega=2{\bf k}\cdot{\bf U},\qquad\mu=\lambda_{1}+\lambda_{2}+\lambda_{3}+\|{\bf U}\|^{2}+2J\,\overline{\rho^{-1}}\,\hat{\bf k}\cdot{\bf U}, \tag{3.6d}\]
with \(\hat{\bf k}={\bf k}/\|{\bf k}\|\) as before and \(J\), \(A\), \(\|{\bf k}\|\) and \(m\) given in terms of \(\lambda_{1}\),...,\(\lambda_{3}\) by (2.18) and (2.20). In turn, using (3.6), we can write the Whitham modulation equations (3.4) as
\[{\bf k}_{t}+2\boldsymbol{\nabla}({\bf k}\cdot{\bf U})={\bf 0}\,, \tag{3.7a}\] \[\boldsymbol{\nabla}\wedge{\bf k}=0\,, \tag{3.7b}\] \[\left({\bf U}+J\overline{\rho^{-1}}\hat{\bf k}\right)_{t}+\boldsymbol{\nabla}\big{(}e_{1}+\|{\bf U}\|^{2}+2J\overline{\rho^{-1}}\,{\bf U}\cdot\hat{\bf k}\big{)}={\bf 0}\,, \tag{3.7c}\] \[\boldsymbol{\nabla}\wedge\left({\bf U}+J\overline{\rho^{-1}}\hat{\bf k}\right)=0\,, \tag{3.7d}\] \[\bar{\rho}_{t}+2\boldsymbol{\nabla}\cdot\left(J\hat{\bf k}+\bar{\rho}\,{\bf U}\right)=0\,, \tag{3.7e}\] \[\left(J\hat{\bf k}+\bar{\rho}\,{\bf U}\right)_{t}+\boldsymbol{\nabla}(\overline{\rho^{2}})+\boldsymbol{\nabla}\cdot\left[\left(2\bar{\rho}\,{\bf U}+2J\hat{\bf k}\right)\otimes{\bf U}+2J\,{\bf U}\otimes\hat{\bf k}+\frac{2}{3}\left(2e_{2}-e_{1}\bar{\rho}\right)\hat{\bf k}\otimes\hat{\bf k}\right]={\bf 0}\,, \tag{3.7f}\] \[\bar{h}_{t}+\boldsymbol{\nabla}\cdot\left[2J\big{(}2\bar{\rho}+\overline{\|{\bf u}\|^{2}}\big{)}\,\hat{\bf k}+2\big{(}\overline{\rho^{2}}+\bar{h}\big{)}\,{\bf U}+\|{\bf k}\|\left({\bf U}\cdot\hat{\bf k}\;\overline{\frac{(\rho^{\prime})^{2}}{\rho}}-\frac{J}{2}\,\overline{\frac{(\rho^{\prime})^{2}}{\rho^{2}}}\right){\bf k}\right]={\bf 0}\,. \tag{3.7g}\]
See Appendix A.2 for details on how to obtain (3.7f). The next step is the evaluation of the elliptic integrals in (3.7). To this end, we have [46]
\[\overline{\rho}=\int_{0}^{1}\rho(Z)\,{\rm d}z=\lambda_{3}-( \lambda_{3}-\lambda_{1})\frac{E_{m}}{K_{m}}, \tag{3.8a}\] \[\overline{\rho^{-1}}=\int_{0}^{1}\rho^{-1}(Z)\,{\rm d}z=\frac{1}{ \lambda_{1}K_{m}}\,\Pi\Big{(}1-\frac{\lambda_{2}}{\lambda_{1}}\Big{|}m\Big{)}\,, \tag{3.8b}\]
where \(K_{m}=K(m)\), \(E_{m}=E(m)\) and \(\Pi(\cdot|m)\) are the complete elliptic integrals of the first, second and third kind respectively. We also note, for convenience, that
\[\bar{\bf u}=\int_{0}^{1}{\bf u}(Z)\,{\rm d}z={\bf U}+\sigma\frac{\sqrt{\lambda_{2}\lambda_{3}}}{\sqrt{\lambda_{1}}\,K_{m}}\Pi\Big{(}1-\frac{\lambda_{2}}{\lambda_{1}}\Big{|}m\Big{)}\hat{\bf k}, \tag{3.9a}\] \[\overline{\rho{\bf u}}=\int_{0}^{1}\rho(Z){\bf u}(Z)\,{\rm d}z=\bar{\rho}\,{\bf U}+J\hat{\bf k}=\Big{(}\lambda_{3}-(\lambda_{3}-\lambda_{1})\frac{E_{m}}{K_{m}}\Big{)}{\bf U}+\sigma\sqrt{\lambda_{1}\lambda_{2}\lambda_{3}}\hat{\bf k}. \tag{3.9b}\]
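The closed forms (3.8) can be cross-checked numerically; in the sketch below (ours), the complete elliptic integral of the third kind is assembled from SciPy's Carlson symmetric integrals (available in SciPy 1.8 and later), and both averages are compared against direct quadrature over one period.

```python
import numpy as np
from scipy.special import ellipk, ellipe, ellipj, elliprf, elliprj
from scipy.integrate import quad

lam1, lam2, lam3 = 0.4, 1.0, 2.5
m = (lam2 - lam1) / (lam3 - lam1)
Km, Em = ellipk(m), ellipe(m)

def rho(Z):
    sn = ellipj(2 * Km * Z, m)[0]
    return lam1 + (lam2 - lam1) * sn**2

# direct quadrature of the period averages
rho_bar_quad = quad(rho, 0.0, 1.0)[0]
rho_inv_quad = quad(lambda Z: 1.0 / rho(Z), 0.0, 1.0)[0]

# closed forms (3.8a) and (3.8b); Pi(n|m) built from Carlson symmetric integrals
n = 1.0 - lam2 / lam1
Pi = elliprf(0.0, 1.0 - m, 1.0) + (n / 3.0) * elliprj(0.0, 1.0 - m, 1.0, 1.0 - n)
rho_bar = lam3 - (lam3 - lam1) * Em / Km
rho_inv = Pi / (lam1 * Km)

print(rho_bar_quad - rho_bar, rho_inv_quad - rho_inv)   # both ~ 0
```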
We reiterate that not all of the equations (3.7) are independent. For example, one can obtain (3.7d) using (3.7c) and (3.7e). This is relevant because it allows us to work with the most convenient subset of equations among all the PDEs in (3.7), as long as the compatibility constraints (3.7b) and (3.7d) are satisfied. To this end, recall that \(\bar{h}\) is given by (3.5), and
\[\overline{\|{\bf u}\|^{2}}=\|{\bf U}\|^{2}+2J\,{\bf U}\cdot\hat{\bf k}\,\overline{\rho^{-1}}+J^{2}\overline{\rho^{-2}}, \tag{3.10a}\] \[\overline{\rho\|{\bf u}\|^{2}}=J^{2}\overline{\rho^{-1}}+2J\,{\bf U}\cdot\hat{\bf k}+\bar{\rho}\,\|{\bf U}\|^{2}\,. \tag{3.10b}\]
Moreover, the terms \(\overline{(\rho^{\prime})^{2}/\rho}\) and \(\overline{(\rho^{\prime})^{2}/\rho^{2}}\), which appear in (3.7g), can be computed using (2.15). On the other hand, the averaged energy conservation law (3.7g) is the most complicated among all of the equations (3.7). In section 7 we will show that, thanks to the use of the two-phase ansatz and the resulting second conservation of waves equations (3.7c) and (3.7d), one can avoid having to deal with the averaged energy equation (3.7g), which greatly simplifies the transformation to Riemann-type variables.
We also point out that the sign of \(J\), as determined by the initial conditions for the system through the value of \(\sigma\)--see the discussion after (2.22)--affects \(\bar{\bf u}\) via (3.9a) and \(\overline{\rho{\bf u}}\) via (3.9b). Therefore, when considering modulations of the periodic solutions, the value of \(\sigma\) depends on \({\bf x}\) and \(t\), and its value must be chosen in such a way to ensure smoothness of \(\overline{\rho{\bf u}}\). In particular, a sign change of \(J\) occurs when the solution hits a vacuum point, i.e., \(\lambda_{1}=0\). At such a point, \(\bar{\bf u}\) is singular but \(\overline{\rho{\bf u}}\) is not. See [33] for additional discussion.
## 4 Symmetries and reductions of the NLS-Whitham system in physical variables
We now present several reductions of the Whitham modulation system (3.7) in physical variables in an arbitrary number of spatial dimensions. Further symmetries and reductions in the two-dimensional case will be discussed in section 6 after we introduce Riemann-type variables in section 5.
### Unidirectional reductions of the modulation equations
We begin by showing that the NLS-Whitham equations (3.7) reduce to the 1DNLS-Whitham equations (i.e., the Whitham equations for the 1DNLS equation) when \(k_{2}=\cdots=k_{N}=v_{2}=\cdots=v_{N}=0\) and all quantities are independent of \(x_{2},\ldots,x_{N}\). In this case, we have:
\[\|{\bf k}\|^{2}=k_{1}^{2}\,,\qquad\quad u_{1}(Z)=\frac{J}{\rho}+U\,,\qquad\omega=2k\,U\,,\qquad U=\bar{u}_{1}-J\overline{\rho^{-1}}\,,\qquad u_{2}(Z)=\cdots=u_{N}(Z)=0\,.\]
The Whitham equations (3.7b) and (3.7d) and the second components of (3.7a), (3.7c), and (3.7f) are satisfied trivially, while the rest simplify to:
\[k_{t}+2(kU)_{x}=0\,, \tag{4.2a}\] \[\left(U+J\overline{\rho^{-1}}\right)_{t}+\left(e_{1}+2JU\overline{\rho^{-1}}+U^{2}\right)_{x}=0\,, \tag{4.2b}\] \[\left(\bar{\rho}\right)_{t}+2(U\bar{\rho}+J)_{x}=0\,, \tag{4.2c}\] \[\left(U\bar{\rho}+J\right)_{t}+\left(\overline{\rho^{2}}+2U^{2}\bar{\rho}+2J^{2}\overline{\rho^{-1}}+\frac{k^{2}}{2}\,\overline{\frac{(\rho^{\prime})^{2}}{\rho}}+4U\,J\right)_{x}=0\,, \tag{4.2d}\] \[\left(\overline{\rho^{2}}+U^{2}\bar{\rho}+J^{2}\overline{\rho^{-1}}+\frac{k^{2}}{4}\,\overline{\frac{(\rho^{\prime})^{2}}{\rho}}+2JU\right)_{t}+\left(\frac{3U\,k^{2}}{2}\,\overline{\frac{(\rho^{\prime})^{2}}{\rho}}-\frac{J\,k^{2}}{2}\,\overline{\frac{(\rho^{\prime})^{2}}{\rho^{2}}}+4U\overline{\rho^{2}}+(4J+2U^{3})\bar{\rho}+6JU^{2}+6J^{2}U\overline{\rho^{-1}}+2J^{3}\overline{\rho^{-2}}\right)_{x}=0\,, \tag{4.2e}\]
with \(x=x_{1}\). The system (4.2) coincides with the modulation equations for the 1DNLS equation [34] (cf. (4.41) and (4.42) in [34]) upon trivial rescalings resulting from the different normalization of the NLS equation in [34] compared to (1.1). Note that (4.2) comprises five PDEs for the four solution parameters \(A\), \(m\), \(k\) and \(U\) (or equivalently \(\lambda_{1},\lambda_{2},\lambda_{3},U\)). Once again, one can verify that the modulation equation obtained from (4.2e) is consistent with those obtained from the first four PDEs above.
The above scenario is not the only one in which the Whitham modulation system (3.7) reduces to that of the 1DNLS equation. Next we consider so-called "rotated" one-dimensional reductions where the rotated coordinate frame is determined by \(R\), an \(N\times N\) orthogonal matrix. We introduce the rotated vector \({\bf w}^{\sharp}=R{\bf w}\) for any vector \({\bf w}\). Then, the rotated one-dimensional reduction is obtained through the requirement that \({\bf k}\) and \(\bar{\bf u}\) (or equivalently \(\hat{\bf k}\) and \({\bf U}\)) be parallel and that both depend only on \(t\) and the first component of \({\bf x}^{\sharp}\). We choose \(R\) so that \(\hat{\bf k}^{\sharp}=(1,0,\ldots,0)^{T}\), i.e., \(k_{2}^{\sharp}=\ldots=k_{N}^{\sharp}=0\), which also implies \(U_{2}^{\sharp}=\ldots=U_{N}^{\sharp}=0\). Since the Whitham modulation equations (3.7) are invariant under rotations of the coordinate axes (see below), we recover the one-dimensional reduction (4.2) when all quantities are independent of \(x_{2}^{\sharp},\ldots,x_{N}^{\sharp}\) in the rotated coordinate frame, i.e., with \(x\) and all modulation variables in (4.2) replaced by their rotations \(x_{1}^{\sharp}\), etc.
### Invariances of the modulation equations
The Whitham modulation equations (3.7) are manifestly invariant under translations of the spatial and temporal coordinates. Next we show that the Whitham system (3.7) preserves the invariance of the NLS equation under rotations of the Cartesian coordinates. Namely, if \({\bf x}\!\rightarrow\!{\bf x}^{\sharp}=R{\bf x}\), where \(R\) is an arbitrary constant rotation matrix, (3.7) remain unchanged upon \({\bf U}\!\rightarrow\!{\bf U}^{\sharp}=R{\bf U}\) and \({\bf k}\!\rightarrow\!{\bf k}^{\sharp}=R{\bf k}\). One can verify
that this is indeed the case using the following identities:
\[R\nabla_{\bf x}=\nabla_{{\bf x}^{\sharp}}\,,\quad{\bf U}\cdot{\bf k}={\bf U}^{\sharp}\cdot{\bf k}^{\sharp}\,,\quad\|{\bf U}\|=\|{\bf U}^{\sharp}\|\,, \tag{4.3a}\] \[\nabla_{\bf x}\cdot(\alpha{\bf k})=\nabla_{{\bf x}^{\sharp}}\cdot(\alpha{\bf k}^{\sharp})\,,\quad\nabla_{\bf x}\cdot(\alpha{\bf U})=\nabla_{{\bf x}^{\sharp}}\cdot(\alpha{\bf U}^{\sharp})\,,\] (4.3b) \[R\nabla_{\bf x}\cdot(\alpha{\bf U}\otimes{\bf U})=\nabla_{{\bf x}^{\sharp}}\cdot(\alpha{\bf U}^{\sharp}\otimes{\bf U}^{\sharp})\,,\quad R\nabla_{\bf x}\cdot(\alpha{\bf k}\otimes{\bf k})=\nabla_{{\bf x}^{\sharp}}\cdot(\alpha{\bf k}^{\sharp}\otimes{\bf k}^{\sharp})\,,\] (4.3c) \[R\nabla_{\bf x}\cdot(\alpha{\bf k}\otimes{\bf U})=\nabla_{{\bf x}^{\sharp}}\cdot(\alpha{\bf k}^{\sharp}\otimes{\bf U}^{\sharp})\,. \tag{4.3d}\]
where \(\alpha\) is an arbitrary scalar function.
Next we show that the Whitham system (3.7) also preserves the invariance of the NLS equation with respect to scaling, spatial reflections, and Galilean transformations. Recall that, if \(q({\bf x},t)\) is any solution of the NLS equation, so are \(q^{\sharp}({\bf x},t)=\alpha q(\alpha{\bf x},\alpha^{2}\,t)\), \(q^{\sharp}({\bf x},t)=q(-{\bf x},t)\), and \(q^{\sharp}({\bf x},t)=q({\bf x}-2{\bf w}t,t)e^{i({\bf w}\cdot{\bf x}-|{\bf w}|^{2}\,t)}\), where all transformation parameters are real-valued. We next show that the modulation equations (3.7) are invariant under each of these transformations. Specifically, letting \(q^{\sharp}({\bf x},t)=[\rho^{\sharp}({\bf x},t)]^{1/2}\,e^{i\phi^{\sharp}({\bf x},t)}\), we have, for the scaling symmetry,
\[\rho^{\sharp}({\bf x},t)=\alpha^{2}\rho(\alpha{\bf x},\alpha^{2}\,t)\,,\qquad \phi^{\sharp}({\bf x},t)=\phi(\alpha{\bf x},\alpha^{2}\,t)\,, \tag{4.4}\]
and the dependent variables of the Whitham equations become
\[\lambda^{\sharp}_{j}({\bf x},t)=\alpha^{2}\lambda_{j}(\alpha{\bf x },\alpha^{2}\,t)\,,\,j=1,2,3\,, \tag{4.5a}\] \[{\bf k}^{\sharp}({\bf x},t)=\alpha{\bf k}(\alpha{\bf x},\alpha^{2 }\,t)\,,\quad{\bf U}^{\sharp}({\bf x},t)=\alpha{\bf U}(\alpha{\bf x},\alpha^{ 2}\,t)\,,\quad J^{\sharp}({\bf x},t)=\alpha^{3}J(\alpha{\bf x},\alpha^{2}\,t)\,. \tag{4.5b}\]
Using (4.4) and (4.5), one can show that the Whitham modulation equations (3.7) remain unchanged. Similarly, it can be seen that spatial reflections leave the modulation equations invariant upon the following transformation of the dependent variables:
\[\rho^{\sharp}({\bf x},t)=\rho(-{\bf x},t)\,,\quad\lambda^{\sharp} _{j}({\bf x},t)=\lambda_{j}(-{\bf x},t)\,,\,j=1,2,3\,,\quad{\bf k}^{\sharp}({ \bf x},t)=-{\bf k}(-{\bf x},t)\,, \tag{4.6a}\] \[{\bf U}^{\sharp}({\bf x},t)=-{\bf U}(-{\bf x},t)\,,\quad J^{ \sharp}({\bf x},t)=J(-{\bf x},t)\,. \tag{4.6b}\]
Finally, with regards to Galilean transformations, writing \(q^{\sharp}({\bf x},t)=\sqrt{\rho^{\sharp}({\bf x},t)}\,e^{i\phi^{\sharp}({ \bf x},t)}\) implies
\[\rho^{\sharp}({\bf x},t)=\rho({\bf x}-2{\bf w}t,t)\,,\qquad\phi^{\sharp}({\bf x },t)=\phi({\bf x}-2{\bf w}t,t)+{\bf w}\cdot{\bf x}-\|{\bf w}\|^{2}\,t\,. \tag{4.7}\]
The dependent variables in the modulation equations (3.7) become
\[\lambda^{\sharp}_{j}({\bf x},t)=\lambda_{j}({\bf x}-2{\bf w}t,t)\,,\,j=1,2,3\,, \tag{4.8a}\] \[{\bf k}^{\sharp}({\bf x},t)={\bf k}({\bf x}-2{\bf w}t,t)\,,\quad{ \bf U}^{\sharp}({\bf x},t)={\bf U}({\bf x}-2{\bf w}t,t)+{\bf w}\,,\quad J^{ \sharp}({\bf x},t)=J({\bf x}-2{\bf w}t,t)\,. \tag{4.8b}\]
Using (4.8), one can verify that the modulation equations (3.7) remain invariant under the above Galilean transformation. The Riemann-type variables, which will be introduced in section 5, change as follows under the above transformations:
\[r^{\sharp}_{j}({\bf x},t)=\alpha r_{j}(\alpha{\bf x},\alpha^{2}\,t),\quad r^{ \sharp}_{j}({\bf x},t)=r_{j}(-{\bf x},t),\quad r^{\sharp}_{j}({\bf x},t)=r_{j} ({\bf x}-2{\bf w}t,t)+{\bf w}\cdot\hat{{\bf k}}/2,\quad j=1,2,3,4\,. \tag{4.9}\]
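These invariances can also be checked directly at the level of the NLS equation itself. The short symbolic sketch below (Python with SymPy) assumes, purely for brevity, the \(\varepsilon\)-free one-dimensional normalization \(iq_{t}+q_{xx}-2|q|^{2}q=0\), which is consistent with the transformations quoted above, and verifies that a plane-wave solution remains a solution after a Galilean boost and a rescaling:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
rho0, alpha = sp.symbols('rho0 alpha', positive=True)
u, w = sp.symbols('u w', real=True)

def nls_residual(q):
    """Residual of i q_t + q_xx - 2|q|^2 q = 0 (epsilon-free defocusing NLS in one dimension)."""
    return sp.I*sp.diff(q, t) + sp.diff(q, x, 2) - 2*q*sp.conjugate(q)*q

# Plane-wave (Madelung) solution with constant density rho0 and velocity u.
q = sp.sqrt(rho0)*sp.exp(sp.I*(u*x - (u**2 + 2*rho0)*t))
print(sp.simplify(nls_residual(q)))          # 0

# Galilean boost: q -> q(x - 2 w t, t) * exp(i(w x - w^2 t)).
q_boost = q.subs(x, x - 2*w*t)*sp.exp(sp.I*(w*x - w**2*t))
print(sp.simplify(nls_residual(q_boost)))    # 0

# Scaling: q -> alpha * q(alpha x, alpha^2 t).
q_scaled = alpha*q.subs({x: alpha*x, t: alpha**2*t})
print(sp.simplify(nls_residual(q_scaled)))   # 0
```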
### Harmonic and soliton limits of the modulation equations in physical variables
The harmonic and soliton limits of the Whitham equations for the KdV and 1DNLS equations have proven to be quite useful to study various nonlinear dynamical scenarios of practical interest [20, 42, 52]. The same is true for the harmonic and soliton limits of the KP-Whitham equations [14, 48, 50, 49]. We therefore expect that the same will also be true for the harmonic and soliton limit of the NLS equation in multiple spatial dimensions.
As with the periodic solution, the harmonic limit of the Whitham equations is the limit \(m\to 0\), corresponding to \(\lambda_{2}\to\lambda_{1}^{+}\). Recall that in this limit the solution becomes a plane wave. The integrals in (3.8) simplify considerably:
\[\overline{\rho}=\lambda_{1}\,,\quad\overline{\rho^{2}}=\lambda_{1}^{2}\,,\quad\overline{\left(\frac{(\rho^{\prime})^{2}}{\rho}\right)}=0\,,\quad\overline{\rho^{-1}}=1/\lambda_{1}\,,\quad\overline{\rho^{-2}}=1/\lambda_{1}^{2}\,,\qquad J=\sigma\lambda_{1}\sqrt{\lambda_{3}}\,. \tag{4.10}\]
Then, the linear dispersion relation is
\[\omega=2\|{\bf k}\|\left(\hat{\bf k}\cdot\tilde{\bf u}-\sigma\sqrt{\pi^{2}\|{\bf k}\|^{2}+\tilde{\rho}}\right)\,, \tag{4.11}\]
the averaged energy limits to \(\tilde{h}=\tilde{\rho}\|\tilde{\bf u}\|^{2}+\tilde{\rho}^{2}\), and the Whitham equations (3.7) reduce to:
\[{\bf k}_{t}+\boldsymbol{\nabla}\omega={\bf 0}\,, \tag{4.12a}\] \[\tilde{\bf u}_{t}+\boldsymbol{\nabla}(2\tilde{\rho}+\|\tilde{\bf u}\|^{2})={\bf 0}\,,\qquad\boldsymbol{\nabla}\times\tilde{\bf u}=0\,,\] (4.12b) \[\tilde{\rho}_{t}+2\boldsymbol{\nabla}\cdot(\tilde{\rho}\tilde{\bf u})=0\,,\] (4.12c) \[(\tilde{\rho}\tilde{\bf u})_{t}+\boldsymbol{\nabla}(\tilde{\rho}^{2})+2\boldsymbol{\nabla}\cdot\left(\tilde{\rho}\tilde{\bf u}\otimes\tilde{\bf u}\right)={\bf 0}\,,\] (4.12d) \[\tilde{h}_{t}+\nabla\cdot\left(2(\tilde{h}+\tilde{\rho}^{2})\tilde{\bf u}\right)=0\,. \tag{4.12e}\]
Again, not all of these equations are independent. For example, one can derive (4.12d) using (4.12b) and (4.12c). Also, note that the variable \({\bf k}\) is immaterial, since its value does not affect the solution, and (4.12a) is decoupled from the other PDEs. Thus, equations (4.12b) and (4.12c), which are equivalent to the shallow water equations, are by themselves a closed system of evolution PDEs for the parameters of the plane wave solution, \(\tilde{\rho}\) and \(\tilde{\bf u}\). Nonetheless, (4.12a) describes the evolution of a harmonic wave propagating on top of the mean flow.
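The redundancy just mentioned is easy to confirm symbolically. The sketch below (Python with SymPy, restricted to one spatial dimension purely for brevity, so that \(\boldsymbol{\nabla}\to\partial_{x}\)) eliminates the time derivatives using (4.12b) and (4.12c) and checks that the left-hand side of (4.12d) then vanishes identically:

```python
import sympy as sp

x, t = sp.symbols('x t')
rho = sp.Function('rho')(x, t)   # mean density (rho tilde)
u = sp.Function('u')(x, t)       # mean flow (u tilde), one spatial dimension

# (4.12b) and (4.12c) restricted to one dimension, solved for the time derivatives:
u_t = -sp.diff(2*rho + u**2, x)
rho_t = -2*sp.diff(rho*u, x)

# Left-hand side of (4.12d): (rho*u)_t + (rho^2)_x + 2*(rho*u^2)_x,
# with (rho*u)_t expanded and the time derivatives eliminated using the two equations above.
lhs = rho_t*u + rho*u_t + sp.diff(rho**2, x) + 2*sp.diff(rho*u**2, x)
print(sp.simplify(lhs))          # 0, i.e. (4.12d) follows from (4.12b) and (4.12c)
```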
Finally, we discuss the soliton limit of the Whitham modulation system (3.7), obtained for \(m\to 1\), corresponding to \(\lambda_{2}\to\lambda_{3}\). In this limit, the integrals in (3.8) become:
\[\overline{\rho}=\lambda_{3}\,,\quad\overline{\rho^{2}}=\lambda_{3}^{2}\,,\quad\|{\bf k}\|^{2}\,\overline{\left(\frac{(\rho^{\prime})^{2}}{\rho}\right)}=0\,,\quad\overline{\rho^{-1}}=1/\lambda_{3}\,,\quad\overline{\rho^{-2}}=1/\lambda_{3}^{2}\,. \tag{4.13}\]
Then (3.7a) and (3.7b) are trivially satisfied, and the rest simplify to:
\[\tilde{\bf u}_{t}+\nabla\big{(}2\tilde{\rho}+\|\tilde{\bf u}\|^{2}\big{)}={\bf 0}\,, \tag{4.14a}\] \[\tilde{\rho}_{t}+2\nabla\cdot(\tilde{\rho}\tilde{\bf u})=0\,,\] (4.14b) \[(\tilde{\rho}\tilde{\bf u})_{t}+\nabla(\tilde{\rho}^{2})+2\nabla\cdot\left(\tilde{\bf u}\otimes\tilde{\rho}\tilde{\bf u}\right)={\bf 0}\,,\] (4.14c) \[\tilde{h}_{t}+\nabla\cdot\big{(}2(\tilde{h}+\tilde{\rho}^{2})\tilde{\bf u}\big{)}=0\,. \tag{4.14d}\]
Note that, as before, we can derive equations (4.14c) and (4.14d) from (4.14a) and (4.14b). Therefore, we have a system of \(N+2\) PDEs for the dependent variables \(\tilde{\bf u}\) and \(\tilde{\rho}=\lambda_{2}\). But in this case, we are missing PDEs for \(\lambda_{1}\) and \(\hat{\bf k}\), which define the soliton amplitude and its propagation direction and which are needed to completely determine the soliton solution. This deficiency is also present in the one-dimensional case. The one-dimensional case is simpler, however, because there \({\bf k}\) is a one-component vector, and therefore \(\hat{\bf k}=\pm 1\) is constant. The soliton limit is singular, so care must be taken in its calculation. In any case, in both the one-dimensional and the higher-dimensional situations, the problem is eliminated by the transformation to Riemann-type variables, as we will see later.
## 5 2DNLS-Whitham equations in Riemann-type variables
In this section and the next one we temporarily restrict our attention to the two-dimensional case and perform suitable changes of dependent variables to simplify the form of the 2DNLS-Whitham equations.
When \(N=2\), the modulation system (3.7) consists of eight PDEs for six dependent variables in the independent variables \({\bf x}=(x,y)^{T}\) and \(t\), plus the two scalar constraints (3.7b) and (3.7d).
We will use the four scalar conservation of waves equations (3.7a) and (3.7c) together with the averaged conservation of mass (3.7e) and one of the components of the conservation of momentum equations (3.7f), neglecting the compatibility conditions (3.7b) and (3.7d) as well as the conservation of energy (3.7g). Importantly, however, the resulting Whitham equations are equivalent to those obtained by working with a different set of averaged equations [5].
As in the one-dimensional case, the transformation involves two steps. The first step is the change of dependent variables from \((A,k_{1},k_{2},m,\bar{u}_{1},\bar{u}_{2})\) to \(\mathbf{Y}=(\sqrt{\lambda_{1}},\sqrt{\lambda_{2}},\sqrt{\lambda_{3}},U_{1},U_ {2},q)\), with
\[q=k_{2}/k_{1}=\tan\varphi\,,\]
similar to [2], where \(\varphi=\arctan(k_{2}/k_{1})\) [not to be confused with the fast phase \(\phi(Z)\) that was used in sections 2 and 3] identifies the direction of the periodic wave's fronts:
\[\hat{\mathbf{k}}=(\cos\varphi,\sin\varphi)^{T}\,.\]
The second step of the transformation is then defined by the map from \(\lambda_{1},\lambda_{2},\lambda_{3}\) and \(U_{1}\) to the "Riemann-type" variables \(\dot{r}_{1},\dot{r}_{2},\dot{r}_{3},\dot{r}_{4}\) via the transformation
\[U_{1} ={{1\over 2}}\cos\varphi\,(\dot{r}_{1}+\dot{r}_{2}+\dot{r}_{3}+ \dot{r}_{4})\,,\] \[\lambda_{1} ={{1\over 4}}(\dot{r}_{1}-\dot{r}_{2}-\dot{r}_{3}+\dot{r}_{4})^{2},\quad\lambda_{2}={{1\over 4}}(\dot{r}_{1}-\dot{r}_{2}+\dot{r}_{3}-\dot{r}_{4})^{2},\quad\lambda_{3}={{1\over 4}}(\dot{r}_{1}+\dot{r}_{2}-\dot{r}_{3}-\dot{r}_{4})^{2}\,.\]
The variables \(\dot{r}_{1},\ldots,\dot{r}_{4}\) are one possible two-dimensional generalization of the Riemann invariants of the Whitham equations for the 1DNLS equation. Note that in this work the overdot does not denote differentiation with respect to time.
Recall that the existence of Riemann invariants for \((1+1)\)-dimensional hydrodynamic-type systems is intimately tied to the integrability properties of the modulation equations. Using the one-dimensional Riemann invariants as dependent variables in higher-dimensional systems diagonalizes their one-dimensional reductions, and makes the equations more advantageous for analysis (e.g., see [25]). We will show below that, for both the two-dimensional and three-dimensional cases, a suitable generalization of the one-dimensional Riemann invariants allows one to write the modulation equations in a concise and convenient form.
In terms of \(\dot{r}_{1},\ldots,\dot{r}_{4}\), the periodic solution (2.17) becomes
\[\rho(Z)={{1\over 4}}(\dot{r}_{1}-\dot{r}_{2}-\dot{r}_{3}+\dot{r}_{4})^{2}+(\dot{r}_{2}-\dot{r}_{1})(\dot{r}_{4}-\dot{r}_{3})\,\mathrm{sn}^{2}(2K_{m}\,Z|m)\,,\] \[m={(\dot{r}_{2}-\dot{r}_{1})(\dot{r}_{4}-\dot{r}_{3})\over(\dot{r}_{3}-\dot{r}_{1})(\dot{r}_{4}-\dot{r}_{2})}\,.\]
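As a quick numerical illustration of this representation, the sketch below (Python with SciPy; it is assumed that SciPy's conventions for `ellipk(m)` and `ellipj(u, m)` match \(K_{m}\) and \(\mathrm{sn}(\cdot|m)\) as used here, and the sample values of \(\dot{r}_{1}\leq\dot{r}_{2}\leq\dot{r}_{3}\leq\dot{r}_{4}\) are arbitrary) evaluates \(\rho(Z)\) and checks that it oscillates between \(\lambda_{1}\) and \(\lambda_{2}\) with unit period in \(Z\):

```python
import numpy as np
from scipy.special import ellipk, ellipj

# Riemann-type variables (playing the role of the dotted variables), ordered r1 <= r2 <= r3 <= r4.
r1, r2, r3, r4 = 0.1, 0.5, 1.1, 1.8
m = (r2 - r1)*(r4 - r3)/((r3 - r1)*(r4 - r2))
Km = ellipk(m)

lam1 = 0.25*(r1 - r2 - r3 + r4)**2
lam2 = 0.25*(r1 - r2 + r3 - r4)**2

def rho(Z):
    sn, cn, dn, ph = ellipj(2*Km*Z, m)
    return lam1 + (r2 - r1)*(r4 - r3)*sn**2

Z = np.linspace(0.0, 2.0, 2001)
vals = rho(Z)
print(vals.min(), lam1)                     # the density oscillates between lambda_1 ...
print(vals.max(), lam2)                     # ... and lambda_2,
print(np.allclose(rho(Z), rho(Z + 1.0)))    # with period 1 in the fast variable Z
```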
Moreover, \(\dot{\mathbf{R}}=(\dot{r}_{1},\dot{r}_{2},\dot{r}_{3},\dot{r}_{4},U_{\perp},q) ^{T}\) satisfies the hydrodynamic system
\[\dot{\mathbf{R}}_{t}+\mathsf{M}_{1}\dot{\mathbf{R}}_{x}+\mathsf{M}_{2}\dot{\mathbf{R}}_{y}=0\,. \tag{5.5}\]
The matrices \(\mathsf{M}_{1}\) and \(\mathsf{M}_{2}\) are rather complicated, and we therefore omit them for brevity. When \(k_{2}=U_{\perp}=0\), however, the last two equations in (5.5) are trivially satisfied, and the first four reduce to the Whitham equations for the 1DNLS equation in Riemann invariant (diagonal) form [28, 47]:
\[{\partial\dot{\mathbf{r}}\over\partial t}+\dot{\mathsf{V}}\,{\partial\dot{\mathbf{r}}\over\partial x}=0\,,\]
with
\[\dot{\mathbf{r}}=(\dot{r}_{1},\ldots,\dot{r}_{4})^{T},\qquad\dot{\mathsf{V}}=\text{diag}(\dot{\mathbf{V}}),\qquad\dot{\mathbf{V}}=(\dot{V}_{1},\ldots,\dot{V}_{4})^{T}\,,\]
\[\dot{V}_{1}=2\,V_{0}+{2(\dot{r}_{2}-\dot{r}_{1})(\dot{r}_{4}-\dot{r}_{1})K_{m} \over(\dot{r}_{4}-\dot{r}_{2})E_{m}-(\dot{r}_{4}-\dot{r}_{1})K_{m}}\,,\quad \dot{V}_{2}=2\,V_{0}+{2(\dot{r}_{2}-\dot{r}_{1})(\dot{r}_{3}-\dot{r}_{2})K_{m} \over(\dot{r}_{3}-\dot{r}_{2})K_{m}-(\dot{r}_{3}-\dot{r}_{1})E_{m}}\,,\]
\[\dot{V}_{3}=2\,V_{0}+{2(\dot{r}_{3}-\dot{r}_{2})(\dot{r}_{4}-\dot{r}_{3})K_{m} \over(\dot{r}_{4}-\dot{r}_{2})E_{m}-(\dot{r}_{3}-\dot{r}_{2})K_{m}}\,,\quad\dot{ V}_{4}=2\,V_{0}+{2(\dot{r}_{4}-\dot{r}_{1})(\dot{r}_{4}-\dot{r}_{3})K_{m} \over(\dot{r}_{4}-\dot{r}_{1})K_{m}-(\dot{r}_{3}-\dot{r}_{1})E_{m}}\,,\]
with \(V_{o}=U_{1}\).
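The speeds (5.8) are straightforward to evaluate with standard elliptic-integral routines, and one can then watch them collapse in the harmonic and soliton limits discussed later in section 6. A minimal sketch (Python with SciPy; the sample values are arbitrary, the limiting values quoted in the comments are the ones derived later in the text, and in the soliton limit the agreement is only up to \(O(1/K_{m})\) corrections because \(K_{m}\) diverges logarithmically as \(m\to 1\)):

```python
import numpy as np
from scipy.special import ellipk, ellipe

def speeds(r1, r2, r3, r4):
    """Characteristic speeds V_1..V_4 of (5.8), written in terms of r_1 <= r_2 <= r_3 <= r_4."""
    m = (r2 - r1)*(r4 - r3)/((r3 - r1)*(r4 - r2))
    K, E = ellipk(m), ellipe(m)
    V0 = 0.5*(r1 + r2 + r3 + r4)
    V1 = 2*V0 + 2*(r2 - r1)*(r4 - r1)*K/((r4 - r2)*E - (r4 - r1)*K)
    V2 = 2*V0 + 2*(r2 - r1)*(r3 - r2)*K/((r3 - r2)*K - (r3 - r1)*E)
    V3 = 2*V0 + 2*(r3 - r2)*(r4 - r3)*K/((r4 - r2)*E - (r3 - r2)*K)
    V4 = 2*V0 + 2*(r4 - r1)*(r4 - r3)*K/((r4 - r1)*K - (r3 - r1)*E)
    return np.array([V1, V2, V3, V4])

r1, r2, r3, r4 = 0.1, 0.5, 1.1, 1.8

# Harmonic limit r2 -> r1+:  V1 = V2 -> 4 r1 - (r3 - r4)^2/(2 r1 - r3 - r4),  V3 -> 3 r3 + r4,  V4 -> r3 + 3 r4.
print(speeds(r1, r1 + 1e-8, r3, r4))
print(4*r1 - (r3 - r4)**2/(2*r1 - r3 - r4), 3*r3 + r4, r3 + 3*r4)

# Soliton limit r3 -> r2+:  V1 -> 3 r1 + r4,  V2 = V3 -> r1 + 2 r2 + r4,  V4 -> r1 + 3 r4
# (approached only logarithmically, since K_m grows like -log(r3 - r2)/2 as m -> 1).
print(speeds(r1, r2, r2 + 1e-8, r4))
print(3*r1 + r4, r1 + 2*r2 + r4, r1 + 3*r4)
```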
The Whitham modulation system (5.5) can be further simplified by introducing a modified set of Riemann-type variables:
\[r_{j}=\cos\varphi\ \dot{r}_{j}\,,\quad j=1,\ldots,4\,, \tag{5.9a}\]
with \(q=\tan\varphi\) as before. Moreover, the curl-free constraint (3.4d) yields [see section 7 for details]
\[p=\sec\varphi\ U_{\perp}\,, \tag{5.9b}\]
where the perpendicular component of \({\bf U}\) is defined by
\[U_{\perp}={\bf U}\cdot\hat{\bf k}_{\perp}\,,\qquad\hat{\bf k}_{\perp}=(-\sin \varphi,\cos\varphi)^{T}\,.\]
The Whitham modulation equations (5.5) then reduce to the following form:
\[{\partial{\bf R}\over\partial t}+{\bf A}\ {\partial{\bf R}\over\partial x}+{\bf B}\ {\partial{\bf R}\over\partial y}=0\,, \tag{5.11}\]
where \({\bf R}=(r_{1},\ldots,r_{4},q,p)^{T}\),
\[{\bf A}=\left(\begin{array}{cc}A_{4\times 4}&A_{4\times 2}\\ A_{2\times 4}&A_{2\times 2}\end{array}\right)\,,\qquad{\bf B}=\left(\begin{array}{ cc}B_{4\times 4}&B_{4\times 2}\\ B_{2\times 4}&B_{2\times 2}\end{array}\right),\]
with \(g=1+q^{2}\) as in [5] and
\[A_{4\times 4}={\bf V}-q^{2}U_{1}\,\mathbb{1}_{4}+q^{2}({\bf 1}\otimes{\bf r }+{\bf r}\otimes{\bf 1})\,,\quad A_{2\times 2}=2\left(\begin{array}{cc}(1-q^{2}) U_{1}&-q^{2}\\ q^{2}(2U_{1}^{2}-s_{2})&gU_{1}\end{array}\right),\]
with
\[{\bf r}=(r_{1},\ldots,r_{4})^{T},\]
\[{\bf V}={\rm diag}({\bf V}),\quad{\bf V}=(V_{1},\ldots,V_{4})^{T},\]
\[{\bf a}={1\over 3}\big{[}4U_{1}(1-3q^{2}){\bf r}-2U_{1}{\bf V}-(1+3q^{2})((s_ {2}-2U_{1}^{2}){\bf 1}-{\bf V}{\bf r})\,\big{]},\]
where \(U_{1}=(r_{1}+r_{2}+r_{3}+r_{4})/2\), \(V_{1},\ldots,V_{4}\) are as in (5.8) but with \((r_{1},\ldots,r_{4})\) instead of \((\dot{r}_{1},\ldots,\dot{r}_{4})\), \({\bf 1}=(1,\ldots,1)^{T}\), \(\mathbb{1}_{n}\) is the \(n\times n\) identity matrix, \(\mathbb{1}_{n}\) denotes the \(n\times n\) matrix with all entries equal to one, and
\[s_{n}=r_{1}^{n}+r_{2}^{n}+r_{3}^{n}+r_{4}^{n}\,. \tag{5.13}\]
In component form, the Whitham modulation equations (5.11) are [5]
\[{\partial r_{j}\over\partial t}+V_{j}{\partial r_{j}\over\partial x}+(qV_{j}+2p){\partial r_{j}\over\partial y}+h_{j}=0,\quad j=1,2,3,4, \tag{5.14a}\]
\[{\partial q\over\partial t}+2\big{(}g\,U_{1}+pq\big{)}{\partial q\over\partial x}+2{D\over Dy}\big{[}gU_{1}+pq\,\big{]}=0\,, \tag{5.14b}\]
\[{\partial p\over\partial t}+2g\,U_{1}{\partial p\over\partial x}+2p{\partial p\over\partial y}+{D\over Dy}\big{[}g(s_{2}-2U_{1}^{2})\,\big{]}=0\,, \tag{5.14c}\]
where
\[h_{j}=2q(U_{1}-r_{j}){DU_{1}\over Dy}-{1\over 2}q{Ds_{2}\over Dy}+q(V_{j}-2U_{ 1})\bigg{(}r_{j}{\partial q\over\partial x}+{1\over 2}{\partial p\over \partial x}\bigg{)}+{a_{j}\over g}{Dq\over Dy}-{1-q^{2}\over 2g}(V_{j}-4r_{j}){ Dp\over Dy}\]
and \(D_{y}\) is the "convective" derivative as in [2]:
\[\frac{D}{Dy}=\frac{\partial}{\partial y}-q\frac{\partial}{\partial x}\,. \tag{5.15b}\]
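In other words, \(D/Dy\) is the directional derivative along the wave fronts (the direction \(\hat{\bf k}_{\perp}\)) scaled by \(\sec\varphi\). A minimal finite-difference sketch (Python with NumPy, on a periodic grid, with a constant value of \(q\) chosen only for the test) is:

```python
import numpy as np

# Finite-difference sketch of D/Dy = d/dy - q d/dx on a periodic grid (constant q for this test).
nx, ny = 256, 256
x = np.linspace(0, 2*np.pi, nx, endpoint=False)
y = np.linspace(0, 2*np.pi, ny, endpoint=False)
X, Y = np.meshgrid(x, y, indexing='ij')
dx, dy = x[1] - x[0], y[1] - y[0]

q = 0.7
f = np.sin(X + 2*Y)

f_x = (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0))/(2*dx)   # centered differences in x
f_y = (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1))/(2*dy)   # centered differences in y
Df_Dy = f_y - q*f_x

exact = (2 - q)*np.cos(X + 2*Y)
print(np.max(np.abs(Df_Dy - exact)))     # O(dx^2) discretization error
```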
The steps to obtain (5.14) are just a special case of the ones needed to simplify the Whitham equations for the three-dimensional NLS equation, which will be discussed in section 7. All the calculations in section 7 can be trivially reduced to the two-dimensional case by simply taking \((q_{1},q_{2})=(q,0)\) and \((p_{1},p_{2})=(p,0)\) there. Therefore we omit the details here for brevity.
Note that equations (5.14) must be supplemented by the necessary compatibility condition that the initial data satisfy the curl-free constraints \(\nabla\times\bar{\mathbf{u}}=\nabla\times\mathbf{k}=0\), similarly to the KP equation [2, 5]. In section 7 we will show how these constraints can be written out explicitly in terms of the Riemann-type variables.
## 6 Further symmetries and reductions of the 2DNLS-Whitham equations
Both of the sets of Riemann-type variables \(\dot{\mathbf{R}}\) and \(\mathbf{R}\) introduced in section 5 are useful to study further symmetries of the 2DNLS-Whitham system.
### Reduction to the Whitham equations for the radial NLS equation
The Whitham equations for the 2DNLS equation admit a self-consistent reduction to the Whitham equations for the radial NLS equation, which were recently derived [4]. To show this, we first perform a change of independent variables from the Cartesian coordinates \(x\) and \(y\) to the polar coordinates
\[R=\sqrt{x^{2}+y^{2}}\,,\qquad\theta=\arctan(y/x)\,. \tag{6.1}\]
Using the definition of the convective derivative \(D_{y}\) in (5.15b), we find
\[\frac{Df}{Dy}=\frac{(y-qx)}{R}\frac{\partial f}{\partial R}+\frac{(x+qy)}{R^{ 2}}\frac{\partial f}{\partial\theta}\,.\]
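This expression is just the standard polar chain rule combined with the definition of \(D/Dy\), and it can be confirmed symbolically. A short sketch (Python with SymPy; the Cartesian partials of \(f\) are expressed through \(x=R\cos\theta\), \(y=R\sin\theta\)):

```python
import sympy as sp

R, theta, q = sp.symbols('R theta q', positive=True)
f = sp.Function('f')(R, theta)
fR, fth = sp.diff(f, R), sp.diff(f, theta)

# Cartesian partials of f via the chain rule, using x = R cos(theta), y = R sin(theta):
x, y = R*sp.cos(theta), R*sp.sin(theta)
f_x = fR*sp.cos(theta) - fth*sp.sin(theta)/R
f_y = fR*sp.sin(theta) + fth*sp.cos(theta)/R

lhs = f_y - q*f_x                                   # D f / D y = f_y - q f_x
rhs = (y - q*x)/R*fR + (x + q*y)/R**2*fth           # right-hand side of the identity above
print(sp.simplify(lhs - rhs))                       # 0
```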
Equations (5.14b) and (5.14c) in polar coordinates then become, respectively,
\[q_{t}+g\sum_{i=1}^{4}\big{[}(\sin\theta-q\cos\theta)(r_{i})_{R}+ \frac{(\cos\theta+q\sin\theta)}{R}(r_{i})_{\theta}\big{]}+2q(\sin\theta-q\cos \theta)p_{R}+\frac{2q}{R}(\cos\theta+q\sin\theta)p_{\theta}\] \[+\big{[}2U_{1}(1-q^{2})\cos\theta+2(p+2qU_{1})\sin\theta\big{]}q _{R}+\big{[}2(p+2qU_{1})\cos\theta-2U_{1}(1-q^{2})\sin\theta\big{]}\frac{q_{ \theta}}{R}=0\,,\]
\[p_{t}+2g\sum_{i=1}^{4}(r_{i}-U_{1})\big{[}(\sin\theta-q\cos\theta)(r_{i})_{R}+(\cos\theta+q\sin\theta)\frac{(r_{i})_{\theta}}{R}\big{]}+2(gU_{1}\cos\theta+p\sin\theta)p_{R}\] \[+2(p\cos\theta-gU_{1}\sin\theta)\frac{p_{\theta}}{R}+2q(s_{2}-2U_{1}^{2})\big{[}(\sin\theta-q\cos\theta)q_{R}+(\cos\theta+q\sin\theta)\frac{q_{\theta}}{R}\big{]}=0\,.\]
We then look for a reduction of (6.3) and the remaining four Whitham equations (5.14a) in which \(q=\tan\theta=y/x\). With this assumption, (6.3) simplify considerably. We also seek solutions in which the Riemann-type variables \(\dot{r}_{1},...,\dot{r}_{4}\) are independent of the angular coordinate \(\theta\). Recall that the variables \(r_{1},...,r_{4}\) appearing in (6.3) are related to \(\dot{r}_{1},...,\dot{r}_{4}\) by (5.9a). Thus
\[\frac{\partial r_{i}}{\partial R}=\frac{1}{\sqrt{g}}\frac{\partial\dot{r}_{i}}{\partial R}-\frac{q\dot{r}_{i}}{g^{3/2}}\frac{\partial q}{\partial R}\,,\qquad\frac{\partial r_{i}}{\partial\theta}=-\frac{q\dot{r}_{i}}{g^{3/2}}\frac{\partial q}{\partial\theta}\,. \tag{6.4}\]
Substituting the above expression into (6.3a) and (6.3b) yields, respectively,
\[p_{\theta}+\cot\theta\,p=0\,, \tag{6.5a}\]
\[p_{t}+2(U_{1}\sec\theta+p\sin\theta)p_{R}+2(p\cos\theta-\tan\theta\sec\theta U_{1})\,p_{\theta}/R=0\,. \tag{6.5b}\]
Equation (6.5a) yields \(p(R,\theta,t)=C(R,t)\csc\theta\), with \(C(R,t)\) to be determined. Then, substituting this expression into (6.5b) yields \(C_{t}+2(U_{1}\sec\theta+C)\,C_{R}-2(C\cot^{2}\theta-U_{1}\sec\theta)\,C/R=0\), whose only self-consistent solution is \(C=0\), implying \(p(R,\theta,t)=0\).
Now we turn our attention to the reduction of the first four Whitham modulation equations, namely (5.14a). Tedious but straightforward calculations show that, when written in the polar coordinates (6.1), and using \(q=\tan\theta\) and \(p=0\) as well as (6.4), the four modulation equations (5.14a) become exactly the Whitham equations for the radial NLS equation derived in [4]:
\[\frac{\partial\dot{\mathbf{r}}}{\partial t}+\dot{\mathbf{V}}\,\frac{\partial \dot{\mathbf{r}}}{\partial R}+\frac{\dot{\mathbf{b}}}{R}=0\,, \tag{6.6}\]
with \(\dot{\mathbf{r}}=(\dot{r}_{1},\ldots,\dot{r}_{4})^{T}\) and \(\dot{\mathbf{V}}=\text{diag}(\dot{\mathbf{V}})\) as in section 5, with \(\dot{\mathbf{b}}=(\dot{b}_{1},\ldots,\dot{b}_{4})^{T}\),
\[\dot{b}_{1} =2V_{o}^{2}-\tfrac{1}{3}(\dot{r}_{2}+\dot{r}_{3}+\dot{r}_{4})V_{1 }-\tfrac{1}{3}[(\dot{r}_{2}+\dot{r}_{3})^{2}+(\dot{r}_{3}+\dot{r}_{4})^{2}+( \dot{r}_{2}+\dot{r}_{4})^{2}]\,, \tag{6.7a}\] \[\dot{b}_{2} =2V_{o}^{2}-\tfrac{1}{3}(\dot{r}_{1}+\dot{r}_{3}+\dot{r}_{4})V_{2 }-\tfrac{1}{3}[(\dot{r}_{1}+\dot{r}_{3})^{2}+(\dot{r}_{3}+\dot{r}_{4})^{2}+( \dot{r}_{1}+\dot{r}_{4})^{2}]\,,\] (6.7b) \[\dot{b}_{3} =2V_{o}^{2}-\tfrac{1}{3}(\dot{r}_{1}+\dot{r}_{2}+\dot{r}_{4})V_{3 }-\tfrac{1}{3}[(\dot{r}_{1}+\dot{r}_{2})^{2}+(\dot{r}_{2}+\dot{r}_{4})^{2}+( \dot{r}_{1}+\dot{r}_{4})^{2}]\,,\] (6.7c) \[\dot{b}_{4} =2V_{o}^{2}-\tfrac{1}{3}(\dot{r}_{1}+\dot{r}_{2}+\dot{r}_{3})V_{4 }-\tfrac{1}{3}[(\dot{r}_{1}+\dot{r}_{2})^{2}+(\dot{r}_{2}+\dot{r}_{3})^{2}+( \dot{r}_{3}+\dot{r}_{1})^{2}]\,, \tag{6.7d}\]
and \(V_{o}=\tfrac{1}{2}(\dot{r}_{1}+\dot{r}_{2}+\dot{r}_{3}+\dot{r}_{4})\) as before. In terms of the physical variables, the assumption \(q=\tan\theta\) implies that the wavefronts are oriented radially, and the requirement \(p=0\) means that the mean flow has no transversal component either, which are both conditions that are consistent with a radially symmetric reduction.
### Harmonic limit and soliton limit of the 2DNLS-Whitham equations in Riemann-type variables
In section 4 we studied the harmonic limit and the soliton limit of the modulation equations in physical variables, and we saw that the singular soliton limit yields fewer equations than are needed to describe the parameters of the soliton solutions of the 2DNLS equation. We next study the corresponding limits of the Whitham modulation equations in Riemann-type variables, and we show how the transformation to Riemann-type variables eliminates this problem and yields a closed system of equations.
The harmonic limit (\(m\to 0\)) corresponds to either \(r_{2}\to r_{1}^{+}\) or \(r_{3}\to r_{4}^{-}\). In the former case, the PDE (5.14a) with \(j=1\) and the one with \(j=2\) coincide, as needed for the limit to be a self-consistent reduction, and the Whitham modulation system (5.11) then becomes
\[\mathbf{R}_{t}+\mathsf{A}_{o.1}\mathbf{R}_{x}+\mathsf{B}_{o.1}\mathbf{R}_{y}=0\,,\] (6.8a) with \[\mathbf{R}=(r_{1},r_{3},r_{4},q,p)^{T}\,.\] The matrices \[\mathsf{A}_{o.1}\] and \[\mathsf{B}_{o.1}\] are simply the matrices \[\mathsf{A}\] and \[\mathsf{B}\] from section 5 with \[r_{2}=r_{1}\] and the second row and column omitted. Moreover, the Riemann speeds reduce to \[V_{1}=V_{2}=4r_{1}-\frac{(r_{3}-r_{4})^{2}}{2r_{1}-r_{3}-r_{4}}\,,\qquad\,V_{ 3}=3r_{3}+r_{4}\,,\qquad\,V_{4}=r_{3}+3r_{4}\,, \tag{6.8b}\]
while \(h_{1},\ldots,h_{4}\) are still given by (5.15a) with \(r_{2}=r_{1}\). In the latter case (i.e., \(r_{3}\to r_{4}^{-}\)), the PDE (5.14a) with \(j=3\) and the one with \(j=4\) coincide, and the Whitham modulation system (5.11) then becomes
\[\mathbf{R}_{t}+\mathsf{A}_{o.2}\mathbf{R}_{x}+\mathsf{B}_{o.2}\mathbf{R}_{y}=0\,,\] (6.9a) with \[\mathbf{R}=(r_{1},r_{2},r_{3},q,p)^{T}\,.\] The matrices \[\mathsf{A}_{o.2}\] and \[\mathsf{B}_{o.2}\] are just the matrices \[\mathsf{A}\] and \[\mathsf{B}\] from section 5 with \[r_{4}=r_{3}\] and the fourth row and column omitted. The Riemann speeds reduce to \[V_{1}=3r_{1}+r_{2}\,,\qquad\,V_{2}=r_{1}+3r_{2}\,,\qquad\,V_{4}=V_{3}=4r_{3}+\frac{(r_{1}-r_{2})^{2}}{r_{1}+r_{2}-2r_{3}}\,, \tag{6.9b}\]
with \(h_{1},\ldots,h_{4}\) now given by (5.15a) with \(r_{4}=r_{3}\). In both cases, it is straightforward to verify that, once the transformation to Riemann-type variables is inverted and the modulation equations are written back in terms of the physical variables, one recovers the system (4.12).
The soliton limit \((m\to 1)\) corresponds to \(r_{3}\to r_{2}^{+}\). In this case, the PDEs (5.14a) with \(j=3\) and the one with \(j=2\) coincide, and the remaining equations become
\[{\bf R}_{t}+A_{1}{\bf R}_{x}+B_{1}{\bf R}_{y}=0\,,\]
with \({\bf R}=(r_{1},r_{2},r_{4},q,p)^{T}\). The matrices \(A_{1}\) and \(B_{1}\) are \(A\) and \(B\) from section 5 with \(r_{3}=r_{2}\) and the fourth row and column omitted. The Riemann speeds reduce to
\[V_{1}=3r_{1}+r_{4}\,,\qquad\quad V_{2}=V_{3}=r_{1}+2r_{2}+r_{4}\qquad\quad V_{4 }=r_{1}+3r_{4}\,,\]
where \(h_{1},\ldots,h_{4}\) are still given by (5.15a) with \(r_{3}=r_{2}\). As in the harmonic limit, it is straightforward to verify that, once the transformation to Riemann-type variables is inverted and the modulation equations are written back in terms of the physical variables, one recovers the system (4.14). In this case, however, the equations in Riemann-type variables also allow us to obtain the two previously missing modulation equations, which determine the evolution of \(\hat{\bf k}\) and the soliton amplitude \(a=\lambda_{3}-\lambda_{1}\). One of these equations is immediate, since (5.14b) directly determines \(q=\tan\varphi\) and therefore \(\hat{\bf k}\). As for the amplitude equation, note that (5.3b) yields \(\lambda_{3}-\lambda_{1}=\sec^{2}\varphi\,(r_{4}-r_{2})(r_{3}-r_{1})\). Therefore, the modulation equations for \(r_{1}\), \(r_{2}=r_{3}\), \(r_{4}\), and \(q\) determine the evolution of the soliton amplitude and direction.
## 7 Whitham modulation equations for the NLS equation in three spatial dimensions
We now show how, thanks to the rotation-invariant form of all equations in sections 2 and 3, the results of section 5 are easily generalized to the NLS equation in three spatial dimensions.
### Set-up and resulting 3DNLS-Whitham system
The Madelung transformation (2.1) yields the same hydrodynamic system of PDEs (2.2) as well as the mass, momentum and energy conservation laws (2.4) in differential form, now with \({\bf u}=(u_{1},u_{2},u_{3})^{T}\), \({\bf x}=(x,y,z)^{T}\) and \(\boldsymbol{\nabla}=(\partial_{x},\partial_{y},\partial_{z})^{T}\). The two-phase ansatz (2.6) is also the same, now with \({\bf k}=(k_{1},k_{2},k_{3})^{T}\) and \({\bf v}=(v_{1},v_{2},v_{3})^{T}\), and the curl-free condition (2.9) is now \(\nabla\times{\bf u}=0\). The only difference is the number of independent parameters in the periodic solutions: eight in three spatial dimensions as opposed to six in two spatial dimensions. The whole derivation in section 3 also remains the same, including the averaged conservation laws (3.4) and the Whitham modulation equations (3.7), again the only difference being the number of equations, which in three dimensions is eleven evolutionary equations.
The first point at which the derivation for the three-dimensional case diverges from the two-dimensional one is the transformation to Riemann-type variables. Compared to [5], the process here is made much easier by the availability of the second conservation of waves equation (3.4c), which allows us to bypass the averaged conservation of energy, which, in turn, greatly simplifies the calculations even in the presence of a third spatial dimension. We begin with the natural generalization of the parametrization (5.2) for \(\hat{\bf k}\), namely:
\[\hat{\bf k}=(\cos\varphi,\sin\varphi\cos\alpha,\sin\varphi\sin\alpha)^{T}\,, \tag{7.1a}\]
\[q_{1}=k_{2}/k_{1}=\tan\varphi\,\cos\alpha\,,\quad q_{2}=k_{3}/k_{1}=\tan\varphi\,\sin\alpha\,, \tag{7.1b}\]
\[g=1+q_{1}^{2}+q_{2}^{2}=1/\hat{k}_{1}^{2}=\sec^{2}\varphi\,. \tag{7.1c}\]
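For a concrete wave vector these relations are easy to verify numerically. A small sketch (Python with NumPy; the sample \({\bf k}\) is arbitrary):

```python
import numpy as np

# Spherical-type parametrization (7.1) of the unit wave vector, checked for a sample k.
k = np.array([0.8, -0.5, 1.1])
k1, k2, k3 = k
q1, q2 = k2/k1, k3/k1
g = 1 + q1**2 + q2**2

khat = k/np.linalg.norm(k)
varphi = np.arccos(khat[0])
alpha = np.arctan2(khat[2], khat[1])

print(np.allclose(khat, [np.cos(varphi),
                         np.sin(varphi)*np.cos(alpha),
                         np.sin(varphi)*np.sin(alpha)]))                      # (7.1a)
print(np.isclose(q1, np.tan(varphi)*np.cos(alpha)),
      np.isclose(q2, np.tan(varphi)*np.sin(alpha)))                           # (7.1b)
print(np.isclose(g, 1/khat[0]**2), np.isclose(g, 1/np.cos(varphi)**2))        # (7.1c)
```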
The leading-order part (2.14) of the curl-free condition (2.9) now consists of three equations. The first two of them are \(k_{1}u_{2}^{\prime}=k_{2}u_{1}^{\prime}\) and \(k_{1}u_{3}^{\prime}=k_{3}u_{1}^{\prime}\), which, when integrated, yield
\[{\bf u}_{\flat}(Z)=(u_{2}(Z),u_{3}(Z))^{T}=u_{1}(Z)\,{\bf q}+{\bf p}\,, \tag{7.2}\]
with \({\bf q}=(q_{1},q_{2})^{T}\) and \({\bf p}=(p_{1},p_{2})^{T}\), where \(p_{1}\), \(p_{2}\) are additional modulation variables depending on the slow variables \({\bf x}\) and \(t\) that appear due to integration in \(Z\). For any three-component vector \({\bf w}=(w_{1},w_{2},w_{3})^{T}\) we introduce the "flat" notation \({\bf w}_{\flat}=(w_{2},w_{3})^{T}\), which we use extensively, to denote the two-component
vector comprised of the second and third components of the vector \(\mathbf{w}\). The third equation, \(k_{2}u_{3}^{\prime}=k_{3}u_{2}^{\prime}\) is automatically satisfied. Also, averaging (7.2), we obtain the two additional relations:
\[\bar{\mathbf{u}}_{\flat}=\bar{u}_{1}\mathbf{q}+\mathbf{p}\,, \tag{7.3a}\] \[\mathbf{U}_{\flat}=U_{1}\mathbf{q}+\mathbf{p}\,. \tag{7.3b}\]
Similarly, the first component of (3.6b) yields
\[u_{1}(Z)=U_{1}+Jk_{1}/(\|\mathbf{k}\|\,\rho(Z))\,, \tag{7.3c}\]
together with
\[\omega=2k_{1}(gU_{1}+\mathbf{q}\cdot\mathbf{p})\,. \tag{7.3d}\]
Finally, we define the Riemann-type variables \(r_{1},\ldots,r_{4}\) via the same transformation as in section 5, namely:
\[U_{1}=\tfrac{1}{2}(r_{1}+r_{2}+r_{3}+r_{4}), \tag{7.4a}\] \[\lambda_{1}=\tfrac{1}{4}\,g(r_{1}-r_{2}-r_{3}+r_{4})^{2},\quad \lambda_{2}=\tfrac{1}{4}\,g(r_{1}-r_{2}+r_{3}-r_{4})^{2},\quad\lambda_{3}= \tfrac{1}{4}\,g(r_{1}+r_{2}-r_{3}-r_{4})^{2}. \tag{7.4b}\]
Then, in sections 7.2, 7.3 and 7.4 below, we show that the Whitham modulation equations (3.7) yield the eight-component system of equations
\[\frac{\partial\mathbf{r}}{\partial t}+\mathsf{V}\,\frac{\partial\mathbf{r}}{\partial x}+(\mathbf{q}\otimes\mathbf{V}+2\mathbf{p}\otimes\mathbf{1}_{4})\cdot\boldsymbol{\nabla}_{\flat}\mathbf{r}+\mathbf{h}(\mathbf{r},\mathbf{q},\mathbf{p})=0, \tag{7.5a}\] \[\frac{\partial\mathbf{q}}{\partial t}+2(U_{1}+\mathbf{q}\cdot\mathbf{U}_{\flat})\,\frac{\partial\mathbf{q}}{\partial x}+2\mathbf{D}_{\flat}\,(U_{1}+\mathbf{q}\cdot\mathbf{U}_{\flat})=0\,, \tag{7.5b}\] \[\frac{\partial\mathbf{p}}{\partial t}+2(U_{1}+\mathbf{q}\cdot\mathbf{U}_{\flat})\,\frac{\partial\mathbf{p}}{\partial x}+\mathbf{D}_{\flat}\,(g(\tilde{e}_{1}-U_{1}^{2})+\|\mathbf{p}\|^{2})=0\,. \tag{7.5c}\]
Here, as before, \(\mathbf{r}=(r_{1},\ldots,r_{4})^{T}\), \(\mathsf{V}=\mathrm{diag}(\mathbf{V})\) with \(\mathbf{V}=(V_{1},\ldots,V_{4})^{T}\) as in (5.12g), and the dot product in (7.5a) operates on the two-component vectors to its left and its right. That is, in component form, for each \(j=1,\ldots,4\) the third term in (7.5a) is the dot product between \(\mathbf{q}\,V_{j}+2\mathbf{p}\) and \(\boldsymbol{\nabla}_{\flat}r_{j}\). Additionally, (7.5b) and (7.5c) contain the three-dimensional generalization of the convective derivative of [2] and section 5, namely:
\[\mathbf{D}_{\flat}=(D_{y},D_{z})^{T}=\boldsymbol{\nabla}_{\flat}-\mathbf{q}\,\partial_{x}\,,\] (7.6a) where \[\boldsymbol{\nabla}_{\flat}=(\partial_{y},\partial_{z})^{T}\] and \[\frac{D}{Dy}=\frac{\partial}{\partial y}-q_{1}\frac{\partial}{\partial x}\,,\qquad\frac{D}{Dz}=\frac{\partial}{\partial z}-q_{2}\frac{\partial}{\partial x}\,. \tag{7.6b}\]
The term \(\mathbf{h}(\mathbf{r},\mathbf{q},\mathbf{p})=(h_{1},\ldots,h_{4})^{T}\) in (7.5a) is given by
\[h_{j}=2(U_{1}-r_{j})\mathbf{q}\cdot\mathbf{D}_{\flat}U_{1}-\tfrac{1}{2}\mathbf{q}\cdot\mathbf{D}_{\flat}s_{2}+(V_{j}-2U_{1})\,\mathbf{q}\cdot\Big{(}r_{j}\frac{\partial\mathbf{q}}{\partial x}+\frac{1}{2}\frac{\partial\mathbf{p}}{\partial x}\Big{)}-\tfrac{1}{4}(V_{j}-4r_{j})\,\mathbf{D}_{\flat}\cdot\mathbf{p}+a_{j}\mathbf{D}_{\flat}\cdot\mathbf{q}+(b_{j}/g)\,\operatorname{tr}[(\mathbf{q}\otimes\mathbf{q})(\mathbf{D}_{\flat}\otimes\mathbf{q})]+((V_{j}-4r_{j})/g)\,\operatorname{tr}[(\mathbf{q}\otimes\mathbf{q})(\mathbf{D}_{\flat}\otimes\mathbf{p})],\] (7.7a) with \[a_{j}=\tfrac{1}{3}\,[2(2r_{j}-V_{j})U_{1}-s_{2}+2U_{1}^{2}+V_{j}r_{j}]\,,\quad b_{j}=r_{j}(V_{j}-4U_{1})-s_{2}+2U_{1}^{2}+a_{j}, \tag{7.7b}\]
for \(j=1,\ldots,4\). The \(s_{n}\) are as in (5.13), and \(\tilde{e}_{1}=(\lambda_{1}+\lambda_{2}+\lambda_{3})/g\) in (7.5c) is analogous to (2.22). Equations (7.5a), (7.5b), (7.5c) and (7.7a) should be compared to (5.14a), (5.14b), (5.14c) and (5.15a) in the two-dimensional case. Note that, while \(\mathbf{h}(\mathbf{r},\mathbf{q},\mathbf{p})\) might give the impression of a forcing term in (7.5a), that is not the case in reality, as (7.7a) shows that \(\mathbf{h}(\mathbf{r},\mathbf{q},\mathbf{p})\) is in fact a homogeneous first-order differential polynomial in \(\mathbf{r}\), \(\mathbf{q}\) and \(\mathbf{p}\), and therefore (7.5) is indeed a system of PDEs of hydrodynamic type like its one-dimensional and two-dimensional counterparts.
Similarly to the two-dimensional case, the 3DNLS-Whitham modulation equations (7.5) are subject to the compatibility conditions \(\nabla\times\tilde{\mathbf{u}}(\mathbf{x},0)=\mathbf{0}\) and \(\nabla\times\mathbf{k}(\mathbf{x},0)=\mathbf{0}\) at \(t=0\). In Appendix A.3 we show that, in terms of the dependent variables defined above, these constraints become, respectively,
\[k_{1}\mathbf{q}_{x}=\mathbf{D}_{\flat}k_{1}\,,\qquad\quad\mathbf{p}_{x}=\mathbf{D}_{\flat}\bar{u}_{1}-\bar{u}_{1}\,\mathbf{q}_{x}\,.\]
For convenience we also introduce the quantity \(\mathbf{M}=\overline{\rho\mathbf{u}}/g=(M_{1},M_{2},M_{3})^{T}\), with
\[M_{1}:=\overline{\rho u_{1}}/g=U_{1}\overline{\rho}/g+\tilde{J},\qquad\quad\mathbf{M}_{\flat}:=M_{1}\mathbf{q}+(\overline{\rho}/g)\,\mathbf{p}=(\overline{\rho}/g)\mathbf{U}_{\flat}+\tilde{J}\mathbf{q}, \tag{7.16}\]
and \(\tilde{J}=\hat{k}_{1}\,J/g\). Then, using (7.3d) one can rewrite the modulation equations (3.7a) and (3.7e) as follows:
\[k_{1,t}+2[k_{1}(U_{1}+\mathbf{q}\cdot\mathbf{U}_{\flat})]_{x}=0, \tag{7.17}\] \[(\overline{\rho})_{t}+2(gM_{1})_{x}+2\boldsymbol{\nabla}_{\flat}\cdot(g\,\mathbf{M}_{\flat})=0, \tag{7.18}\]
while the first component of the second conservation of waves equation (3.7c) becomes
\[(U_{1}+J\overline{\rho^{-1}}/g^{1/2})_{t}+\big{[}g\big{(}\tilde{e}_{1}+U_{1}^ {2}+2J\overline{\rho^{-1}}/g^{1/2}U_{1}\big{)}+\|\mathbf{p}\|^{2}+2(U_{1}+J \overline{\rho^{-1}}/g^{1/2})\mathbf{p}\cdot\mathbf{q}\big{]}_{x}=0. \tag{7.19}\]
Moreover, using the equation (A.19b) we can write the averaged momentum equation (3.7f) in component form as
\[(gM_{1})_{t}+\big{[}g\big{(}2U_{1}(M_{1}+\tilde{J})+\tilde{e}_{2}-\overline{\rho}^{2}/g^{2}\big{)}+\overline{\rho}^{2}\big{]}_{x}+\boldsymbol{\nabla}_{\flat}\cdot\big{[}2g(M_{1}+\tilde{J})\mathbf{U}_{\flat}-2g\tilde{J}\mathbf{p}+g(\tilde{e}_{2}-\overline{\rho}^{2}/g^{2})\mathbf{q}\big{]}=0, \tag{7.20a}\] \[(g\mathbf{M}_{\flat})_{t}+\Big{[}g\big{(}2(M_{1}+\tilde{J})\mathbf{U}_{\flat}+(\tilde{e}_{2}-\overline{\rho}^{2}/g^{2})\mathbf{q}-2\tilde{J}\mathbf{p}\big{)}\Big{]}_{x}+\boldsymbol{\nabla}_{\flat}(\overline{\rho}^{2})+\boldsymbol{\nabla}_{\flat}\cdot\Big{[}2g\mathbf{M}_{\flat}\otimes\mathbf{U}_{\flat}+2g\tilde{J}\mathbf{U}_{\flat}\otimes\mathbf{q}+g(\tilde{e}_{2}-\overline{\rho}^{2}/g^{2})\,\mathbf{q}\otimes\mathbf{q}\Big{]}=\mathbf{0}. \tag{7.20b}\]
Next, we perform a second, intermediate step to write the Whitham modulation equations in terms of convective derivatives. First, we derive some identities that will be useful later. Equation (3.7b) and the definition of \(\mathbf{q}\) in (7.1b) yield
\[\mathbf{q}_{x}=\frac{1}{k_{1}}\mathbf{D}_{\flat}\,k_{1}\,,\quad\mathbf{q}_{y}=\frac{1}{k_{1}}\mathbf{D}_{\flat}(k_{1}\,q_{1})\,,\quad\mathbf{q}_{z}=\frac{1}{k_{1}}\mathbf{D}_{\flat}(k_{1}\,q_{2})\,. \tag{7.21}\]
Moreover, in Appendix A.3, we show that these relations also yield the two constraints
\[D_{y}\,q_{2}=D_{z}q_{1}\,, \tag{7.22a}\] \[D_{y}\,p_{2}=D_{z}p_{1}\,, \tag{7.22b}\]
which will prove to be useful. We then define the additional convective derivatives
\[D_{x}=\frac{\partial}{\partial x}+\mathbf{q}\cdot\boldsymbol{\nabla}_{\flat}\,,\qquad\quad D_{t}=\frac{\partial}{\partial t}+2U_{1}\frac{\partial}{\partial x}+2\mathbf{U}_{\flat}\cdot\boldsymbol{\nabla}_{\flat}\,. \tag{7.23}\]
Now we rewrite the evolution equations for \(\mathbf{q}\) using these convective derivatives. Specifically, in Appendix A.3 we show that (7.17) and (7.19) yield, respectively,
\[\frac{D_{t}k_{1}}{k_{1}}+2D_{x}U_{1}+W_{1}=0\,, \tag{7.24}\] \[D_{t}(U_{1}+J\overline{\rho^{-1}}/g^{1/2})+(U_{1}-J\overline{ \rho^{-1}}/g^{1/2})\frac{D_{t}k_{1}}{k_{1}}+D_{x}(\tilde{e}_{1}+U_{1}^{2})+2W_ {2}=0\,, \tag{7.25}\]
where
\[gW_{1}=\mathbf{q}\cdot\left[D_{t}\mathbf{q}+2U_{1}\,D_{x}\mathbf{q}+2D_{x} \mathbf{p}\right], \tag{7.26a}\] \[gW_{2}=\mathbf{q}\cdot\left[U_{1}\,D_{t}\mathbf{q}+s_{2}D_{x} \mathbf{q}+\frac{1}{2}D_{t}\mathbf{p}+U_{1}\,D_{x}\mathbf{p}\right]. \tag{7.26b}\]
Moreover, in Appendix A.3 we also show that the conservation of mass equation (7.18) and conservation of momentum equation (7.20a) yield, respectively,
\[g\Big{[}D_{t}(\overline{\rho}/g)-(\overline{\rho}/g)\frac{D_{t}k_{1}}{k_{1}}+2D_{x}J\Big{]}+(\overline{\rho}/g)(\mathbf{q}\cdot D_{t}\mathbf{q})+6\tilde{J}\,\mathbf{q}\cdot D_{x}\mathbf{q}+2M_{1}\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{q})-\mathbf{q}\cdot D_{x}\mathbf{q}\big{)}+2(\overline{\rho}/g)\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{p})-\mathbf{q}\cdot D_{x}\mathbf{p}\big{)}=0\,, \tag{7.27}\] \[g\Big{[}(\overline{\rho}/g)D_{t}U_{1}+D_{t}\tilde{J}-2J\frac{D_{t}k_{1}}{k_{1}}+D_{x}\tilde{e}_{2}\Big{]}+\mathbf{q}\cdot\left[M_{1}D_{t}\mathbf{q}+(\overline{\rho}/g)D_{t}\mathbf{p}+4\tilde{e}_{2}D_{x}\mathbf{q}\right]+(\tilde{e}_{2}-(\overline{\rho}^{2}/g^{2})+2U_{1}\tilde{J})\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{q})-\mathbf{q}\cdot D_{x}\mathbf{q}\big{)}+2\tilde{J}\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{p})-\mathbf{q}\cdot D_{x}\mathbf{p}\big{)}=0\,. \tag{7.28}\]
Equations (7.24), (7.25), (7.27) and (7.28) comprise the four modified modulation equations written in terms of the variables \(U_{1}\), \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) and the convective derivatives \(D_{x}\) and \(D_{t}\).
### Derivation of the 3DNLS-Whitham system: Equations for Riemann-type variables
The third and final step in the derivation of (7.5\(a\)) is to express the modulation equations in terms of \(r_{1},\ldots,r_{4}\). Recall the transformation (7.4) to the Riemann-type variables. Note that the arrangement of indices in (7.4) is dictated by the requirement that the constraint (2.21) be satisfied when \(r_{1}\leqslant r_{2}\leqslant r_{3}\leqslant r_{4}\), since
\[\lambda_{2}-\lambda_{1} =g(r_{4}-r_{3})(r_{2}-r_{1})\,,\] (7.29 \[a\] ) \[\lambda_{3}-\lambda_{1} =g(r_{4}-r_{2})(r_{3}-r_{1})\,,\] (7.29 \[b\] ) \[\lambda_{3}-\lambda_{2} =g(r_{4}-r_{1})(r_{3}-r_{2})\,.\] (7.29 \[c\] )
In Appendix A.3, using the above definitions, we show that (7.24), (7.25), (7.27) and (7.28) yield, respectively,
\[(\boldsymbol{\nabla}_{\mathbf{r}}[\log k_{1}])^{T}D_{t}\mathbf{r}+D_{x}s_{1}+W_{1}=0\,, \tag{7.30a}\] \[2(\boldsymbol{\nabla}_{\mathbf{r}}[\log k_{1}])^{T}\mathbb{R}_{4}\,D_{t}\mathbf{r}+D_{x}s_{2}+2W_{2}=0\,, \tag{7.30b}\] \[3(\boldsymbol{\nabla}_{\mathbf{r}}[\log k_{1}])^{T}\mathbb{R}_{4}^{2}D_{t}\mathbf{r}+D_{x}s_{3}+3W_{3}=0\,, \tag{7.30c}\] \[4(\boldsymbol{\nabla}_{\mathbf{r}}[\log k_{1}])^{T}\mathbb{R}_{4}^{3}D_{t}\mathbf{r}+D_{x}s_{4}+4W_{4}=0\,, \tag{7.30d}\]
where \(\mathbf{r}=(r_{1},\ldots,r_{4})^{T}\), \(\boldsymbol{\nabla}_{\mathbf{r}}=(\partial_{r_{1}},\ldots,\partial_{r_{4}})^{T}\) and \(\mathbb{R}_{4}=\mathrm{diag}(r_{1},\ldots,r_{4})\) as before, with \(W_{1}\) and \(W_{2}\) as in (7.26a) and (7.26b), and
\[gW_{3}=\tfrac{1}{4}(s_{2}-2U_{1}^{2})gW_{1}+U_{1}gW_{2}+\tfrac{1}{2}\mathbf{q}\cdot[(\overline{\rho}/g)D_{t}\mathbf{q}+6\tilde{J}D_{x}\mathbf{q}]+(U_{1}\overline{\rho}/g+\tilde{J})\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{q})-\mathbf{q}\cdot D_{x}\mathbf{q}\big{)}+(\overline{\rho}/g)\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{p})-\mathbf{q}\cdot D_{x}\mathbf{p}\big{)}\,, \tag{7.31a}\] \[gW_{4}=\tfrac{1}{8}(6\tilde{J}-U_{1}s_{2}+2U_{1}^{3})gW_{1}+\tfrac{1}{4}(s_{2}-4U_{1}^{2})gW_{2}+\tfrac{3}{2}U_{1}gW_{3}+\tfrac{1}{4}\big{[}\mathbf{q}\cdot(M_{1}D_{t}\mathbf{q}+(\overline{\rho}/g)D_{t}\mathbf{p}+4\tilde{e}_{2}D_{x}\mathbf{q})+(\tilde{e}_{2}-(\overline{\rho}^{2}/g^{2})+2U_{1}\tilde{J})\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{q})-\mathbf{q}\cdot D_{x}\mathbf{q}\big{)}+2\tilde{J}\big{(}g(\boldsymbol{\nabla}_{\flat}\cdot\mathbf{p})-\mathbf{q}\cdot D_{x}\mathbf{p}\big{)}\big{]}\,. \tag{7.31b}\]
Importantly, note that, even though the second conservation of waves equation (7.25) contains the complete elliptic integral of the third kind \(\Pi(\cdot,m)\) via \(\overline{\rho^{-1}}\) [cf. (3.8b)], this elliptic integral does not appear in the resulting modulation equation (7.30b). Note that \(\Pi(\cdot,m)\) is also contained in the conservation of energy equation. Next, one can collect the four equations (7.30) and rewrite them in matrix form as
\[\mathsf{M}(\mathbf{r})\left(\nabla_{\mathbf{r}}[\log k_{1}]\cdot D_{t}\mathbf{ r}+D_{x}\mathbf{r}\right)+\mathbf{W}=\mathbf{0}\,, \tag{7.32}\]
where \(\mathbf{W}=(W_{1},\cdots,W_{4})^{T}\) and \(\mathsf{M}(\mathbf{r})\) is the Vandermonde matrix
\[\mathsf{M}(\mathbf{r})=\left(\begin{array}{cccc}1&1&1&1\\ r_{1}&r_{2}&r_{3}&r_{4}\\ r_{1}^{2}&r_{2}^{2}&r_{3}^{2}&r_{4}^{2}\\ r_{1}^{3}&r_{2}^{3}&r_{3}^{3}&r_{4}^{3}\end{array}\right)\,. \tag{7.33}\]
Multiplying (7.32) by \(\mathsf{M}^{-1}(\mathbf{r})\), we then finally obtain (7.5\(a\)), with
\[h_{j}=\frac{(-1)^{j+1}\Delta_{ilm}}{|\Delta|(\partial k/\partial r_{j})/k}[r_{i}r_{l}r_{m}W_{1}-(r_{i}r_{l}+r_{l}r_{m}+r_{m}r_{i})W_{2}+(r_{i}+r_{l}+r_{m})W_{3}-W_{4}],\ \ j=1,\ldots,4\,, \tag{7.34}\]
where \(j\neq i,j\neq l,j\neq m\), \(i<l<m\), summation of repeated indices is implied, and
\[|\Delta|=\prod_{j>l}^{4}(r_{j}-r_{l})\,,\quad\Delta_{ilm}=(r_{i}-r_{l})(r_{l}-r _{m})(r_{m}-r_{i})\,. \tag{7.35}\]
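The component formula (7.34) is simply the explicit, Lagrange-type inverse of the Vandermonde matrix (7.33). The sketch below (Python with NumPy; the test values of \(\mathbf{r}\) and \(\mathbf{W}\) are arbitrary) checks this algebraic structure numerically, leaving aside the factor \((\partial k/\partial r_{j})/k\) in (7.34), which belongs to the change of variables rather than to the matrix inversion:

```python
import numpy as np

r = np.array([0.3, 0.9, 1.4, 2.2])          # distinct Riemann-type variables (arbitrary test values)
W = np.array([0.5, -1.3, 0.7, 2.1])         # arbitrary right-hand side

M = np.vander(r, 4, increasing=True).T      # rows (1,...,1), (r_1,...,r_4), ..., as in (7.33)
x_numeric = np.linalg.solve(M, W)

absDelta = np.prod([r[a] - r[b] for a in range(4) for b in range(a)])   # |Delta| = prod_{a>b}(r_a - r_b)
x_formula = np.empty(4)
for j in range(4):                          # j is 0-based here, so (-1)**(j+1) of (7.34) becomes (-1)**j
    i, l, m = [s for s in range(4) if s != j]
    Delta_ilm = (r[i] - r[l])*(r[l] - r[m])*(r[m] - r[i])
    bracket = (r[i]*r[l]*r[m]*W[0]
               - (r[i]*r[l] + r[l]*r[m] + r[m]*r[i])*W[1]
               + (r[i] + r[l] + r[m])*W[2] - W[3])
    x_formula[j] = (-1)**j * Delta_ilm/absDelta * bracket

print(x_numeric)
print(x_formula)                            # agrees with the direct solve up to rounding
```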
Finally, using equations (A.22\(a\)) and (A.22\(b\)), one can simplify \(h_{1},\ldots,h_{4}\) in (7.34) to obtain (7.7\(a\)).
## 8 Discussion and perspectives
In summary, we derived the Whitham modulation equations for the defocusing NLS equation in two, three and higher spatial dimensions using a two-phase ansatz and the averaged conservation laws of the NLS equation written in coordinate-free vector form, and we elucidated various symmetries and reductions of the resulting equations, including the reduction to the Whitham equations of the radial NLS equation as well as the harmonic and soliton limits. We point out that, long after this work was completed, we learned that modulation equations for multi-dimensional equations of NLS type were written down in physical variables using a general framework in [7], and the modulation equations were used to study the stability of the plane wave solutions. On the other hand, no transformation to Riemann-type variables was carried out in [7].
We reiterate that the use of a two-phase ansatz in this work (as opposed to a one-phase ansatz as in [5]) greatly simplifies the derivation, since it results in a second conservation of waves equation that allows us to avoid using the conservation of energy equation, which is much more complicated in comparison. Moreover, the advantage of using a two-phase ansatz increases with the number of spatial dimensions. This is because the number of modulation equations needed is \(2N+2\). Therefore, if one tried to derive the modulation equations in three spatial dimensions with a one-phase ansatz, one would need to use additional conservation laws for the NLS equation. This would not only lead to a much more complicated derivation, but one would quickly exhaust the number of available conservation laws, since the NLS equation in more than one spatial dimension is not completely integrable, and therefore does not have hidden symmetries resulting in an infinite number of conservation laws.
In contrast, the results of section 7 can be generalized in a straightforward way to obtain the Whitham modulation equations in simplified form in an arbitrary number of spatial dimensions. The system of modulation equations (7.5) is already written in vectorial, dimension-independent form, with the only caveat that, with \(N\) spatial dimensions, \(\mathbf{q}\) and \(\mathbf{p}\) have \(N-1\) components. Moreover, all the steps of the derivation in section 7 are written in a way that generalizes to any number of spatial dimensions. Indeed, one can introduce spherical coordinates in \(N\) spatial dimensions by generalizing (7.1a) as \(\hat{k}_{1}=\cos\varphi_{1},\ \hat{k}_{2}=\sin\varphi_{1}\cos\varphi_{2},\ \hat{k}_{3}=\sin\varphi_{1}\sin\varphi_{2}\cos\varphi_{3}\), etc., up to \(\hat{k}_{N-1}=\sin\varphi_{1}\cdots\sin\varphi_{N-2}\cos\varphi_{N-1}\) and \(\hat{k}_{N}=\sin\varphi_{1}\cdots\sin\varphi_{N-2}\sin\varphi_{N-1}\). Then, one introduces \(q_{1},\ldots,q_{N-1}\) via the generalization of (7.1b), namely, \(q_{1}=k_{2}/k_{1},\ q_{2}=k_{3}/k_{1}\), etc., up to \(q_{N-1}=k_{N}/k_{1}\), as well as \(p_{1},\ldots,p_{N-1}\) via the natural generalization of (7.2). In this way, one obtains the generalization of (7.1c) as \(g=1+q_{1}^{2}+\cdots+q_{N-1}^{2}=\sec^{2}\varphi_{1}\), and all the calculations and equations in section 7 remain valid as long as one also redefines \(\boldsymbol{\nabla}_{\flat}\) and \(\mathbf{D}_{\flat}\) as the corresponding \((N-1)\)-component operators.
One direction for future work is the derivation of the Whitham equations for the focusing NLS equation in three spatial dimensions. We expect that this will be straightforward. Indeed, the Whitham equations in the two-dimensional focusing case were already written in [5] (although not in rotation-invariant form). Once the derivation of the one-phase solutions of the NLS equation is done in dimension-invariant form, as was the case in section 2.2, the rest of the machinery presented in this work will carry over to the focusing case in three and higher dimensions without significant changes. Of course, as in the one-dimensional case, the resulting Whitham equations will be elliptic (i.e., the characteristic velocities will be complex), and therefore require suitable interpretation of initial value problems; see [15, 23, 32, 33, 34, 40] as well as [13, 25] and references therein. The Whitham equations for the NLS equation in one spatial dimension have also proved to be useful in some situations, even in the focusing case [11, 16, 24], so one can expect that those in two and three spatial dimensions will be useful as well.
Another important direction for future work is a study to determine whether the Whitham modulation system derived here, or any of its reductions, are completely integrable. A notion of integrability for multidimensional systems was put forth in [26, 27], based on the existence of infinitely many \(N\)-component reductions. Of course, the NLS equation in more than one spatial dimension is not integrable, and therefore one would have no reason to expect that the corresponding NLS-Whitham systems are. Still, the reductions to one-dimensional NLS-Whitham equations are indeed integrable, and therefore it is a natural question whether there are other integrable reductions. In this regard, we should point out that, even for the KP equation (which is integrable), the original Whitham system derived in [2] appears not to be integrable, but its harmonic and soliton limits are [14]. Moreover, so are various less-trivial one-dimensional reductions beyond the obvious reduction to the Whitham system for the KdV equation, once one properly takes into account the analogue of the compatibility conditions (3.4_a_) and (3.4_b_) [12].
Yet another interesting problem for future work is the issue of whether one can establish a precise relation between the 2DNLS-Whitham system and the KP-Whitham system. It is well known that the 1DNLS-Whitham system admits a reduction to the KdV-Whitham system [32]. It is also well known that the 2DNLS equation admits a reduction to the KP equation [41]. A natural question is therefore whether the 2DNLS-Whitham system admits a reduction to the KP-Whitham system. It is straightforward to see that, if one considers the same reduction as in [32], the PDEs for \(r_{1},\ldots,r_{4}\) in the 2DNLS-Whitham system naturally reduce to those for \(r_{1},\ldots,r_{3}\) in the KP-Whitham system. The PDE for \(q\) also reduces to the corresponding equation in the KP-Whitham system, since it just comes from the second component of the conservation of waves equation in both systems. The open question, however, is how one can obtain a PDE for \(p\) that does not contain a time derivative, as prescribed in the KP-Whitham system.
Finally, and most importantly from a practical point of view, an obvious opportunity for future work will be the use of the modulation equations derived here to characterize the dynamical behavior in physically significant scenarios. One important application is to the description of dispersive shock waves (DSWs) [32, 13]. Some of the earliest experiments on DSWs in nonlinear optics and Bose-Einstein condensates (BECs)--where the defocusing NLS equation is an excellent model--involved inherently multidimensional nonlinear wave propagation [22, 51, 34, 55]. One intriguing feature, observed in both BEC and optics [34, 55], is the coherent propagation of multidimensional DSWs with stable ring/spherical and elliptical/ellipsoidal patterns. These observations are at odds with the known transverse instability of planar cnoidal wave solutions of (1.1) [54]. Further analysis of the 2D and 3DNLS-Whitham modulation equations may provide some analytical insight in this. Moreover, BECs are three-dimensional, so the (3+1)-dimensional modulation equations derived here are needed to describe large amplitude matter waves. Three-dimensional effects have been shown to be decisive in some BEC DSW experiments [19, 43].
Various applications of the Whitham equations for the focusing and defocusing NLS equations in one spatial dimension were already mentioned above. We should also note that, while the full modulation system composed of equations (7.5_a_), (7.5_b_) and (7.5_c_) might appear complicated, even
its reductions can be useful in this regard. For example, of particular interest from an applicative point of view are the harmonic and soliton limits. In the one-dimensional case, soliton modulation theory and its applications were studied for the KdV equation in [42] and for the defocusing NLS equation in [52], while the harmonic limit of the Whitham equations for the KdV equation was studied in [20]. Similarly, the harmonic and soliton limits of the Whitham equations for the KP equation, which were derived and analyzed in [2, 14] have found concrete applications in [48, 50, 49]. These reductions analytically describe the evolution of a soliton or linear waves in the presence of the slowly varying mean field \(\tilde{\rho}\), \(\tilde{\mathbf{u}}\). Obtaining these modulation equations using multiple scales and a soliton ansatz is quite tedious and, to our knowledge, has apparently only been carried out for the KdV equation in [31]. We believe that, like with Whitham equations for the KP equation [2], the modulation equations derived in this work will prove to be an effective tool to study several physically significant problems. The soliton limit should prove to be particularly important in this respect, similar to the KP equation [48, 49, 50].
We hope that the results of this work and the present discussion will provide a stimulus for several further studies on these and related problems.
#### Acknowledgments
We thank Alexandr Chernyavskiy and Dmitri Kireyev for many useful discussions on topics related to this work. This research was partially supported by the National Science Foundation under grant numbers DMS-1816934 and DMS-2009487.
## Appendix
### Direct derivation of the periodic solutions of the NLS equation
Here we derive the periodic solutions of the NLS equation in an arbitrary number of dimensions directly, without using the hydrodynamic system. We start with the one-phase ansatz
\[\psi(\mathbf{x},t)=\sqrt{\rho(Z/\varepsilon)}\,\mathrm{e}^{i\Phi(Z/\varepsilon)}\,, \tag{A.1}\]
where, as before, the "fast variable" is \(Z=\mathbf{k}\cdot\mathbf{x}-\omega t\). Substituting (101) into (1) and separating into real and imaginary parts yields respectively:
\[(\sqrt{\rho})^{\prime\prime}-\sqrt{\rho}(\Phi^{\prime})^{2}+\frac{\omega}{\|\mathbf{k}\|^{2}}\sqrt{\rho}\,\Phi^{\prime}-\frac{2}{\|\mathbf{k}\|^{2}}\rho^{3/2}=0\,, \tag{A.2a}\] \[\sqrt{\rho}\,\Phi^{\prime\prime}+\Big{(}2\Phi^{\prime}-\frac{\omega}{\|\mathbf{k}\|^{2}}\Big{)}(\sqrt{\rho})^{\prime}=0\,, \tag{A.2b}\]
where, for brevity in this section, we denote \(a=\|\mathbf{k}\|^{2}\). Integrating (A.2b) yields \(\Phi^{\prime}\) up to an integration constant \(J\):
\[\Phi^{\prime}=\frac{J}{\|\mathbf{k}\|\,\rho}+\frac{\omega}{2a}\,. \tag{A.3}\]
Substituting the phase relation (A.3), the real part (A.2a) reduces to:
\[(\sqrt{\rho})^{\prime\prime}-\frac{J^{2}}{a\,\rho^{3/2}}+\Big{(}\frac{\omega}{2a}\Big{)}^{2}\sqrt{\rho}-\frac{2}{a}\rho^{3/2}=0. \tag{A.4}\]
Multiplying by \(2(\sqrt{\rho})^{\prime}\) and integrating with respect to \(Z\) and letting \(f=\rho\) yields:
\[(f^{\prime})^{2}=\frac{4}{a}f^{3}-4\Big{(}\frac{\omega}{2a}\Big{)}^{2}f^{2}+4c_{1}f-\frac{4J^{2}}{a}\,. \tag{A.5}\]
By substituting \(f(Z)=A+By^{2}(Z)\), we get the following ODE for \(y\):
\[(y^{\prime})^{2}=\frac{1}{B^{2}}\Big{[}\,\frac{A^{3}}{a}-A^{2}\Big{(}\frac{\omega}{2a}\Big{)}^{2}+Ac_{1}-\frac{J^{2}}{a}\,\Big{]}\,\frac{1}{y^{2}}+\frac{1}{B}\Big{[}\,\frac{3A^{2}}{a}-2A\Big{(}\frac{\omega}{2a}\Big{)}^{2}+c_{1}\,\Big{]}+\Big{[}\,\frac{3A}{a}-\Big{(}\frac{\omega}{2a}\Big{)}^{2}\,\Big{]}\,y^{2}+\frac{B}{a}\,y^{4}. \tag{A.6}\]
Now recall that the Jacobi elliptic sine \(y(Z)=\mathrm{sn}(cZ|m)\) solves the ODE \((y^{\prime}/c)^{2}=(1-y^{2})(1-my^{2})\). By requiring that (A.6) matches the ODE for the elliptic sine, one then obtains (2.17), with \(B=4m\|\mathbf{k}\|^{2}K_{m}^{2}\) as before, and with
\[J^{2}=4aK_{m}^{2}A\bigg{(}1+\frac{A}{4K_{m}^{2}a}\bigg{)}(A+4mK_{ m}^{2}a)\,,\] (A.7a) \[\Big{(}\frac{\omega}{2a}\Big{)}^{2}=4K_{m}^{2}(1+m)+\frac{3A}{a},\quad c_{1}=\frac{1}{a}\left[(4mK_{m}^{2}a+A)(4K_{m}^{2}a+2A)+A\big{(}4K_{m}^{ 2}a+A\big{)}\right].\] (A.7b)
Similar to section 2.2, we write the ODE (A.5) as \((f^{\prime})^{2}=P_{3}(f)\), where
\[P_{3}(f)=\frac{4}{a}\left[f^{3}-a\Big{(}\frac{\omega}{2a}\Big{)}^{2}f^{2}+c_{ 1}\,a\,f-J^{2}\right]=\frac{4}{a}(f-\lambda_{1})(f-\lambda_{2})(f-\lambda_{3})\,,\] (A.8)
with \(\lambda_{1},\ldots,\lambda_{3}\) given by (2.19). Note that the requirements \(a\geq 0\) and \(0\leq m\leq 1\) again immediately imply (2.21). The symmetric polynomials defined by \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are related to the above constants as
\[e_{1}=\lambda_{1}+\lambda_{2}+\lambda_{3}=\omega^{2}/4a\,,\quad e_{2}=\lambda_ {1}\lambda_{2}+\lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{1}=c_{1}\,a\,,\quad e _{3}=\lambda_{1}\lambda_{2}\lambda_{3}=J^{2}.\] (A.9)
These relations also allow one to recover \(A\), \(a\), and \(m\) when \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) are known. The above solution contains \(N+2\) independent parameters: \(A\), \(m\), and \(\mathbf{k}\) [since \(J\) and \(\omega\) are determined by (A.7a), (A.7b)]. Next we employ the Galilean invariance of the NLS equation to apply a Galilean boost and thereby obtain the more general family of solutions
\[\bar{\psi}(\mathbf{x},t)=\psi(\mathbf{x}-2\mathbf{v}t,t)\,\mathrm{e}^{i(\mathbf{v}\cdot\mathbf{x}-\|\mathbf{v}\|^{2}\,t)/\varepsilon}=\sqrt{\rho(\bar{z}/\varepsilon)}\,\mathrm{e}^{i\tilde{\Phi}(\bar{z}/\varepsilon,\mathbf{x},t)}\,,\] (A.10a) where \(\bar{z}=\mathbf{k}\cdot\mathbf{x}-\tilde{\omega}\,t\), with \(\tilde{\omega}=\omega+2\mathbf{k}\cdot\mathbf{v}\), and where \[\tilde{\Phi}(\bar{z}/\varepsilon,\mathbf{x},t)=\Phi(\bar{z}/\varepsilon)+(\mathbf{v}\cdot\mathbf{x}-\|\mathbf{v}\|^{2}\,t)/\varepsilon\,.\] (A.10b)
The transformation adds the \(N\) new independent parameters \(v_{1},\ldots,v_{N}\). Therefore, the periodic solution of the NLS equation (1.1) in \(N\) spatial dimensions contains \(2N+2\) independent real parameters: \(A\), \(m\), \(\mathbf{k}\) and \(\mathbf{v}\), as expected.
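For readers who wish to sanity-check the algebra, the elliptic-function form of the solution is easy to verify numerically. The short Python sketch below is purely illustrative (it is not part of the paper): the roots \(\lambda_{1},\lambda_{2},\lambda_{3}\) and \(a=\|\mathbf{k}\|^{2}\) are arbitrary test values, and the solution is written in the standard cnoidal parametrization equivalent to \(f=A+By^{2}\). The script checks that \((f^{\prime})^{2}=P_{3}(f)\) holds, cf. (A.5) and (A.8), and evaluates the symmetric polynomials of (A.9).

```python
# Illustrative check (not from the paper): the elliptic solution satisfies
# (f')^2 = (4/a)(f - lam1)(f - lam2)(f - lam3), cf. (A.5) and (A.8).
import numpy as np
from scipy.special import ellipj, ellipk

a = 1.3                                  # a = ||k||^2, arbitrary positive test value
lam1, lam2, lam3 = 0.2, 0.9, 1.6         # arbitrary test roots, 0 <= lam1 <= lam2 <= lam3
m = (lam2 - lam1) / (lam3 - lam1)        # elliptic parameter
c = np.sqrt((lam3 - lam1) / a)           # scaling of the fast variable

def rho(Z):                              # rho = lam1 + (lam2 - lam1) sn^2, i.e. f = A + B y^2
    sn, cn, dn, _ = ellipj(c * Z, m)
    return lam1 + (lam2 - lam1) * sn**2

def rho_prime(Z):                        # d(rho)/dZ, using d(sn)/du = cn*dn
    sn, cn, dn, _ = ellipj(c * Z, m)
    return 2.0 * (lam2 - lam1) * c * sn * cn * dn

Z = np.linspace(0.0, 4.0 * ellipk(m) / c, 400)       # two periods
lhs = rho_prime(Z)**2
rhs = (4.0 / a) * (rho(Z) - lam1) * (rho(Z) - lam2) * (rho(Z) - lam3)
print("max |(f')^2 - P3(f)| =", np.max(np.abs(lhs - rhs)))   # close to machine precision

# symmetric polynomials, cf. (A.9): e1 = omega^2/(4a), e2 = c1*a, e3 = J^2
e1 = lam1 + lam2 + lam3
e2 = lam1*lam2 + lam2*lam3 + lam3*lam1
e3 = lam1*lam2*lam3
print("e1, e2, e3 =", e1, e2, e3)
```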
### Calculation of the solution amplitude and second frequency and simplification of certain terms
Here we give a few additional details on the calculation of the fluid density. Starting from (2.10b), using (2.12) and simplifying the resulting ODE, one has
\[a\rho^{\prime\prime\prime}+a\rho^{\prime}\frac{(\rho^{\prime})^{2}-2\rho\rho^{ \prime\prime}}{\rho^{2}}-4\rho\rho^{\prime}+\frac{4J^{2}\rho^{\prime}}{\rho^ {2}}=0\,,\] (A.11)
where \(a=\|\mathbf{k}\|^{2}\) for brevity. Integrating w.r.t. \(Z\) yields
\[a\rho^{\prime\prime}-a\frac{(\rho^{\prime})^{2}}{\rho}-2\rho^{2}+2c_{1}-\frac{ 4J^{2}}{\rho}=0\,,\] (A.12)
where \(c_{1}\) is an arbitrary integration constant. Multiplying (A.12) by \(2\rho^{\prime}/\rho^{2}\) and integrating with respect to \(Z\) again yields
\[a(\rho^{\prime})^{2}=4\rho^{3}-4c_{2}\rho^{2}+4c_{1}\rho-4J^{2}\,,\] (A.13)
with \(c_{2}\) another arbitrary integration constant. Letting \(\rho(Z)=A+By^{2}(Z)\) yields the following ODE:
\[(y^{\prime})^{2}=\frac{1}{B^{2}a}(A^{3}-A^{2}c_{2}+Ac_{1}-J^{2})\frac{1}{y^{2} }+\frac{1}{Ba}(3A^{2}-2Ac_{2}+c_{1})+\Big{(}\frac{3A}{a}-\frac{c_{2}}{a} \Big{)}y^{2}+\frac{B}{a}y^{4}.\] (A.14)
Now recall that the Jacobi elliptic sine \(y(Z)=\mathrm{sn}(cz|m)\) solves the ODE \((y^{\prime}/c)^{2}=(1-y^{2})(1-my^{2})\). By requiring that (A.14) matches the ODE for the elliptic sine, one obtains (2.17), with the coefficients as in (A.7a).
Next we obtain (2.23), which determines the frequency \(\mu\) of the second phase. As mentioned in section 2, to this end one can use the undifferentiated version of (2.10b) [obtained from the real part of (1.1) using (2.1) and (2.6)], which is
\[-\omega\phi^{\prime}+2(\mathbf{k}\cdot\bar{\mathbf{u}})\,\phi^{\prime}+a(\phi^{ \prime})^{2}+2\rho-\mu+\|\bar{\mathbf{u}}\|^{2}-\frac{a}{4}\Big{(}(\ln\rho)^{ \prime\prime}+\frac{\rho^{\prime\prime}}{\rho}\Big{)}=0\,.\] (A.15)
Differentiating (A.15) w.r.t. \(x\) and \(y\) and collecting leading-order terms yields (2.10b). However, (A.15) allows us to determine \(\mu\) in a more straightforward manner. Indeed, substituting (2.11) into equation (A.15) and simplifying yields,
\[2a\rho^{\prime\prime}-a\frac{(\rho^{\prime})^{2}}{\rho}-8\rho^{2}+C\rho-\frac{4J^{2}}{\rho}=0\,,\] (A.16a) where \[C=4\mu-4\big{(}\|\bar{\mathbf{u}}\|^{2}-(J\overline{\rho^{-1}}/g^{1/2})^{2}\big{)}\,.\] (A.16b)
Multiplying (A.16\(a\)) by \(\rho^{\prime}/\rho\) and integrating with respect to \(Z\) yields
\[a(\rho^{\prime})^{2}=4\rho^{3}-C\rho^{2}+4c_{3}\rho-4J^{2}\,,\] (A.17)
with an arbitrary integration constant \(c_{3}\). Comparing the coefficients in (A.13) and (A.17) we have \(C=4c_{2}\) [as well as \(c_{1}=c_{3}\)], which, when inserted in (A.16b), finally yields (2.23) for \(\mu\).
Finally, we provide further details on how to simplify the modulation equations (3.4) and in particular on how to obtain (3.7f). The averaged conservation of momentum equation (3.4f), when written in terms of \(\lambda_{1},...,\lambda_{3}\) and \(\mathbf{U}\), is
\[(J\,\hat{\mathbf{k}}+\bar{\rho}\mathbf{U})_{t}+\nabla(\overline{\rho^{2}})+\nabla\cdot\bigg{[}2\bar{\rho}\mathbf{U}\otimes\mathbf{U}+2J\,(\hat{\mathbf{k}}\otimes\mathbf{U}+\mathbf{U}\otimes\hat{\mathbf{k}})+\bigg{(}\overline{\bigg{(}\frac{(\rho^{\prime})^{2}}{2\rho}\bigg{)}}+2\frac{J^{2}\overline{\rho^{-1}}}{\|\mathbf{k}\|^{2}}\bigg{)}\,\mathbf{k}\otimes\mathbf{k}\bigg{]}=\mathbf{0}\,.\] (A.18)
Notice that averages containing \(\rho_{z}\) can be evaluated by recalling that \(\rho(Z)\) satisfies the ODE (2.15). Differentiating (2.15) and using the definition of the symmetric polynomials yields
\[\frac{\|\mathbf{k}\|^{2}}{2}\rho^{\prime\prime}=3\rho^{2}-2e_{1}\rho+e_{2}\,,\] (A.19a) and averaging over the fast variable \(Z\) gives \[\overline{\rho^{2}}=\frac{2e_{1}\overline{\rho}-e_{2}}{3}\,.\] (A.19b)
Reordering the ODE (2.15) gives us
\[\frac{\|\mathbf{k}\|^{2}\rho_{z}^{2}+4J^{2}}{\rho}=4\rho^{2}-4e_{1}\rho+4e_{2}\,.\] (A.19c)
Averaging again over the fast variable \(Z\) and using (A.19b) yields
\[\frac{\|\mathbf{k}\|^{2}}{4}\bigg{(}\overline{\Big{(}\frac{(\rho^{\prime})^{2}}{\rho}\Big{)}}+\frac{4J^{2}}{\|\mathbf{k}\|^{2}}\,\overline{\rho^{-1}}\bigg{)}=\frac{1}{3}(2e_{2}-e_{1}\overline{\rho})\,.\] (A.19d)
Finally, using (A.19d), equation (A.18) yields (3.7f).
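As a further sanity check (again illustrative only, with the same arbitrary test roots as in the previous snippet), the period averages entering (A.19b) and (A.19d) can be computed numerically and compared with the right-hand sides written in terms of \(e_{1}\), \(e_{2}\), and \(J^{2}=e_{3}\).

```python
# Numerical check (illustrative) of the averaging identities (A.19b) and (A.19d).
import numpy as np
from scipy.special import ellipj, ellipk
from scipy.integrate import quad

a = 1.3
lam1, lam2, lam3 = 0.2, 0.9, 1.6
m = (lam2 - lam1) / (lam3 - lam1)
c = np.sqrt((lam3 - lam1) / a)
L = 2.0 * ellipk(m) / c                   # period of rho(Z)
e1 = lam1 + lam2 + lam3
e2 = lam1*lam2 + lam2*lam3 + lam3*lam1
J2 = lam1*lam2*lam3                       # e3 = J^2, cf. (A.9)

def rho(Z):
    sn, _, _, _ = ellipj(c * Z, m)
    return lam1 + (lam2 - lam1) * sn**2

def rho_p(Z):
    sn, cn, dn, _ = ellipj(c * Z, m)
    return 2.0 * (lam2 - lam1) * c * sn * cn * dn

avg = lambda f: quad(f, 0.0, L, limit=200)[0] / L
rho_bar  = avg(rho)
rho2_bar = avg(lambda Z: rho(Z)**2)
rpp_bar  = avg(lambda Z: rho_p(Z)**2 / rho(Z))
rinv_bar = avg(lambda Z: 1.0 / rho(Z))

print("(A.19b):", rho2_bar, "vs", (2.0*e1*rho_bar - e2) / 3.0)
print("(A.19d):", 0.25*a*rpp_bar + J2*rinv_bar, "vs", (2.0*e2 - e1*rho_bar) / 3.0)
```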
### Detailed steps in the derivation of the 3DNLS-Whitham system
We begin by expressing the modulation equations in terms of convective derivatives. Using (7.21) one can see that
\[D_{y}(q_{2}) =q_{2,y}-q_{1}q_{2,x}=\frac{D_{z}(k_{1}q_{1})}{k_{1}}-q_{1}\frac{D_ {z}k_{1}}{k_{1}}=D_{z}(q_{1})\,,\] (A.20)
which proves (7.22). Moreover, using (3.4d) and the fact that \(\mathbf{p}=\tilde{\mathbf{u}}_{b}-\tilde{u}_{1}\mathbf{q}\), one has
\[D_{y}(p_{2}) =(\tilde{u}_{3}-q_{2}\tilde{u}_{1})_{y}-q_{1}(\tilde{u}_{3}-q_{2} \tilde{u}_{1})_{x}=\tilde{u}_{3,y}-q_{1}\tilde{u}_{3,x}-q_{2}\tilde{u}_{1,y}+q_ {1}q_{2}\tilde{u}_{1,x}-\tilde{u}_{1}D_{y}(q_{2})\,,\] \[=\tilde{u}_{2,z}-q_{1}\tilde{u}_{1,z}-q_{2}\tilde{u}_{2,x}+q_{1}q _{2}\tilde{u}_{1,x}-\tilde{u}_{1}D_{z}(q_{1})=D_{z}(p_{1})\,.\] (A.21a)
which yields (7.22).
Using the identity (7.22) and straightforward algebra, we can rewrite (7.5b) and (7.5c) as
\[D_{t}\mathbf{q}+2g\mathbf{D}_{b}U_{1}+2q_{1}(U_{1}\mathbf{D}_{y }q_{1}+\mathbf{D}_{y}p_{1})+2q_{2}(U_{1}\mathbf{D}_{y}q_{2}+\mathbf{D}_{y}p_{2 })=0\,,\] (A.22a) \[D_{t}\mathbf{p}-2q_{1}U_{1}D_{y}\mathbf{p}-2q_{2}U_{1}D_{z} \mathbf{p}+\mathbf{D}_{y}(g(\tilde{e}_{1}-U_{1}^{2}))=0\,,\] (A.22b)
where \(D_{t}\) is as in (7.23).
Next, we express the first conservation of waves equation in convective derivative form. Recalling equations (7.17) and using (7.21) one can obtain the following,
\[\frac{D_{t}k_{1}}{k_{1}}+2(U_{1})_{x}+2\mathbf{q}\cdot(\mathbf{U}_{b})_{x}=0\,.\] (A.23)
Simplifying further we have (7.24), with \(D_{x}\) as in (7.23) and with
\[W_{1}=U_{1}(\|\mathbf{q}\|^{2})_{x}-2\mathbf{q}\cdot\mathbf{D}_{y}U_{1}+2 \mathbf{q}\cdot\mathbf{p}_{x}\,.\] (A.24)
Moreover, using (A.22) one can simplify \(W_{1}\) further and obtain (7.26).
Next, it can be easily seen that (7.19) becomes
\[D_{t}(U_{1}+J\overline{\rho^{-1}}/g^{1/2})+g(\tilde{e}_{1})_{x}+2gJ\overline{\rho^{-1}}/g^{1/2}\,(U_{1})_{x}+2U_{1}\|\mathbf{q}\|^{2}(J\overline{\rho^{-1}}/g^{1/2}+U_{1})_{x}-2(U_{1}\mathbf{q}+\mathbf{p})\cdot\boldsymbol{\nabla}_{\flat}(\cdots)=0\,.\]
Simplifying further we obtain
\[D_{t}(\overline{\rho})+2\overline{\rho}D_{x}U_{1}+2D_{x}(g\bar{J})+2gM_{1}( \overline{\psi}_{\flat}\cdot{\bf q})+2\overline{\rho}(\overline{\psi}_{\flat} \cdot{\bf p})=0\,.\] (A.30)
To rewrite (A.30) in simpler form, we consider the combination (A.30) - \(g(\overline{\rho}/g)\)(7.24), which yields (7.27).
Finally, we consider the first component of the averaged momentum equation (7.20\(a\)); using a similar approach as before, one can rewrite it as follows:
\[D_{t}(gM_{1})+2g(M_{1}+J)D_{x}U_{1}+2U_{1}D_{x}(g\bar{J})+D_{x}(g\bar{e}_{2})-{\bf q}\cdot{\bf D}_{\flat}(g(\overline{\rho}^{2}/g^{2}))\] \[+2(\overline{\rho}^{2}/g^{2})g({\bf q}\cdot{\bf q}_{x})+g\big{(}\bar{e}_{2}-(\overline{\rho}^{2}/g^{2})+2U_{1}(M_{1}+\bar{J})\big{)}\overline{\psi}_{\flat}\cdot{\bf q}+2M_{1}g\overline{\psi}_{\flat}\cdot{\bf p}=0\,.\] (A.31)
Next, taking the combination (A.31) - \(U_{1}\)(A.30) yields
\[g(\overline{\rho}/g)D_{t}U_{1}+D_{t}(g\bar{J})+4gJD_{x}U_{1}+D_{ x}(g\bar{e}_{2})-{\bf q}\cdot{\bf D}_{\flat}(g(\overline{\rho}^{2}/g^{2}))\] \[+2g(\overline{\rho}^{2}/g^{2})({\bf q}\cdot{\bf q}_{x})+g(\bar{e} _{2}-(\overline{\rho}^{2}/g^{2})-2U_{1}\bar{J})+2g\overline{\psi}_{\flat} \cdot{\bf p}=0\,.\] (A.32)
To simplify this equation further, we consider the combination (7.20\(b\)) - (7.20\(a\)) \({\bf q}\) - (7.18) \({\bf p}\) and obtain the following vector equation:
\[M_{1}D_{t}{\bf q}+(\overline{\rho}/g)D_{t}{\bf p}+(2U_{1}\bar{J}+\bar{e}_{2})D_{x}{\bf q}+2JD_{x}{\bf p}\] \[-(\overline{\rho}^{2}/g^{2})D_{x}{\bf q}+g{\bf D}_{\flat}(\overline{\rho}^{2}/g^{2})+2(\overline{\rho}^{2}/g^{2}){\bf D}_{\flat}g=0\,.\] (A.33)
Finally, we consider the combination (A.32) \(-\,g\bar{J}\times\)(7.24) \(+\,\mathbf{q}\cdot\)(A.33). Using (7.22\(a\)) and after extensive simplifications, we obtain (7.28).
Next we show that, using the transformation to Riemann-type variables (7.4), equations (7.24), (7.25), (7.27) and (7.28) yield (7.30). Note first that, using (7.4), we have the following identities:
\[U_{1}=\tfrac{1}{2}s_{1}\,,\] (A.34a) \[\bar{e}_{1}=s_{2}-\tfrac{1}{4}s_{1}^{2}\,,\] (A.34b) \[\bar{e}_{2}=s_{4}+\tfrac{1}{16}\{-16s_{3}s_{1}-4s_{2}^{2}+8s_{1}^{2}s_{2}-s_{1}^{4}\}\,,\] (A.34c) \[\bar{J}=\tfrac{1}{3}s_{3}-\tfrac{1}{24}s_{1}(6s_{2}-s_{1}^{2})\,,\] (A.34d)
[where again the \(s_{n}\) are as in (5.13)], which allow us to express \(\bar{e}_{1},\dots,\bar{e}_{3}\) in terms of the Riemann invariants via \(s_{1},\dots,s_{4}\). The identity (A.34\(d\)) is especially important, since it allows us to eliminate square roots from the modulation equations. Recall that (2.22) only determines \(J^{2}\), and \(J=\sigma(\lambda_{1}\lambda_{2}\lambda_{3})^{1/2}\), with \(\sigma=\pm 1\). On the other hand, (7.4) yields \(\sigma\lambda_{1}^{1/2}=\tfrac{1}{2}\sqrt{g}(r_{1}-r_{2}-r_{3}+r_{4})\), where the sign \(\sigma\) here is needed because one needs \(\lambda_{3}\geq\lambda_{2}\geq\lambda_{1}\geq 0\) (cf. section 2.2), but \(r_{1}-r_{2}-r_{3}+r_{4}\) can be either positive or negative depending on the relative magnitude of \(r_{1},\dots,r_{4}\). (In contrast, no ambiguity arises for \(\lambda_{2}^{1/2}\) and \(\lambda_{3}^{1/2}\) when \(r_{1},\dots,r_{4}\) are well-ordered.) One can verify that, with these choices, the sign of both the left-hand side and the right-hand side of (A.34\(d\)) equals \(\sigma\). The following formulae are also useful:
\[k_{1}=\frac{\sqrt{(r_{4}-r_{2})(r_{3}-r_{1})}}{2K_{m}}\,,\] (A.35a) \[\left(\frac{\partial k_{1}}{\partial r_{1}},\dots,\frac{\partial k_{1}}{\partial r_{4}}\right)^{T}=\frac{\sqrt{(r_{4}-r_{2})(r_{3}-r_{1})}}{4K_{m}^{2}}\left(\frac{(r_{1}-r_{4})K_{m}+(r_{4}-r_{2})E_{m}}{(r_{2}-r_{1})(r_{4}-r_{1})}\,,\ \frac{(r_{3}-r_{2})K_{m}+(r_{1}-r_{3})E_{m}}{(r_{2}-r_{1})(r_{3}-r_{2})}\,,\ \frac{(r_{3}-r_{2})K_{m}+(r_{2}-r_{4})E_{m}}{(r_{3}-r_{2})(r_{3}-r_{4})}\,,\ \frac{(r_{4}-r_{1})K_{m}+(r_{1}-r_{3})E_{m}}{(r_{4}-r_{1})(r_{4}-r_{3})}\right)^{T}\,.\] (A.35b)
We are now ready to present the final steps of the derivation. We begin by deriving (7.24), which is the simplest of the four equations. In this case we simply need to express \(D_{t}k_{1}\) in terms of the Riemann invariants, i.e.,
\[D_{t}k_{1}=\sum_{j=1}^{4}\frac{\partial k_{1}}{\partial r_{j}}D_{t}r_{j}\,,\] (A.36)
which immediately yields (7.30\(a\)), with \(W_{1}\) as in (7.26\(a\)). Next, equation (7.25) simplifies due to the identity
\[D_{t}(U_{1}+J\overline{\rho^{-1}}/g^{1/2})+(U_{1}-J\overline{\rho^{-1}}/g^{1/2}) \frac{D_{t}k_{1}}{k_{1}}=\frac{2}{k_{1}}\sum_{j=1}^{4}r_{j}\frac{\partial k_{1} }{\partial r_{j}}D_{t}r_{j}\,,\] (A.37)
and takes the form of (7.30\(b\)), with \(W_{2}\) as in (7.26\(b\)). Next, taking the combination (7.27)/2 \(+\ gU_{1}/2\times(7.25)\ +\ g(s_{2}-2U_{1}^{2})/4\times(7.24)\) and using identities (A.34) and (A.37), yields (7.30c), where \(W_{3}\) is as in (7.31\(a\)). Finally, considering the linear combination (7.28)/2 \(+\ 3U_{1}/2\times(7.27)\)\(g(s_{2}+2U_{1}^{2})/4\times(7.25)\ +\ g(3J+U_{1}s_{2}-2U_{1}^{3})/2\times(7.24)\), and using identities (A.34), (A.37) again, and after some tedious algebra, one finds (7.30\(d\)), with \(W_{4}\) as in (7.31\(b\)).
Our last task is to show that the compatibility relations \(\boldsymbol{\nabla}\times\mathbf{k}=\boldsymbol{\nabla}\times\tilde{\mathbf{ u}}=\mathbf{0}\), when written in terms of the Riemann-type variables \(\mathbf{r}=(r_{1},\ldots,r_{4})^{T}\) as well as \(\mathbf{q}\) and \(\mathbf{p}\), yield (7.8). To this end, we first use the definition of \(\mathbf{q}\) as in (7.1\(b\)) along with the compatibility condition \(\boldsymbol{\nabla}\times\mathbf{k}=0\). It can be easily seen that
\[k_{1}\mathbf{q}_{x}=(\mathbf{k}_{0})_{x}-k_{1,x}\,\mathbf{q}= \boldsymbol{\nabla}_{y}k_{1}-k_{1,x}\,\mathbf{q}=\mathbf{D}_{y}k_{1}\] (A.38)
(cf. the third equation in (7.9)), which yields the first half of (7.8). Next, using the compatibility condition \(\boldsymbol{\nabla}\times\tilde{\mathbf{u}}=0\) with the definition of \(\tilde{u}_{1}\) as in (7.3c) one can derive (7.12), namely,
\[\mathbf{p}_{x}=\mathbf{D}_{y}\big{(}U_{1}+J\overline{\rho^{-1}}/g^{1/2}\big{)} +\big{(}U_{1}-J\overline{\rho^{-1}}/g^{1/2}\big{)}\frac{\mathbf{D}_{y}k_{1}}{ k_{1}}-2U_{1}\frac{\mathbf{D}_{y}k_{1}}{k_{1}}\,.\] (A.39)
Using the identity (A.37), one then obtains the second half of (7.8).
|
2303.16273 | Vacuum birefringence and dichroism in a strong plane-wave background | In the present study, we consider the effects of vacuum birefringence and
dichroism in strong electromagnetic fields. According to quantum
electrodynamics, the vacuum state exhibits different refractive properties
depending on the probe photon polarization and one also obtains different
probabilities of the photon decay via production of electron-positron pairs.
Here we investigate these two phenomena by means of several different
approaches to computing the polarization operator. The external field is
assumed to be a linearly polarized plane electromagnetic wave of arbitrary
amplitude and frequency. Varying the probe-photon energy and the field
parameters, we thoroughly examine the validity of the locally-constant field
approximation (LCFA) and techniques involving perturbative expansions in terms
of the external-field amplitude. Within the latter approach, we develop a
numerical method based on a direct evaluation of the weak-field Feynman
diagrams, which can be employed for investigating more complex external
backgrounds. It is demonstrated that the polarization operator depends on two
parameters: classical nonlinearity parameter $\xi$ and the product $\eta =
\omega q_0 / m^2$ of the laser field frequency $\omega$ and the photon energy
$q_0$ ($m$ is the electron mass). The domains of validity of the approximate
techniques in the $\xi \eta$ plane are explicitly identified. | I. A. Aleksandrov, V. M. Shabaev | 2023-03-28T19:31:13Z | http://arxiv.org/abs/2303.16273v1 | # Vacuum birefringence and dichroism in a strong plane-wave background
###### Abstract
In the present study, we consider the effects of vacuum birefringence and dichroism in strong electromagnetic fields. According to quantum electrodynamics, the vacuum state exhibits different refractive properties depending on the probe photon polarization and one also obtains different probabilities of the photon decay via production of electron-positron pairs. Here we investigate these two phenomena by means of several different approaches to computing the polarization operator. The external field is assumed to be a linearly polarized plane electromagnetic wave of arbitrary amplitude and frequency. Varying the probe-photon energy and the field parameters, we thoroughly examine the validity of the locally-constant field approximation (LCFA) and techniques involving perturbative expansions in terms of the external-field amplitude. Within the latter approach, we develop a numerical method based on a direct evaluation of the weak-field Feynman diagrams, which can be employed for investigating more complex external backgrounds. It is demonstrated that the polarization operator depends on two parameters: classical nonlinearity parameter \(\xi\) and the product \(\eta=\omega q_{0}/m^{2}\) of the laser field frequency \(\omega\) and the photon energy \(q_{0}\) (\(m\) is the electron mass). The domains of validity of the approximate techniques in the \(\xi\eta\) plane are explicitly identified.
## I Introduction
According to quantum electrodynamics (QED), the physical vacuum state contains quantum fluctuations of the electromagnetic and electron-positron fields, which can be viewed as spontaneous creation and annihilation of electron-positron pairs interacting with each other via virtual photons. Although these virtual particles are not observable themselves, their existence can manifest itself in interactions with external fields and real particles, giving rise to a number of remarkable nonlinear phenomena such as light-by-light scattering [1; 2; 3; 4], the Sauter-Schwinger effect [2; 5; 6], and so on (for review, see, e.g., Refs. [7; 8; 9]). In this investigation, we consider the propagation of a probe photon in vacuum in the presence of a strong external background. The latter polarizes the physical vacuum, so the probe photon effectively interacts with a nonlinear medium, which leads to the phenomena of vacuum birefringence and dichroism [10; 11; 12; 13; 14], which are the focus of the present study (we note that the nontrivial properties of the vacuum state in the presence of real photons also give rise to the recently discussed stimulated photon emission [15]).
Observing these processes in the laboratory is currently an intriguing and challenging task. There are mainly two different approaches to probing vacuum birefringence. First, one can rely on the unprecedented accuracy of experimental measurements in the optical domain, i.e., in the regime of relatively low probe-photon energies (see, e.g., Refs. [16; 17; 18; 19; 20; 21]). From the theoretical viewpoint, this domain allows one to employ local approximations, i.e., to treat the external (laser) field as a locally constant background. The corresponding locally-constant field approximation (LCFA) has basically two different implementations, based either on employing the exact expressions for the Heisenberg-Euler effective Lagrangian [22] or on using the local values of the polarization operator derived in constant crossed fields [23; 24]. The second approach to vacuum birefringence involves high-energy probe photons [24; 25; 26]. The advantage of this technique stems from the large probabilities of the corresponding quantum processes, which result in a large experimental signal. On the other hand, it is significantly more difficult to perform measurements in the high-energy domain as, e.g., the Heisenberg-Euler approximation is only valid in the low-energy domain. To properly assess the feasibility of the corresponding scenarios, one has to obtain accurate and reliable theoretical predictions.
In order to avoid approximate local treatment of the external electromagnetic field, one can model it with a plane-wave background allowing one to deduce explicit analytical expressions for the polarization tensor [13; 14; 23]. On the other hand, this simplified setup may not properly reflect the properties of real experimental conditions.
In the present study, we have two primary aims. First, we will thoroughly examine the plane-wave scenario by means of analytical nonperturbative expressions derived in Refs. [13; 14; 23]. We will compute the polarization tensor in a wide range of physical parameters governing the process under consideration: laser-field amplitude, laser frequency, and probe-photon energy. Expanding the nonperturbative result in powers of the external-field amplitude, we will assess the accuracy of the calculations based on perturbation theory (PT). Besides, we will quantitatively analyze the validity of the LCFA in the two forms described above. Second, the polarization tensor will be directly evaluated via the corresponding Feynman diagrams. This approach is very important since it can allow one to consider other field configurations, which differ from a simple plane-wave scenario. In what follows, we will benchmark our direct computational procedures and also provide an additional insight into the analytical properties of the integrands involved in the Feynman diagrams. For instance, it will be demonstrated that the overlap between the branch cuts that appears for sufficiently high photon energies is closely related to the decay of the probe photon via production of
electron-positron pairs. We also mention that \(e^{+}e^{-}\) pairs can be produced directly by a classical strong field, i.e., via the Sauter-Schwinger mechanism. The validity of the LCFA in this context was recently examined in Refs. [27; 28; 29; 30].
The paper has the following structure. In Sec. II we describe the setup under consideration involving a probe photon and external plane-wave background. In Sec. III we present nonperturbative expressions which we employ in our numerical computations. In Sec. IV we calculate the leading-order contribution with respect to the external-field amplitude. Section V is devoted to the description of the two possible implementations of the LCFA. In Sec. VI we discuss how one can directly evaluate the leading-order Feynman diagrams. Section VII contains our numerical results obtained by means of the various techniques. Finally, we conclude in Sec. VIII.
Throughout the text, we employ the units \(\hbar=c=1\), \(\alpha=e^{2}/(4\pi)\approx 1/137\).
## II Setup and notation
We assume that the external plane wave is polarized along the \(x\) axis and propagates in the \(z\) direction, i.e., it depends on \(\varphi=\omega n^{\mu}x_{\mu}=\omega(t-z)\), where \(\omega\) is the laser frequency. The null vector \(n\) obeys \(n_{0}=1\), \(n^{2}=0\). The corresponding vector potential has the following form:
\[\mathbf{A}(x) = \mathcal{A}(\omega(t-z))\mathbf{e}_{x}, \tag{1}\] \[\mathcal{A}(\varphi) = \frac{E_{0}}{\omega}\sin\varphi, \tag{2}\]
where \(E_{0}\) is the field strength amplitude. We also introduce a dimensionless parameter \(\xi=|eE_{0}|/(m\omega)\). The initial photon momentum \(\mathbf{q}\) points in the opposite direction to \(\mathbf{n}=\mathbf{e}_{z}\), \(\mathbf{q}=-q_{0}\mathbf{e}_{z}\). Accordingly, the initial 4-momentum of the photon is \(q^{\mu}=q_{0}(1,0,0,-1)\). The final momentum will be denoted by \(k^{\mu}\). In what follows, we will also employ the light-cone components which for an arbitrary 4-vector \(v^{\mu}\) read
\[v_{+} = \frac{v_{0}+\mathbf{n}\mathbf{v}}{2}, \tag{3}\] \[v_{-} = v_{0}-\mathbf{n}\mathbf{v},\] (4) \[\mathbf{v}_{\perp} = \mathbf{v}-(\mathbf{n}\mathbf{v})\mathbf{n}. \tag{5}\]
The scalar product of two vectors can be evaluated via
\[vw\equiv v^{\mu}w_{\mu}=v_{+}w_{-}+v_{-}w_{+}-\mathbf{v}_{\perp}\mathbf{w}_{\perp}. \tag{6}\]
For instance, \(n_{+}=1\), \(n_{-}=0\), \(\mathbf{n}_{\perp}=0\), and \(\varphi=\omega x_{-}\).
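As a quick numerical consistency check of the light-cone conventions, the decomposition (3)-(6) can be compared with the ordinary Minkowski product. The small sketch below is illustrative only (random test four-vectors, \(\mathbf{n}=\mathbf{e}_{z}\)).

```python
# Illustrative check of the light-cone decomposition (3)-(6) against v.w = v0*w0 - v.w (3-vectors).
import numpy as np

n = np.array([0.0, 0.0, 1.0])            # n = e_z

def light_cone(v):
    v0, vv = v[0], v[1:]
    v_plus  = 0.5 * (v0 + n @ vv)        # Eq. (3)
    v_minus = v0 - n @ vv                # Eq. (4)
    v_perp  = vv - (n @ vv) * n          # Eq. (5)
    return v_plus, v_minus, v_perp

rng = np.random.default_rng(0)
v, w = rng.normal(size=4), rng.normal(size=4)

minkowski = v[0]*w[0] - v[1:] @ w[1:]
vp, vm, vperp = light_cone(v)
wp, wm, wperp = light_cone(w)
light_cone_product = vp*wm + vm*wp - vperp @ wperp   # Eq. (6)
print(minkowski, light_cone_product)     # the two numbers coincide
```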
The amplitude \(\mathcal{S}(q,k)\) of the process described by the diagram in Fig. 1 involves two photon wavefunctions defined as
\[f_{q}^{\mu}(x)=\frac{1}{\sqrt{2q_{0}}}\mathrm{e}^{-iqx}\varepsilon^{\mu}(q), \tag{7}\]
where \(\varepsilon^{\mu}(q)\) is the polarization 4-vector. The amplitude can be represented in the form
\[\mathcal{S}(q,k)=\frac{1}{\sqrt{4q_{0}k_{0}}}\varepsilon_{\mu}(q)i\big{[} \Pi_{0}^{\mu\nu}(q,k)+\Pi^{\mu\nu}(q,k)\big{]}\varepsilon_{\nu}^{*}(k). \tag{8}\]
Here \(\Pi_{0}^{\mu\nu}(q,k)\) denotes the zero-field contribution to the polarization operator, which corresponds to the diagram with the free-electron Green's functions describing vacuum polarization in the absence of external fields. This contribution diverges and requires a usual renormalization procedure. Since this term does not affect the processes of vacuum birefringence and dichroism, our task is to compute the field-dependent part \(\Pi^{\mu\nu}(q,k)\), which is finite.
In what follows, we will evaluate \(\Pi^{\mu\nu}(q,k)\) by means of several different techniques mentioned above. As will be seen below, the polarization operator involving \(\xi\), \(\omega\), and \(q_{0}\) depends, in fact, only on \(\xi\) and the product \(\omega q_{0}\). We will consider \(\xi\) and \(\eta\equiv\omega q_{0}/m^{2}\) as two independent dimensionless parameters governing the processes of vacuum birefringence and dichroism. We will also introduce the so-called quantum nonlinearity parameter \(\chi=2\xi\eta\) which will be considered as a derived quantity \(\chi(\xi,\eta)\).
## III Nonperturbative analytical formulas
In the case of a plane-wave external background, it is possible to compute the polarization tensor analytically. In Ref. [13] it was done by means of the operator approach. In Ref. [14] the calculations were performed in the case of a monochromatic plane wave. Recently, in Ref. [23] the results of Refs. [13; 14] were confirmed by direct computations of the Feynman diagram in Fig. 1 with the aid of the exact Green's functions, which can be constructed from the Volkov solutions.
Here we will first employ the general expressions presented in Refs. [13; 14; 23]. Due to the symmetry of the external plane-wave field, it can only change the \(q_{+}\) component of the photon momentum, so the amplitude corresponding to the Feynman diagram in Fig. 1 contains \(\delta(k_{-}-q_{-})\delta(\mathbf{k}_{\perp}-\mathbf{q}_{\perp})\). It turns out that the cumbersome expressions for the amplitude derived in Refs. [13; 14; 23] become relatively simple in the particular case of a circularly polarized plane-wave background. Due to helicity conservation, the momentum component \(q_{+}\) can change only by \(\pm 2\omega\) or remain the same. This is not the case if the external field has a linear polarization since such a plane wave does not possess a well-defined helicity quantum number. Accordingly, the \(q_{+}\) momentum component of the photon may change by an arbitrary integer multiple of \(\omega\). The general expression for the setup described above
Figure 1: Feynman diagram describing the leading-order contribution to the photon polarization operator. The amplitude of the process is proportional to the fine-structure constant \(\alpha\) and exactly takes into account the interaction with the classical external background (double lines represent the dressed Green’s functions).
has the following form:
\[\Pi^{\mu\nu}(q,k)=-\frac{4\pi^{2}\alpha}{\omega}\delta(k_{-}-q_{-}) \delta(\mathbf{k}_{\perp}-\mathbf{q}_{\perp})\int\limits_{-1}^{1}dv\int\limits_{0}^{ \infty}\frac{d\tau}{\tau}\int\limits_{-\infty}^{\infty}d\varphi\,\mathrm{e}^{i \Phi}\begin{pmatrix}c&0&0&0\\ 0&b+\Delta b&0&0\\ 0&0&b&0\\ 0&0&0&c\end{pmatrix}, \tag{9}\]
where
\[b =\Big{(}\frac{i}{\tau}+\frac{1}{2}kq\Big{)}(1-\mathrm{e}^{i\tau \beta})+\frac{2m^{2}\tau\xi^{2}}{\mu}\,\mathrm{e}^{i\tau\beta}\sin^{2}(\mu \omega q_{0})\cos^{2}\varphi, \tag{10}\] \[\Delta b =2m^{2}\xi^{2}\Big{[}\mathrm{sinc}^{2}(\mu\omega q_{0})\sin^{2} \varphi-2\,\mathrm{sinc}(2\mu\omega q_{0})\sin^{2}\varphi-\sin^{2}(\mu\omega q _{0})+\sin^{2}\varphi\Big{]}\mathrm{e}^{i\tau\beta},\] (11) \[c =\frac{k_{0}q_{0}\mu}{\tau}\,(1-\mathrm{e}^{i\tau\beta}),\] (12) \[\mu =\frac{1}{2}\tau(1-v^{2}),\] (13) \[\Phi =\frac{k_{+}-q_{+}}{\omega}\varphi+\frac{1}{2}\mu kq-m^{2}\tau,\] (14) \[\beta =m^{2}\xi^{2}\bigg{[}\mathrm{sinc}^{2}(\mu\omega q_{0})\sin^{2} \varphi-\frac{1}{2}+\frac{1}{2}\,\mathrm{sinc}(2\mu\omega q_{0})\cos 2\varphi \bigg{]}. \tag{15}\]
In what follows, we will be interested only in the elastic process, where \(k_{+}=q_{+}\), as the other channels are significantly suppressed (actually, they represent reactions involving photon merging or splitting rather than the phenomenon of birefringence). To extract the particular process of elastic scattering, one has to isolate the zeroth-order Fourier harmonic of the \(\varphi\) dependence in the functions \(b\), \(\Delta b\), and \(c\), so the integration of \(\exp(i\Phi)\) yields the necessary delta-function. This can be straightforwardly attained with the aid of the Jacobi-Anger identity. The result reads
\[\Pi^{\mu\nu}_{\text{elastic}}(q,k)=-(2\pi)^{3}\alpha\delta(k-q) \int\limits_{-1}^{1}dv\int\limits_{0}^{\infty}\frac{d\tau}{\tau}\,\mathrm{e}^ {-im^{2}\tau}\begin{pmatrix}\tilde{c}&0&0&0\\ 0&\tilde{b}+\Delta\tilde{b}&0&0\\ 0&0&\tilde{b}&0\\ 0&0&0&\tilde{c}\end{pmatrix}, \tag{17}\]
where
\[\tilde{b} =\frac{i}{\tau}[1-\Xi J_{0}(A)]+\frac{m^{2}\tau\xi^{2}}{\mu}\sin ^{2}(\mu\omega q_{0})\Xi[J_{0}(A)+iJ_{1}(A)], \tag{18}\] \[\Delta\tilde{b} =m^{2}\xi^{2}\Xi\big{\{}-2\sin^{2}(\mu\omega q_{0})J_{0}(A)+[ \mathrm{sinc}^{2}(\mu\omega q_{0})-2\,\mathrm{sinc}(2\mu\omega q_{0})+1][J_{0 }(A)-iJ_{1}(A)]\big{\}},\] (19) \[\tilde{c} =\frac{q_{0}^{2}\mu}{\tau}\,[1-\Xi J_{0}(A)],\] (20) \[\Xi =\exp\biggl{\{}\frac{i}{2}m^{2}\tau\xi^{2}[\mathrm{sinc}^{2}(\mu \omega q_{0})-1]\biggr{\}},\] (21) \[A =\frac{1}{2}m^{2}\tau\xi^{2}[\mathrm{sinc}(2\mu\omega q_{0})- \mathrm{sinc}^{2}(\mu\omega q_{0})]. \tag{22}\]
Here \(J_{n}\) are the Bessel functions of the first kind. We will assume hereinafter \(k^{\mu}=q^{\mu}\). We also note that the elements \(\Pi^{00}\) and \(\Pi^{33}\) are equal, which preserves the gauge invariance and the Ward-Takahashi identity [31]. These components will not be evaluated in our study as they do not affect the phenomena under consideration.
The birefringent and dichroic properties of the vacuum in the presence of strong fields manifest themselves in the difference between \(\Pi^{11}\) and \(\Pi^{22}\) elements: photon polarizations along the \(x\) and \(y\) axes correspond to different refractive and absorption indexes. In what follows, we will compute these elements. As was stated above, these quantities involve the three parameters \(\xi\), \(\omega\), and \(q_{0}\), but they depend, in fact, on \(\xi\) and \(\eta=\omega q_{0}/m^{2}\) as becomes evident from Eqs. (17)-(22).
## IV Perturbation theory
Here we will consider the leading-order term of Eq. (17) with respect to the small-\(\xi\) expansion. This contribution is proportional to \(\xi^{2}\) and corresponds to the three Feynman diagrams displayed in Fig. 2. Expanding the function \(\Xi\) and the Bessel functions in Taylor series, one obtains
\[\tilde{b}_{\text{LO}} = m^{2}\xi^{2}\bigg{\{}\frac{1}{2}[\text{sinc}^{2}(\mu\omega q_{0})-1]+\frac{\tau}{\mu}\,\sin^{2}(\mu\omega q_{0})\bigg{\}}, \tag{23}\] \[\Delta\tilde{b}_{\text{LO}} = m^{2}\xi^{2}[-2\sin^{2}(\mu\omega q_{0})+\text{sinc}^{2}(\mu\omega q_{0})\] (24) \[- 2\,\text{sinc}(2\mu\omega q_{0})+1],\] \[\tilde{c}_{\text{LO}} = -\frac{i}{2}q_{0}^{2}\mu m^{2}\xi^{2}[\text{sinc}^{2}(\mu\omega q_{0})-1]. \tag{25}\]
Here "LO" stands for "low order". It turns out that one can replace \(\mu\) with Eq. (13) and perform the \(\tau\) integration analytically. Let us first introduce the following general representation:
\[\Pi^{\mu\nu}_{\text{elastic}}(q,k)=-(2\pi)^{3}\alpha\delta(k-q)m^{2}\xi^{2}M^{ \mu\nu}. \tag{26}\]
Within PT we find
\[M_{\text{LO}}^{11} = \int\limits_{-1}^{1}dv\bigg{[}\frac{2v^{2}}{1-v^{2}}\,I_{1}(v)+ \frac{1}{2}\,I_{2}(v)+I_{3}(v)\bigg{]}, \tag{27}\] \[M_{\text{LO}}^{22} = \int\limits_{-1}^{1}dv\bigg{[}\frac{2}{1-v^{2}}\,I_{1}(v)+\frac{1 }{2}\,I_{2}(v)\bigg{]}, \tag{28}\]
where
\[I_{1}(v) = \int\limits_{0}^{\infty}\frac{dt}{t}\,\sin^{2}(\gamma t)\text{e} ^{-it}=\frac{1}{4}\ln\big{|}1-4\gamma^{2}\big{|}-\frac{i\pi}{4}\theta(\gamma- 1/2), \tag{29}\] \[I_{2}(v) = \int\limits_{0}^{\infty}\frac{dt}{t}\,[\text{sinc}^{2}(\gamma t) -1]\text{e}^{-it}=\frac{3}{2}-\frac{1}{2}\bigg{(}1+\frac{1}{4\gamma^{2}} \bigg{)}\ln\big{|}1-4\gamma^{2}\big{|}-\frac{1}{2\gamma}\,\ln\bigg{|}\frac{1+2 \gamma}{1-2\gamma}\bigg{|}\] (30) \[+ \frac{i\pi}{2}\bigg{(}1-\frac{1}{\gamma}+\frac{1}{4\gamma^{2}} \bigg{)}\theta(\gamma-1/2),\] \[I_{3}(v) = \int\limits_{0}^{\infty}\frac{dt}{t}\,[1+\text{sinc}^{2}(\gamma t )-2\,\text{sinc}(2\gamma t)]\text{e}^{-it}=-\frac{1}{2}+\frac{1}{2}\bigg{(}1- \frac{1}{4\gamma^{2}}\bigg{)}\Big{[}\ln\big{|}1-4\gamma^{2}\big{|}-i\pi\theta( \gamma-1/2)\Big{]},\] (31) \[\gamma = \gamma(v)=\frac{\omega q_{0}}{2m^{2}}(1-v^{2})=\frac{1}{2}\eta(1 -v^{2}). \tag{32}\]
The expressions (27) and (28) depend only on \(\eta=\omega q_{0}/m^{2}\), while the nonperturbative values of \(M^{\mu\nu}\) [see Eq. (26)] also involve \(\xi\). Below we will compare the leading-order terms with the nonperturbative results. Let us now present the low- and high-energy asymptotic expressions for \(M_{\text{LO}}^{11}\) and \(M_{\text{LO}}^{22}\). In the low-energy case \(\varepsilon\equiv 2\eta=2\omega q_{0}/m^{2}\ll 1\),
\[M_{\text{LO}}^{11} = -\frac{4}{45}\varepsilon^{2}-\frac{17}{3150}\varepsilon^{4}+ \mathcal{O}(\varepsilon^{6}), \tag{33}\] \[M_{\text{LO}}^{22} = -\frac{7}{45}\varepsilon^{2}-\frac{131}{9450}\varepsilon^{4}+ \mathcal{O}(\varepsilon^{6}). \tag{34}\]
Figure 2: Feynman diagrams corresponding to the leading-order contribution within the PT expansion in terms of the external field (the amplitudes are proportional to \(\xi^{2}\)). The interaction with the classical external field is denoted by the cross. Depending on the energy-momentum transfer at the cross vertices, the process is either elastic (2-to-2 process) or corresponds to \(k\neq q\).
In the high-energy limit, we obtain [\(\varepsilon\equiv 1/(2\eta)=m^{2}/(2\omega q_{0})\ll 1\)]
\[M_{\text{LO}}^{11} =\frac{1}{2}\ln^{2}\varepsilon+\left(1-\ln 2+\frac{i\pi}{2} \right)\ln\varepsilon+\bigg{[}\frac{5}{2}-\ln 2+\frac{1}{2}\ln^{2}2-\frac{\pi^{2}}{4}+ \frac{i\pi}{2}(1-\ln 2)\bigg{]}\] \[+i\pi\varepsilon\ln\varepsilon+\bigg{[}-\frac{\pi^{2}}{2}+\frac{ i\pi}{2}(3-2\ln 2)\bigg{]}\varepsilon+\mathcal{O}(\varepsilon^{2}\ln^{2} \varepsilon), \tag{35}\] \[M_{\text{LO}}^{22} =\frac{1}{2}\ln^{2}\varepsilon+\left(1-\ln 2+\frac{i\pi}{2} \right)\ln\varepsilon+\bigg{[}\frac{7}{2}-\ln 2+\frac{1}{2}\ln^{2}2-\frac{\pi^{2}}{4}+ \frac{i\pi}{2}(1-\ln 2)\bigg{]}\] \[+i\pi\varepsilon\ln\varepsilon+\bigg{[}-\frac{\pi^{2}}{2}+\frac{ i\pi}{2}(1-2\ln 2)\bigg{]}\varepsilon+\mathcal{O}(\varepsilon^{2}\ln^{2} \varepsilon). \tag{36}\]
While the low-energy result (33), (34) is real, the expressions (35) and (36) possess imaginary parts, which describe the process of photon decay. The imaginary part of the difference \(\delta M_{\text{LO}}\equiv M_{\text{LO}}^{11}-M_{\text{LO}}^{22}\approx-1+i\pi\varepsilon\) governs the dichroic properties of the vacuum and appears once \(\eta>1\). In Sec. VI we will discuss how the imaginary part appears in a direct evaluation of the Feynman diagrams in Fig. 2.
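Since the integrands in (27)-(32) are given in closed form, the leading-order result is straightforward to evaluate numerically. The sketch below is only an illustration (it is not the code used for the figures): it integrates (27) and (28) over \(v\) for a small value of \(\eta\) and compares the result with the low-energy expansion (33), (34); for \(\eta>1\) the step functions in (29)-(31) switch on and generate the imaginary part discussed above.

```python
# Illustrative evaluation of the leading-order expressions (27)-(32) and comparison
# with the low-energy expansion (33), (34); eta = 0.1 is an arbitrary small test value.
import numpy as np
from scipy.integrate import quad

def I1(g):
    return 0.25*np.log(abs(1 - 4*g**2)) - 0.25j*np.pi*(g > 0.5)

def I2(g):
    return (1.5 - 0.5*(1 + 1/(4*g**2))*np.log(abs(1 - 4*g**2))
            - 1/(2*g)*np.log(abs((1 + 2*g)/(1 - 2*g)))
            + 0.5j*np.pi*(1 - 1/g + 1/(4*g**2))*(g > 0.5))

def I3(g):
    return -0.5 + 0.5*(1 - 1/(4*g**2))*(np.log(abs(1 - 4*g**2)) - 1j*np.pi*(g > 0.5))

def M_LO(eta):
    gam = lambda v: 0.5*eta*(1 - v**2)                    # Eq. (32)
    f11 = lambda v: 2*v**2/(1 - v**2)*I1(gam(v)) + 0.5*I2(gam(v)) + I3(gam(v))
    f22 = lambda v: 2/(1 - v**2)*I1(gam(v)) + 0.5*I2(gam(v))
    integ = lambda f: (quad(lambda v: f(v).real, -1, 1, limit=400)[0]
                       + 1j*quad(lambda v: f(v).imag, -1, 1, limit=400)[0])
    return integ(f11), integ(f22)

eta = 0.1
eps = 2*eta
M11, M22 = M_LO(eta)
print("numerical :", M11, M22)
print("expansion :", -4/45*eps**2 - 17/3150*eps**4, -7/45*eps**2 - 131/9450*eps**4)
```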
## V Locally-constant field approximation
Here we will employ relatively simple closed-form expressions treating the external background as locally constant. There are basically two different approaches. The first one is based on calculating the polarization tensor in constant crossed fields and then using the actual spatiotemporal dependence of the plane-wave field (1) when integrating over \(\varphi\). The second method employs the Heisenberg-Euler effective Lagrangian computed in a constant electromagnetic field and takes into account the leading-order quantum correction with respect to the field amplitude \(E_{0}\). The first approach is generally more accurate as it incorporates the higher-order terms in \(E_{0}\) and involves the expression for the polarization operator which is derived for arbitrary photon energies \(q_{0}\). The second technique based on the Heisenberg-Euler Lagrangian is only valid for sufficiently low photon energies, when there is only a small momentum transfer into the \(e^{+}e^{-}\) loop in the diagram in Fig. 1. Besides, the applicability of this method is limited since it involves the PT expansion with respect to the field amplitude. In what follows, we will describe both approaches and then thoroughly analyze their validity.
### Polarization operator in constant crossed fields
In the setup under consideration, the vector potential (1) is assumed to be a monochromatic plane wave (2). If one replaces \(\sin~{}\varphi\) in Eq. (2) with \(\varphi\), the external background will obviously become a combination of _constant crossed_ electric and magnetic fields, \(E_{x}=B_{y}=-E_{0}\). In this case, one can also perform nonperturbative calculations of the polarization tensor [32; 33; 34] and then locally approximate a generic external background by constant crossed fields [23]. Applying this technique to the field configuration (2), one obtains
\[M_{\text{LCFA}}^{11} =\frac{1}{3\pi\xi^{2}}\int\limits_{-1}^{1}dv\left(\frac{\chi}{w} \right)^{2/3}\bigl{(}w-1\bigr{)}g(v), \tag{37}\] \[M_{\text{LCFA}}^{22} =\frac{1}{3\pi\xi^{2}}\int\limits_{-1}^{1}dv\left(\frac{\chi}{w} \right)^{2/3}\bigl{(}w+2\bigr{)}g(v), \tag{38}\]
where \(\chi=2\xi\eta\), \(w=4/(1-v^{2})\), and
\[g(v) =\int\limits_{-\pi}^{\pi}d\varphi f^{\prime}(u)(\cos\varphi)^{2 /3}, \tag{39}\] \[u =\left(\frac{w}{\chi\cos\varphi}\right)^{2/3},\] (40) \[f(u) =i\int\limits_{0}^{\infty}d\tau\mathrm{e}^{-i(u\tau+\tau^{3}/3)} =\pi\mathrm{Gi}(u)+i\pi\mathrm{Ai}(u). \tag{41}\]
Here \(\mathrm{Gi}\) and \(\mathrm{Ai}\) are the Scorer and Airy functions, respectively.
Note that the integrals in Eqs. (37) and (38) depend only on \(\chi\), i.e. the product \(\xi\eta\), which simplifies the further analysis. This fact is a well-known property of the LCFA [35]. This approximation is well justified if the parameter \(\xi\) is sufficiently large, so one can expect that the predictions (37) and (38) significantly differ from the exact nonperturbative result given in Eq. (17) once \(\xi\lesssim 1\). This issue will be discussed in detail in Sec. VII.
Finally, we present the asymptotic forms of Eqs. (37) and
(38) in the case \(\chi\ll 1\). One obtains
\[\mathrm{Re}\,M_{\mathrm{LCFA}}^{11} =-\frac{4\chi^{2}}{45\xi^{2}}\bigg{[}1+\frac{1}{4}\,\chi^{2}+ \mathcal{O}(\chi^{4})\bigg{]}, \tag{42}\] \[\mathrm{Re}\,M_{\mathrm{LCFA}}^{22} =-\frac{7\chi^{2}}{45\xi^{2}}\bigg{[}1+\frac{13}{49}\,\chi^{2}+ \mathcal{O}(\chi^{4})\bigg{]},\] (43) \[\mathrm{Im}\,M_{\mathrm{LCFA}}^{11} =-\frac{3\chi^{3/2}}{8\xi^{2}}\sqrt{\frac{\pi}{2}}\,\mathrm{e}^{- 8/(3\chi)}\big{[}1+\mathcal{O}(\chi)\big{]},\] (44) \[\mathrm{Im}\,M_{\mathrm{LCFA}}^{22} =-\frac{3\chi^{3/2}}{4\xi^{2}}\sqrt{\frac{\pi}{2}}\,\mathrm{e}^{- 8/(3\chi)}\big{[}1+\mathcal{O}(\chi)\big{]}. \tag{45}\]
For small \(\chi\) the imaginary part is exponentially suppressed corresponding to tiny probabilities of the photon decay. Note that the ratio \(\chi/\xi\) coincides with \(\varepsilon=2\eta\) in Eqs. (33) and (34), so the leading-order contribution is reproduced by the LCFA. Nevertheless, the validity of the LCFA and that of the PT expansion correspond to substantially different domains of parameters. Whereas for given \(\xi\) they both are accurate for sufficiently small \(\eta<\eta_{\mathrm{max}}(\xi)\), with increasing \(\xi\) the bound \(\eta_{\mathrm{max}}(\xi)\) increases in the case of the LCFA and decreases in the case of PT. This will be quantitatively demonstrated in Sec. VII. Finally, we note that both the LCFA and PT capture the imaginary part of the polarization tensor.
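To illustrate the magnitude of the effect, the small-\(\chi\) asymptotics (42)-(45) can be evaluated directly. The snippet below is a toy illustration (the values of \(\chi\) and \(\xi\) are arbitrary); it shows the quadratic growth of \(\mathrm{Re}\,\delta M\) and the strong exponential suppression of \(\mathrm{Im}\,\delta M\) by the factor \(\mathrm{e}^{-8/(3\chi)}\).

```python
# Toy evaluation of the small-chi asymptotics (42)-(45) for delta M = M^11 - M^22.
import numpy as np

def delta_M_small_chi(chi, xi):
    re = (-4*chi**2 + 7*chi**2) / (45*xi**2)                               # leading terms of (42), (43)
    im = (-3/8 + 3/4)*np.sqrt(np.pi/2)*chi**1.5*np.exp(-8/(3*chi))/xi**2   # leading terms of (44), (45)
    return re, im

for chi in (0.05, 0.1, 0.2):
    print(chi, delta_M_small_chi(chi, xi=10.0))
```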
### Heisenberg-Euler approximation
Another approach is based on the PT expansion of the polarization operator derived from the one-loop effective Lagrangian in the presence of a constant electromagnetic background [22]. The approximate formula for the \(\xi^{2}\) contribution to the polarization tensor has the following form:
\[\Pi_{\mathrm{LCFA-HE}}^{\mu\nu}(q,k)=\frac{\alpha}{45\pi}\frac{e^{2}}{m^{4}} \int\!d^{4}x\,\mathrm{e}^{i(k-q)x}\Big{[}4(qF)^{\mu}(kF)^{\nu}+7(qG)^{\mu}(kG) ^{\nu}\Big{]}. \tag{46}\]
Here \((kF)^{\mu}\equiv k_{\rho}F^{\rho\mu}\). The electromagnetic tensor \(F_{\mu\nu}=\partial_{\mu}\mathcal{A}_{\nu}-\partial_{\nu}\mathcal{A}_{\mu}\) and the dual tensor \(G^{\mu\nu}=(1/2)\varepsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\) are evaluated at the spacetime point \(x\) according to the local treatment of the external field. In the case of the plane-wave background (1), the integrals in Eq. (46) lead to the conservation laws which may change the photon momentum by \(\pm 2\omega\) or keep it the same. We are interested in the latter contribution governing the elastic process. The explicit form of Eq. (46) then reads
\[\Pi_{\mathrm{LCFA-HE, elastic}}^{\mu\nu}(q,k)=\frac{32\pi^{3}\alpha}{45}m^{2} \xi^{2}\bigg{(}\frac{\omega q_{0}}{m^{2}}\bigg{)}^{2}\delta(k-q)\begin{pmatrix} 0&0&0&0\\ 0&4&0&0\\ 0&0&7&0\\ 0&0&0&0\end{pmatrix}. \tag{47}\]
This exactly corresponds to the leading low-energy terms in Eqs. (33) and (34) and to the leading-order terms in Eqs. (42) and (43). In what follows, they will be denoted by \(M_{\mathrm{LCFA-LO}}^{11}\) and \(M_{\mathrm{LCFA-LO}}^{22}\), respectively. Note that the leading-order LCFA expressions completely disregard the imaginary part of the polarization tensor, i.e., fail to describe the process of dichroism.
## VI Direct evaluation of the Feynman diagrams
Here we will directly compute the Feynman diagrams depicted in Fig. 2. The corresponding amplitudes and accordingly the contributions to the polarization tensor are proportional to \(E_{0}^{2}\), i.e. \(\xi^{2}\) [cf. Eq. (26)]. Each interaction vertex involves the energy-momentum transfer with the four-vector \(\pm K\), where \(K^{\mu}\equiv\omega n^{\mu}\) is the four-momentum of the photons that constitute the external plane wave. As we are interested in studying the elastic contributions, the two vertices in each diagram should correspond to one emission and one absorption, so the diagram represents essentially a two-to-two scattering process. Since one has to evaluate three diagrams, the leading-order matrix \(M_{\mathrm{LO}}^{\mu\nu}\) is a sum of three terms, \(M_{\mathrm{LO}}^{\mu\nu}=M_{1}^{\mu\nu}+M_{2}^{\mu\nu}+M_{3}^{\mu\nu}\). Considering, for instance, the first diagram and using the Feynman rules, we obtain the following expression for \(M_{1}^{\mu\nu}\):
\[M_{1}^{\mu\nu}=-\frac{i}{8\pi^{2}}\sum_{s=\pm 1}\int d^{4}p\mathrm{Tr}\, \big{[}\gamma^{\nu}S(p+q/2-sK/2)\gamma^{1}S(p+q/2+sK/2)\gamma^{\mu}S(p-q/2+sK /2)\gamma^{1}S(p-q/2-sK/2)\big{]}. \tag{48}\]
Here \(s\) indicates at which of the two vertices the external-field photon is emitted (absorbed). The integration variables \(p^{\mu}\) are shifted, so that the integrand has a more symmetric form
(cf. Ref. [36]). The electron propagator is given by
\[S(p)=\frac{\gamma^{\mu}p_{\mu}+m}{m^{2}-p^{2}-i\varepsilon}, \tag{49}\]
where \(\varepsilon\to 0\).
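To make the structure of Eq. (48) concrete, the integrand can be assembled explicitly from \(4\times 4\) Dirac matrices and the propagator (49). The sketch below is not the code used in this work; the kinematics are arbitrary test numbers, and it only shows the building block to which the \(z\) and \(\mathbf{p}\) quadrature described next is applied.

```python
# Minimal sketch (illustrative) of the integrand of Eq. (48): Dirac matrices in the
# standard representation, the free propagator (49), and the fermion-loop trace.
import numpy as np

I2m, Zm = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2m, Zm], [Zm, -I2m]]).astype(complex)
gamma = [g0] + [np.block([[Zm, s], [-s, Zm]]) for s in (sx, sy, sz)]   # gamma^0 ... gamma^3

m, eps = 1.0, 1e-6                        # electron mass and the i*eps prescription

def slash(p):                             # gamma^mu p_mu with metric (+,-,-,-)
    return p[0]*gamma[0] - p[1]*gamma[1] - p[2]*gamma[2] - p[3]*gamma[3]

def S(p):                                 # free electron propagator, Eq. (49)
    p2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2
    return (slash(p) + m*np.eye(4)) / (m**2 - p2 - 1j*eps)

def trace_M1(p, q, K, mu, nu):            # trace in Eq. (48), summed over s = +1, -1
    out = 0.0 + 0.0j
    for s in (+1, -1):
        out += np.trace(gamma[nu] @ S(p + q/2 - s*K/2) @ gamma[1] @ S(p + q/2 + s*K/2)
                        @ gamma[mu] @ S(p - q/2 + s*K/2) @ gamma[1] @ S(p - q/2 - s*K/2))
    return out

q0 = 0.4                                  # arbitrary test photon energy (here q0 = omega)
q = np.array([q0, 0.0, 0.0, -q0])         # probe photon along -z
K = np.array([q0, 0.0, 0.0, q0])          # external-field photon K = omega*n along +z
p = np.array([0.3, 0.1, -0.2, 0.5])       # arbitrary loop-momentum sample point
print(trace_M1(p, q, K, mu=1, nu=1))
```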
One can explicitly verify that the total expression for \(M_{\text{LO}}^{\mu\nu}\) depends only on the product \(\omega q_{0}\), i.e. \(\eta=\omega q_{0}/m^{2}\), in accordance with Eqs. (27) and (28). Therefore, we will assume that \(q_{0}=\omega=\sqrt{\eta}m\), so \(\mathbf{K}=-\mathbf{q}\). Then Eq. (48) takes the form
\[M_{1}^{\mu\nu} = -\frac{i}{8\pi^{2}}\int\limits_{-\infty}^{\infty}dz\int d^{3}\bm {p}\operatorname{Tr}\big{[}\gamma^{\nu}S(z,\mathbf{p}+\mathbf{q})\gamma^{1}S(z+q_{0}, \mathbf{p})\gamma^{\mu}S(z,\mathbf{p}-\mathbf{q})\gamma^{1}S(z-q_{0},\mathbf{p}) \tag{50}\] \[+ \gamma^{\nu}S(z+q_{0},\mathbf{p})\gamma^{1}S(z,\mathbf{p}+\mathbf{q})\gamma^ {\mu}S(z-q_{0},\mathbf{p})\gamma^{1}S(z,\mathbf{p}-\mathbf{q})\big{]}.\]
The trace contains denominators that for each \(\mathbf{p}\) turn to zero at complex points \(z\) with small nonzero imaginary parts for nonzero values of \(\varepsilon\). After the \(\mathbf{p}\) integration, the trace as a function of \(z\) possesses six branch cuts depicted in Fig. 3 for \(q_{0}<m\). The \(z\) integration over the real axis in Eq. (50) can be, in fact, performed over any contour like that displayed in Fig. 3, provided it does not intersect any of the branch cuts. In the case \(q_{0}<m\) (\(\eta<1\)), one can, for instance, rotate the contour, so that it coincides with the imaginary axis. Substituting then \(z=iw\), where \(w\in\mathbb{R}\), one can explicitly demonstrate that the total contribution \(M_{\text{LO}}^{\mu\nu}=M_{1}^{\mu\nu}+M_{2}^{\mu\nu}+M_{3}^{\mu\nu}\) is real in accordance with Eqs. (27) and (28).
In order to address the high-energy case \(\eta>1\), we employ the following numerical procedure. We change the order of the \(z\) and \(\mathbf{p}\) integrations and first integrate over \(z\in\mathbb{R}\). Accordingly, the \(z\) integrand has a number of isolated poles \(\xi_{j}-i\sigma\varepsilon\), where \(\sigma=\pm 1\) and the real parts \(\xi_{j}\) depend on \(\mathbf{p}\). In each vicinity \((\xi_{j}-\delta,\xi_{j}+\delta)\) we perform the integration semi-analytically by means of the Sokhotski-Plemelj identity. This allows us to set \(\varepsilon=0\) while performing the remaining integrations numerically and to avoid computational singularities.
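The pole treatment can be illustrated with a one-dimensional toy integral. For a pole lying slightly below the real axis, the Sokhotski-Plemelj identity gives \(\int f(x)/(x-x_{0}-i\varepsilon)\,dx\to\mathrm{P.V.}\!\int f(x)/(x-x_{0})\,dx+i\pi f(x_{0})\) as \(\varepsilon\to 0\). The snippet below is a generic illustration (not the implementation used in this work) comparing the finite-\(\varepsilon\) quadrature with the principal-value-plus-residue evaluation.

```python
# Generic illustration of the Sokhotski-Plemelj splitting used to handle isolated poles.
import numpy as np
from scipy.integrate import quad

f  = lambda x: np.exp(-x**2)              # smooth test function
x0 = 0.7                                  # pole position (plays the role of xi_j)

# direct quadrature with a small but finite eps: 1/(x - x0 - i*eps)
eps = 1e-4
re = quad(lambda x: f(x)*(x - x0)/((x - x0)**2 + eps**2), -10, 10, points=[x0], limit=400)[0]
im = quad(lambda x: f(x)*eps/((x - x0)**2 + eps**2), -10, 10, points=[x0], limit=400)[0]
direct = re + 1j*im

# principal value (eps = 0) plus the residue term i*pi*f(x0)
pv = quad(f, -10, 10, weight='cauchy', wvar=x0, limit=400)[0]
semi_analytic = pv + 1j*np.pi*f(x0)

print("finite-eps quadrature :", direct)
print("PV + i*pi*f(x0)       :", semi_analytic)
```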
Our procedure was also generalized to compute the diagrams for arbitrary independent \(q_{0}\) and \(\omega\). The main steps here are generally the same. After that, we confirmed the results obtained by means of the technique described above. Finally, we note that the expression (50) has a similar form to the amplitude of photon emission via the so-called tadpole diagram (see Ref. [37], where it was evaluated in the regime \(\eta<1\)).
## VII Numerical results
We will now perform numerical calculations of the difference \(\delta M\equiv M^{11}-M^{22}\), whose real and imaginary parts govern the effects of vacuum birefringence and dichroism, respectively. First, we will evaluate \(\delta M\) within the leading order with respect to the field amplitude. In this case, the results do not depend on \(\xi\). In Fig. 4 we present \(\delta M\) as a function of \(\eta\). First, one observes that the Heisenberg-Euler approximation within the leading order of perturbation theory can be accurate only in the low-energy regime. If one takes into account the \(\eta^{4}\) terms according to Eqs. (33) and (34), the results become slightly more accurate, although they completely fail to reproduce the full PT results for \(\eta>1\). Second, the more general expressions (27) and (28) yield a nonzero imaginary part for \(\eta>1\), so the PT approach may allow one to describe the effects of dichroism. Finally, we note that our approach based on direct computations of the Feynman diagrams as described in Sec. VI provides exactly the same results as Eqs. (27) and (28), which benchmarks the corresponding numerical procedures. To judge whether the leading-order approximation is justified, one has to perform nonperturbative calculations for various values of \(\xi\), which will be done next.
In Fig. 5 we display the real and imaginary parts of \(\delta M\) as a function of \(\eta\) for three different values of \(\xi\): 0.1, 1.0, and 10.0. We refer to Eq. (17) as the _exact_ result. First, we observe that the \(\eta\) dependence changes in a very nontrivial way as a function of \(\xi\), which cannot be taken into account by means of the PT approach. Whereas for \(\xi\ll 1\) this approximation indeed provides very accurate results within a broad range of \(\eta\), for \(\xi\gtrsim 1\) it fails to reproduce the exact values unless \(\eta\ll 1\). Second, as was mentioned above, the LCFA predictions have the form \(\delta M_{\text{LCFA}}(\xi,\eta)=(1/\xi^{2})\delta M_{\text{LCFA}}(1,\xi\eta)\), so the
Figure 3: Branch cuts (red) of the electron propagators in the case \(q_{0}<m\) before the \(z\) integration in Eq. (50) and a possible integration contour (blue).
different LCFA curves can be obtained by simply rescaling the plot axes. This approach does not allow one to describe the nontrivial structure that takes place for \(\xi\lesssim 1\), although it is accurate for very small \(\eta\), where the expansions (42) and (43) are valid.
Let us now quantitatively identify the domains of validity of various approximations for describing the vacuum birefringence effects. In Fig. 6 we identify the values of \(\xi\) and \(\eta\) for which the approximate predictions match the exact results with a relative uncertainty on the level of \(10\%\). First, let us discuss the PT approach, which yields the leading-order estimates (27) and (28). In the regime \(\xi\gg 1\), it is only valid for \(\eta\ll 1\). It turns out that in the corresponding domain of parameters \(\chi\lesssim 0.5\). Since for large values of \(\xi\) one can employ the LCFA, it is possible to estimate the exact result for the real part of \(M^{\mu\nu}\) by means of Eqs. (42) and (43). Comparing these with the low-energy asymptotic expansions (33) and (34), one can obtain the threshold value of \(\chi\). For instance, requiring that the relative uncertainty of PT be less than \(10\%\), one obtains \(\chi<\sqrt{(7/2)0.1}\approx 0.59\). According to our numerical analysis, this condition, in fact, reads \(\chi<0.55\). In the regime \(\xi\lesssim 1\), the validity of the LCFA (37), (38) is very limited, so one has to directly compare the leading-order PT results with the nonperturbative predictions. In this domain, the applicability of perturbation theory is not solely governed by \(\chi\) as can be seen in Fig. 6, where the domain of the PT applicability is no longer bounded by a straight line. Finally, we note that in the region \(\xi\lesssim 1\), even if the PT approach fails to reproduce the exact results for \(\eta\sim 1\), it may provide quite accurate predictions for sufficiently large values of \(\eta\), where \({\rm Re}\ \delta M\) becomes close to \(-1\) [see Fig. 5 (middle)]. Moreover, in this region the nonzero imaginary part of the polarization operator can also be obtained by means of perturbation theory.
In order to identify the validity domain of the leading-order Heisenberg-Euler approximation (47), it is sufficient to compare its predictions with the leading-order PT result (27), (28). Since within these approaches the matrix \(M^{\mu\nu}\) is independent of \(\xi\), one should only determine the threshold value of \(\eta\). For the \(10\%\) uncertainty level, it amounts to \(\eta_{\text{max}}\approx 0.44\). The validity domain of the Heisenberg-Euler approximation is then the intersection of the region \(\eta<0.44\) and the validity domain of the PT approach.
The applicability of the LCFA (37), (38) corresponds to a much larger region than that where the Heisenberg-Euler approximation is justified. It not only describes the effect of birefringence in the low-energy domain but is also valid in the case of high-energy probe photons (\(\eta\gtrsim 1\)), provided \(\xi\gg 1\).
As was indicated above, the imaginary part of the polarization tensor, which is responsible for dichroic properties of the vacuum, cannot be estimated by means of the leading-order Heisenberg-Euler approximation (47). Nevertheless, both the PT approach and the LCFA (37), (38) are very useful here -- they can be employed within the corresponding regions indicated in Fig. 6.
According to our results, the validity domain of the Heisenberg-Euler approximation is the smallest. The corresponding results can always be additionally confirmed by either perturbation theory or the LCFA based on the calculation of the polarization operator in constant crossed fields. The advantage of the latter approach is the possibility to consider larger values of \(\eta\) once \(\xi\gtrsim 1\). Note also that a considerable part of the plot in Fig. 6 relates to large values of the parameter \(\chi\), which are not realistic at present. Nevertheless, given the logarithmic scale in the graph, the LCFA covers a domain of parameters which is substantially broader than the validity region of the Heisenberg-Euler approximation. The PT approach is always accurate once the LCFA-HE technique is justified. In addition, the leading-order predictions coincide with the exact results for any values of \(\eta\) if \(\xi\) is sufficiently small.
Figure 4: Real and imaginary parts of the difference \(\delta M\equiv M^{11}-M^{22}\) calculated within the leading-order of perturbation theory [Eqs. (27) and (28)], by means of the Heisenberg-Euler approximation (47) and according to the low-energy expansions (33) and (34). The latter two approaches yield zero imaginary part.
Figure 5: Real and imaginary parts of the difference \(\delta M\equiv M^{11}-M^{22}\) evaluated within the leading-order of perturbation theory (LO), by means of the LCFA [Eqs. (37) and (38)] and exact nonperturbative expression (17) for \(\xi=0.1\) (top), \(\xi=1.0\) (middle), \(\xi=10.0\) (bottom). For \(\xi=0.1\) the “LO” and exact curves coincide.
## VIII Conclusion
In the present investigation, we examined the effects of vacuum birefringence and dichroism in strong plane-wave backgrounds by means of several theoretical methods allowing one to evaluate the leading one-loop contribution to the polarization operator. First, we employed closed-form expressions exactly incorporating the interaction between the electron-positron field and the classical external background depending on the spatiotemporal coordinates. Second, we performed calculations within the leading order with respect to the field amplitude, i.e., by means of perturbation theory. This was done by expanding the nonperturbative result and by means of our numerical method based on a direct evaluation of the leading-order Feynman diagrams. It was found that these two approaches yield identical quantitative predictions both for the real and imaginary parts of the polarization tensor. Varying the field parameters and the probe-photon energy, we examined the validity of the perturbative methods. Third, we utilized the locally-constant field approximation (LCFA) in two different forms: the Heisenberg-Euler approximation and the technique involving exact expressions for the polarization operator in constant crossed fields. By comparing the approximate predictions with the exact results, we explicitly identified the field and probe-photon parameters for which each of the approximate techniques is justified.
An important prospect for future studies is the analogous analysis beyond the plane-wave scenario, where the exact analytical expressions are unknown. In this case, for instance, the applicability of the LCFA may be additionally limited if the external electric and magnetic fields are not crossed in contrast to the field configuration examined in the present investigation.
###### Acknowledgements.
The study was funded by RFBR and ROSATOM, project No. 20-21-00098. I.A.A. also acknowledges the support from the Foundation for the advancement of theoretical physics and mathematics "BASIS".
|
2305.14932 | Defect-Defect Interactions in the Buckling of Imperfect Spherical Shells | We perform finite element simulations to study the impact of defect-defect
interactions on the pressure-induced buckling of thin, elastic, spherical
shells containing two dimpled imperfections. Throughout, we quantify the
critical buckling pressure of these shells using their knockdown factor. We
examine cases featuring either identical or different geometric defects and
systematically explore the parameter space, including the angular separation
between the defects, their widths and amplitudes, and the radius-to-thickness
ratio of the shell. As the angular separation between the defects is increased,
the buckling strength initially decreases, then increases before reaching a
plateau. Our primary finding is that the onset of defect-defect interactions,
as quantified by a characteristic length scale associated with the onset of the
plateau, is set by the critical buckling wavelength reported in the classic
shell-buckling literature. Beyond this threshold, within the plateau regime,
the buckling behavior of the shell is dictated by the largest defect. | Fani Derveni, Arefeh Abbasi, Pedro M. Reis | 2023-05-24T09:14:47Z | http://arxiv.org/abs/2305.14932v1 | # Defect-Defect Interactions in the Buckling of Imperfect Spherical Shells
###### Abstract
We perform finite element simulations to study the impact of defect-defect interactions on the pressure-induced buckling of thin, elastic, spherical shells containing two dimpled imperfections. Throughout, we quantify the critical buckling pressure of these shells using their knockdown factor. We examine cases featuring either identical or different geometric defects and systematically explore the parameter space, including the angular separation between the defects, their widths and amplitudes, and the radius-to-thickness ratio of the shell. As the angular separation between the defects is increased, the buckling strength initially decreases, then increases before reaching a plateau. Our primary finding is that the onset of defect-defect interactions, as quantified by a characteristic length scale associated with the onset of the plateau, is set by the critical buckling wavelength reported in the classic shell-buckling literature. Beyond this threshold, within the plateau regime, the buckling behavior of the shell is dictated by the largest defect.
**Dedication:** We dedicate this manuscript to Prof. Kyung-Suk Kim, a truly inspiring scholar in our Mechanics community and a beacon of inspiration, rigor, creativity, and intellectual generosity. Prof. Kim's mastery of opening new research directions and revisiting classic problems, always with fresh eyes, have been a constant source of inspiration for us. The corresponding author is especially grateful to Prof. Kim for the exceptional support, guidance, and mentoring he received over the years.
## I Introduction
The buckling of elastic shell structures is highly sensitive to imperfections [1; 2; 3]; a problem that is relevant across length scales, from viruses [4] and colloidal capsules [5] to large storage tanks [6]. Even if this is a long-standing classic subject [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], the past decade has seen a revival in the study of the buckling of shells and their imperfection sensitivity [19]. For a historical perspective and a more thorough contextualization of the modern account of single-defect shell buckling, we direct the reader to Refs. [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33].
The canonical question, which remains challenging despite decades of research, is: _What are the critical conditions for the buckling of an imperfect shell?_ Recently, in an effort to address this question, an experimental technique has been developed for fabricating spherical shells containing a single dimpled imperfection, which can be engineered precisely [34]. Subsequent buckling studies utilizing this model system demonstrated that if the geometry of the imperfection is characterized in detail, the critical pressure can be predicted accurately, either using the Finite Element Method (FEM) or via numerical solutions of the shell-theory equations [35]. The knockdown factor, defined as the ratio between the critical buckling pressure of the imperfect shell and that of the equivalent perfect one[7], is the commonly used metric in these studies. For realistic shells, predicting the knockdown factor, which is always less than unity, is notoriously challenging.
Beyond the model system of a single-defect shell, we have recently investigated the more realistic case of a large number of geometric imperfections distributed randomly over the surface of a spherical shell [36]. Importantly, we showed that, given an input log-normal distribution for the amplitude of defects, the resulting knockdown factor is described by a 3-parameter Weibull distribution, a finding that places shell buckling in the broader class of extreme-value statistics phenomena [37; 38; 39; 40; 41; 42]. In that study, we also found that interactions between two adjacent defects, depending on the defect-to-defect separation, can potentially strengthen or weaken the shell in comparison to the single-defect case. A similar problem exists for cylindrical shells, with either a single defect [43; 44; 45; 29; 25; 42] or a distribution of defects [46; 47; 48; 49]. Even though there have been some studies on the buckling of cylindrical shells containing two defects [50; 51], to the best of our knowledge, a systematic exploration of defect-defect interactions in the buckling of _spherical_ shells has not been tackled to date.
Here, we study the buckling of imperfect hemispherical shells containing two dimpled defects. The geometric properties of these two imperfections can be either identical or different. Methodologically, we conduct FEM simulations, which have been previously validated thoroughly against experiments [36]. First, we focus on how the angular separation between the two defects affects the knockdown factor, characterizing how the interaction regime is impacted by the width and amplitude of the imperfections. Then, we compare the threshold of the defect-defect separation for the onset of interactions to the theoretical prediction of the full wavelength of the classic critical buckling wavelength for a spherical shell [52]. Our main finding is that the arc length associated with the defect-defect interaction threshold depends
directly on the radius-to-thickness ratio of the shell, scaling linearly with this critical buckling wavelength.
Our paper is organized as follows. First, in Sec. II, we define the problem at hand and outline the research questions. Next, in Sec. III, we describe the FEM simulations employed in our study. In Sec. IV, we present a first set of results on the influence of the radius-to-thickness ratio on the buckling behavior of shells containing two defects. More detailed results for shells with identical defects are provided in Sec. V and with different defects in Sec. VI. Finally, in Sec. VII, we summarize the conclusions of our study and offer suggestions for future research directions.
## II Problem definition
We consider a thin, elastic, and hemispherical shell of radius, \(R\), and thickness, \(t\), as illustrated in Fig. 1(a,b). The shell is clamped at the equator and contains _two_ geometric imperfections. In their undeformed configuration, each defect is shaped as a Gaussian dimple, with the following radial deviation from the perfect spherical geometry:
\[\hat{w}_{i}(\alpha)=-\delta_{i}e^{-(\alpha/\alpha_{i})^{2}}, \tag{1}\]
where the indices \(i=\{1,2\}\) represent each of the two defects, \(\alpha\) is the local angular distance corresponding to each defect (measured from their centers), \(\alpha_{i}\) is the half-angular width of the \(i\)th defect, and \(\delta_{i}\) is its amplitude (maximum radial deviation of the mid-surface of the shell). The global angular (zenith) coordinate, \(\beta\), is defined from the pole (\(\beta=0\)), where the first defect (\(i=1\)) is always located. The other defect is at \(\beta_{2}\). Following conventional practice in shell-buckling studies [53; 10], the defect amplitude of each defect is normalized as \(\overline{\delta}_{i}=\delta_{i}/t\), while the width is normalized as \(\lambda_{i}=\left[12(1-\nu^{2})\right]^{1/4}(R/t)^{1/2}\,\alpha_{i}\). Here, \(\nu\) is the Poisson's ratio of the material. The shell thickness, \(t\), is kept constant throughout so that we focus only on geometric imperfections, unlike previous work on through-thickness defects [28] or elasto-plastic dents [54].
First, we will analyze shells containing two identical defects: \(\lambda=\lambda_{1}=\lambda_{2}\) and \(\overline{\delta}=\overline{\delta}_{1}=\overline{\delta}_{2}\). Subsequently, we will consider the scenario of two different defects; \(\lambda_{1}\neq\lambda_{2}\) and/or \(\overline{\delta}_{1}\neq\overline{\delta}_{2}\). Since the \(i=1\) defect is always positioned at the shell pole (\(\beta=0\)) and the \(i=2\) defect is at \(\beta_{2}\), the angular separation (center-to-center) between the two defects is \(\varphi_{(1,2)}=\beta_{2}\). To facilitate the discussion on defect-defect interactions later in the manuscript, it is important to define an alternative angular separation:
\[\varphi_{(1,2)}^{*}=\varphi_{(1,2)}-m\frac{\alpha_{1}+\alpha_{2}}{\sqrt{2}}, \tag{2}\]
where \(m=\{1,\,2,\,3\}\) is an integer. The different values of \(m\) correspond to successively excluding wider portions from the core of the defects when considering their angular separation. A more comprehensive discussion on this point will be provided in Sec. V. Finally, recalling Eq. (1), the combined profile of a shell with two dimples is
\[\hat{w}(\beta,\,\theta)=\hat{w}_{1}(0,\,0)+\hat{w}_{2}(\varphi_{(1,2)},\, \theta_{2}), \tag{3}\]
where \(\beta\) and \(\theta\) are the _global_ zenith and azimuthal spherical (polar) coordinates, respectively.
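For concreteness, a minimal numerical sketch of the imperfection geometry defined by Eqs. (1)-(3) is given below (Python/NumPy). It is our own illustration and not part of the simulation workflow; the parameter values (\(R=25.4\) mm, \(R/t=100\), \(\nu=0.5\)) are simply the reference values quoted in Sec. III, and the function evaluates the combined profile along the great circle passing through both defect centers.

```python
import numpy as np

# Geometry and material parameters (illustrative values from Sec. III)
R, R_over_t, nu = 25.4, 100.0, 0.5           # radius [mm], R/t, Poisson's ratio
t = R / R_over_t                             # shell thickness [mm]

def alpha_from_lambda(lam):
    """Half-angular width alpha_i [rad] from the normalized width lambda_i."""
    return lam / ((12.0 * (1.0 - nu**2))**0.25 * np.sqrt(R / t))

def dimple(alpha_local, delta_bar, lam):
    """Radial deviation w_hat_i(alpha) of a single Gaussian dimple, Eq. (1)."""
    alpha_i = alpha_from_lambda(lam)
    return -delta_bar * t * np.exp(-(alpha_local / alpha_i)**2)

def two_dimple_profile(beta, delta_bars=(1.5, 1.5), lams=(1.0, 1.0), phi12_deg=14.0):
    """Combined deviation w_hat(beta), Eq. (3), along the great circle through both defects.

    Defect 1 sits at the pole (beta = 0); defect 2 is centered at beta = phi12."""
    phi12 = np.deg2rad(phi12_deg)
    w1 = dimple(beta, delta_bars[0], lams[0])
    w2 = dimple(beta - phi12, delta_bars[1], lams[1])
    return w1 + w2

beta = np.deg2rad(np.linspace(-60.0, 60.0, 1201))   # zenith-angle range used in Fig. 1
w_hat = two_dimple_profile(beta)
print(f"max |w_hat| = {np.abs(w_hat).max():.3f} mm (t = {t:.3f} mm)")
```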
Figs. 1(c-f) depict representative examples of the mid-surface profile of a shell with \(R/t=100\). These profiles are visualized within the great plane that intersects the shell and passes through the centers of the two imperfections. Note that, given the localized (dimpled) profile in Eq. (3), the shells are _not_ axisymmetric, and the profiles shown in Fig. 1 are solely for illustration purposes. Figs. 1(c,e) show the Cartesian profiles in the \(y\)-\(x\) great plane; for clarity, all profiles are offset vertically (see caption for details). As an alternative representation, the
Figure 1: Reference geometry of the imperfect hemispherical shell with two dimpled defects. (a) 2D schematic, defining all relevant geometric quantities. (b) 3D representation; the shade (see colorbar) represents the radial deviation \(\hat{w}\) from a perfect sphere. (c,e) Geometric profiles of identical-defect shells for (c) fixed \(\overline{\delta}=1.5\), \(\varphi_{(1,2)}=14^{\circ}\) and varying \(\lambda_{i}\), and (e) fixed \(\overline{\delta}=1.5\), \(\lambda_{i}=1.0\) and varying \(\varphi_{(1,2)}\). (d,f) Radial deflection, \(\hat{w}\), versus zenith angle, \(\beta\), for (d) constant \(\varphi_{(1,2)}=14^{\circ}\) between (dI) identical defects with various \(\lambda_{i}\) or (dII) different defects with various \(\lambda_{2}\). (f) Similar data, with constant \(\lambda_{i}=1\), for (fI) identical defects with various \(\varphi_{(1,2)}\) or (fII) different defects with various \(\varphi_{(1,2)}\). The representative cases for identical defects (dI, fI) have \(\overline{\delta}_{i}=1.5\), and the different-defects cases (dII,fII) have \(\overline{\delta}_{1}=1\), \(\overline{\delta}_{2}=1.5\) and \(\lambda_{1}=1\). For clarity, all profiles are offset in panels (c,d) by 1 mm, in (e) by 2 mm, and in (f) by 5.5 mm downwards. Also, the \(\hat{w}\) profiles in panels (d) are shown with an amplification factor of 10.
\(\hat{w}(\beta)\) curves in Figs. 1(d,f) correspond to the radial deviation from a perfect hemisphere as a function of the global zenith angle, \(\beta\in[-60,\,60]^{\circ}\). These limiting angles are chosen as the maximum location of the defects to avoid interactions with the equator boundary [36]. When their widths, \(\lambda_{i}\), are too large (Figs. 1c,d) or when their angular separation, \(\varphi_{(1,2)}\), is too small (Figs. 1e,f), the two defects can merge to form a single defect.
Following a similar approach as in previous studies [26; 28; 30; 35; 36], we depressurize the clamped hemispherical shell until buckling occurs. Given the actual critical buckling pressure of the imperfect shell, \(p_{\rm max}\), the knockdown factor is defined as \(\kappa=p_{\rm max}/p_{\rm c}\), where \(p_{\rm c}\) is the classic prediction for the respective perfect shell geometry [7; 35]. Our goal is to characterize how \(\kappa\) for a shell with the two-defect geometry specified above depends on the following geometric parameters: \(\overline{\delta}_{i}\), \(\lambda_{i}\), \(\varphi_{(1,2)}\), and \(R/t\). We will give particular attention to identifying the regimes where the interactions between the two defects induce non-trivial changes in \(\kappa\).
Our main contribution will be the definition of a threshold arc length for the separation between the two defects, beyond which their interactions become negligible. We will consider two versions of this separation-arclength threshold: \(l_{p}=R\varphi_{p(1,2)}\), defined from center-to-center of the defect, and \(l_{p}^{*}=R\varphi_{p(1,2)}^{*}\), adjusted to account for edge effects of the defects using \(\varphi_{(1,2)}^{*}\) introduced in Eq. (2). We provide evidence that this latter arclength, with \(m=1\), is set by
\[l_{p}^{*}\approx l_{\rm c}=2\pi[12(1-\nu^{2})]^{-1/4}\sqrt{Rt}, \tag{4}\]
where \(l_{\rm c}\), computed in the seminal work by Hutchinson [52], is the theoretical critical buckling wavelength for a spherical shell. More technically, \(l_{\rm c}\) is the full wavelength of the axisymmetric bifurcation mode at the equator of the shell.
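As a quick illustrative evaluation (ours, not part of the original derivation) of Eq. (4): for the incompressible material used throughout (\(\nu=0.5\), cf. Sec. III), \([12(1-\nu^{2})]^{-1/4}=9^{-1/4}=1/\sqrt{3}\), so that \(l_{\rm c}=2\pi\sqrt{Rt}/\sqrt{3}\approx 3.63\,\sqrt{Rt}\). For the reference geometry \(R=25.4\) mm and \(R/t=100\) (i.e., \(t=0.254\) mm), this yields \(l_{\rm c}\approx 9.2\) mm, corresponding to an angular separation \(l_{\rm c}/R\approx 0.36\) rad \(\approx 21^{\circ}\), consistent with the scale of the interaction regions seen in Fig. 2 for \(R/t=100\).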
In our previous work [36], we presented preliminary evidence for the result in Eq. (4), but only with a single value of \(R/t=110\). Hence, we were unable to fully test Eq. (4). In the present study, we will change this radius-to-thickness ratio within the range \(R/t\in[100,\,500]\) to examine how \(l_{p}^{*}\) relates to \(l_{\rm c}\). Furthermore, in Ref. [36], we reported evidence for the potential interactions between nearby defects and how they can lead to stronger or weaker shells in comparison to single-defect shells. However, the data in that study was limited to a few specific cases. In the present work, we will explore the various geometric parameters of the system systematically and seek to characterize how defect-defect interactions impact \(\kappa\) for spherical shells containing two imperfections.
## III Methodology: FEM simulations
We performed full 3D simulations using the Finite Element Method (FEM) with the commercial software ABAQUS/Standard [55]. In our prior work [33; 36], we validated this approach against precision experiments on geometries similar to the multi-defect geometry considered here. Each quarter of the hemispherical shell is discretized in the meridional and azimuthal directions using four-noded S4R shell elements: a total of 67500 elements for shells with \(R/t\leq 300\) and 187500 elements for shells with \(R/t\geq 400\). This level of discretization was deemed suitable after conducting a thorough mesh-convergence analysis. To set the initial geometry of the imperfect shell, we started from a perfect hemispherical mesh. Subsequently, we introduced nodal displacements according to the desired profiles of the two imperfections, following Eq. (3), with varying values for the geometric parameters (\(\overline{\delta}_{i}\), \(\lambda_{i}\), \(\varphi_{(1,2)}\)). The shell thickness remained constant throughout the simulations.
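The only nontrivial step in seeding the imperfections is evaluating, for every mesh node, the local angular distance \(\alpha\) to each defect center, i.e., the geodesic angle on the sphere. A schematic helper of our own (plain NumPy, independent of any ABAQUS scripting interface) illustrating this computation is:

```python
import numpy as np

def radial_offsets(beta, theta, defects):
    """Radial offsets w_hat (Eq. 3) for nodes at zenith/azimuth angles (beta, theta) [rad].

    defects: iterable of tuples (beta_c, theta_c, delta, alpha_i) giving the center,
    amplitude delta = delta_bar * t, and half-angular width of each Gaussian dimple.
    The local angle alpha in Eq. (1) is the geodesic angle to the defect center,
    obtained from the spherical law of cosines."""
    w = np.zeros_like(beta, dtype=float)
    for beta_c, theta_c, delta, alpha_i in defects:
        cos_a = (np.cos(beta) * np.cos(beta_c)
                 + np.sin(beta) * np.sin(beta_c) * np.cos(theta - theta_c))
        alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
        w -= delta * np.exp(-(alpha / alpha_i) ** 2)
    return w
```

Each mid-surface node initially at radius \(R\) would then be moved radially to radius \(R+\hat{w}\).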
The shells were subject to uniform live pressure on their outer surface, while their equator was set as a clamped boundary. We employed a Riks (static) solver with the following parameters for the shells with \(R/t\leq 300\): an initial arc length increment of 0.1, a minimum increment of \(10^{-5}\), and a maximum increment of 0.5. For the thinnest shells with \(R/t\geq 400\), the corresponding parameters of the Riks solver were 0.002, \(10^{-10}\), and 0.2, respectively. Geometric nonlinearities were considered throughout the analysis.
The hemispherical shells were modeled using the material properties of vinylpolysiloxane (VPS-32, Elite Double 32, Zhermack) as a neo-Hookean and incompressible solid; the material had a Poisson's ratio of \(\nu\approx 0.5\) and Young's modulus of \(E=1.26\,\)MPa. These material properties were chosen to match those of previous shell-buckling experiments [26; 28; 30; 35; 36] used to validate our FEM-simulation approach. The geometric parameters of the two-defect imperfect shells were varied in the following ranges: \(\overline{\delta}_{i}\in[0.5,\,3]\), \(\lambda_{i}\in[0.25,\,5]\), \(R/t\in[100,\,500]\) (constant \(R=25.4\,\)mm, varying \(t\)) and \(\varphi_{(1,2)}\in[1,\,60]^{\circ}\).
## IV Hypothesis for the defect-defect interaction regime
We start our investigation by quantifying how the knockdown factor, \(\kappa\), of the two-defects shells depends on the radius-to-thickness ratio, \(R/t\). Throughout, we will focus on numerical experiments conducted using the FEM simulation approach described in the preceding section.
In Fig. 2, we plot \(\kappa\) versus the defect-defect angular separation, \(\varphi_{(1,2)}\), for shells comprising either (a) two identical or (b) two different defects, at several values of \(R/t\). For now, we set the amplitudes and widths of the defects as follows. For the case of identical defects (Fig. 2a), we fixed \(\overline{\delta}=1.5\) and \(\lambda=1\). For the case of different defects (Fig. 2b), we fixed \(\overline{\delta}_{1}=1\), \(\overline{\delta}_{2}=1.5\) and \(\lambda_{1}=\lambda_{2}=1\). All curves are non-monotonic as a function of \(\varphi_{(1,2)}\): \(\kappa\) first decreases, reaching a minimum (\(\kappa_{\rm min}\)), then increases to a maximum (\(\kappa_{\rm max}\)), and subsequently
decreases to a constant plateau value (\(\kappa_{\rm p}\)). As suggested in Ref. [36], this non-monotonic behavior at small values of \(\varphi_{(1,2)}\) arises from defect-defect interactions. By contrast, in the plateau region at large values of \(\varphi_{(1,2)}\), the largest defect dominates. Note that the horizontal dashed lines in Fig. 2 correspond to \(\kappa\) values for a single-defect shell with (\(\vec{\delta}\), \(\lambda\)) = (1.5, 1) and \(R/t=100\), aligning with the plateaus of all the two-defects curves. The identical-defects shells (Fig. 2a) exhibit higher values of \(\kappa_{\rm max}\) than the different-defects shells (Fig. 2b), suggesting that defect-defect interactions are less pronounced in the latter case.
To help visualize the buckling process, the insets of Fig. 2 offer representative snapshots of the greater-plane (2D) profiles obtained from the FEM simulations for shells with \(R/t=100\) and various defect-defect angular separations. Near \(\kappa_{\rm min}\) (e.g., \(\varphi_{(1,2)}=8^{\circ}\)), the two defects are almost superimposed, resulting in a reduced knockdown factor (cf. Eq. 3). For intermediate separations (e.g., \(\varphi_{(1,2)}=14^{\circ}\)), near \(\kappa_{\rm max}\), the region between the two defects acts as a constraint for buckling, leading to higher values of \(\kappa\). When the two defects are sufficiently far apart (e.g., \(\varphi_{(1,2)}=29^{\circ}\)), in the plateau region, the largest defect dominates the buckling.
All the plotted data sets in Fig. 2, with varying \(R/t\) values, exhibit the aforementioned non-monotonic behavior of \(\kappa(\varphi_{(1,2)})\). However, as \(R/t\) increases, the interaction regions (before the plateau is reached) progressively shift to lower values of \(\varphi_{(1,2)}\). This observation highlights the influence of the radius and thickness of the shell on the defect-defect interactions. We hypothesize that the threshold angular separation, below which defects interact and above which the plateau begins, is directly related to \(\sqrt{Rt}\); the characteristic length scale associated with the balance between bending and stretching effects [30]. Consequently, we anticipate that the onset of the plateau in the \(\kappa(\varphi_{(1,2)})\) curves is directly related to the critical buckling wavelength, \(l_{c}~{}\sim\sqrt{Rt}\), as expressed in Eq. (4). Without wanting to spoil a surprise, the results in the next section will confirm this hypothesis.
## V Interactions between two identical defects
In this section, we focus solely on imperfect shells with two _identical_ defects. The angular separation between their centers, \(\varphi_{(1,2)}\), can be recast as the defect-defect separation _arc length_, \(l=R\varphi_{(1,2)}\). Our objective is to quantify the dependence of the FEM-computed knockdown factor, \(\kappa\), for these shells on \(l\), \(R/t\), \(\overline{\delta}\), and \(\lambda\).
In Fig. 3, we present \(\kappa(l)\) curves for a shell with \(R/t=100\): in panel (a) for fixed widths (\(\lambda=1\)) while varying their amplitudes (\(\overline{\delta}\in[0.5,3]\)), and, in (b), for fixed defect amplitudes (\(\overline{\delta}=1.5\)) while varying their widths (\(\lambda\in[0.25,5]\)). In both plots, the vertical lines represent the critical buckling wavelength for a spherical shell, \(l_{c}\), provided in Eq. (4) [52], for this shell with \(R/t=100\). Note that \(l_{c}\) does not depend on any of the defect parameters. Fig. 3(a) and Fig. 3(b) both exhibit non-monotonic \(\kappa(l)\), indicative of defect-defect interactions, which consistently occur for \(l\lesssim l_{c}\) (shaded region). For \(l\gtrsim l_{c}\), all curves reach a plateau. Naturally, the specific values of \(\kappa_{\rm min}\), \(\kappa_{\rm max}\), and \(\kappa_{\rm p}\) depend on the actual defect geometry, as extensively investigated in previous studies for single-defect [33; 24; 35] and many-defects[36] scenarios.
We now select some data from Fig. 3(a), for \(\lambda=1\) and \(\overline{\delta}\) = {0.5, 1.0, 1.5}, and from Fig. 3(b), for \(\overline{\delta}\) = 1.5 and \(\lambda=\{0.5, 1.0, 3.0\}\), and present them in Fig. 4(a) and (b) as a function of the normalized arc length \(l/l_{c}\). Additional simulation data for \(R/t=200\) and \(500\) are included. The shaded regions indicate small angular separations where the two defects overlap (cf. the corresponding 2D profiles in Fig. 1). It is remarkable that
Figure 2: Knockdown factor, \(\kappa\), as a function of angular separation, \(\varphi_{(1,2)}\), for (a) identical and (b) different defects. The respective values of \(\lambda_{i}\) and \(\overline{\delta}_{i}\) are provided in the legend of each plot. Shells with varying radius-to-thickness ratio, \(R/t\), are considered, as indicated in the top legend (common to both panels). Insets: Greater-plane profiles of imperfect shells with \(R/t=100\) and different values of \(\varphi_{(1,2)}\) in their original configurations (dotted lines) and at the onset of buckling (solid lines). The radial deviation of the latter is amplified by a factor of 3 for visualization purposes. The horizontal dashed lines correspond to the \(\kappa\) values of a single-defect shell with \(R/t=100\) and (\(\overline{\delta}\), \(\lambda\)) = (1.5, 1).
all the \(\kappa(l/l_{c})\) data collapse, with the emergence of their plateaus past \(l/l_{c}\gtrsim 1\).
The aforementioned observation regarding the onset of the plateau underscores the importance of the critical buckling wavelength, \(l_{c}\), in setting the threshold arc length separation for the defect-defect interaction regime. This finding represents an important step in confirming the hypothesis laid out in Sec. IV. To quantify this threshold, we consider the maximum (\(\kappa_{\text{max}}\)) and plateau (\(\kappa_{\text{p}}\)) values of the \(\kappa(l)\) curves in Figs. 3 and 4. The threshold separation is defined as the arc length at which the curve has relaxed to within the \(10\%\) cut-off, \(0.1(\kappa_{\text{max}}-\kappa_{\text{p}})\), of the plateau value. An uncertainty of \(\pm 0.05(\kappa_{\text{max}}-\kappa_{\text{p}})\) is assigned to each threshold value to account for the non-sharp onset of the plateau, consistent with the percentage-based definitions used in previous work [24]. As mentioned in Sec. II, there are two possible definitions for the defect-separation arc length, \(l_{p}\) or \(l_{p}^{*}\), depending on whether we consider the center-to-center (\(\varphi_{(1,2)}\)) or the adjusted (\(\varphi_{(1,2)}^{*}\)) angular separations, respectively. The latter excludes a portion from the core of the defects and was defined in Eq. (2). Schematics illustrating these two definitions are provided in Fig. 5 (top).
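For reproducibility, this post-processing step can be summarized by the short routine below. It is a schematic of our own (not the script used for this work) and encodes one natural reading of the \(10\%\) cut-off criterion: the threshold is taken as the first separation beyond the maximum at which \(\kappa\) has relaxed to within \(0.1(\kappa_{\text{max}}-\kappa_{\text{p}})\) of the plateau value.

```python
import numpy as np

def plateau_threshold(l, kappa, cut=0.10):
    """Estimate the defect-defect interaction threshold from a sampled kappa(l) curve.

    Assumes l is sorted and that the curve follows the min-max-plateau shape described
    in the text. Returns (l_threshold, kappa_max, kappa_plateau)."""
    kappa_p = np.mean(kappa[-5:])              # plateau value from the last few sampled points
    i_max = int(np.argmax(kappa))
    kappa_max = kappa[i_max]
    band = cut * (kappa_max - kappa_p)
    beyond = np.where(np.abs(kappa[i_max:] - kappa_p) <= band)[0]
    if beyond.size == 0:
        raise ValueError("no plateau detected within the sampled range")
    return l[i_max + beyond[0]], kappa_max, kappa_p
```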
At this point, it is important to revisit the Gaussian shape (cf. Eq. 1) of the dimpled imperfections we are considering. Note that \(\alpha_{i}/\sqrt{2}\) can be interpreted as the standard deviation of this Gaussian shape, \(\hat{w}_{i}(\alpha)\), so that at the local angular coordinate \(\alpha=m\,\alpha_{i}/\sqrt{2}\) of each defect, its deviation from the perfect sphere is \(\hat{w}_{i}=-\delta_{i}\,e^{-m^{2}/2}\). Therefore, \(l_{p}^{*}\) can be seen as excluding some portion of the core of each defect. Taking the values \(m=1\), \(2\), or \(3\) corresponds to excluding \(68.3\%\), \(95.4\%\), and \(99.7\%\) of the defect, respectively [56]. The choice of \(m\) determines the extent to which the core of the defect is excluded, with \(m=3\) effectively considering the edge-to-edge separation between defects. It is important to note that \(\alpha=\alpha_{i}/\sqrt{2}\) is the inflection point of the profile in Eq. (1), i.e., \(\hat{w}_{i}^{\prime\prime}(\alpha_{i}/\sqrt{2})=0\).
We have measured \(l_{p}\) or \(l_{p}^{*}\) as functions of \(l_{c}\), for shells with \(R/t\in[100,\,500]\) and two identical defects with
Figure 4: Knockdown factor, \(\kappa\), as a function of \(l/l_{c}\), the defect-defect arc length normalized by the critical buckling wavelength defined in Eq. (4). (a) Constant \(\lambda=1\), varying \(\overline{\delta}\). (b) Constant \(\overline{\delta}=1.5\), varying \(\lambda\). The different markers refer to various radius-to-thickness ratios, \(R/t\). The shaded areas indicate the regions where the defects overlap, resulting in a single larger defect.
Figure 3: Knockdown factor, \(\kappa\), for a shell with \(R/t=100\) as a function of the defect-defect arclength, \(l\), for identical defects. Panel (a): fixed \(\lambda=1\), varying \(\overline{\delta}\in[0.5,3]\). Panel (b): fixed \(\overline{\delta}=1.5\), varying \(\lambda\in[0.25,5]\). Different markers and a color bar distinguish the various parameter values. The vertical dotted line presents the theoretical, critical buckling wavelength, \(l_{c}\) (cf. Eq. 4), for \(R/t=100\).
\((\overline{\delta},\,\lambda)=(1.5,\,1.0)\). It is worth noting that the different values of \(R/t\) yield different values of \(l_{c}\) according to Eq. (4); specifically, \(l_{c}\) increases as \(R/t\) decreases. The results shown in Fig. 5 confirm the hypothesis presented in Sec. IV: there is a clear _linear_ scaling of both \(l_{p}\) and \(l_{p}^{*}\) (for the different \(m\) values, cf. Eq. 2) with \(l_{c}\). What is more, when using the \(l_{p}^{*}\) definition with \(m=1\), the data lie on the line \(l_{p}^{*}=l_{c}\). This remarkable result demonstrates that the threshold separation for defect-defect interactions is set by the critical buckling wavelength of the shell, provided that the separation is measured from the inflection points of the Gaussian profiles \(\hat{w}_{i}(\alpha)\). Hence, for the remainder of our study, we will adopt the definition of \(l_{p}^{*}\) with \(m=1\).
Having examined the specific geometry for an imperfect shell with \((\overline{\delta},\,\lambda)=(1.5,\,1.0)\) (albeit with different \(R/t\)), we now explore the geometric parameter space more systematically. In Fig. 6(a), we plot \(l_{p}^{*}/l_{c}\) as a function of \(\overline{\delta}\) (with fixed \(\lambda=1.0\)), and in Fig. 6(b) \(\lambda\) (with fixed \(\overline{\delta}=1.5\)), for different \(R/t\) values (see legend). Overall, the data consistently aligns closely with \(l_{p}^{*}/l_{c}=1\) (horizontal dashed line), especially when \(\overline{\delta}\geq 1\) (Fig. 6a) and \(\lambda\leq 2.5\) (Fig. 6b). In Fig. 6(a), \(l_{p}^{*}/l_{c}\) remains approximately constant for all \(\overline{\delta}\in[0.5,3]\) and all \(R/t\in[100,500]\). As also highlighted in Fig. 3(a), the \(l_{p}^{*}/l_{c}\) data lie almost on top of the dashed line, deviating by at most \(20\%\) within the entire range of \(\overline{\delta}\) that we explored. More quantitatively, in Fig. 6(b), for shells with \(\lambda\leq 2.5\), the FEM-measured \(l_{p}^{*}\) is in excellent agreement with the analytical result for \(l_{c}\), within a \(16\%\) difference. For wider defects with \(\lambda\geq 2.5\), \(l_{p}^{*}\) deviates by up to \(\approx 50\%\) from \(l_{c}\). Note that in these shells with wide defects (large \(\lambda\) values), the two defects tend to be nearly juxtaposed, as seen in the profiles in Fig. 1(c) and (d), as well as the shaded region in Fig. 6b (for shells with \(R/t=100\)). We attribute the larger deviations of \(l_{p}^{*}/l_{c}\) from unity for shells with wide defects to their overlap, which leads to a distorted, imperfect shell geometry.
## VI Interactions between two different defects
In the previous section, we examined shells with two identical defects. Now, we shift our focus to the case of different defects (\(\overline{\delta}_{1}\neq\overline{\delta}_{2}\) or \(\lambda_{1}\neq\lambda_{2}\)). We will fix the geometry of the \(i=1\) defect at the pole with \((\lambda_{1},\overline{\delta}_{1})=(1.0,1.0)\), and vary the width (\(\lambda_{2}\)) and amplitude (\(\overline{\delta}_{2}\)) of the second defect.
Figure 6: Normalized threshold defect-defect arclength, \(l_{p}^{*}/l_{c}\), versus (a) normalized amplitude, \(\overline{\delta}\), and (b) normalized width, \(\lambda\), for various values of \(R/t\in[100,500]\). In panel (a), \(\lambda=1\) is kept fixed, and in panel (b), \(\overline{\delta}=1.5\) is fixed. Each marker represents a different value of \(R/t\in[100,500]\), and the horizontal dashed lines correspond to \(l_{p}^{*}=l_{c}\). The shaded area in panel (b) highlights the region where defects tend to overlap, forming a single larger defect.
In Fig. 7(a), we plot the knockdown factor, \(\kappa\), as a function of the defect-defect arc length separation, \(l\), for shells with fixed \(R/t=100\) and \(\lambda_{2}=1.0\), while varying \(\overline{\delta}_{2}\in[0.5,3]\). These \(\kappa(l)\) curves are similar to those for the identical-defects case discussed in Sec. V: \(\kappa\) initially decreases to \(\kappa_{\min}\), then increases to \(\kappa_{\max}\), before settling to a plateau (\(\kappa_{\rm p}\)). The exact values of \(\kappa_{\min}\), \(\kappa_{\max}\), and \(\kappa_{\rm p}\) are slightly influenced by the amplitude of the \(i=2\) defect, particularly for \(\overline{\delta}_{2}=\{0.5,\,1.0\}\), but not for \(\overline{\delta}_{2}>1.0\), consistent with the known sensitivity of shell buckling to imperfections [35].
In Fig. 7(b), we present \(\kappa(l)\) curves for shells with a fixed \(R/t=100\) and \(\overline{\delta}_{2}=1.5\), while varying \(\lambda_{2}\in[0.25,5]\). The response of these shells is qualitatively different from the behavior described in the previous paragraph, exhibiting three distinct regimes. In the first regime, when \(\lambda_{2}\leq 1\), the \(\kappa(l)\) curves show the same minimum-maximum-plateau dependence described above and in Sec. V. Since the \(i=2\) defect has the larger amplitude (\(\overline{\delta}_{2}>\overline{\delta}_{1}\)), the plateau is dictated by this largest defect. In the second regime, for \(1.5\leq\lambda_{2}\leq 3\), the \(\kappa(l)\) curves shift, as a whole, to lower values. While a clear minimum is still observed, the maximum becomes less prominent, tending towards \(\kappa_{\max}\rightarrow\kappa_{\rm p}\). In this regime, the buckling is still dictated by the largest \(i=2\) defect. In the third regime, for \(\lambda_{2}\geq 3.5\), the \(\kappa(l)\) curves shift upwards.
In Fig. 7(a,b), the vertical dotted lines represent the critical buckling wavelength, \(l_{c}\), defined in Eq. (4), with \(R/t=100\). Similarly to the case of identical defects, we observe that the region (shaded) of interaction for these shells with two different defects lies within \(l<l_{c}\). As in Sec. V, we also compute the normalized threshold for defect-defect interactions (onset of the plateau of the \(\kappa(l)\) curves), \(l_{\rm p}^{\ast}/l_{\rm c}\), for the present case of different defects. These results are presented in Fig. 7(c,d).
In Fig. 7(c), when fixing \(\overline{\delta}_{1}\), \(\lambda_{1}\), and \(\lambda_{2}\), we observe that \(l_{\rm p}^{\ast}/l_{c}\approx 1\) (within \(17\%\)) across the whole range of \(\overline{\delta}_{2}\). This finding reinforces that \(\overline{\delta}\) is not critical in determining the onset of defect interactions, consistent with the identical-defects case (Fig. 6a). The behavior becomes less straightforward when varying \(\lambda_{2}\) while fixing \(\overline{\delta}_{1}\), \(\lambda_{1}\), and \(\overline{\delta}_{2}\) (see Fig. 7(d)). Here, \(l_{\rm p}^{\ast}/l_{c}\) remains near unity for \(\lambda_{2}\leq 3\), with a deviation of around \(22\%\) for \(\lambda_{2}\in[0.25,\,1]\) and \(28\%\) for \(\lambda_{2}\in[1.5,\,3]\). However, when \(\lambda_{2}\geq 3.5\), \(l_{\rm p}^{\ast}/l_{c}\) progressively drops below unity, reaching approximately \(0.4\). Recalling the profiles in Fig. 1, we note that the edges of the narrow \(i=1\) defect overlap with the wider \(i=2\) defect for larger values of \(\lambda_{2}\). Thus, the shell geometry deviates substantially from a perfect sphere, and the
critical buckling wavelength in Eq. (4) no longer sets the edge of the interaction region. This complex behavior, arising from the increasing overlap of the defects and the nontrivial shell geometries, falls beyond the scope of the present work and warrants further investigation.
Note that, in Fig. 7(c,d), while \(l_{p}^{*}/l_{c}\) remains close to unity for intermediate values of \(\lambda_{2}\), the thinnest shells with \(R/t=500\) exhibit notable discrepancies compared to the \(R/t=\{100,\,200\}\) shells (the results for these two are almost overlapping). We have conducted comprehensive mesh-convergence tests, and it appears that the discrepancies are not due to the discretization. Instead, we attribute these deviations to the higher fluctuations observed in the measured \(\kappa(l)\) curves, especially in the plateau region, which in turn affects the measurement of \(l_{p}^{*}\) using the 10% criterion introduced in Sec. V.
## VII Conclusions
Using experimentally validated FEM simulations, we investigated the effect of defect-defect interactions on the buckling of pressurized hemispherical shells containing two dimpled imperfections. We examined cases of identical and different defects, varying their geometric parameters (amplitude, \(\overline{\delta}_{i}\), and width, \(\lambda_{i}\)) and their relative separation. We measured the knockdown factor (the normalized critical buckling pressure), \(\kappa\), for these imperfect shells as a function of the angular separation, \(\varphi_{(1,2)}\), between their two defects. We then used \(\varphi_{(1,2)}\) to define an arc length separation \(l=R\varphi_{(1,2)}\). Our findings revealed significant defect-defect interactions when the two defects are in close proximity, leading to non-monotonic behavior in \(\kappa(l)\), below a threshold in \(l\). We modified the definition of this interaction threshold, denoted as \(l_{p}^{*}\), which corresponds to the inflection point of the Gaussian profile. Beyond \(l_{p}^{*}\), the \(\kappa(l)\) curves reached a plateau, indicating diminished interactions and the dominance of the largest defect in dictating the knockdown factor.
The main contribution of our study lies in establishing that the onset of defect-defect interactions is determined by the critical buckling wavelength [52], as \(l_{p}^{*}\approx l_{c}\) (cf. Eq. 4). This result is valid for defects with \(\lambda_{i}<3\), regardless of whether they are identical or different. However, for wider defects, the dimples tend to overlap, and the shell geometry becomes increasingly distorted. The defect amplitude, \(\overline{\delta}_{i}\), plays a negligible role in setting \(l_{p}^{*}\). It is important to note that \(l_{c}\) depends only on the radius, \(R\), and thickness, \(t\), of the shell (other than the Poisson ratio, which was fixed to \(\nu=0.5\) throughout our study).
We hope that our results will stimulate further interest in harnessing defect-defect interactions to enhance the buckling response of spherical shells or inspire the development of novel functional mechanisms derived from these interactions.
###### Acknowledgements.
We are grateful to John Hutchinson for insightful discussions, which inspired the scope and findings of our study. A comment by him on our previous work [36] was at the source of the hypothesis that \(l_{c}\) dictates the onset of defect-defect interactions. We also thank Michael Gomez for his invaluable feedback on the results presented in this manuscript.
**Disclosure on the usage of large language model (LLM)**
We used the Large Language Model (LLM) - ChatGPT (GPT-4 architecture, May 12 Version) - in the drafting of this manuscript for grammar and language refinement. We only employed the following two prompts: "fix grammar and typos" and "provide alternative phrasing for." Nonetheless, all final decisions and content in the manuscript were made and thoroughly reviewed by the authors. As supplementary information, we have included a commented "LaTeX diff" version that compares the nearly final draft prior to using ChatGPT with the present final version, noting that the latter also includes several minor edits made by the authors _a posteriori_.
|
2303.00411 | Pathwise Uniform Convergence of Time Discretisation Schemes for SPDEs | In this paper, we prove convergence rates for time discretisation schemes for
semi-linear stochastic evolution equations with additive or multiplicative
Gaussian noise, where the leading operator $A$ is the generator of a strongly
continuous semigroup $S$ on a Hilbert space $X$, and the focus is on
non-parabolic problems. The main results are optimal bounds for the uniform
strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0,
\ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$
is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$
is the step size, and $N_k = T/k$. The usual schemes such as the exponential
Euler, the implicit Euler, and the Crank-Nicolson method, etc. are included as
special cases. Under conditions on the nonlinearity and the noise, we show
- $\mathrm{E}_{k}^{\infty}\lesssim k \sqrt{\log(T/k)}$ (linear equation,
additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \sqrt{\log(T/k)}$ (nonlinear
equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \sqrt{\log(T/k)}$ (nonlinear wave
equation, multiplicative noise)
for a large class of time discretisation schemes. The logarithmic factor can
be removed if the exponential Euler method is used with a (quasi)-contractive
$S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the
existing literature is concerned with bounds for the simpler pointwise strong
error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) -
U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schr\"odinger
equations, and wave equations are included. For these equations, our results
improve and reprove several existing results with a unified method and provide
the first results known for the implicit Euler and the Crank-Nicolson method. | Katharina Klioba, Mark Veraar | 2023-03-01T11:07:30Z | http://arxiv.org/abs/2303.00411v5 | # Pathwise uniform convergence of time discretisation schemes for SPDEs
###### Abstract.
In this paper we prove convergence rates for time discretisation schemes for semilinear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator \(A\) is the generator of a strongly continuous semigroup \(S\) on a Hilbert space \(X\), and the focus is on non-parabolic problems. The main results are optimal bounds for the _uniform strong error_
\[\mathrm{E}_{k}^{\infty}\coloneqq\Big{(}\mathbb{E}\sup_{j\in\{0,\ldots,N_{k}\}} \|U(t_{j})-U^{j}\|^{p}\Big{)}^{1/p},\]
where \(p\in[2,\infty)\), \(U\) is the mild solution, \(U^{j}\) is obtained from a time discretisation scheme, \(k\) is the step size, and \(N_{k}=T/k\). The usual schemes such as splitting/exponential Euler, implicit Euler, and Crank-Nicolson, etc. are included as special cases. Under conditions on the nonlinearity and the noise we show
* \(\mathrm{E}_{k}^{\infty}\lesssim k\log(T/k)\) (linear equation, additive noise, general \(S\));
* \(\mathrm{E}_{k}^{\infty}\lesssim\sqrt{k}\log(T/k)\) (nonlinear equation, multiplicative noise, contractive \(S\));
* \(\mathrm{E}_{k}^{\infty}\lesssim k\log(T/k)\) (nonlinear wave equation, multiplicative noise).
The logarithmic factor can be removed if the splitting scheme is used with a (quasi)-contractive \(S\). The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler _pointwise strong error_
\[\mathrm{E}_{k}\coloneqq\Big{(}\sup_{j\in\{0,\ldots,N_{k}\}}\mathbb{E}\|U(t_{j })-U^{j}\|^{p}\Big{)}^{1/p}.\]
Applications to Maxwell equations, Schrodinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method.
Key words and phrases: time discretisation schemes, pathwise uniform convergence, SPDEs, stochastic convolutions, stochastic wave equation. 2020 Mathematics Subject Classification: Primary: 65C30; Secondary: 47D06, 60H15, 60H35, 65J08, 65M12. The second author is supported by the VICI subsidy VI.C.212.027 of the Netherlands Organisation for Scientific Research (NWO).
### The setting
In the above mentioned literature on the hyperbolic case (and often in the parabolic case), the error considered usually is the _pointwise strong error_
\[\sup_{j\in\{0,\ldots,N_{k}\}}\mathbb{E}\|U(t_{j})-U^{j}\|^{p}, \tag{1.2}\]
where \(U\) is the mild solution to (1.1), and \((U^{j})_{j=0}^{N_{k}}\) is an approximation of the solution given by an explicit temporal discretisation scheme of the form \(U^{0}=u_{0}\),
\[U^{j}=R_{k}U^{j-1}+kR_{k}F(U^{j-1})+R_{k}G(U^{j-1})\Delta W^{j},\ \ j=1,\ldots,N_{k}. \tag{1.3}\]
Here, \(N_{k}=T/k\) is the number of points, \(k=t_{j}-t_{j-1}\) is the uniform step size, \(t_{j}=jk\), and \(\Delta W^{j}=W_{H}(t_{j})-W_{H}(t_{j-1})\). The operator \(R_{k}\) is an approximation of the semigroup \(S\) at time \(k\).
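To make the recursion concrete, the sketch below (our own illustration, not taken from the paper) implements (1.3) for a spectrally truncated toy problem: \(A\) is diagonal with eigenvalues \(-n^{2}\), the splitting choice \(R_{k}=S(k)=e^{kA}\) is used, \(F\) and \(G\) act componentwise, and the noise is driven by a single real Brownian motion (\(H=\mathbb{R}\)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative truncated problem: d = 50 spectral modes, A = diag(-mu_n)
d, T, k = 50, 1.0, 1e-3
mu = np.arange(1, d + 1) ** 2.0
Rk = np.exp(-k * mu)                     # splitting scheme: R_k = e^{kA} (diagonal)

F = lambda u: np.sin(u)                  # globally Lipschitz nonlinearity (componentwise)
G = lambda u: 0.5 * np.cos(u)            # multiplicative noise coefficient, scalar H = R

N = int(T / k)
U = np.zeros(d)                          # initial value u_0 = 0 for simplicity
for j in range(1, N + 1):
    dW = np.sqrt(k) * rng.standard_normal()          # increment of a real Brownian motion
    U = Rk * (U + k * F(U) + G(U) * dW)              # (1.3): U^j = R_k[U^{j-1} + kF + G dW]
print("||U^N|| =", np.linalg.norm(U))
```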
Motivated by the well-known fact that simulations show that the whole path converges, it is a natural question to find convergence rates for the _uniform strong error_
\[\mathbb{E}\sup_{j\in\{0,\ldots,N_{k}\}}\|U(t_{j})-U^{j}\|^{p}. \tag{1.4}\]
It is a widely known open problem in the field to find optimal estimates for (1.4). Estimates where the supremum is inside the expectation are usually called maximal estimates, and there is an enormous literature on maximal estimates for general stochastic processes [56]. However, if the corresponding processes do not have any Gaussian or martingale structure, it can be quite complicated to prove sharp maximal estimates. Even maximal estimates for the mild solution \(U\) to (1.1) with \(F=0\) and \(G(u)\) replaced by a progressively measurable \(g\in L^{2}(\Omega\times(0,T);X)\), are unknown in general (see the survey [59, Section 4] for details).
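The difference between (1.2) and (1.4) is perhaps easiest to see at the level of Monte Carlo estimators. Given sampled grid errors \(e_{m,j}=\|U(t_{j},\omega_{m})-U^{j}(\omega_{m})\|\) for \(M\) paths, a schematic computation of the two quantities (our own illustration; the \(p\)-th roots are returned) reads:

```python
import numpy as np

def strong_errors(e, p=2):
    """e: array of shape (M, N+1), e[m, j] = ||U(t_j) - U^j|| on sample path m.

    Returns the p-th roots of the pointwise strong error (1.2) and the uniform
    strong error (1.4): the supremum over grid points is taken outside,
    respectively inside, the expectation (Monte Carlo average over paths)."""
    pointwise = np.max(np.mean(e ** p, axis=0)) ** (1.0 / p)   # sup_j (E ||.||^p)^{1/p}
    uniform = np.mean(np.max(e, axis=1) ** p) ** (1.0 / p)     # (E sup_j ||.||^p)^{1/p}
    return pointwise, uniform   # one always has pointwise <= uniform
```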
In the case where \(S\) generates a \(C_{0}\)_-group_, it is known how to estimate the uniform strong error (1.4) in the case of the _splitting scheme_ (i.e. \(R_{k}=S(k)\)). In this case one can use the group structure in the following way
\[\int_{0}^{t}S(t-s)g(s)dW_{H}(s)=S(t)\int_{0}^{t}S(-s)g(s)dW_{H}(s),\]
and similarly, for the discrete approximation. This makes it possible to avoid maximal estimates for stochastic convolutions and use martingale techniques instead. This technique was first applied in [61] to obtain optimal convergence rates for the uniform strong error of the splitting scheme for abstract wave equations. Later this technique was extended to other settings (see [2, 8, 15, 25]), and in particular applied to stochastic Schrodinger and Maxwell equations. However, if \(S\) is not a group or a different scheme than splitting is used, then this technique is no longer applicable.
On the other hand, for other discretisation schemes estimates for the simpler pointwise strong error (1.2) are available (see e.g. the above mentioned papers in the hyperbolic case). Moreover, simulations suggest that optimal rates of convergence for the uniform strong error (1.4) hold as well. The main goal of our work is to prove such optimal bounds for (1.4) for more general semigroups and general schemes. In particular, we prove such bounds under the condition that \(S\) and \(R\) are contractive. This solves the open problem on rates for (1.4) for this class of semigroups and numerical schemes.
Before we turn to our main result, we mention that it was recently shown in [20] that one can transfer (1.2) to (1.4) using some of the Holder continuity in the \(p\)-th moment at the price of decreasing the convergence rate by using the Kolmogorov-Chentsov theorem. However, optimal rates seem to be out of reach using this approach.
### Main result
In order to state our main result, we need an additional definition. Let \(X\) and \(Y\) be Hilbert spaces with \(Y\hookrightarrow X\). For \(\alpha\in(0,1]\) we say that \(R\) approximates \(S\) to order \(\alpha\) on \(Y\) if there is a constant \(C_{\alpha}\geq 0\) such that for all \(x\in Y\), \(k>0\), and \(j\in\{0,\ldots,N_{k}\}\)
\[\|(S(t_{j})-R_{k}^{j})x\|_{X}\leq C_{\alpha}k^{\alpha}\|x\|_{Y}.\]
Our main result on convergence rates for (1.4) is as follows.
**Theorem 1.1**.: _Let \(X\) and \(Y\) be Hilbert spaces such that \(Y\hookrightarrow X\). Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on \(X\) and \(Y\). Suppose that \((R_{k})_{k>0}\) is a time discretisation scheme which is contractive on both \(X\) and \(Y\), that \(R\) approximates \(S\) to order \(\alpha\in(0,1/2]\) on \(Y\), and that \(Y\hookrightarrow D((-A)^{\alpha})\). Suppose that \(F:X\to X\) and \(G:X\to\mathcal{L}_{2}(H,X)\) are Lipschitz continuous, and that \(F:Y\to Y\) and \(G:Y\to\mathcal{L}_{2}(H,Y)\) are of linear growth. Let \(p\in[2,\infty)\), \(u_{0}\in L^{p}(\Omega;Y)\). Let \(U\) be the mild solution to (1.1). Let \(k\in(0,T/2]\) and let \((U^{j})_{j=0}^{N_{k}}\) be given by (1.3). Then there is a constant \(C_{T}\) not depending on \(u_{0}\) and \(k\) such that_
\[\bigg{\|}\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\bigg{\|}_{L^{p}(\Omega )}\leq C_{T}(1+\|u_{0}\|_{L^{p}(\Omega;Y)})k^{\alpha}\log(T/k). \tag{1.5}\]
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\alpha\) as \(k\to 0\) up to a logarithmic factor._
Up to the logarithmic factor, the estimate (1.5) is optimal in the sense that the rate is the same as the rate for the initial value term on its own (i.e. with \(F=0\) and \(G=0\)). Theorem 1.1 follows from Theorem 6.3. In the case of the splitting scheme we show that the logarithmic factor can be omitted, see Corollary 6.4. In the case of additive noise a similar result is obtained in Theorem 3.1 for semigroups and schemes which are not necessarily contractive and for the range \(\alpha\in(0,1]\).
The error estimate (1.5) can be extended from the grid points to the full time interval \([0,T]\) assuming higher integrability of the initial values. Provided that \(u_{0}\in L^{p_{0}}(\Omega;Y)\) holds for some \(p_{0}\in(2,\infty)\) in addition to the assumptions of Theorem 1.1, the pathwise uniform error on the full time interval can be estimated as (see Theorem 6.9 below)
\[\bigg{\|}\sup_{t\in[0,T]}\|U(t)-\tilde{U}(t)\|_{X}\bigg{\|}_{L^{p}(\Omega)} \leq C_{T}(1+\|u_{0}\|_{L^{p_{0}}(\Omega;Y)})k^{\alpha}\log(T/k) \tag{1.6}\]
for all \(p\in[2,p_{0})\) and the piecewise constant extension \(\tilde{U}\) of \((U_{j})_{j=0,\ldots,N_{k}}\) to \([0,T]\). For the splitting scheme, the logarithmic correction factor can be replaced by \(\log(T/k)^{1/2}\) (see Corollary 6.10), which is known to be optimal already for scalar SDEs. The error estimate relies on new optimal path regularity estimates of stochastic convolutions in suitable log-Holder spaces, which will be presented in Proposition 6.8.
Theorem 1.1 applies to
* splitting scheme (S): \(R_{k}=S(k)\);
* implicit Euler (IE): \(R_{k}=(1-kA)^{-1}\);
* Crank-Nicolson (CN): \(R_{k}=(2+kA)(2-kA)^{-1}\).
The contractivity of the scheme \(R\) in case of (S) and (IE) follows from the contractivity of the semigroup \(S\). For other schemes the contractivity of \(R\) usually follows by a functional calculus argument (see Proposition 2.4 below).
In the above, one usually takes \(Y\) to be a suitable intermediate space between \(X\) and \(D(A)\). In the special and important case that \(Y=D(A)\) one can take \(\alpha=\frac{1}{2}\) for all of the aforementioned schemes. More general convergence rates can be found in Table 1.
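For a concrete (matrix) generator the three choices above are one-liners, and both contractivity and the approximation property can be checked numerically. The snippet below is our own illustration, not code from the paper: it uses a random skew-adjoint \(A\), the prototype of a generator of a unitary \(C_{0}\)-group as in the Schrodinger and wave examples, and prints the spectral norms of \(R_{k}\) together with the distance to \(S(k)=e^{kA}\).

```python
import numpy as np
from scipy.linalg import expm, norm, solve

rng = np.random.default_rng(1)
B = rng.standard_normal((40, 40))
A = B - B.T                                # skew-adjoint generator: S(t) = e^{tA} is unitary
I = np.eye(40)

def schemes(k):
    S = expm(k * A)                            # (S)  splitting:      R_k = S(k)
    IE = solve(I - k * A, I)                   # (IE) implicit Euler: R_k = (I - kA)^{-1}
    CN = solve(2 * I - k * A, 2 * I + k * A)   # (CN) Crank-Nicolson: R_k = (2+kA)(2-kA)^{-1}
    return S, IE, CN

for k in (0.1, 0.01):
    S, IE, CN = schemes(k)
    print(f"k={k}: spectral norms {norm(S, 2):.3f}, {norm(IE, 2):.3f}, {norm(CN, 2):.3f}; "
          f"|S-IE| = {norm(S - IE, 2):.2e}, |S-CN| = {norm(S - CN, 2):.2e}")
```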
Applications to Schrodinger and Maxwell equations are included in the main text (see Subsections 3.3, 6.4, and 6.5). Our results improve several results from the literature to more general schemes and general rates \(\alpha\). In Section 7 we include a setting for abstract wave equations, which was considered in [61] only for the splitting scheme. We prove similar higher order convergence rates for more general schemes, and in particular recover [61] as a special case.
To make the above results applicable to implementable numerical schemes for SPDEs, one would additionally need a space discretisation. Since the main novelty of our work lies in the treatment of temporal discretisations, we will only consider the latter.
A detailed understanding of the global Lipschitz setting is a quintessential step towards the treatment of locally Lipschitz nonlinearities, which occur more frequently in practice. Our result should be seen as a first step, and we plan to continue our work on uniform strong errors in a locally Lipschitz setting in the near future.
### Method of proof
For the proof of the convergence rate we need several ingredients. First of all, we need to prove that the mild solution actually is continuous with values in the subspace \(Y\). This can be seen as the replacement of the usual regularisation one has for parabolic equations. Surprisingly, we do not need any Lipschitz assumptions on \(F\) and \(G\) as mappings from \(Y\) to \(Y\), but linear growth conditions suffice. This is actually important, since Lipschitz estimates typically fail for Nemytskij mappings on Sobolev spaces of higher order (see [27]).
A key estimate in the proof is a new maximal inequality for discrete convolutions. In particular, this inequality will be used to prove stability of schemes such as (1.3), i.e.
\[\mathbb{E}\sup_{j\in\{0,\dots,N_{k}\}}\|U^{j}\|_{Y}^{p}\leq C,\]
where \(C\) is independent of the step size \(k\). But it also plays a role in estimates for the convergence.
A second key ingredient is another estimate recently proven in [60], which allows to estimate stochastic integral processes which contain a supremum
\[\mathbb{E}\sup_{i\in\{1,\dots,n\}}\sup_{t\geq 0}\Big{\|}\int_{0}^{t}\Phi_{i}( s)dW_{H}(s)\Big{\|}_{X}^{p} \tag{1.7}\]
by certain square functions with a logarithmic dependency on \(n\) (see Proposition 2.2 below).
Finally, to prove the desired convergence rate of Theorem 1.1 we need to split the error obtained in (1.3) into
1 (initial value part) \(+4\) (deterministic terms) \(+5\) (stochastic terms) \(=10\) terms.
To estimate these terms we require precise estimates for \(\|S(t_{j})-R_{k}^{j}\|_{\mathcal{L}(Y,X)}\), \(\mathbb{E}\|U(t)-U(s)\|^{p}\), stability estimates, and maximal estimates for continuous and discrete convolutions.
In the end, we derive an estimate for the error in terms of itself, and we apply a standard discrete Gronwall argument to deduce the desired error bound. In case of the splitting method some terms disappear since \(S(t_{j})=R_{k}^{j}\), which makes it possible to omit the logarithmic terms originating from terms such as (1.7).
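For completeness, the discrete Gronwall lemma that suffices here can be recalled in its simplest form (as a reminder of a standard fact, not as a quotation from the proof): if nonnegative numbers \(a_{0},\ldots,a_{N_{k}}\) satisfy \(a_{j}\leq b+ck\sum_{i=0}^{j-1}a_{i}\) for all \(j\), then \(a_{j}\leq b(1+ck)^{j}\leq b\,e^{cjk}\leq b\,e^{cT}\). Applied to a suitable error quantity up to step \(j\), this turns the self-referencing estimate described above into the explicit bound of Theorem 1.1.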
### Overview
* Section 2 contains the preliminaries for the rest of the paper.
* Section 3 discusses the case of additive noise and semigroups which are not necessarily contractive. Results are illustrated for the Schrodinger equation.
* In Section 4 we recall a standard well-posedness result, and prove an additional well-posedness result for the subspace \(Y\) in case of linear growth in the \(Y\)-setting.
* Section 5 is concerned with the stability of the discretisation schemes.
* In Section 6 we state and prove the main result, which leads to Theorem 1.1 and the proof of (1.6). It is then applied to the Schrodinger equation as well as the Maxwell equation.
* Section 7 is concerned with improvements of the convergence rate for abstract wave equations. Examples with trace class, space-time white noise, and smooth noise are included.
#### Acknowledgements
The first author wishes to thank the DAAD for the financial support to visit TU Delft for one semester in 2022, and the colleagues in Delft for their hospitality. The authors also thank Jan van Neerven for helpful discussion and comments and Martin Hutzenthaler for suggesting to add error estimates on the full time interval.
## 2. Preliminaries
Throughout the paper, we consider the final time \(T>0\) to be fixed and denote the Borel \(\sigma\)-algebra of a Banach space \(X\) by \(\mathcal{B}(X)\). We use the notation \(f(x)\lesssim g(x)\) to denote that there is a constant \(C\geq 0\) such that for all \(x\) in the respective set, \(f(x)\leq Cg(x)\).
### Stochastic integration
Throughout the paper, we fix a probability space \((\Omega,\mathscr{F},\mathbb{P})\) and a filtration \((\mathscr{F}_{t})_{t\in[0,T]}\) on this probability space. Unless otherwise stated, all random variables and stochastic processes considered are defined on \((\Omega,\mathscr{F},\mathbb{P})\). Denote the progressive \(\sigma\)-algebra on \((\Omega,\mathscr{F},\mathbb{P})\) by \(\mathcal{P}\). Let \(H,X\) be Hilbert spaces and denote the space of Hilbert-Schmidt operators from \(H\) to \(X\) by \(\mathcal{L}_{2}(H,X)\). It consists of all bounded operators \(R:H\to X\) such that
\[\|R\|^{2}_{\mathcal{L}_{2}(H,X)}\coloneqq\sum_{i\in I}\|Re_{i}\|^{2}_{X}<\infty,\]
where \((e_{i})_{i\in I}\) is an orthonormal basis of \(H\). For \(R\in\mathcal{L}_{2}(H,X)\) and \(\gamma=(\gamma_{n})_{n\geq 1}\) centered i.i.d. normally distributed random variables we define
\[R\gamma=\sum_{n\geq 1}\gamma_{n}Rh_{n}, \tag{2.1}\]
where the convergence is in \(L^{p}(\Omega;X)\) and almost surely.
In the stochastic integrals appearing in expressions such as (1.7), the integrator is a \(H\)-cylindrical Brownian motion to take \(\mathcal{L}_{2}(H,X)\)-valued integrands into account. An \(H\)_-cylindrical Brownian motion_ is a mapping \(W_{H}:L^{2}(0,T;H)\to L^{2}(\Omega)\) such that
1. \(W_{H}h\) is Gaussian for all \(h\in L^{2}(0,T;H)\),
2. \(\mathbb{E}(W_{H}h_{1}\cdot W_{H}h_{2})=\langle h_{1},h_{2}\rangle_{L^{2}(0,T; H)}\) for all \(h_{1},h_{2}\in L^{2}(0,T;H)\),
where we include a complex conjugate on \(W_{H}h_{2}\) in case we want to use a complex \(H\)-cylindrical Brownian motion. For \(h\in H\) and \(t\in[0,T]\), we use the shorthand notation \(W_{H}(t)h\coloneqq W_{H}(\mathbf{1}_{(0,t)}\otimes h)\). Consequently, \((W_{H}(t)h)_{t\in[0,T]}\) is a Brownian motion for each fixed \(h\in H\), which is standard if and only if \(\|h\|_{H}=1\). In the special case \(H=\mathbb{R}\), this notion coincides with real-valued Brownian motions. We refer to an \(H\)-valued stochastic process \((W(t))_{t\geq 0}\) as a \(Q\)_-Wiener process_ if \(W(0)=0\), \(W\) has continuous trajectories and independent increments, and \(W(t)-W(s)\) is normally distributed with parameters \(0\) and \((t-s)Q\) for \(t\geq s\geq 0\). The operator \(Q\) is in \(\mathcal{L}(H)\), positive self-adjoint, and of trace class. One can show that \(W\) is a \(Q\)-Wiener process if and only if there exists an \(H\)-cylindrical Brownian motion \(W_{H}\) such that \(Q^{1/2}W_{H}(t)\coloneqq\sum_{n\geq 1}Q^{1/2}h_{n}W_{H}(t)h_{n}=W(t)\) for all \(t\geq 0\) and an orthonormal basis \((h_{n})_{n\geq 1}\) of \(H\) (cf. (2.1)). For further properties of \(H\)-cylindrical Brownian motions, \(Q\)-Wiener processes and the Ito integral, we refer to [26].
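A truncated version of this series is also how a \(Q\)-Wiener process is typically simulated. The sketch below is our own illustration (not from the paper): it samples a path of \(W=Q^{1/2}W_{H}\) on a uniform grid for a diagonal trace-class \(Q\) on \(H=\ell^{2}\), represented through its first \(n\) eigenpairs, and compares \(\|W(T)\|^{2}\) on that path with its expected value \(T\,\mathrm{tr}(Q)\).

```python
import numpy as np

rng = np.random.default_rng(2)

n, N, T = 100, 1000, 1.0                   # number of modes, time steps, final time
k = T / N
q = 1.0 / np.arange(1, n + 1) ** 2         # eigenvalues of Q (summable, hence trace class)

# Increments W(t_j) - W(t_{j-1}): each mode contributes an independent
# N(0, k q_m) increment in the direction of the basis vector h_m.
dW = np.sqrt(k * q)[None, :] * rng.standard_normal((N, n))   # shape (N, n)
W = np.concatenate([np.zeros((1, n)), np.cumsum(dW, axis=0)])

print("||W(T)||^2 on this path:      ", np.sum(W[-1] ** 2))
print("E||W(T)||^2 = T * trace(Q) =  ", T * q.sum())
```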
To estimate Ito integrals w.r.t. such \(H\)-cylindrical Brownian motions, the Burkholder-Davis-Gundy inequalities are particularly helpful. They imply that
\[\left(\mathbb{E}\sup_{t\in[0,T]}\left\|\int_{0}^{t}g_{s}\,\mathrm{d}W_{H}(s) \right\|_{X}^{p}\right)^{1/p}\leq B_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,T;\mathcal{ L}_{2}(H,X)))}. \tag{2.2}\]
In particular, one can take \(B_{2}=2\) (by Doob's maximal inequality [35, Thm. 3.2.2] and the Ito isometry) and \(B_{p}=2\sqrt{2}\sqrt{p}\) for \(p>2\). Indeed, this follows by combining the scalar result of [14, Theorem A] with the reduction technique in [40, Theorem 3.1] and the simple estimate \(\|(\xi^{2}+\eta^{2})^{1/2}\|_{p}\leq(\|\xi\|_{p}^{2}+\|\eta\|_{p}^{2})^{1/2}\) valid for real-valued random variables \(\xi\) and \(\eta\) and \(p\in[2,\infty)\).
A \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) is said to be _quasi-contractive_ if for some \(\lambda\geq 0\), \(\|S(t)\|\leq e^{\lambda t}\) for all \(t\geq 0\). The following maximal inequality for stochastic convolutions follows from [32], where the contractive case is treated. The quasi-contractive case follows by a scaling argument.
**Theorem 2.1**.: _Let \(X\) be a Hilbert space and let \((S(t))_{t\geq 0}\) be a quasi-contractive semigroup on \(X\). Then_
\[\mathbb{E}\sup_{t\in[0,T]}\left\|\int_{0}^{t}S(t-s)g_{s}\,\mathrm{d}W_{H}(s) \right\|_{X}^{p}\leq\mathrm{e}^{p\lambda T}B_{p}^{p}\|g\|_{L^{p}(\Omega;L^{2} (0,T;\mathcal{L}_{2}(H,X)))}^{p},\]
_where \(B_{p}\) is the constant from (2.2). One can take \(B_{2}=2\) and \(B_{p}=2\sqrt{2}\sqrt{p}\) for \(2<p<\infty\)._
Next, we state a special case of [60, Proposition 2.7], which will be needed to estimate stochastic integral terms without semigroups.
**Proposition 2.2**.: _Let \(X\) be a Hilbert space and let \(0<p<\infty\). Let \(\Phi:=(\Phi^{(j)})_{j=1}^{N}\) be a finite sequence in \(L^{p}_{\mathcal{P}}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,X)))\) and set_
\[I_{N}^{\Phi}:=\left(\mathbb{E}\sup_{t\in[0,T],j\in\{1,\ldots,N\}}\left\|\int_{ 0}^{t}\Phi_{s}^{(j)}\,\mathrm{d}W_{H}(s)\right\|_{X}^{p}\right)^{1/p}.\]
_Then for some \(K_{p}\geq 0\)_
\[I_{N}^{\Phi}\leq K_{p}\log(N)\|\Phi\|_{L^{p}(\Omega;L^{2}(0,T;\ell_{N}^{\infty }(\mathcal{L}_{2}(H,X))))}\quad\text{ if }N\geq 2.\]
_If \(2\leq p<\infty\), this estimate holds with \(K_{p}=10\mathrm{e}\sqrt{p}\)._
Proof.: We only need to comment on the case \(p\in[2,\infty)\) and \(2\leq N\leq 7\), since the result in [60] was stated for \(N\geq 8\). In this case the triangle inequality and the Burkholder-Davis-Gundy inequalities with \(B_{p}\leq 2\sqrt{2}\sqrt{p}\) in \(X\) (see (2.2)) give
\[I_{N}^{\Phi} \leq\Big{(}\sum_{j=1}^{N}\mathbb{E}\sup_{t\in[0,T]}\Big{\|}\int_ {0}^{t}\Phi_{s}^{(j)}\,\mathrm{d}W_{H}(s)\Big{\|}_{X}^{p}\Big{)}^{1/p}\leq B_ {p}\Big{(}\sum_{j=1}^{N}\|\Phi^{(j)}\|_{L^{p}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,X)))}^{p}\Big{)}^{1/p}\] \[\leq 2\sqrt{2}\sqrt{p}N^{1/p}\|\Phi\|_{L^{p}(\Omega;L^{2}(0,T; \ell_{N}^{\infty}(\mathcal{L}_{2}(H,X))))}\leq 10\mathrm{e}\sqrt{p}\log(N)\| \Phi\|_{L^{p}(\Omega;L^{2}(0,T;\ell_{N}^{\infty}(\mathcal{L}_{2}(H,X))))},\]

where the last estimate follows from \(N^{1/p}\leq N^{1/2}\) for \(p\geq 2\) together with \(2\sqrt{2}N^{1/2}\leq 10\mathrm{e}\log(N)\) for \(2\leq N\leq 7\).
### Approximation of semigroups and interpolation
An integral part of approximating solutions of a stochastic evolution equation concerns the approximation of a semigroup by some scheme. The following definition allows us to quantify the approximation behaviour.
**Definition 2.3**.: _Let \(X\) be a Banach space. An \(\mathcal{L}(X)\)-valued scheme is a function \(R:[0,\infty)\to\mathcal{L}(X)\). We denote \(R_{k}\coloneqq R(k)\) for \(k\geq 0\). Let \(Y\) be a Banach space which is continuously and densely embedded in \(X\). If \(A\) generates a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) on \(X\), an \(\mathcal{L}(X)\)-valued scheme \(R\) is said to approximate \(S\) to order \(\alpha>0\) on \(Y\) or, equivalently, \(R\)_converges of order \(\alpha\) on \(Y\)_if for all \(T>0\) there is a constant \(C_{\alpha}\geq 0\) such that_
\[\|(S(jk)-R_{k}^{j})u\|_{X}\leq C_{\alpha}k^{\alpha}\|u\|_{Y}\]
_for all \(u\in Y\), \(k>0\), and \(j\in\mathbb{N}\) such that \(jk\in[0,T]\). An \(\mathcal{L}(X)\)-valued scheme \(R\) is said to be contractive if \(\|R_{k}\|_{\mathcal{L}(X)}\leq 1\) for all \(k\geq 0\)._
Subsequently, we will omit the index for norms in the space \(X\). In the absence of nonlinear and noise terms, the following schemes approximate \(S\) to different orders (a numerical sanity check is sketched after the list):
* splitting scheme (S): \(R_{k}=S(k)\), any order \(\alpha>0\) on \(X\);
* implicit Euler (IE): \(R_{k}=(1-kA)^{-1}\), order \(\alpha\in(0,1]\) on \(D((-A)^{2\alpha})\);
* Crank-Nicolson (CN): \(R_{k}=(2+kA)(2-kA)^{-1}\), order \(\alpha\in(0,2]\) on \(D((-A)^{3\alpha/2})\) provided that \(R\) is contractive.
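As a quick numerical sanity check of these orders (purely illustrative, assuming Python with NumPy; the value of \(\lambda\), the final time and the step sizes are arbitrary choices), one can compare \(S(jk)\) with \(R_{k}^{j}\) in the scalar case \(A=-\lambda\), \(\lambda>0\), where all fractional domain spaces coincide with the scalar field and (IE) and (CN) exhibit their classical orders \(1\) and \(2\).

```python
import numpy as np

# Scalar sanity check of the approximation orders: for A = -lam (lam > 0) the
# semigroup is S(t) = exp(-lam*t) and all fractional domain spaces coincide with
# the scalar field, so (IE) and (CN) show their classical orders 1 and 2.
# lam, T and the step sizes are arbitrary illustrative choices.
lam, T = 3.0, 1.0

def scheme_error(r, k):
    """max_j |S(jk) - r(k*lam)^j| over the grid points jk in [0, T]."""
    N = int(round(T / k))
    j = np.arange(N + 1)
    return np.max(np.abs(np.exp(-lam * j * k) - r(k * lam) ** j))

schemes = {
    "implicit Euler (IE)": lambda z: 1.0 / (1.0 + z),
    "Crank-Nicolson (CN)": lambda z: (2.0 - z) / (2.0 + z),
}
ks = np.array([2.0 ** (-m) for m in range(3, 10)])
for name, r in schemes.items():
    errs = np.array([scheme_error(r, k) for k in ks])
    order = np.polyfit(np.log(ks), np.log(errs), 1)[0]
    print(f"{name}: fitted order ~ {order:.2f}")
```

The splitting scheme (S) is omitted since its error vanishes identically in this deterministic comparison.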
Like many commonly used schemes, (IE) and (CN) are of the form \(R_{k}=r(-kA)\) for some function \(r:\mathbb{C}_{+}\to\mathbb{C}\), where \(r(-kA)\) is defined via the \(H^{\infty}\)-calculus of \(-A\). The following proposition gives a sufficient condition for such schemes to be contractive.
**Proposition 2.4**.: _Let \(A\) be the generator of a \(C_{0}\)-semigroup of contractions on a Hilbert space \(X\). Suppose that \(r:\mathbb{C}_{+}\to\mathbb{C}\) is holomorphic with \(|r(z)|\leq 1\) for all \(z\in\mathbb{C}_{+}\), and let \(R_{k}=r(-kA)\) for \(k>0\). Then \(R\) is contractive._
The above assumption that \(|r(z)|\leq 1\) for \(z\in\mathbb{C}_{+}\) is standard in the theory of rational approximation of semigroups, see [13]. A common choice for the spaces \(Y\) on which a given scheme approximates \(S\) are domains of fractional powers of \(A\). An important property of these spaces is that they embed into the real interpolation spaces with parameter \(\infty\), i.e. for \(\alpha>0\)
\[D(A^{\alpha})\hookrightarrow D_{A}(\alpha,\infty). \tag{2.3}\]
Here, \(D_{A}(\alpha,\infty)\) denotes the real interpolation space \((X,D(A))_{\alpha,\infty}\). At later occasions, also the real interpolation spaces \((X,D(A))_{\alpha,2}\) will be used. See [51, 58] for details on real interpolation spaces.
Embeddings of the form (2.3) and properties of \(D_{A}(\alpha,\infty)\) allow us to obtain decay rates for semigroup differences as follows. Let \((S(t))_{t\geq 0}\) be a \(C_{0}\)-semigroup such that \(\|S(t)\|\leq Me^{\lambda t}\) for some \(M\geq 1\) and \(\lambda\geq 0\) for all \(t\geq 0\). Such \(M\) and \(\lambda\) exist for every \(C_{0}\)-semigroup [29, Prop. 5.5]. Then \(\|S(t)-S(s)\|_{\mathcal{L}(X)}\leq 2M\mathrm{e}^{\lambda T}\) for \(0\leq s\leq t\leq T\). Since
\[\|[S(t)-S(s)]x\|_{X}=\left\|\int_{s}^{t}S(r)Ax\;\mathrm{d}r\right\|_{X}\leq Me ^{\lambda T}(t-s)\|x\|_{D(A)}\]
for \(x\in D(A)\), we have \(\|S(t)-S(s)\|_{\mathcal{L}(D(A),X)}\leq 2Me^{\lambda T}(t-s)\). By interpolation,
\[\|S(t)-S(s)\|_{\mathcal{L}(D_{A}(\alpha,\infty),X)}\leq 2^{1-\alpha}Me^{ \lambda T}(t-s)^{\alpha}\leq 2Me^{\lambda T}(t-s)^{\alpha}\]
for \(\alpha\in(0,1)\). Let \(Y\) be another Hilbert space such that \(Y\hookrightarrow X\). Under the assumption that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously for some \(\alpha\in(0,1)\) or \(Y\hookrightarrow D(A)\) continuously, in which case we set \(\alpha=1\), this implies
\[\|S(t)-S(s)\|_{\mathcal{L}(Y,X)}\leq 2C_{Y}Me^{\lambda T}(t-s)^{\alpha}, \tag{2.4}\]
where \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\).
### Gronwall type lemmas
We need the following variants of the classical Gronwall inequality.
**Lemma 2.5**.: _Let \(\phi:[0,T]\to[0,\infty)\) be a continuous function and let \(\alpha,\beta\in[0,\infty)\) be constants. Suppose that_
\[\phi(t)\leq\alpha+\beta\Big{(}\int_{0}^{t}\phi(s)^{2}ds\Big{)}^{1/2},\ \ t\in[0,T].\]
_Then_
\[\phi(t)\leq\alpha(1+\beta^{2}t)^{1/2}\exp\Big{(}\frac{1}{2}+\frac{1}{2}\beta^ {2}t\Big{)},\ \ t\in[0,T].\]
Proof.: Using \((a+b)^{2}\leq(1+\theta)a^{2}+(1+\theta^{-1})b^{2}\) for \(a,b\geq 0\) and \(\theta>0\), we can write
\[\phi(t)^{2}\leq(1+\theta)\alpha^{2}+\beta^{2}(1+\theta^{-1})\int_{0}^{t}\phi( s)^{2}ds,\ \ t\in[0,T].\]
Therefore, applying Gronwall's inequality we see that
\[\phi(t)^{2}\leq(1+\theta)\alpha^{2}\exp(\beta^{2}(1+\theta^{-1})t).\]
Taking \(\theta=\beta^{2}t\) we obtain
\[\phi(t)^{2}\leq(1+\beta^{2}t)\alpha^{2}\exp(\beta^{2}t+1),\]
which gives the desired estimate.
In the same way one can prove the following discrete analogue by using the discrete version of Gronwall's lemma instead (see [33, Proposition 5]).
**Lemma 2.6**.: _Let \(\alpha,\beta\geq 0\) and \((\varphi_{j})_{j\geq 0}\) be a nonnegative sequence. If_
\[\varphi_{j}\leq\alpha+\beta\left(\sum_{i=0}^{j-1}\varphi_{i}^{2}\right)^{1/2} \ \ \text{for}\ j\geq 0,\]
_then_
\[\varphi_{j}\leq\alpha(1+\beta^{2}j)^{1/2}\exp\left(\frac{1}{2}+\frac{1}{2} \beta^{2}j\right)\ \ \text{for}\ j\geq 0.\]
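For illustration only, the following sketch (assuming Python with NumPy; the parameters \(\alpha\), \(\beta\) and the sequence length are arbitrary) evaluates the bound of Lemma 2.6 along the extremal sequence that satisfies the hypothesis with equality.

```python
import numpy as np

# Quick numerical check of Lemma 2.6 on the extremal sequence that satisfies the
# hypothesis with equality; the parameter values are arbitrary illustrations.
alpha, beta, n = 1.0, 0.5, 50
phi = np.empty(n)
for j in range(n):
    phi[j] = alpha + beta * np.sqrt(np.sum(phi[:j] ** 2))

j = np.arange(n)
bound = alpha * np.sqrt(1 + beta ** 2 * j) * np.exp(0.5 + 0.5 * beta ** 2 * j)
assert np.all(phi <= bound)
print("max ratio phi_j / bound_j:", np.max(phi / bound))
```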
## 3. Convergence Rates for Additive Noise
In this section we present several results on convergence rates for linear equations with additive noise. The reason to start with this case is twofold: higher convergence rates can be proved here, and it allows us to explain the new techniques in a simpler setting, which is helpful for understanding the more involved multiplicative setting of Section 6.
Consider the stochastic evolution equation with additive noise of the form
\[\mathrm{d}U=AU\,\mathrm{d}t+g\,\mathrm{d}W_{H}(t)\text{ on }[0,T],\ U(0)=u_{0}\in L^{p}_{ \mathcal{F}_{0}}(\Omega;X), \tag{3.1}\]
where \(A\) generates a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) on a Hilbert space \(X\) with norm \(\|\cdot\|\), \(W_{H}\) is an \(H\)-cylindrical Brownian motion for some Hilbert space \(H\) and \(p\in[2,\infty)\). For Holder continuous noise \(g\in L^{p}_{\mathcal{P}}(\Omega;C^{\alpha}([0,T];\mathcal{L}_{2}(H,X)))\), \(\alpha\in(0,1]\), mapping into a space \(Y\hookrightarrow X\), we prove rates of convergence for time discretisation schemes. An improvement of the rate is shown for the splitting scheme for quasi-contractive semigroups. The results are illustrated for the linear Schrodinger equation in Subsection 3.3.
The mild solution to (3.1) is uniquely given by
\[U(t)=S(t)u_{0}+\int_{0}^{t}S(t-s)g(s)\,\mathrm{d}W_{H}(s). \tag{3.2}\]
To approximate it, we employ a time discretisation scheme \(R:[0,\infty)\to\mathcal{L}(X)\) with time step \(k>0\) on a uniform grid \(\{t_{j}=jk:\ j=0,\ldots,N_{k}\}\subseteq[0,T]\) with final time \(T=t_{N_{k}}>0\) and \(N_{k}=\frac{T}{k}\in\mathbb{N}\) being the number of time steps. The discrete solution is given by \(U^{0}\coloneqq u_{0}\) and
\[U^{j}\coloneqq R_{k}U^{j-1}+R_{k}g(t_{j-1})\Delta W_{j}=R_{k}^{j}u_{0}+\sum_{ i=0}^{j-1}R_{k}^{j-i}g(t_{i})\Delta W_{i+1},\ j=1,\ldots,N_{k}, \tag{3.3}\]
with Wiener increments \(\Delta W_{j}\coloneqq W_{H}(t_{j})-W_{H}(t_{j-1})\), where we used (2.1).
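For orientation, the sketch below (assuming Python with NumPy) realises the recursion (3.3) with the splitting scheme in a finite-dimensional setting; the diagonal generator, the choice of \(g\), the step sizes and the sample size are illustrative assumptions and are not prescribed by the analysis.

```python
import numpy as np

# Illustrative finite-dimensional realisation of the recursion (3.3) with the
# splitting scheme R_k = S(k): A = diag(-lam_1,...,-lam_d) on X = R^d, H = R^d.
# The eigenvalues lam, the smoothing factors q, the time dependence of g, the
# step sizes and the sample size are all illustrative assumptions.
rng = np.random.default_rng(1)
d, T = 16, 1.0
lam = np.arange(1, d + 1, dtype=float)            # eigenvalues of -A
q = lam ** (-2.0)                                  # mimics g taking values in L_2(H, Y)

def g(t):
    # diagonal, Lipschitz-in-time Hilbert-Schmidt operator g(t)
    return np.diag(q * (1.0 + 0.5 * np.sin(2 * np.pi * t)))

def splitting(k, dW):
    # U^j = S(k) U^{j-1} + S(k) g(t_{j-1}) dW_j, cf. (3.3), with u_0 = 0
    S_k = np.exp(-lam * k)
    U = np.zeros(d)
    for j in range(dW.shape[0]):
        U = S_k * (U + g(j * k) @ dW[j])
    return U

# endpoint error of a coarse run against a fine reference with shared increments
N_fine, ratio, M = 2 ** 10, 16, 100
k_fine = T / N_fine
errs = []
for _ in range(M):
    dW_fine = np.sqrt(k_fine) * rng.standard_normal((N_fine, d))
    dW_coarse = dW_fine.reshape(N_fine // ratio, ratio, d).sum(axis=1)
    errs.append(np.linalg.norm(splitting(T / (N_fine // ratio), dW_coarse)
                               - splitting(k_fine, dW_fine)))
print("mean endpoint error:", np.mean(errs))
```

The error bounds below control the maximum over all grid points in \(L^{p}(\Omega)\); the sketch only samples the final time for brevity.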
### General semigroups
Our first result concerns general \(C_{0}\)-semigroups \(S\). In Subsection 3.2, further improvements are discussed under additional conditions on \(S\). Below we denote the Holder seminorm in \(C^{\alpha}([0,T];\mathcal{L}_{2}(H,X))\) by \([\cdot]_{\alpha}\) for \(\alpha\in(0,1]\).
**Theorem 3.1**.: _Let \(X\) and \(Y\) be Hilbert spaces such that \(Y\hookrightarrow X\). Let \(A\) be the generator of a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) on \(X\) with \(\|S(t)\|\leq Me^{\lambda t}\) for some \(M\geq 1\) and \(\lambda\geq 0\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(X\). Assume \(R\) approximates \(S\) to order \(\alpha\in(0,1]\) on \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously if \(\alpha\in(0,1)\) or \(Y\hookrightarrow D(A)\) continuously if \(\alpha=1\). Let \(p\in[2,\infty)\), \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\), and \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))\) as well as \(g\in L^{p}_{\mathcal{P}}(\Omega;C^{\alpha}([0,T];\mathcal{L}_{2}(H,X)))\). Denote by \(U\) the mild solution of (3.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (3.3). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\right\|_{p}\leq\left(C_{1}+C _{2}\log\left(\frac{T}{k}\right)\right)k^{\alpha}\]
_with constants \(C_{1}\coloneqq C_{\alpha}\|u_{0}\|_{L^{p}(\Omega;Y)}\) and_
\[C_{2}\coloneqq\frac{K_{p}\sqrt{T}}{\sqrt{2\alpha+1}}\left(M\mathrm{e}^{\lambda T }\left\|[g]_{\alpha}\right\|_{p}+\left(2M\mathrm{e}^{\lambda T}C_{Y}+C_{\alpha }\right)\left\|g\right\|_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))} \right),\]
_where \(C_{\alpha}\) is as in Definition 2.3, \(K_{p}=10\mathrm{e}\sqrt{p}\) and \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\)._
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\alpha\) up to a logarithmic correction factor as \(k\to 0\)._
Proof.: Define \(S^{k}(t)\coloneqq R_{k}^{j}\) for \(t\in(t_{j-1},t_{j}]\) and let \(\lfloor t\rfloor\coloneqq\max\{t_{j}:\,t_{j}\leq t\}\) for \(t\in[0,T]\). Then the discrete solutions are given by the integral representation
\[U^{j}=R_{k}^{j}u_{0}+\int_{0}^{t_{j}}S^{k}(t_{j}-s)g(\lfloor s\rfloor)\, \mathrm{d}W_{H}(s).\]
Combining this representation with the mild solution formula (3.2), the error can be bounded by
\[E \coloneqq\Big{\|}\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\Big{\|}_{p }\leq\Big{\|}\max_{0\leq j\leq N_{k}}\big{\|}[S(t_{j})-R_{k}^{j}]u_{0}\big{\|} \Big{\|}_{p}\] \[\quad+\Big{\|}\max_{0\leq j\leq N_{k}}\Big{\|}\int_{0}^{t_{j}}S(t_ {j}-s)[g(s)-g(\lfloor s\rfloor)]\,\mathrm{d}W_{H}(s)\big{\|}\Big{\|}_{p}\] \[\quad+\Big{\|}\max_{0\leq j\leq N_{k}}\Big{\|}\int_{0}^{t_{j}}[S(t _{j}-\lfloor s\rfloor)-S(t_{j}-s)]g(\lfloor s\rfloor)\,\mathrm{d}W_{H}(s) \big{\|}\Big{\|}_{p}\] \[\quad+\Big{\|}\max_{0\leq j\leq N_{k}}\Big{\|}\int_{0}^{t_{j}}[S( t_{j}-\lfloor s\rfloor)-S^{k}(t_{j}-s)]g(\lfloor s\rfloor)\,\mathrm{d}W_{H}(s) \big{\|}\Big{\|}_{p}\] \[\quad=:E_{1}+E_{2}+E_{3}+E_{4}. \tag{3.4}\]
We proceed to estimate all four terms individually. Since \(R\) approximates \(S\) to order \(\alpha\) on \(Y\),
\[E_{1}\leq C_{\alpha}\|u_{0}\|_{L^{p}(\Omega;Y)}k^{\alpha}. \tag{3.5}\]
For the second term, we note that for \(s\in[t_{\ell},t_{\ell+1})\)
\[\Big{\|}\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)S(t_{j}-s) [g(s)-g(t_{i})]\Big{\|}_{\mathcal{L}_{2}(H,X)} \leq\|S(t_{j}-s)\|_{\mathcal{L}(X)}\|g(s)-g(t_{\ell})\|_{\mathcal{ L}_{2}(H,X)}\] \[\leq M\mathrm{e}^{\lambda T}[g]_{C^{\alpha}}(s-t_{\ell})^{\alpha}.\]
Proposition 2.2 with \(\Phi_{s}^{(j)}=\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)S(t_{j}-s)[g(s)- g(t_{i})]\) then yields
\[E_{2} =\Big{\|}\max_{0\leq j\leq N_{k}}\Big{\|}\int_{0}^{t_{j}}\sum_{i=0 }^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)S(t_{j}-s)[g(s)-g(t_{i})]\,\mathrm{d}W_{ H}(s)\Big{\|}\Big{\|}_{p}\] \[\leq K_{p}\log(N_{k})\Big{\|}\Big{(}\int_{0}^{T}\max_{1\leq j\leq N _{k}}\Big{\|}\Phi_{s}^{(j)}\Big{\|}_{\mathcal{L}_{2}(H,X)}^{2}\,\mathrm{d}s \Big{)}^{1/2}\Big{\|}_{p}\] \[\leq K_{p}M\mathrm{e}^{\lambda T}\log(N_{k})\Big{\|}\Big{(}\sum_ {l=0}^{N_{k}-1}\int_{t_{\ell}}^{t_{\ell+1}}[g]_{\alpha}^{2}(s-t_{\ell})^{2 \alpha}\,\mathrm{d}s\Big{)}^{1/2}\Big{\|}_{p}\] \[\leq K_{p}M\mathrm{e}^{\lambda T}\frac{1}{\sqrt{2\alpha+1}}\log( N_{k})k^{\alpha+1/2}\Big{\|}\Big{(}\sum_{l=0}^{N_{k}-1}[g]_{\alpha}^{2}\Big{)}^{1/2} \Big{\|}_{p}\] \[=K_{p}M\mathrm{e}^{\lambda T}\Big{\|}[g]_{\alpha}\Big{\|}_{p} \frac{\sqrt{T}}{\sqrt{2\alpha+1}}\log(N_{k})k^{\alpha}, \tag{3.6}\]
where we have used Holder continuity of \(g\). Analogously, with \(\Phi_{s}^{(j)}=\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)[S(t_{j}-t_{i}) -S(t_{j}-s)]g(t_{i})\) for \(E_{3}\) we obtain
\[E_{3}\leq 2K_{p}M\mathrm{e}^{\lambda T}C_{Y}\frac{\sqrt{T}}{\sqrt{2\alpha+1}} \Big{\|}g\Big{\|}_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\log(N_{ k})k^{\alpha} \tag{3.7}\]
using pathwise boundedness of \(g\) and noting that by (2.4)
\[\Big{\|}[S(t_{j}-t_{\ell})-S(t_{j}-s)]g(t_{\ell})\Big{\|}_{\mathcal{L}_{2}(H,X) }\leq 2M\mathrm{e}^{\lambda T}C_{Y}(s-t_{\ell})^{\alpha}\|g(t_{\ell})\|_{ \mathcal{L}_{2}(H,Y)}.\]
Likewise, with \(\Phi_{s}^{(j)}=\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)[S(t_{j}-t_{i})- R_{k}^{j-i}]g(t_{i})\) we obtain
\[E_{4}\leq K_{p}C_{\alpha}\frac{\sqrt{T}}{\sqrt{2\alpha+1}}\Big{\|}g\Big{\|}_{ L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\log(N_{k})k^{\alpha}, \tag{3.8}\]
since \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). The error bound follows from inserting (3.5), (3.6), (3.7), and (3.8) into (3.4).
For the _splitting scheme_, also known as the _exponential Euler method_, less regularity of the initial value suffices for the same convergence behaviour. The splitting scheme is obtained by setting \(R_{k}=S(k)\) in (3.3), i.e. we would solve exactly in the absence of the noise \(g\).
**Corollary 3.2** (Splitting scheme).: _Let \(X\) and \(Y\) be Hilbert spaces such that \(Y\hookrightarrow X\). Let \(A\) be the generator of a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) on \(X\) with \(\|S(t)\|\leq Me^{\lambda t}\) for some \(M\geq 1\) and \(\lambda\geq 0\). Assume that \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))\cap L^{p}_{\mathcal{P}}(\Omega;C^{\alpha}([0,T];\mathcal{L}_{2}(H,X)))\) for some \(\alpha\in(0,1]\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously if \(\alpha\in(0,1)\) or \(Y\hookrightarrow D(A)\) continuously if \(\alpha=1\). Let \(p\in[2,\infty)\) and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\). Denote by \(U\) the mild solution of (3.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (3.3) obtained with the splitting scheme \(R\coloneqq S\). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\left\|U(t_{j})-U^{j}\right\|\right\|_{p}\leq C \log\left(\frac{T}{k}\right)k^{\alpha}\]
_with constant_
\[C\coloneqq K_{p}Me^{\lambda T}\frac{\sqrt{T}}{\sqrt{2\alpha+1}}\left(\left\|[g ]_{\alpha}\right\|_{p}+2C_{Y}\left\|g\right\|_{L^{p}(\Omega;L^{\infty}(0,T; \mathcal{L}_{2}(H,Y)))}\right),\]
_where \(K_{p}=10\mathrm{e}\sqrt{p}\) and \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\)._
_In particular, if \(Y\hookrightarrow D(A)\) and \(g\) is Lipschitz continuous as a map to \(\mathcal{L}_{2}(H,X)\), the approximations \((U^{j})_{j}\) converge at rate \(1\) up to a logarithmic correction factor as \(k\to 0\)._
Proof.: We split the error as in (3.4). For the splitting scheme, the terms \(E_{1}\) and \(E_{4}\) in (3.4) vanish due to \(S(t_{j})-R_{k}^{j}=S(jk)-S(k)^{j}=S(jk)-S(jk)=0\) and, likewise, \(S(t_{j}-t_{i})-R_{k}^{j-i}=0\). The error bound follows from inserting the bounds (3.6) and (3.7) of the remaining terms into (3.4).
### Quasi-contractive Semigroups
Considering quasi-contractive semigroups, that is, \(C_{0}\)-semigroups \((S(t))_{t\geq 0}\) for which \(\|S(t)\|\leq\mathrm{e}^{\lambda t}\) for some \(\lambda\geq 0\) for all \(t\geq 0\), allows us to eliminate the logarithmic factor for the splitting scheme. The principle that lies at the heart of our proof is the maximal inequality in Theorem 2.1, which is used to estimate the stochastic convolutions in the error term. Depending on the spatial regularity of the noise \(g\), convergence rate \(\alpha\in(0,1]\) is attained without logarithmic correction factor.
**Theorem 3.3** (Splitting scheme, quasi-contractive case).: _Adopt the notation and assumptions of Corollary 3.2. In addition, assume that \(\|S(t)\|\leq\mathrm{e}^{\lambda t}\) for some \(\lambda\geq 0\) for all \(t\in[0,T]\). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\left\|U(t_{j})-U^{j}\right\|\right\|_{p}\leq Ck ^{\alpha}\]
_with constant_
\[C\coloneqq\frac{B_{p}\sqrt{T}}{\sqrt{2\alpha+1}}\left(\mathrm{e}^{\lambda T} \left\|[g]_{C^{\alpha}}\right\|_{p}+2C_{Y}\mathrm{e}^{2\lambda T}\left\|g \right\|_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\right),\]
_where \(B_{p}\) is the constant from Theorem 2.1._
Proof.: We bound the error as in (3.4), where the first and fourth term vanish as discussed in the proof of Corollary 3.2. We proceed to bound the remaining terms using the maximal inequality from Theorem 2.1 instead of Proposition 2.2 to obtain
\[E_{2} \leq\left\|\sup_{t\in[0,T]}\left\|\int_{0}^{t}S(t-s)[g(s)-g(\lfloor s \rfloor)]\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\leq B_{p}\mathrm{e}^{\lambda T}\left\|\left(\int_{0}^{T}\left\|g (s)-g(\lfloor s\rfloor)\right\|_{\mathcal{L}_{2}(H,X)}^{2}\,\mathrm{d}s \right)^{1/2}\right\|_{p}\] \[\leq B_{p}\mathrm{e}^{\lambda T}\left\|\left(\sum_{i=0}^{N_{k}-1} \int_{t_{i}}^{t_{i+1}}[g]_{C^{\alpha}}^{2}(s-t_{i})^{2\alpha}\,\mathrm{d}s \right)^{1/2}\right\|_{p} \tag{3.9}\] \[\leq\frac{B_{p}\mathrm{e}^{\lambda T}\sqrt{T}}{\sqrt{2\alpha+1}} \left\|[g]_{C^{\alpha}}\right\|_{p}k^{\alpha}\]
by Holder continuity of \(g\). Analogously, for \(E_{3}\) we obtain by the semigroup bound (2.4)
\[E_{3} \leq\left\|\sup_{t\in[0,T]}\left\|\int_{0}^{t}S(t-s)[S(s-\lfloor s \rfloor)-I]g(\lfloor s\rfloor)\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\leq B_{p}\mathrm{e}^{\lambda T}\left\|\left(\int_{0}^{T}\left\| [S(s-\lfloor s\rfloor)-I]g(\lfloor s\rfloor)\right\|_{\mathcal{L}_{2}(H,X)}^{ 2}\,\mathrm{d}s\right)^{1/2}\right\|_{p}\] \[\leq 2B_{p}\mathrm{e}^{2\lambda T}C_{Y}\left\|\left(\sum_{i=0}^{ N_{k}-1}\int_{t_{i}}^{t_{i+1}}(s-t_{i})^{2\alpha}\left\|g(t_{i})\right\|_{ \mathcal{L}_{2}(H,Y)}^{2}\,\mathrm{d}s\right)^{1/2}\right\|_{p} \tag{3.10}\] \[\leq\frac{2B_{p}\mathrm{e}^{2\lambda T}C_{Y}}{\sqrt{2\alpha+1}} \left\|g\right\|_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\sqrt{T} k^{\alpha}.\]
The final error bound follows from adding (3.9) and (3.10).
In particular, convergence rate \(1\) is attained without logarithmic correction factor for spatially sufficiently regular noise \(g\). General, possibly irregular initial values \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\) are still admissible as the following corollary shows.
**Corollary 3.4**.: _Let \(X\) be a Hilbert space and let \(A\) be the generator of a quasi-contractive \(C_{0}\)-semigroup on \(X\). Assume that \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,D(A))))\) is pathwise Lipschitz continuous as a map to \(\mathcal{L}_{2}(H,X)\). Let \(p\in[2,\infty)\) and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\). Denote by \(U\) the mild solution of (3.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (3.3) obtained with the splitting scheme \(R:=S\). Then there is a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\left\|U(t_{j})-U^{j}\right\|\right\|_{p} \leq Ck,\]
_i.e. the approximations \((U^{j})_{j}\) converge at rate \(1\) as \(k\to 0\)._
### Application to the linear Schrodinger equation with additive noise
In this subsection, we study convergence rates of time discretisations of the linear stochastic Schrodinger equation with a potential
\[\begin{cases}\,\mathrm{d}u=-\mathrm{i}(\Delta+V)u\;\mathrm{d}t-\mathrm{i}\; \mathrm{d}W\ \ \text{on}\ [0,T],\\ u(0)=u_{0}\end{cases} \tag{3.11}\]
in \(\mathbb{R}^{d}\) for \(d\in\mathbb{N}\), where \(\{W(t)\}_{t\geq 0}\) is a square integrable \(\mathbb{K}\)-valued \(Q\)-Wiener process, \(\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}\), with respect to a normal filtration \((\mathscr{F}_{t})_{t\geq 0}\), \(V\) is a \(\mathbb{K}\)-valued potential and \(u_{0}\) is an \(\mathscr{F}_{0}\)-measurable random variable. Next we introduce conditions on the dimension and the regularity of \(V\).
Let \(\sigma\geq 0\) and, for this subsection only, write \(L^{2}=L^{2}(\mathbb{R}^{d})\) and \(H^{\sigma}=H^{\sigma}(\mathbb{R}^{d})\). We will also be using the Bessel potential spaces \(H^{\sigma,q}(\mathbb{R}^{d})\), which coincide with the classical Sobolev spaces \(W^{\sigma,q}(\mathbb{R}^{d})\) if \(\sigma\in\mathbb{N}\) and \(q\in(1,\infty)\). For details on these spaces we refer to [9, 58].
To ensure well-posedness of (3.11), we assume one of the following mutually exclusive conditions holds.
**Assumption 3.5**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\) and \(V\in L^{2}\) such that_
1. \(\sigma>\frac{d}{2}\) _and_ \(V\in H^{\sigma}\)_, or_
2. \(\sigma=0\) _and_ \(V\in H^{\beta}\) _for some_ \(\beta>\frac{d}{2}\)_, or_
3. \(\sigma\in(0,1)\)_,_ \(d>2\sigma\)_, and_ \(V\in H^{\beta}\) _for some_ \(\beta>\frac{d}{2}\)_, or_
4. \(\sigma=1\)_,_ \(d\geq 2\)_, and_ \(V\in H^{\beta}\) _for some_ \(\beta>\frac{d}{2}\)_._
In particular, this assumption implies that \(Vu\in H^{\sigma}\) for any \(u\in H^{\sigma}\) and \(\|Vu\|_{H^{\sigma}}\leq C_{V}\|u\|_{H^{\sigma}}\) for some constant \(C_{V}\geq 0\) depending on \(V\), which follows from the algebra property of \(H^{\sigma}\) in case (i). Note that while (i) is taken verbatim from [2, Prop. 4.1], cases (ii) and (iv) assume less
regularity in our assumption and case (iii) is new. In the second case (ii), Holder's inequality and the Sobolev embedding \(H^{\beta}\hookrightarrow L^{\infty}\) for \(\beta>\frac{d}{2}\) yield
\[\|Vu\|_{L^{2}}\leq\|V\|_{L^{\infty}}\|u\|_{L^{2}}\lesssim\|V\|_{H^{\beta}}\|u\|_ {L^{2}}.\]
see also [2, Prop. 4.1]. The case (iii) is covered by Lemma 3.6 below. Lastly, \(\|Vu\|_{H^{1}}\lesssim\|u\|_{H^{1}}\) in case (iv) follows from Holder's inequality, applied once with \(p=2\beta\) and \(q=\frac{4\beta}{2\beta-2}\), \(\beta>1\), and the embeddings \(H^{\beta}\hookrightarrow L^{\infty}\), \(H^{1}\hookrightarrow L^{q}\), as well as \(H^{\beta}\hookrightarrow H^{1,2\beta}\) via
\[\|Vu\|_{H^{1}}^{2} \lesssim\|Vu\|_{L^{2}}^{2}+\|Vu^{\prime}\|_{L^{2}}^{2}+\|V^{ \prime}u\|_{L^{2}}^{2}\] \[\leq\|V\|_{L^{\infty}}^{2}(\|u\|_{L^{2}}^{2}+\|u^{\prime}\|_{L^{ 2}}^{2})+\|V^{\prime}\|_{L^{2\beta}}^{2}\|u\|_{L^{q}}^{2}\] \[\lesssim(\|V\|_{H^{\beta}}^{2}+\|V\|_{H^{1,2\beta}}^{2})\|u\|_{H^ {1}}^{2}\lesssim\|V\|_{H^{\beta}}^{2}\|u\|_{H^{1}}^{2}.\]
Hence, multiplication by \(V\) is a bounded operator on \(H^{\sigma}\) if Assumption 3.5 holds.
**Lemma 3.6**.: _Let \(\sigma\in(0,1)\), \(d\in\mathbb{N}\) such that \(d>2\sigma\), and \(V\in H^{\beta}(\mathbb{R}^{d})\) for some \(\beta>\frac{d}{2}\). Then \(\|Vu\|_{H^{\sigma}}\leq C_{V}\|u\|_{H^{\sigma}}\) for some constant \(C_{V}\geq 0\) for all \(u\in H^{\sigma}(\mathbb{R}^{d})\)._
Proof.: Let \(q_{1}=\frac{2d}{d-2\sigma}\) and \(q_{2}=\frac{d}{\sigma}\). Then \(\frac{1}{q_{1}}+\frac{1}{q_{2}}=\frac{1}{2}\) and \(q_{1}<\infty\) because \(d>2\sigma\). By classic Sobolev and Bessel potential space embeddings [10, Thm. 6.5.1], \(H^{d/2}\hookrightarrow H^{\sigma,q_{2}}\), \(H^{\sigma}\hookrightarrow L^{q_{1}}\), and \(H^{\beta}\hookrightarrow C_{b}(\mathbb{R}^{d})\hookrightarrow L^{\infty}\). Thus, an application of the product estimate [57, Prop. 2.1.1] yields
\[\|Vu\|_{H^{\sigma}}\lesssim\|V\|_{H^{\sigma,q_{2}}}\|u\|_{L^{q_{1}}}+\|V\|_{L^ {\infty}}\|u\|_{H^{\sigma}}\lesssim(\|V\|_{H^{d/2}}+\|V\|_{H^{\beta}})\|u\|_{ H^{\sigma}}\lesssim\|V\|_{H^{\beta}}\|u\|_{H^{\sigma}}.\qed\]
Since \(-\mathrm{i}\Delta\) generates a contractive semigroup [2, Lemma 2.1], its bounded perturbation \(-\mathrm{i}(\Delta+V)\) generates a quasi-contractive semigroup [29, Thm. III.1.3]. Thus, we are in the setting of Subsection 3.2. Global existence and uniqueness of solutions \(U\in L^{p}(\Omega;C([0,T];H^{\sigma}))\) to (3.11) in \(H^{\sigma}\) are guaranteed provided that \(p\in[2,\infty)\), \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma})\), \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\sigma})\) and Assumption 3.5 holds.
Therefore, the Schrodinger equation (3.11) can be rewritten in the form of (3.1) on \(X=H^{\sigma}\) with an \(H\)-cylindrical Brownian motion \(W_{H}\) for \(H=L^{2}\).
For the splitting scheme, we recover the error bound from [2, Thm. 4.3] showing convergence rate \(1\) in the case of sufficiently regular \(Q^{1/2}\) under less regularity assumptions on \(V\). Moreover, under weaker regularity assumptions on \(Q^{1/2}\) and \(V\), we additionally provide an error bound for fractional convergence rates \(\alpha\in(0,1]\).
**Theorem 3.7**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\) satisfy Assumption 3.5, and let \(p\in[2,\infty)\). Assume that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma})\) and \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\sigma+2\alpha})\) for some \(\alpha\in(0,1]\). Denote by \(U\) the mild solution of the linear stochastic Schrodinger equation with additive noise (3.11) and by \((U^{j})_{j=0,\dots,N_{k}}\) the temporal approximations as defined in (3.3) obtained with the splitting scheme \(R\coloneqq S\). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p}\leq C \|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\sigma+2\alpha})}k^{\alpha}.\]
Proof.: As discussed above, \(A=-\mathrm{i}(\Delta+V)\) generates a quasi-contractive semigroup on \(H^{\sigma}\). Furthermore, setting \(g=-\mathrm{i}Q^{1/2}\) allows us to rewrite (3.11) in the form of a stochastic evolution equation (3.1). Thus, Theorem 3.3 is applicable with \(X=H^{\sigma}\) and \(H=L^{2}\). It remains to check that \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))\) for some \(Y\hookrightarrow D_{A}(\alpha,\infty)\) and that \(g\in L^{p}_{\mathcal{P}}(\Omega;C^{\alpha}([0,T];\mathcal{L}_{2}(H,X)))\). The latter holds for any \(\alpha\in(0,1]\) due to \(g\) being constant in time. Taking \(Y=H^{\sigma+2\alpha}=(H^{\sigma},H^{\sigma+2})_{\alpha,2}=(H^{\sigma},D(A))_{ \alpha,2}\hookrightarrow(H^{\sigma},D(A))_{\alpha,\infty}\), the first condition is satisfied as well. Theorem 3.3 yields the desired error bound.
Furthermore, Theorem 3.1 enables us to extend [2, Thm. 4.3] to general discretisation schemes \(R\) other than the splitting scheme at the price of an additional logarithmic factor.
**Theorem 3.8**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\) satisfy Assumption 3.5, and let \(p\in[2,\infty)\). Assume that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma})\) and \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\sigma+2\alpha})\) for some \(\alpha\in(0,1]\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(H^{\sigma}\) and \(H^{\sigma+2\alpha}\). Assume \(R\) approximates \(S\) to order
\(\alpha\) on \(H^{\sigma+2\alpha}\). Denote by \(U\) the mild solution of the linear stochastic Schrodinger equation with additive noise (3.11) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (3.3). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p}\leq C \|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\sigma+2\alpha})}\log\left(\frac{T}{k} \right)k^{\alpha}.\]
Note that in the absence of a potential, the same convergence rates are obtained without any limitation on the dimension \(d\in\mathbb{N}\) in terms of the parameter \(\sigma\).
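To make the time stepping concrete, the following sketch (assuming Python with NumPy and SciPy) runs the splitting scheme for (3.11) after an additional spatial discretisation on a periodic one-dimensional grid; the discrete Laplacian, the potential, the Fourier-damping surrogate for \(Q^{1/2}\) and all grid and noise parameters are illustrative assumptions, and the analysis above concerns the time discretisation only.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative run of the splitting scheme for (3.11) after an additional spatial
# discretisation: periodic grid in d = 1, discrete Laplacian, smooth potential V,
# and a Fourier-damping surrogate for Q^{1/2}.  None of these choices come from
# the paper; they only make the time stepping concrete and runnable.
rng = np.random.default_rng(2)
n, T, N = 64, 0.5, 200                      # grid points, final time, time steps
k = T / N
x = 2 * np.pi * np.arange(n) / n
dx = x[1] - x[0]

lap = (np.diag(-2.0 * np.ones(n)) + np.eye(n, k=1) + np.eye(n, k=-1)
       + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx ** 2
V = np.diag(np.cos(x))
S_k = expm(-1j * k * (lap + V))             # R_k = S(k): the splitting scheme

freq = 2 * np.pi * np.fft.fftfreq(n, d=dx)
damp = (1.0 + freq ** 2) ** (-1.5)          # damps high modes, mimics Q^{1/2}
def Qhalf(xi):
    return np.fft.ifft(damp * np.fft.fft(xi)).real

u = np.exp(-2.0 * (x - np.pi) ** 2).astype(complex)   # initial datum u_0
for j in range(N):
    dW = np.sqrt(k) * Qhalf(rng.standard_normal(n))
    u = S_k @ (u - 1j * dW)                 # U^j = S(k)(U^{j-1} + g(t_{j-1}) dW_j), g = -i Q^{1/2}
print("discrete L2 norm at time T:", np.sqrt(dx) * np.linalg.norm(u))
```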
## 4. Well-posedness
We consider the stochastic evolution equation with multiplicative noise
\[\begin{cases}\,\mathrm{d}U=(AU+F(t,U))\,\mathrm{d}t+G(t,U)\,\mathrm{d}W_{H}\ \text{ on }[0,T],\\ U(0)=u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\end{cases} \tag{4.1}\]
for \(1\leq p<\infty\) and \(A\) generating a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) of contractions on \(X\). In this section, we present progressive measurability, linear growth and global Lipschitz conditions on \(F\) and \(G\) ensuring well-posedness of the above equation.
**Assumption 4.1**.: _Let \(F:\Omega\times[0,T]\times X\to X,F(\omega,t,x)=\tilde{F}(\omega,t,x)+f(\omega,t)\) and \(G:\Omega\times[0,T]\times X\to\mathcal{L}_{2}(H,X),G(\omega,t,x)=\tilde{G}( \omega,t,x)+g(\omega,t)\) be strongly \(\mathcal{P}\otimes\mathcal{B}(X)\)-measurable, and such that \(\tilde{F}(\cdot,\cdot,0)=0\) and \(\tilde{G}(\cdot,\cdot,0)=0\), and suppose_
1. (global Lipschitz continuity on \(X\)) _there exist constants_ \(C_{F,X},C_{G,X}\geq 0\) _such that for all_ \(\omega\in\Omega,t\in[0,T]\) _and_ \(x,y\in X\)_, it holds that_ \[\|\tilde{F}(\omega,t,x)-\tilde{F}(\omega,t,y)\|\leq C_{F,X}\|x-y\|,\ \|\tilde{G}(\omega,t,x)-\tilde{G}(\omega,t,y)\|\leq C_{G,X}\|x-y\|,\]
2. (integrability) \(f\in L^{p}_{\mathcal{P}}(\Omega;L^{1}(0,T;X))\) _and_ \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,X)))\)_._
Note that Assumption 4.1 implies linear growth of \(\tilde{F}\) and \(\tilde{G}\):
\[\|\tilde{F}(\omega,t,x)\|\leq C_{F,X}(1+\|x\|),\ \|\tilde{G}(\omega,t,x)\|_{ \mathcal{L}_{2}(H,X)}\leq C_{G,X}(1+\|x\|), \tag{4.2}\]
where the constant \(1\) can be left out, but is included for later use in Theorem 4.4.
Well-posedness shall be understood in the sense of existence and uniqueness of mild solutions to (4.1).
**Definition 4.2**.: _A process \(U\in L^{0}_{\mathcal{P}}(\Omega;C([0,T];X))\) is called a mild solution to (4.1) if a.s. for all \(t\in[0,T]\)_
\[U(t)=S(t)u_{0}+\int_{0}^{t}S(t-s)F(s,U(s))\,\mathrm{d}s+\int_{0}^{t}S(t-s)G(s, U(s))\,\mathrm{d}W_{H}(s).\]
The following well-posedness result is more or less standard.
**Theorem 4.3**.: _Suppose that Assumption 4.1 holds. Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on \(X\). Let \(p\in[2,\infty)\) and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\). Then (4.1) has a unique mild solution \(U\in L^{p}(\Omega;C([0,T];X))\). Moreover,_
\[\|U\|_{L^{p}(\Omega;C([0,T];X))}\leq C^{X}_{bdd}\Big{(}1+\|u_{0}\|_{L^{p}( \Omega;X)}+\|f\|_{L^{p}(\Omega;L^{1}(0,T;X))}+B_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,X)))}\Big{)},\]

_where \(C^{X}_{bdd}\coloneqq(1+C^{2}T)^{1/2}\mathrm{e}^{(1+C^{2}T)/2}\) with \(C\coloneqq C_{F,X}T^{1/2}+B_{p}C_{G,X}\), and \(B_{p}\) is the constant from Theorem 2.1._
Proof.: First, local existence and uniqueness of solutions are to be proven. Second, local solutions are concatenated to obtain global existence and uniqueness. We only sketch the steps. Let \(\delta\in(0,T]\). Define the spaces \(Z_{\delta}\coloneqq L^{p}(\Omega;C([0,\delta];X))\), \(Z\coloneqq Z_{T}\), \(Z^{\mathcal{P}}_{\delta}\) as the subset of all adapted \(v\in Z_{\delta}\), and \(Z^{\mathcal{P}}\coloneqq Z^{\mathcal{P}}_{T}\). For \(v\in Z^{\mathcal{P}}_{\delta}\), we define the fixed point functional
\[\Gamma v(t)\coloneqq S(t)u_{0}+\int_{0}^{t}S(t-s)F(s,v(s))\,\mathrm{d}s+\int_{0 }^{t}S(t-s)G(s,v(s))\,\mathrm{d}W_{H}(s). \tag{4.3}\]
The problem of finding local mild solutions of (4.1) then reduces to finding fixed points \(v\in Z^{\mathcal{P}}_{\delta}\) of \(\Gamma\). The contraction mapping theorem yields such unique fixed points provided that \(\Gamma\) maps \(Z^{\mathcal{P}}\), and thus \(Z^{\mathcal{P}}_{\delta}\), into itself and is a contraction there. That is, one verifies _(i)_ continuity of paths of \(\Gamma v\) and maximal estimates for \(v\in Z^{\mathcal{P}}_{\delta}\) (see Theorem 2.1), _(ii)_ adaptedness of \(\Gamma v\), and _(iii)_ that \(\Gamma\) is a (strict) contraction on \(Z^{\mathcal{P}}_{\delta}\) for \(\delta\) sufficiently small. Lastly, we consider the evolution equation on \([\delta,2\delta]\) with initial value \(U(\delta)\) to extend the solution to larger time intervals.
It remains to prove the a priori estimate for the mild solution \(U\). Let \(r\in[0,T]\). Let \(\psi(r)=1+\left\|\sup_{t\in[0,r]}\|U(t)\|\right\|_{p}\). From the triangle inequality, Theorem 2.1 and (4.2) we see that
\[\psi(r) \leq 1+\left\|u_{0}\right\|_{L^{p}(\Omega;X)}+C_{F,X}\left\|\int_ {0}^{r}1+\|U(s)\|\,\mathrm{d}s\right\|_{p}+\left\|f\right\|_{L^{p}(\Omega;L^{ 1}(0,r;X))}\] \[\qquad+B_{p}C_{G,X}\left\|\left(\int_{0}^{r}(1+\|U(s)\|)^{2}\, \mathrm{d}s\right)^{1/2}\right\|_{p}+B_{p}\left\|g\right\|_{L^{p}(\Omega;L^{2}(0,r;\mathcal{L}_{2}(H,X)))}\] \[\leq c_{u_{0},f,g}+C_{F,X}\int_{0}^{r}\psi(s)\,\mathrm{d}s+B_{p}C _{G,X}\left(\int_{0}^{r}\psi(s)^{2}\,\mathrm{d}s\right)^{1/2}\] \[\leq c_{u_{0},f,g}+C\left(\int_{0}^{r}\psi(s)^{2}\,\mathrm{d}s \right)^{1/2},\]
where \(c_{u_{0},f,g}=1+\|u_{0}\|_{L^{p}(\Omega;X)}+\|f\|_{L^{p}(\Omega;L^{1}(0,T;X)) }+B_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,X)))}\) and \(C=C_{F,X}T^{1/2}+B_{p}C_{G,X}\). Here we used Minkowski's inequality to pull in the \(L^{p}(\Omega)\) and \(L^{p/2}(\Omega)\) norms, and the Cauchy-Schwarz inequality in time in the last step. Lastly, the version of Gronwall's inequality from Lemma 2.5 yields the desired result
\[\psi(T)\leq c_{u_{0},f,g}(1+C^{2}T)^{1/2}e^{(1+C^{2}T)/2}.\qed\]
Lastly, we present a well-posedness result on subspaces \(Y\hookrightarrow X\) which does not require Lipschitz continuity of \(\tilde{F},\tilde{G}\) on \(Y\) but merely linear growth.
**Theorem 4.4**.: _Suppose that Assumption 4.1 holds. Let \(Y\hookrightarrow X\) be a Hilbert space and \(A\) the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Let \(p\in[2,\infty)\) and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Additionally, suppose that \(f\in L^{p}_{\mathcal{P}}(\Omega;L^{1}(0,T;Y))\), \(g\in L^{p}_{\mathcal{P}}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,Y)))\), \(F:\Omega\times[0,T]\times Y\to Y\), \(G:\Omega\times[0,T]\times Y\to\mathcal{L}_{2}(H,Y)\) are strongly \(\mathcal{P}\otimes\mathcal{B}(Y)\)-measurable, and there are \(L_{F,Y},L_{G,Y}\geq 0\) such that for all \(\omega\in\Omega\), \(t\in[0,T]\), and \(x\in Y\),_
\[\|\tilde{F}(\omega,t,x)\|_{Y}\leq L_{F,Y}(1+\|x\|_{Y}),\ \|\tilde{G}(\omega,t,x)\| _{\mathcal{L}_{2}(H,Y)}\leq L_{G,Y}(1+\|x\|_{Y}).\]
_Under these conditions the mild solution \(U\in L^{p}(\Omega;C([0,T];X))\) to (4.1) is in \(L^{p}(\Omega;C([0,T];Y))\) and_
\[\|U\|_{L^{p}(\Omega;C([0,T];Y))}\leq C^{Y}_{bdd}\Big{(}1+\|u_{0}\|_{L^{p}( \Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{1}(0,T;Y))}+B_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,Y)))}\Big{)},\]
_where \(C^{Y}_{bdd}\coloneqq(1+C^{2}T)^{1/2}\mathrm{e}^{(1+C^{2}T)/2}\) with \(C\coloneqq L_{F,Y}T^{1/2}+B_{p}L_{G,Y}\), and \(B_{p}\) is the constant from Theorem 2.1._
The constant \(C\) appears exponentially in the above. In the special case \(p=2\), \(L_{F,Y}=L_{G,Y}=T=1\), this leads to \(C^{Y}_{\mathrm{bdd}}\leq\sqrt{10}e^{5}\leq 470\).
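For reference, a short check of the stated numerical value (assuming Python with NumPy):

```python
import numpy as np
# With p = 2 (so B_2 = 2) and L_{F,Y} = L_{G,Y} = T = 1 one has C = 1 + 2 = 3,
# hence C_bdd = sqrt(1 + C^2) * exp((1 + C^2) / 2) ~ 469.3 <= 470.
C = 1.0 + 2.0 * 1.0
print(np.sqrt(1.0 + C ** 2) * np.exp((1.0 + C ** 2) / 2.0))
```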
Proof.: Recall that by Banach's fixed point theorem for \(\delta\leq T_{0}\), where \(T_{0}\in(0,1]\) only depends on \(p\), \(C_{F,X}\), \(C_{G,X}\) and \(X\), one has \(U=\lim_{n\to\infty}U_{n}\) in \(L^{p}(\Omega;C([0,\delta];X))\), where \(U_{0}=u_{0}\) and \(U_{n+1}=\Gamma(U_{n})\) with \(\Gamma\) as defined in (4.3). Since \(F\) and \(G\) map \(Y\) into \(Y\), we can also consider \(\Gamma\) as a mapping on \(Z^{2}\coloneqq L^{p}_{\mathcal{P}}(\Omega;L^{2}(0,\delta;Y))\) to eventually show that \(U\) is in \(L^{p}_{\mathcal{P}}(\Omega;C([0,\delta];Y))\subseteq Z^{2}\). Note that for \(U\in Z^{2}\), \(F(\cdot,U)\) and \(G(\cdot,U)\) are progressively measurable as \(Y\) and \(\mathcal{L}_{2}(H,Y)\)-valued mappings by [35, Theorem 1.1.6]. Moreover, we claim that for all \(v\in Z^{2}\),
\[\|\Gamma(v)\|_{L^{p}(\Omega;C([0,\delta];Y))} \leq\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{1}(0, \delta;Y))}\] \[\quad+B_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y )))}+\big{(}L_{F,Y}+B_{p}L_{G,Y}\big{)}(1+\|v\|_{Z^{2}}). \tag{4.4}\]
Indeed, since \(S\) is contractive, the maximal inequality, linear growth of \(\tilde{F}\) and \(\tilde{G}\) on \(Y\), and \(\delta\leq 1\) imply
\[\|\Gamma(v)-S(\cdot)u_{0}\|_{L^{p}(\Omega;C([0,\delta];Y))} \leq\|F(\cdot,v)\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+B_{p}\|G( \cdot,v)\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))}\] \[\leq\|f\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+L_{F,Y}\left(\delta+\| v\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}\right)\] \[\quad+B_{p}\left(\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_ {2}(H,Y)))}+L_{G,Y}\left(\sqrt{\delta}+\|v\|_{L^{p}(\Omega;L^{2}(0,\delta;Y))} \right)\right)\] \[\leq\|f\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+B_{p}\|g\|_{L^{p}( \Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))}\] \[\quad+\left(L_{F,Y}+B_{p}L_{G,Y}\right)\left(1+\|v\|_{Z^{2}} \right).\]
Therefore, (4.4) follows. Now (4.4) implies
\[\|\Gamma(v)\|_{Z^{2}} \leq\delta^{1/2}\|\Gamma(v)\|_{L^{p}(\Omega;C([0,\delta];Y))}\] \[\leq\theta(1+\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{1 }(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))}+\|v \|_{Z^{2}}),\]
where \(\theta=\delta^{1/2}\max\{1,B_{p},L_{F,Y}+B_{p}L_{G,Y}\}\). Choosing \(\delta\in(0,T_{0}]\) such that \(\theta\leq\frac{1}{2}\), iteratively we obtain that for \(n\geq 1\),
\[\|U_{n}\|_{Z^{2}} \leq\theta(1+\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{1 }(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))})+ \theta\|U_{n-1}\|_{Z^{2}}\] \[\leq\theta(1+\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{1 }(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))})\] \[\quad+\theta^{2}(1+\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{L^{p}( \Omega;L^{1}(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0,\delta;\mathcal{L}_{2} (H,Y)))}+\|U_{n-2}\|_{Z^{2}})\] \[\leq\ldots\leq\sum_{j=1}^{n}\theta^{j}(1+\|u_{0}\|_{L^{p}(\Omega ;Y)}+\|f\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0, \delta;\mathcal{L}_{2}(H,Y)))})+\theta^{n}\|U_{0}\|_{Z^{2}}\] \[\leq 1+\|f\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+\|g\|_{L^{p}( \Omega;L^{2}(0,\delta;\mathcal{L}_{2}(H,Y)))}+2\|u_{0}\|_{L^{p}(\Omega;Y)}.\]
In conclusion, \((U_{n})_{n\in\mathbb{N}}\) is bounded in \(Z^{2}\). By reflexivity of \(Y\), and thus of \(Z^{2}\) (see [35, Corollary 1.3.22]), there is a subsequence \((U_{n_{j}})_{j\in\mathbb{N}}\) and \(V\in Z^{2}\) such that \(U_{n_{j}}\to V\) weakly in \(Z^{2}\) and
\[\|V\|_{Z^{2}}\leq 1+\|f\|_{L^{p}(\Omega;L^{1}(0,\delta;Y))}+\|g\|_{L^{p}(\Omega;L^{2}(0, \delta;\mathcal{L}_{2}(H,Y)))}+2\|u_{0}\|_{L^{p}(\Omega;Y)}. \tag{4.5}\]
Since \(U_{n}\to U\) in \(L^{p}(\Omega;C([0,\delta];X))\), it follows that \(V=U\). Since \(U=\Gamma(U)\), (4.4) and (4.5) give that \(U\) is in \(L^{p}(\Omega;C([0,\delta];Y))\). The same argument can be applied on \([j\delta,(j+1)\delta]\) using the initial value \(U(j\delta)\in L^{p}(\Omega;Y)\) for \(j=1,2,\ldots\) to obtain the statement on \([0,T]\).
The final a priori estimate follows as in Theorem 4.3, where we note that the Lipschitz conditions on \(F\) and \(G\) were not used in the estimate.
## 5. Stability
Before analysing convergence of temporal approximations to solutions of the stochastic evolution equation (4.1) with multiplicative noise, the question of stability of time discretisation schemes arises. Our aim is to prove stability of contractive time discretisation schemes under linear growth assumptions on \(F\) and \(G\), and contractivity conditions on the scheme \(R\). We formulate the result for mappings on \(X\), but it will also be applied on \(Y\) later on.
Let \(R_{k}:X\to X\) be a contractive time discretisation scheme with time step \(k>0\) on a uniform grid \(\{t_{j}=jk:\ j=0,\ldots,N_{k}\}\subseteq[0,T]\) with final time \(T=t_{N_{k}}>0\) and \(N_{k}=\frac{T}{k}\in\mathbb{N}\) being the number of time steps. We consider the temporal approximations of the mild solution to (4.1) given by \(U^{0}\coloneqq u_{0}\) and
\[U^{j}\coloneqq R_{k}U^{j-1}+kR_{k}F(t_{j-1},U^{j-1})+R_{k}G(t_{j-1},U^{j-1}) \Delta W_{j} \tag{5.1}\]
with Wiener increments \(\Delta W_{j}\coloneqq W_{H}(t_{j})-W_{H}(t_{j-1})\) (see (2.1)). The above definition of \(U^{j}\) can be reformulated as the discrete variation-of-constants formula
\[U^{j}=R_{k}^{j}u_{0}+k\sum_{i=0}^{j-1}R_{k}^{j-i}F(t_{i},U^{i})+\sum_{i=0}^{j-1 }R_{k}^{j-i}G(t_{i},U^{i})\Delta W_{i+1} \tag{5.2}\]
for \(j=0,\ldots,N_{k}\).
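A minimal sketch of the recursion (5.1) (assuming Python with NumPy), with a diagonal dissipative generator, the implicit Euler scheme and illustrative Lipschitz choices of \(F\) and \(G\) that are not prescribed by the analysis, reads as follows.

```python
import numpy as np

# Minimal sketch of the recursion (5.1): A = diag(-lam) (dissipative), implicit
# Euler R_k = (I - kA)^{-1} (contractive), and illustrative Lipschitz F and G.
rng = np.random.default_rng(3)
d, T, N = 8, 1.0, 256
k = T / N
lam = np.linspace(1.0, 5.0, d)
R_k = 1.0 / (1.0 + k * lam)                   # diagonal of (I - kA)^{-1}

def F(t, u):                                  # globally Lipschitz drift
    return np.sin(u)

def G(t, u):                                  # diagonal multiplicative noise, Lipschitz in u
    return np.diag(0.1 * u / (1.0 + np.abs(u)))

U = np.ones(d)                                # u_0
for j in range(1, N + 1):
    dW = np.sqrt(k) * rng.standard_normal(d)  # Wiener increment Delta W_j
    U = R_k * (U + k * F((j - 1) * k, U) + G((j - 1) * k, U) @ dW)
print("max_i |U^N_i| =", np.max(np.abs(U)))
```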
**Proposition 5.1** (Stability).: _Let \(X\) be a Hilbert space, \(p\in[2,\infty)\) and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;X)\). Suppose that \(F:\Omega\times[0,T]\times X\to X\), \(G:\Omega\times[0,T]\times X\to\mathcal{L}_{2}(H,X)\) are strongly \(\mathcal{P}\otimes\mathcal{B}(X)\)-measurable, where \(F=\tilde{F}+f\) and \(G=\tilde{G}+g\), \(f\in L^{p}_{\mathcal{P}}(\Omega;C([0,T];X))\), \(g\in L^{p}_{\mathcal{P}}(\Omega;C([0,T];\mathcal{L}_{2}(H,X)))\), and there are \(L_{F,X},L_{G,X}\geq 0\) such that for all \(\omega\in\Omega\), \(t\in[0,T]\) and \(x\in X\),_
\[\|\tilde{F}(\omega,t,x)\|_{X}\leq L_{F,X}(1+\|x\|_{X}),\ \|\tilde{G}(\omega,t,x)\|_ {\mathcal{L}_{2}(H,X)}\leq L_{G,X}(1+\|x\|_{X}).\]
_Let \((R_{k})_{k>0}\) be a contractive time discretisation scheme and \(N_{k}\geq 2\). Then the temporal approximations \((U^{j})_{j=0,\ldots,N_{k}}\) obtained via (5.1) are stable. That is,_
\[1+\left\|\max_{0\leq j\leq N_{k}}\|U^{j}\|\right\|_{p}\leq C_{stab}c_{u_{0},f, g,T},\]
_where \(C_{stab}\coloneqq(1+C^{2}T)^{1/2}e^{(1+C^{2}T)/2}\) with \(C\coloneqq L_{F,X}T^{1/2}+B_{p}L_{G,X}\),_
\[c_{u_{0},f,g,T}\coloneqq 1+\|u_{0}\|_{L^{p}(\Omega;X)}+\|f\|_{L^{p}(\Omega;C([0, T];X))}T+\|g\|_{L^{p}(\Omega;C([0,T];\mathcal{L}_{2}(H,X)))}B_{p}T^{1/2},\]
_and \(B_{p}\) is the constant from Theorem 2.1._
The exponential dependence comes from an application of Gronwall's inequality. Therefore, to make the result suitable for numerical applications, some optimisation of the constants was necessary. In the special case that \(L_{F,X}=L_{G,X}=T=1\) and \(p=2\), one can check that \(C_{\mathrm{stab}}=\sqrt{10}e^{5}\leq 470\), which numerically seems a reasonable constant for later error estimates.
Proof.: Let \(\varphi_{N}\coloneqq 1+\|\max_{0\leq j\leq N}\|U^{j}\|\|_{p}\) and \(N\in\{0,\ldots,N_{k}\}\). Then the variation-of-constants formula (5.2) and contractivity of \(R_{k}\) allow us to bound
\[\varphi_{N} \leq 1+\|u_{0}\|_{L^{p}(\Omega;X)}+k\sum_{i=0}^{N-1}\left\|\max_{0 \leq j\leq i}\|F(t_{j},U^{j})\|\right\|_{p} \tag{5.3}\] \[\quad+\left\|\max_{0\leq j\leq N}\left\|\sum_{i=0}^{j-1}R_{k}^{j- i}G(t_{i},U^{i})\Delta W_{i+1}\right\|\right\|_{p}.\]
Invoking linear growth of \(\tilde{F}\) and boundedness of \(f\) for the second term, we obtain the bound
\[k\sum_{i=0}^{N-1}\left\|\max_{0\leq j\leq i}\|F(t_{j},U^{j})| \right\|_{p}\leq k\sum_{i=0}^{N-1}\left\|\max_{0\leq j\leq i}\left(L_{F,X} \left(1+\|U^{j}\|\right)+\|f(t_{j})\|\right)\right\|_{p} \tag{5.4}\] \[\leq k\sum_{i=0}^{N-1}\left(L_{F,X}\left(1+\left\|\max_{0\leq j \leq i}\|U^{j}\|\right\|_{p}\right)+\|f\|_{L^{p}(\Omega;C([0,T];X))}\right)\] \[=C_{1,f}t_{N}+L_{F,X}k\sum_{i=0}^{N-1}\varphi_{i}\leq C_{1,f}t_{ N}+L_{F,X}t_{N}^{1/2}\left(k\sum_{i=0}^{N-1}\varphi_{i}^{2}\right)^{1/2},\]
where we have set \(C_{1,f}\coloneqq\|f\|_{L^{p}(\Omega;C([0,T];X))}\), and used the Cauchy-Schwarz inequality and \(Nk=t_{N}\) in the last line. It remains to bound the last term in (5.3).
Since \(R_{k}\) is a contraction, by the Sz.-Nagy dilation theorem [55, Theorem I.4.2] we can find a Hilbert space \(\widetilde{X}\), a contractive injection \(Q:X\to\widetilde{X}\), a contractive projection \(P:\widetilde{X}\to X\), and a unitary \(\widetilde{R}_{k}\) on \(\widetilde{X}\) such that
\[R_{k}^{i}=P\widetilde{R}_{k}^{i}Q\ \ \text{for all }i\geq 0.\]
Let \(G^{k}(s)\coloneqq G(t_{i},U^{i})\) for \(s\in[t_{i},t_{i+1}),0\leq i\leq N_{k}-1\), and \(S^{k}(s)\coloneqq\widetilde{R}_{k}^{-i}\) for \(s\in(t_{i-1},t_{i}],1\leq i\leq N_{k}\). Then it follows from Theorem 2.1 that
\[\left\|\max_{0\leq j\leq N}\left\|\sum_{i=0}^{j-1}R_{k}^{j-i}G(t_{ i},U^{i})\Delta W_{i+1}\right\|\right\|_{p} =\left\|\max_{0\leq j\leq N}\left\|\sum_{i=0}^{j-1}\widetilde{R}_{k}^{j-i}QG (t_{i},U^{i})\Delta W_{i+1}\right\|\right\|_{p}\] \[\leq\left\|\max_{0\leq j\leq N}\left\|\sum_{i=0}^{j-1}\widetilde{ R}_{k}^{-i}QG(t_{i},U^{i})\Delta W_{i+1}\right\|\right\|_{p}\]
\[\leq\left\|\sup_{t\in[0,t_{N}]}\left\|\int_{0}^{t}S^{k}(-\lfloor s \rfloor)QG^{k}(s)\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\leq B_{p}\left\|\left(\int_{0}^{t_{N}}\|G^{k}(s)\|_{\mathcal{L}_{ 2}(H,X)}^{2}ds\right)^{1/2}\right\|_{p}\] \[\leq B_{p}\left(k\sum_{i=0}^{N-1}\left\|\|G(t_{i},U^{i})\|_{ \mathcal{L}_{2}(H,X)}\right\|_{p}^{2}\right)^{1/2} \tag{5.5}\] \[\leq B_{p}L_{G,X}\left(k\sum_{i=0}^{N-1}\varphi_{i}^{2}\right)^{1 /2}+C_{2,g}t_{N}^{1/2},\]
where we have set \(C_{2,g}:=B_{p}\|g\|_{L^{p}(\Omega;C([0,T];\mathcal{L}_{2}(H,X)))}\).
Inserting (5.4) and (5.5) in (5.3) gives the bound
\[\varphi_{N}\leq 1+\|u_{0}\|_{L^{p}(\Omega;X)}+C_{1,f}t_{N}+C_{2,g}t_{N}^{1/2 }+(L_{F,X}t_{N}^{1/2}+B_{p}L_{G,X})\left(k\sum_{i=0}^{N-1}\varphi_{i}^{2} \right)^{1/2}.\]
Setting \(C:=L_{F,X}t_{N}^{1/2}+B_{p}L_{G,X}\) and \(c_{u_{0},f,g,t_{N}}:=1+\|u_{0}\|_{L^{p}(\Omega;X)}+C_{1,f}t_{N}+C_{2,g}t_{N}^{ 1/2}\), we obtain from the discrete version of Gronwall's Lemma 2.6 that
\[\varphi_{N}\leq c_{u_{0},f,g}(1+C^{2}kN)^{1/2}\mathrm{e}^{(1+C^{2}kN)/2}.\]
This implies the desired statement for \(N=N_{k}\) noting that \(t_{N_{k}}=kN_{k}=T\).
## 6. Convergence Rates for multiplicative noise
Our aim is to prove rates of convergence of contractive time discretisation schemes for nonlinear stochastic evolution equations of the form
\[\mathrm{d}U=(AU+F(t,U))\,\mathrm{d}t+G(t,U)\,\mathrm{d}W_{H}(t),\ U(0)=u_{0} \in L^{p}(\Omega;X) \tag{6.1}\]
with \(t\in[0,T]\) on a Hilbert space \(X\) with norm \(\|\cdot\|\), where \(W_{H}\) is an \(H\)-cylindrical Brownian motion for some Hilbert space \(H\) and \(p\in[2,\infty)\). The operator \(A\) is assumed to generate a contractive \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) on \(X\) and \(F,G\) are assumed to be progressively measurable, of linear growth and globally Lipschitz as detailed in Assumption 4.1. Hence, we have the unique mild solution given by a fixed point of
\[U(t)=S(t)u_{0}+\int_{0}^{t}S(t-s)F(s,U(s))\,\mathrm{d}s+\int_{0}^{t}S(t-s)G(s,U (s))\,\mathrm{d}W_{H}(s) \tag{6.2}\]
for \(t\in[0,T]\), see Section 4.
To obtain convergence rates for temporal discretisations of the mild solution, we assume additional structure of the nonlinearity \(F\) and the noise \(G\). Let \(Y\) be another Hilbert space such that \(Y\hookrightarrow X\) and the semigroup \((S(t))_{t\geq 0}\) is also contractive on \(Y\). We will assume that \(F\) and \(G\) map \(Y\) into \(Y\) and satisfy linear growth conditions on \(Y\) analogous to those on \(X\). Note that, in contrast to \(X\), Lipschitz continuity is not assumed on \(Y\). This additional structure, resembling the famous Kato setting [42], allows for convergence rates of temporal discretisations for a large class of time discretisation schemes introduced in Subsection 6.1. The quantitative error estimate in Theorem 6.3 is the main result of this paper, stating that the additional structure suffices to obtain the order of the scheme as the convergence rate of the temporal approximations up to a logarithmic correction factor for sufficiently regular initial data. For the _splitting scheme_ the logarithmic correction factor can be omitted, as illustrated in Subsection 6.2. The main error estimate of Theorem 6.3 is extended to the full time interval \([0,T]\) in Subsection 6.3. As an application, we revisit the Schrodinger equation, now with a multiplicative potential, in Subsection 6.4 and consider the stochastic Maxwell's equations in Subsection 6.5.
### General contractive time discretisation schemes
We now detail the assumptions on the structure of \(F\) and \(G\) on \(Y\). Note that the assumption also implies that the conditions of Theorems 4.3 and 4.4 hold.
**Assumption 6.1**.: _Let \(X,Y\) be Hilbert spaces such that \(Y\hookrightarrow X\) continuously, and let \(p\in[2,\infty)\). Let \(F:\Omega\times[0,T]\times X\to X,F(\omega,t,x)=\tilde{F}(\omega,t,x)+f(\omega,t)\) and \(G:\Omega\times[0,T]\times X\to\mathcal{L}_{2}(H,X),G(\omega,t,x)=\tilde{G}( \omega,t,x)+g(\omega,t)\) be strongly \(\mathcal{P}\otimes\mathcal{B}(X)\)-measurable, and such that \(\tilde{F}(\cdot,\cdot,0)=0\) and \(\tilde{G}(\cdot,\cdot,0)=0\), and suppose_
1. (global Lipschitz continuity on \(X\)) _there exist constants_ \(C_{F,X},C_{G,X}\geq 0\) _such that for all_ \(\omega\in\Omega,t\in[0,T]\)_, and_ \(x,y\in X\)_, it holds that_ \[\|\tilde{F}(\omega,t,x)-\tilde{F}(\omega,t,y)\|\leq C_{F,X}\|x-y\|,\ \|\tilde{G}(\omega,t,x)-\tilde{G}(\omega,t,y)\|\leq C_{G,X}\|x-y\|,\]
2. (Holder continuity with values in \(X\)) _for some_ \(\alpha\in(0,1]\)_,_ \[C_{\alpha,F}\coloneqq\sup_{\omega\in\Omega,x\in X}[F(\omega,\cdot,x)]_{ \alpha}<\infty,\ C_{\alpha,G}\coloneqq\sup_{\omega\in\Omega,x\in X}[G(\omega, \cdot,x)]_{\alpha}<\infty,\]
3. (\(Y\)-invariance) \(F:\Omega\times[0,T]\times Y\to Y\) _and_ \(G:\Omega\times[0,T]\times Y\to\mathcal{L}_{2}(H,Y)\) _are_ \(\mathcal{P}\otimes\mathcal{B}(Y)\)_-measurable,_ \(f\in L^{p}_{\mathcal{P}}(\Omega;C([0,T];Y))\)_, and_ \(g\in L^{p}_{\mathcal{P}}(\Omega;C([0,T];\mathcal{L}_{2}(H,Y)))\)_,_
4. (linear growth on \(Y\)) _there exist constants_ \(L_{F,Y},L_{G,Y}\geq 0\) _such that for all_ \(\omega\in\Omega,t\in[0,T]\)_, and_ \(x\in Y\)_, it holds that_ \[\|\tilde{F}(\omega,t,x)\|_{Y}\leq L_{F,Y}(1+\|x\|_{Y}),\ \|\tilde{G}(\omega,t,x)\|_{\mathcal{L}_{2}(H,Y)}\leq L_{G,Y}(1+\|x\|_{Y}).\]
Condition (b) can be weakened to the existence of some \(\alpha\in(0,1]\) such that
\[\sup_{x\in X}\sup_{0\leq s\leq t\leq T}\frac{\|F(\cdot,t,x)-F(\cdot,s,x)\|}{(t-s)^{\alpha}}\in L^{p}(\Omega)\]
and likewise for \(G\), i.e. pathwise Holder continuity uniformly in \(x\in X\) is sufficient together with existence of \(p\)-th moments of the Holder seminorms. Assumption 6.1 implies that (6.1) has a unique mild solution.
In order to bound the error arising from time discretisation of the mild solution, moment bounds of differences of the mild solution at different time points as in the following lemma are required. As a shorthand notation, let
\[\|f\|_{p,q,Z}\coloneqq\|f\|_{L^{p}(\Omega;L^{q}(0,T;Z))},\qquad\|g\|_{p,q,\mathcal{L}_{2}(H,Z)}\coloneqq\|g\|_{L^{p}(\Omega;L^{q}(0,T;\mathcal{L}_{2}(H,Z)))}.\]
_where \(C_{u_{0},f,g,X}\) and \(C_{u_{0},f,g,Y}\) are as defined in (6.3), \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\), and \(B_{p}\) is the constant from Theorem 2.1._
Proof.: Since the conditions of Theorems 4.3 and 4.4 are met, \(U\) is pathwise continuous on \(X\). By Theorem 4.4, pathwise continuity of \(U\) follows on \(Y\) as well. Moreover, the bound (6.4) holds true.
Fix \(t,s\in[0,T]\) with \(s\leq t\). From the mild solution formula (6.2) we deduce that
\[(\mathbb{E}\|U(t)-U(s)\|^{p})^{1/p}\leq\|[S(t)-S(s)]u_{0}\|_{L^{p}( \Omega;X)}\] \[+\left\|\int_{0}^{s}\|[S(t-r)-S(s-r)]F(r,U(r))\|\;\mathrm{d}r \right\|_{p}+\left\|\int_{s}^{t}\|S(t-r)F(r,U(r))\|\;\mathrm{d}r\right\|_{p}\] \[+\left\|\int_{0}^{s}[S(t-r)-S(s-r)]G(r,U(r))\;\mathrm{d}W_{H}(r) \right\|_{L^{p}(\Omega;X)}\] \[+\left\|\int_{s}^{t}S(t-r)G(r,U(r))\;\mathrm{d}W_{H}(r)\right\|_ {L^{p}(\Omega;X)}=:E_{1}+E_{2}+E_{3}+E_{4}+E_{5},\]
where \(E_{\ell}=E_{\ell}(t,s)\) for \(1\leq\ell\leq 5\). We proceed to bound these five expressions individually. By the semigroup bound (2.4),
\[E_{1}\leq\|S(t)-S(s)\|_{\mathcal{L}(Y,X)}\|u_{0}\|_{L^{p}(\Omega;Y)}\leq 2C_ {Y}(t-s)^{\alpha}\|u_{0}\|_{L^{p}(\Omega;Y)}.\]
Using (6.4) and (2.4) as well as linear growth of \(\tilde{F}\) on \(Y\) and pathwise boundedness of \(f\), we obtain
\[E_{2} \leq 2C_{Y}s(t-s)^{\alpha}\left\|\sup_{r\in[0,T]}\|F(r,U(r))\|_{Y} \right\|_{p}\] \[\leq 2C_{Y}s(t-s)^{\alpha}\left\|\sup_{r\in[0,T]}\left(L_{F,Y}(1+ \|U(r)\|_{Y})+\|f(r)\|_{Y}\right)\right\|_{p}\] \[\leq 2C_{Y}T\big{(}L_{F,Y}C_{u_{0},f,g,Y}+\|f\|_{p,\infty,Y} \big{)}(t-s)^{\alpha}.\]
Analogously,
\[E_{3}\leq(C_{F,X}C_{u_{0},f,g,X}+\|f\|_{p,\infty,X})(t-s)\]
is obtained by contractivity of the semigroup, linear growth of \(F\) on \(X\) and boundedness of the solution. For the terms involving a stochastic integral, we apply Theorem 2.1. Additionally making use of the bound (2.4) for semigroup differences, linear growth of \(\tilde{G}\), (6.4), and pathwise boundedness of \(g\) results in
\[E_{4} \leq B_{p}\left(\mathbb{E}\left(\int_{0}^{s}\|[S(t-r)-S(s-r)]G(r, U(r))\|_{\mathcal{L}_{2}(H,X)}^{2}\;\mathrm{d}r\right)^{p/2}\right)^{1/p}\] \[\leq 2B_{p}C_{Y}\big{(}L_{G,Y}C_{u_{0},f,g,Y}+\|g\|_{p,\infty,\mathcal{L}_{2}(H,Y)}\big{)}\sqrt{T}(t-s)^{\alpha}.\]

Analogously, by Theorem 2.1, contractivity of the semigroup, linear growth of \(\tilde{G}\) on \(X\), (6.4), and pathwise boundedness of \(g\),

\[E_{5}\leq B_{p}\big{(}C_{G,X}C_{u_{0},f,g,X}+\|g\|_{p,\infty,\mathcal{L}_{2}(H,X)}\big{)}(t-s)^{1/2}.\]

Collecting the estimates for \(E_{1},\ldots,E_{5}\) yields the asserted bound.
For time discretisation, we employ a contractive time discretisation scheme \(R:[0,\infty)\to\mathcal{L}(X)\) with time step \(k>0\) on a uniform grid \(\{t_{j}=jk:\ j=0,\ldots,N_{k}\}\subseteq[0,T]\) with final time \(T=t_{N_{k}}>0\) and \(N_{k}=\frac{T}{k}\in\mathbb{N}\) being the number of time steps. As in the previous section, the discrete solution is given by \(U^{0}\coloneqq u_{0}\) and
\[U^{j} \coloneqq R_{k}U^{j-1}+kR_{k}F(t_{j-1},U^{j-1})+R_{k}G(t_{j-1},U^{j-1 })\Delta W_{j} \tag{6.6}\] \[=R_{k}^{j}u_{0}+k\sum_{i=0}^{j-1}R_{k}^{j-i}F(t_{i},U^{i})+\sum_{ i=0}^{j-1}R_{k}^{j-i}G(t_{i},U^{i})\Delta W_{i+1} \tag{6.5}\]
for \(j=1,\ldots,N_{k}\) with Wiener increments \(\Delta W_{j}\coloneqq W_{H}(t_{j})-W_{H}(t_{j-1})\).
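For readers who prefer to see the recursion operationally, the following minimal Python sketch implements (6.5) for a finite-dimensional (e.g. spectral Galerkin) truncation. The matrix `A`, the maps `F` and `G`, and the choice of the implicit Euler resolvent \(R_{k}=(I-kA)^{-1}\) as the contractive scheme are illustrative assumptions and not part of the analysis above.

```python
import numpy as np

def discretise(A, F, G, u0, T, N, rng):
    """One sample path of the scheme (6.5): U^j = R_k (U^{j-1} + k F(t_{j-1}, U^{j-1})
    + G(t_{j-1}, U^{j-1}) dW_j), for a finite-dimensional truncation.

    A: (d, d) array (truncated generator); F(t, u) -> (d,) array; G(t, u) -> (d, m) array;
    R_k is taken as the implicit Euler resolvent (I - kA)^{-1}, contractive if A is dissipative."""
    d = len(u0)
    k = T / N
    Rk = np.linalg.solve(np.eye(d) - k * A, np.eye(d))   # resolvent (I - kA)^{-1}
    U = np.empty((N + 1, d))
    U[0] = u0
    for j in range(1, N + 1):
        t = (j - 1) * k
        Gu = G(t, U[j - 1])
        dW = np.sqrt(k) * rng.standard_normal(Gu.shape[1])   # Wiener increment over [t, t+k]
        U[j] = Rk @ (U[j - 1] + k * F(t, U[j - 1]) + Gu @ dW)
    return U
```

Any other contractive choice of \(R_{k}\) (for instance the splitting scheme discussed below) can be substituted without changing the driver loop.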
We recall from Definition 2.3 that \(R\)_approximates \(S\) to order \(\alpha>0\) on \(Y\)_ or, equivalently, \(R\)_converges of order \(\alpha\) on \(Y\)_ if there is a constant \(C_{\alpha}\geq 0\) such that for all \(u\in Y\)
\[\|(S(t_{j})-R_{k}^{j})u\|\leq C_{\alpha}k^{\alpha}\|u\|_{Y}.\]
Under the conditions of Assumption 6.1 we conclude from Proposition 5.1 and the remark thereafter that \(R\) is stable not only on \(X\) but also on \(Y\) provided that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\) and \(R\) is contractive on both \(X\) and \(Y\). Thus,
\[1+\left\|\max_{0\leq j\leq N_{k}}\|U^{j}\|_{Y}\right\|_{p}\leq K_{u_{0},f,g,Y}, \tag{6.7}\]
where \(K_{u_{0},f,g,Y}\coloneqq C_{\mathrm{stab}}c_{u_{0},f,g,T}\) with constants \(C_{\mathrm{stab}},c_{u_{0},f,g,T}\) as in Proposition 5.1 applied on \(Y\) instead of \(X\).
We can now state and prove the main result of this paper.
**Theorem 6.3**.: _Suppose that Assumption 6.1 holds for some \(\alpha\in(0,1]\) and \(p\in[2,\infty)\). Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(X\) and \(Y\). Assume \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously if \(\alpha\in(0,1)\) or \(Y\hookrightarrow D(A)\) continuously if \(\alpha=1\). Let \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Denote by \(U\) the mild solution of (6.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\right\|_{p}\leq C_{\mathrm{ e}}\left(C_{1}k+C_{2}k^{1/2}+\left(C_{3}+C_{4}\log\left(\frac{T}{k}\right) \right)k^{\alpha}\right)\]
_with constants \(C_{\mathrm{e}}\coloneqq(1+C^{2}T)^{1/2}\exp((1+C^{2}T)/2)\), \(C\coloneqq C_{F,X}\sqrt{T}+B_{p}C_{G,X}\), \(C_{1}\coloneqq L_{1}(\frac{C_{F,X}}{2}T^{2}+B_{p}C_{G,X}\sqrt{T})\), \(C_{2}\coloneqq L_{2}(\frac{2}{3}C_{F,X}T+(\frac{3}{2})^{1/2}B_{p}C_{G,X}\sqrt{T})\), \(C_{4}\coloneqq C_{3,\log}\sqrt{T}\), and_
\[C_{3} \coloneqq C_{\alpha}\|u_{0}\|_{L^{p}(\Omega;Y)}+C_{2,\alpha}T+C_{3, \alpha}\sqrt{T},\] \[C_{2,\alpha} \coloneqq\frac{C_{F,X}L_{3}+C_{\alpha,F}}{\alpha+1}+\left(L_{F,Y} K_{u_{0},f,g,Y}+\|f\|_{L^{p}(\Omega;L^{\infty}(0,T;Y))}\right)\left(\frac{2C_{Y}}{ \alpha+1}+C_{\alpha}\right),\] \[C_{3,\alpha} \coloneqq\frac{B_{p}}{\sqrt{2\alpha+1}}\Big{(}\sqrt{3}C_{G,X}L_{3} +C_{\alpha,G}+2C_{Y}\big{(}L_{G,Y}K_{u_{0},f,g,Y}+\|g\|_{L^{p}(\Omega;L^{\infty }(0,T;\mathcal{L}_{2}(H,Y)))}\big{)}\Big{)},\] \[C_{3,\log} \coloneqq K_{p}C_{\alpha}\big{(}L_{G,Y}K_{u_{0},f,g,Y}+\|g\|_{L^{p}( \Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\big{)},\]
_where \(L_{1},L_{2},L_{3}\) are as defined in Lemma 6.2, \(K_{u_{0},f,g,Y}\) as in (6.7), \(K_{p}=10\mathrm{e}\sqrt{p}\), \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\), and \(B_{p}\) is the constant from Theorem 2.1._
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\min\{\alpha,\frac{1}{2}\}\) up to a logarithmic correction factor as \(k\to 0\)._
The constant \(C_{\mathrm{e}}\) enters the estimate multiplicatively and grows exponentially in \(C^{2}T\). In the special case \(C_{F,X}=C_{G,X}=T=1\) and \(p=2\), one can check that, similarly to Theorem 4.4, this yields the numerically reasonable value \(C_{\mathrm{e}}=\sqrt{10}e^{5}\leq 470\).
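A quick numerical check of this value, under the assumption \(B_{2}=2\) for the constant from Theorem 2.1 (the value that reproduces \(\sqrt{10}e^{5}\); it is not fixed by the text above):

```python
import math

# C_e = (1 + C^2 T)^{1/2} exp((1 + C^2 T)/2) with C = C_{F,X} sqrt(T) + B_p C_{G,X};
# here C_{F,X} = C_{G,X} = T = 1, p = 2, and B_2 = 2 is an assumption made for illustration.
B2 = 2.0
C = 1.0 * math.sqrt(1.0) + B2 * 1.0
Ce = math.sqrt(1.0 + C**2 * 1.0) * math.exp((1.0 + C**2 * 1.0) / 2.0)
print(Ce, math.sqrt(10.0) * math.exp(5.0))   # both approximately 469.3 <= 470
```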
If \(R\) commutes with the resolvent of \(A\), contractivity of \(R\) and \(S\) extends to fractional domain spaces and complex interpolation spaces. Hence, contractivity on \(Y\) often comes together with contractivity on \(X\). We recall that contractivity of a large class of schemes follows from Proposition 2.4.
Proof.: The assumptions of Theorems 4.3 and 4.4 hold, and thus the mild solution \(U\) exists and the bound (6.4) holds.
By definition, \(U(t_{j})=U^{j}=u_{0}\) for \(j=0\). Let \(N\in\{1,\ldots,N_{k}\}\). Using (6.6), the discretisation error can be split into three parts
\[E(N) :=\left\|\max_{1\leq j\leq N}\left\|U(t_{j})-U^{j}\right\|\right\|_ {p}\] \[\leq\left\|\max_{1\leq j\leq N}\left\|(S(t_{j})-R_{k}^{j})u_{0} \right\|\right\|_{p}\] \[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s )F(s,U(s))\,\mathrm{d}s-k\sum_{i=0}^{j-1}R_{k}^{j-i}F(t_{i},U^{i})\right\|\right\| _{p}\] \[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s )G(s,U(s))\,\mathrm{d}W_{H}(s)-\sum_{i=0}^{j-1}R_{k}^{j-i}G(t_{i},U^{i})\Delta W _{i+1}\right\|\right\|_{p}\] \[\quad=:M_{1}+M_{2}+M_{3}.\]
Using convergence of \(R\) of order \(\alpha\) on \(Y\) and the dominated convergence theorem, we obtain
\[M_{1}\leq C_{\alpha}k^{\alpha}\|u_{0}\|_{L^{p}(\Omega;Y)}. \tag{6.8}\]
To shorten notation for the discrete terms, we introduce the piecewise constant functions \(F^{k}(s)\coloneqq F(t_{i},U^{i})\) and \(G^{k}(s)\coloneqq G(t_{i},U^{i})\) for \(s\in[t_{i},t_{i+1}),0\leq i\leq N_{k}-1\) as well as \(S^{k}(s)\coloneqq R_{k}^{i}\) for \(s\in(t_{i-1},t_{i}],1\leq i\leq N_{k}\). This allows us to rewrite
\[M_{2} =\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s)F(s, U(s))-S^{k}(t_{j}-s)F^{k}(s)\,\mathrm{d}s\right\|\right\|_{p}\] \[\leq\left\|\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\max_{1\leq j \leq N}\left\|S(t_{j}-s)[F(s,U(s))-F(s,U(t_{i}))]\right\|\,\mathrm{d}s\right\| _{p}\] \[\quad+\left\|\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\max_{1\leq j \leq N}\left\|S(t_{j}-s)[F(s,U(t_{i}))-F(t_{i},U(t_{i}))]\right\|\,\mathrm{d} s\right\|_{p}\] \[\quad+\left\|\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\max_{1\leq j \leq N}\left\|S(t_{j}-s)[F(t_{i},U(t_{i}))-F(t_{i},U^{i})]\right\|\,\mathrm{d} s\right\|_{p}\] \[\quad+\left\|\int_{0}^{t_{N}}\max_{1\leq j\leq N}\left\|[S(t_{j}- s)-S^{k}(t_{j}-s)]F^{k}(s)\right\|\,\mathrm{d}s\right\|_{p}\] \[\quad=:M_{2,1}+M_{2,2}+M_{2,3}+M_{2,4}.\]
Making use of Minkowski's inequality in \(L^{p}(\Omega)\), contractivity of \((S(t))_{t\geq 0}\) and Lipschitz continuity of \(\tilde{F}\), we derive the bound
\[M_{2,3}\leq C_{F,X}\sum_{i=0}^{N-1}\left\|\int_{t_{i}}^{t_{i+1}}\left\|U(t_{i} )-U^{i}\right\|\,\mathrm{d}s\right\|_{p}\leq C_{F,X}k\sum_{i=0}^{N-1}E(i) \tag{6.9}\]
for \(M_{2,3}\). Proceeding likewise for \(M_{2,1}\), we obtain from Lemma 6.2 that
\[M_{2,1} \leq C_{F,X}\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}(\mathbb{E} \left\|U(s)-U(t_{i})\right\|^{p})^{1/p}\,\mathrm{d}s\] \[\leq C_{F,X}\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}L_{1}(s-t_{i})+ L_{2}(s-t_{i})^{1/2}+L_{3}(s-t_{i})^{\alpha}\,\mathrm{d}s\] \[\leq C_{F,X}\sum_{i=0}^{N-1}\left(\frac{L_{1}}{2}k^{2}+\frac{2L_{2 }}{3}k^{3/2}+\frac{L_{3}}{\alpha+1}k^{\alpha+1}\right)\]
\[=C_{F,X}t_{N}\left(\frac{L_{1}}{2}k+\frac{2L_{2}}{3}k^{1/2}+\frac{L_{3}}{ \alpha+1}k^{\alpha}\right). \tag{6.10}\]
Analogously, uniform Holder continuity yields
\[M_{2,2} \leq\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\left\|F(s,U(t_{i}))-F(t _{i},U(t_{i}))\right\|_{L^{p}(\Omega;X)}\,\mathrm{d}s\] \[\leq\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}(s-t_{i})^{\alpha}\, \mathrm{d}s\left\|[F(\cdot,U(t_{i}))]_{\alpha}\right\|_{p} \tag{6.11}\] \[\leq\sum_{i=0}^{N-1}\frac{k^{\alpha+1}}{\alpha+1}C_{\alpha,F}= \frac{C_{\alpha,F}t_{N}}{\alpha+1}k^{\alpha}.\]
Using the semigroup bound (2.4) together with the assumed convergence rate \(\alpha\) of \(R\) on \(Y\), the linear growth assumption and stability of \(R\), we obtain
\[M_{2,4} \leq\left\|\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\left\|[S(t_{j}- s)-S(t_{j}-t_{i})]F(t_{i},U^{i})\right\|\,\mathrm{d}s\right\|_{p}\] \[\quad+\left\|\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\left\|\left[S (t_{j}-t_{i})-R_{k}^{j-i}\right]F(t_{i},U^{i})\right\|\,\mathrm{d}s\right\|_{p}\] \[\leq 2C_{Y}\sum_{i=0}^{N-1}\left\|\int_{t_{i}}^{t_{i+1}}(s-t_{i}) ^{\alpha}\|F(t_{i},U^{i})\|_{Y}\,\mathrm{d}s\right\|_{p}\] \[\quad+C_{\alpha}k^{\alpha}\sum_{i=0}^{N-1}\left\|\int_{t_{i}}^{t_ {i+1}}\left\|F(t_{i},U^{i})\right\|_{Y}\,\mathrm{d}s\right\|_{p}\] \[\leq\left(\frac{2C_{Y}}{\alpha+1}+C_{\alpha}\right)k^{\alpha+1} \sum_{i=0}^{N-1}\Big{(}L_{F,Y}\left\|1+\|U^{i}\|_{Y}\right\|_{p}+\|f(t_{i})\|_ {L^{p}(\Omega;Y)}\Big{)} \tag{6.12}\] \[\leq\left(\frac{2C_{Y}}{\alpha+1}+C_{\alpha}\right)\big{(}L_{F,Y} K_{u_{0},f,g,Y}+\|f\|_{p,\infty,Y}\big{)}t_{N}k^{\alpha}.\]
In conclusion from (6.10), (6.11), (6.9), and (6.12), \(M_{2}\) is bounded by
\[M_{2} \leq\frac{C_{F,X}L_{1}}{2}t_{N}k+\frac{2C_{F,X}L_{2}}{3}t_{N}k^{1 /2}+C_{2,\alpha}t_{N}k^{\alpha}+C_{F,X}k\sum_{i=0}^{N-1}E(i) \tag{6.13}\] \[\leq\frac{C_{F,X}L_{1}}{2}t_{N}k+\frac{2C_{F,X}L_{2}}{3}t_{N}k^{1 /2}+C_{2,\alpha}t_{N}k^{\alpha}+C_{F,X}\sqrt{t_{N}}\bigg{(}k\sum_{i=0}^{N-1}E( i)^{2}\bigg{)}^{1/2},\]
where we have used the Cauchy-Schwarz inequality in the last line.
Let \(\lfloor s\rfloor=\max\{t_{i}:0\leq i\leq N_{k}-1,t_{i}\leq s\}\). The remaining term \(M_{3}\) can be rewritten as
\[M_{3} =\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s)G(s,U(s))-S^{k}(t_{j}-s)G^{k}(s)\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\leq\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s)[G (s,U(s))-G(s,U(\lfloor s\rfloor)]\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s)[ G(s,U(\lfloor s\rfloor))-G(\lfloor s\rfloor,U(\lfloor s\rfloor)]\,\mathrm{d}W_{H}(s) \right\|\right\|_{p}\] \[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}S(t_{j}-s)[ G(\lfloor s\rfloor,U(\lfloor s\rfloor))-G^{k}(s)]\,\mathrm{d}W_{H}(s)\right\| \right\|_{p}\] \[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}[S(t_{j}- \lfloor s\rfloor)-S(t_{j}-s)]G^{k}(s)\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}\]
\[\quad+\left\|\max_{1\leq j\leq N}\left\|\int_{0}^{t_{j}}[S(t_{j}-\lfloor s\rfloor)-S^{k}(t_{j}-s)]G^{k}(s)\,\mathrm{d}W_{H}(s)\right\|\right\|_{p}=:M_{3,1}+M_{3,2}+M_{3,3}+M_{3,4}+M_{3,5}.\]
The terms \(M_{3,1}\) and \(M_{3,2}\) are bounded by means of Theorem 2.1 in combination with Lemma 6.2 and the uniform Holder continuity in time, respectively, which yields the estimates (6.14) and (6.15). For \(M_{3,3}\), Theorem 2.1, contractivity of the semigroup, and Lipschitz continuity of \(\tilde{G}\) give
\[M_{3,3} \leq B_{p}C_{G,X}\left(\mathbb{E}\left(k\sum_{i=0}^{N-1}\|U(t_{i})-U^{i}\|^{2}\right)^{p/2}\right)^{1/p}=B_{p}C_{G,X}k^{1/2}\left\|\sum_{l=0}^{N-1}\|U(t_{l})-U^{l}\|^{2}\right\|_{p/2}^{1/2}\] \[\leq B_{p}C_{G,X}\sqrt{k}\bigg{(}\sum_{l=0}^{N-1}\Big{\|}\max_{0\leq j\leq l}\|U(t_{j})-U^{j}\|\Big{\|}_{p}^{2}\bigg{)}^{1/2}=B_{p}C_{G,X}\sqrt{k}\bigg{(}\sum_{l=0}^{N-1}E(l)^{2}\bigg{)}^{1/2}. \tag{6.16}\]
Since \(R\) is contractive on \(Y\) by assumption, the conditions of Proposition 5.1 are fulfilled not only on \(X\) but also on \(Y\). Thus, we can use the estimate (6.7). Together with the maximal inequality, the semigroup difference bound (2.4), the ideal property of \(\mathcal{L}_{2}(H,X)\), and linear growth of \(\tilde{G}\), this
yields
\[M_{3,4} \leq\left\|\sup_{t\in[0,t_{N}]}\left\|\int_{0}^{t}S(t-s)\left(\sum_{i =0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)[S(s-t_{i})-I]G(t_{i},U^{i})\right)\, \mathrm{d}W_{H}(s)\right\|\right\|_{p}\] \[\leq B_{p}\left(\mathbb{E}\left(\int_{0}^{t_{N}}\left\|\mathbf{1} _{[t_{i},t_{i+1})}(s)[S(s-t_{i})-I]G(t_{i},U^{i})\right\|_{\mathcal{L}_{2}(H,X )}^{2}\,\mathrm{d}s\right)^{p/2}\right)^{1/p}\] \[\leq 2B_{p}C_{Y}\left(\mathbb{E}\left(\sum_{\ell=0}^{N-1}\int_{t_{ \ell}}^{t_{\ell+1}}(s-t_{\ell})^{2\alpha}\left\|G(t_{\ell},U^{\ell})\right\|_{ \mathcal{L}_{2}(H,Y)}^{2}\,\mathrm{d}s\right)^{p/2}\right)^{1/p}\] \[\leq\frac{2B_{p}C_{Y}}{\sqrt{2\alpha+1}}\sqrt{t_{N}}k^{\alpha} \left\|\max_{0\leq j\leq N-1}\left\|G(t_{j},U^{j})\right\|_{\mathcal{L}_{2}(H, Y)}\right\|_{p} \tag{6.17}\] \[\leq\frac{2B_{p}C_{Y}}{\sqrt{2\alpha+1}}\big{(}L_{G,Y}K_{u_{0},f,g,Y}+\left\|g\right\|_{p,\infty,Y}\big{)}\sqrt{t_{N}}k^{\alpha}.\]
Applying Proposition 2.2 with \(\Phi_{s}^{(j)}=\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)[S(t_{j}-t_{i})- R_{k}^{j-i}]G(t_{i},U^{i})\) to the remaining term, we conclude that
\[M_{3,5}=\left(\mathbb{E}\max_{1\leq j\leq N}\left\|\int_{0}^{t_{ j}}\sum_{i=0}^{j-1}\mathbf{1}_{[t_{i},t_{i+1})}(s)[S(t_{j}-t_{i})-R_{k}^{j-i}]G( t_{i},U^{i})\,\mathrm{d}W_{H}(s)\right\|^{p}\right)^{1/p}\] \[\leq K_{p}\log(N)\left\|\left(\sum_{\ell=0}^{N-1}k\left(\max_{1 \leq j\leq N}\left\|[S(t_{j}-t_{\ell})-R_{k}^{j-l}]G(t_{\ell},U^{\ell})\right\| _{\mathcal{L}_{2}(H,X)}\right)^{2}\right)^{1/2}\right\|_{p}\] \[\leq K_{p}\log(N)\left(\mathbb{E}\left(\sum_{l=0}^{N-1}k\left(C_ {\alpha}k^{\alpha}\left\|G(t_{\ell},U^{\ell})\right\|_{\mathcal{L}_{2}(H,Y)} \right)^{2}\right)^{p/2}\right)^{1/p}\] \[\leq K_{p}C_{\alpha}\sqrt{t_{N}}\log(N)k^{\alpha}\left\|\max_{0 \leq j\leq N-1}\left\|G(t_{j},U^{j})\right\|_{\mathcal{L}_{2}(H,Y)}\right\|_{p} \tag{6.18}\] \[\leq K_{p}C_{\alpha}\big{(}L_{G,Y}K_{u_{0},f,g,Y}+\left\|g\right\| _{p,\infty,Y}\big{)}\sqrt{t_{N}}\log(N)k^{\alpha}\]
using that \(R\) approximates \(S\) to order \(\alpha\) on \(Y\), the ideal property of \(\mathcal{L}_{2}(H,X)\), linear growth, and stability of \(R\) on \(Y\). Combining the bounds (6.14) to (6.18), we deduce
\[M_{3} \leq B_{p}C_{G,X}L_{1}\sqrt{t_{N}}k+\sqrt{\frac{3}{2}}B_{p}C_{G,X }L_{2}\sqrt{t_{N}}k^{1/2}+C_{3,\alpha}\sqrt{t_{N}}k^{\alpha} \tag{6.19}\] \[\quad+C_{3,\log}\sqrt{t_{N}}\log(N)k^{\alpha}+B_{p}C_{G,X}\bigg{(} k\sum_{l=0}^{N-1}E(l)^{2}\bigg{)}^{1/2}.\]
Having bounded each term individually in (6.8), (6.13) and (6.19), we conclude
\[E(N)\leq C_{1}k+C_{2}k^{1/2}+C_{3}k^{\alpha}+C_{4}\log(N)k^{\alpha}+C\bigg{(}k \sum_{l=0}^{N-1}E(l)^{2}\bigg{)}^{1/2},\]
and thus by the discrete version of Gronwall's Lemma 2.6
\[E(N)\leq(1+C^{2}t_{N})^{1/2}\mathrm{e}^{(1+C^{2}t_{N})/2}\left(C_{1}k+C_{2}k^{ 1/2}+C_{3}k^{\alpha}+C_{4}\log(N)k^{\alpha}\right).\]
The desired error estimate is obtained for \(N=N_{k}\). As \(k\to 0\), the terms with the lowest exponents dominate, i.e.
\[E(N_{k})\lesssim k^{1/2}+k+\log(N_{k})k^{\alpha}\lesssim\log(N_{k})k^{\min\{ \frac{1}{2},\alpha\}},\,\,(k\to 0).\qed\]
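The rate predicted by Theorem 6.3 can be checked empirically by estimating \(\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\right\|_{p}\) against a reference solution computed on a much finer grid and regressing in log-log scale. The following sketch is illustrative only; the Monte Carlo error values in the example call are made up (they are consistent with rate \(1/2\)) and the function name is hypothetical.

```python
import numpy as np

def empirical_rate(errors, steps):
    """Slope of log(error) against log(k); to be compared with min(alpha, 1/2) from
    Theorem 6.3. errors[i] is a Monte Carlo estimate of || max_j ||U(t_j)-U^j|| ||_p
    obtained with time step steps[i] against a fine-grid reference solution."""
    slope, _ = np.polyfit(np.log(np.asarray(steps)), np.log(np.asarray(errors)), 1)
    return slope

# Illustrative call with made-up error values consistent with rate 1/2:
print(empirical_rate([2.0e-2, 1.4e-2, 1.0e-2], [1.0e-2, 5.0e-3, 2.5e-3]))
```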
### Splitting scheme
We analyse the time discretisation error for the special case \(R_{k}\coloneqq S(k)\) known as the _splitting scheme_. Obviously, the splitting scheme is contractive for contractive semigroups. Furthermore, several terms in the error analysis vanish for the splitting scheme, since \(S(t_{j})-R_{k}^{j}=S(t_{j})-S(k)^{j}=0\) by the semigroup property. In particular, the logarithmic correction factor is not needed for this scheme.
**Corollary 6.4** (Splitting scheme).: _Suppose that Assumption 6.1 holds for some \(\alpha\in(0,1]\) and \(p\in[2,\infty)\). Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously if \(\alpha\in(0,1)\) or \(Y\hookrightarrow D(A)\) continuously if \(\alpha=1\). Let \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Consider the splitting scheme \(R\coloneqq S\) for time discretisation. Denote by \(U\) the mild solution of (6.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\right\|_{p}\leq C_{\mathrm{S },\mathrm{e}}\left(C_{\mathrm{S},1}k+C_{\mathrm{S},2}k^{1/2}+C_{\mathrm{S},3} k^{\alpha}\right)\]
_with constants \(C_{\mathrm{S},\mathrm{e}}\coloneqq C_{\mathrm{e}}\), \(C_{\mathrm{S},1}\coloneqq C_{1}\), \(C_{\mathrm{S},2}\coloneqq C_{2}\) as in Theorem 6.3, \(C_{\mathrm{S},3}\coloneqq C_{\mathrm{S},2,\alpha}T+C_{\mathrm{S},3,\alpha}T^{1 /2}\), \(C_{\mathrm{S},3,\alpha}\coloneqq C_{3,\alpha}\), and_
\[C_{\mathrm{S},2,\alpha}\coloneqq\frac{1}{\alpha+1}\left(C_{F,X}L_{3}+C_{ \alpha,F}+2C_{Y}\big{(}L_{F,Y}K_{u_{0},f,g,Y}+\|f\|_{L^{p}(\Omega;L^{\infty}(0,T;Y))}\big{)}\right),\]
_where \(C_{3,\alpha}\) is as defined in Theorem 6.3, \(L_{3}\) as in Lemma 6.2, \(K_{u_{0},f,g,Y}\) as in (6.7), \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\) or \(D(A)\), and \(B_{p}\) is the constant from Theorem 2.1._
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\min\{\alpha,\frac{1}{2}\}\) as \(k\to 0\)._
Proof.: Adopt the notation from the proof of Theorem 6.3. Contractivity of \(R\) on \(X\) and \(Y\) is immediate from contractivity of \(S\) on these spaces. Since \(S(t_{j})-R_{k}^{j}=0\) for any \(j\in\{0,\ldots,N_{k}\}\), the terms \(M_{1}\) and \(M_{3,5}\) vanish. Moreover, the second term in \(M_{2,4}\) vanishes so that
\[M_{2,4}\leq\frac{2C_{Y}}{\alpha+1}\big{(}L_{F,Y}K_{u_{0},f,g,Y}+\|f\|_{p,\infty,Y}\big{)}t_{N}k^{\alpha}.\]
Combining the individual bounds for the remaining terms, the estimate follows from a discrete Gronwall argument as in the proof of Theorem 6.3. The logarithmic correction factor vanishes due to \(M_{3,5}=0\).
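In a finite-dimensional truncation, one step of the splitting scheme differs from the generic scheme sketched earlier only in that the semigroup is applied exactly. A minimal sketch, assuming the matrix exponential of the truncated generator is available (here via `scipy.linalg.expm`); all other names are the same illustrative assumptions as before:

```python
import numpy as np
from scipy.linalg import expm

def splitting_step(A, F, G, u, t, k, rng):
    """One step of the splitting scheme R_k = S(k) for a finite-dimensional truncation:
    the semigroup is applied exactly via the matrix exponential, so the error terms
    M_1 and M_{3,5} of the analysis above vanish and no logarithmic factor appears."""
    Sk = expm(k * A)                                   # exact semigroup step S(k) = e^{kA}
    Gu = G(t, u)
    dW = np.sqrt(k) * rng.standard_normal(Gu.shape[1])
    return Sk @ (u + k * F(t, u) + Gu @ dW)
```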
### Error estimates on the full time interval
In this subsection we will extend the error estimates of Theorem 6.3 and Corollary 6.4 to the full time interval by using a suitable Holder regularity of the paths of the mild solution.
The following simple deterministic result provides a way to connect the uniform error to the error on the grid. Given a non-decreasing function \(\Phi:[0,T]\to[0,\infty)\) such that \(\Phi\neq 0\) on \((0,T]\) we say that \(u\in C^{\Phi}([0,T];X)\) if \(u:[0,T]\to X\) is continuous and
\[[u]_{C^{\Phi}([0,T];X)}=\sup_{0\leq s<t\leq T}\frac{\|u(t)-u(s)\|}{\Phi(t-s)}<\infty.\]
Moreover, we set \(\|u\|_{C^{\Phi}([0,T];X)}\coloneqq\|u\|_{\infty}+[u]_{C^{\Phi}([0,T];X)}\).
**Lemma 6.5** (Decomposition of the error on the full time interval).: _Let \(u\in C^{\Phi}([0,T];X)\) for a non-decreasing function \(\Phi:[0,T]\to[0,\infty)\) such that \(\Phi\neq 0\) on \((0,T]\). Let \(\Pi\subseteq[0,T]\) be a finite time grid, and denote by \(\tilde{u}:\Pi\to X\) an approximation of \(u\), which is extended to \([0,T]\) by setting \(\tilde{u}(t)\coloneqq\tilde{u}(\lfloor t\rfloor_{\Pi})\) for \(t\notin\Pi\), where \(\lfloor t\rfloor_{\Pi}\coloneqq\max\{s\in\Pi:s\leq t\}\). Then it holds that_
\[\sup_{t\in[0,T]}\|u(t)-\tilde{u}(t)\|\leq\Phi(h)\cdot\|u\|_{C^{\Phi}([0,T];X)}+ \sup_{t\in\Pi}\|u(t)-\tilde{u}(t)\|\]
_for the maximal time step \(h\coloneqq\sup_{t\in[0,T]}\mathrm{dist}(t,\Pi)\)._
Proof.: For \(t\in[0,T]\) we can write
\[\|u(t)-\tilde{u}(t)\| \leq\|u(t)-u(\lfloor t\rfloor_{\Pi})\|+\|u(\lfloor t\rfloor_{\Pi} )-\tilde{u}(t)\|\] \[\leq\|u\|_{C^{\Phi}([0,T];X)}\cdot\Phi(t-\lfloor t\rfloor_{\Pi})+ \sup_{s\in\Pi}\|u(s)-\tilde{u}(s)\|,\]
which implies the required result.
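The piecewise constant extension \(\tilde{u}(t)=\tilde{u}(\lfloor t\rfloor_{\Pi})\) used in Lemma 6.5 is straightforward to realise in practice; the following sketch (function names are hypothetical) illustrates it for a sorted grid containing \(0\).

```python
import numpy as np

def extend_piecewise_constant(grid, values):
    """Piecewise constant extension u~(t) := u~(floor(t)_Pi) from Lemma 6.5;
    `grid` is assumed sorted with grid[0] = 0, and values[i] is the approximation at grid[i]."""
    grid = np.asarray(grid)
    values = np.asarray(values)

    def u_tilde(t):
        i = np.searchsorted(grid, t, side="right") - 1   # index of floor(t)_Pi
        return values[i]

    return u_tilde

# Example: on the grid {0, 0.5, 1.0}, u~(0.7) returns the value attached to t = 0.5.
u = extend_piecewise_constant([0.0, 0.5, 1.0], [1.0, 2.0, 3.0])
print(u(0.7))   # -> 2.0
```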
From the above we see that in order to estimate the uniform error on \([0,T]\), we need an (optimal) Holder regularity result for the mild solution \(U\) to (6.1). In order to obtain such a result, the main difficulty lies in estimating the stochastic convolution.
**Lemma 6.6** (Path regularity of stochastic convolutions).: _Let \(X,Y\) be Hilbert spaces such that \(Y\hookrightarrow X\) continuously. Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) holds for some \(\alpha\in(0,1/2]\). Let \(q\in(2,\infty]\) be such that \(\frac{1}{2}-\frac{1}{q}=\alpha\) and let \(2\leq p<p_{0}<\infty\). Suppose that_
\[g\in L^{p}(\Omega;L^{2}(0,T;\mathcal{L}_{2}(H,Y)))\cap L^{p_{0}}(\Omega;L^{q}( 0,T;\mathcal{L}_{2}(H,X)))\]
_and define \(J_{g}:\Omega\times[0,T]\to X\) as the stochastic convolution_
\[J_{g}(t)=\int_{0}^{t}S(t-s)g(s)dW_{H}(s).\]
_Then one has \(J_{g}\in L^{p}(\Omega;C^{\Psi}([0,T];X))\) for \(\Psi:(0,T]\to(0,\infty),\Psi(r)\coloneqq r^{\alpha}(1+\log(\frac{T}{r}))^{1/2}\) and there exist constants \(C_{p},C_{\alpha,p,p_{0},T}\geq 0\) such that_
\[\|J_{g}\|_{L^{p}(\Omega;C^{\Psi}([0,T];X))}\leq C_{p}\|g\|_{L^{p}(\Omega;L^{2} (0,T;\mathcal{L}_{2}(H,Y)))}+C_{\alpha,p,p_{0},T}\|g\|_{L^{p_{0}}(\Omega;L^{q} (0,T;\mathcal{L}_{2}(H,X)))}.\]
By a simple rescaling the result extends to quasi-contraction semigroups. Moreover, from the proof below one can see that a certain Orlicz integrability in \(\Omega\) is sufficient for \(g\). Note that the above path regularity is optimal for \(q=\infty\). Indeed, Lévy's modulus of continuity theorem for a scalar Brownian motion states that a.s.
\[\limsup_{h\downarrow 0}\sup_{t\in[0,1-h]}\frac{B(t+h)-B(t)}{\sqrt{2h\log(1/h)}}=1,\]
which shows that \(\Psi\) cannot be replaced by a "better" function.
Proof of Lemma 6.6.: For \(0\leq s<t\leq T\), we can write
\[\|J_{g}(t)-J_{g}(s)\| \leq\left\|(S(t-s)-I)\int_{0}^{s}S(s-r)g(r)\,\mathrm{d}W_{H}(r) \right\|+\left\|\int_{s}^{t}S(t-r)g(r)\,\mathrm{d}W_{H}(r)\right\|\] \[\coloneqq T_{1}(t,s)+T_{2}(t,s).\]
For \(T_{1}\) we can write
\[T_{1}(t,s)\leq\|S(t-s)-I\|_{\mathcal{L}(Y,X)}\Big{\|}\int_{0}^{s}S(s-r)g(r)\, \mathrm{d}W_{H}(r)\Big{\|}_{Y}\leq c(t-s)^{\alpha}\|J_{g}(s)\|_{Y}\]
for some \(c\geq 0\). Therefore, by Theorem 2.1 we obtain
\[\bigg{\|}\sup_{0\leq s<t\leq T}\frac{T_{1}(t,s)}{\Psi(t-s)}\bigg{\|}_{p}\leq c \bigg{\|}\sup_{0\leq s<t\leq T}\frac{\|J_{g}(s)\|_{Y}}{(1+\log(\frac{T}{t-s}) )^{1/2}}\bigg{\|}_{L^{p}(\Omega)}\leq cB_{p}\|g\|_{L^{p}(\Omega;L^{2}(0,T; \mathcal{L}_{2}(H,Y)))}.\]
For \(T_{2}\) we use the dilatation result of [55, Theorem I.7.1] (cf. [32]). We can find a Hilbert space \(\widetilde{X}\), a contractive injection \(Q:X\to\widetilde{X}\), a contractive projection \(P:\widetilde{X}\to X\), and a unitary \(C_{0}\)-group \((G(t))_{t\in\mathbb{R}}\) on \(\widetilde{X}\) such that \(S(t)=PG(t)Q\) for \(t\geq 0\). Thus, we can write
\[T_{2}(t,s)=\bigg{\|}\int_{s}^{t}PG(t-r)Qg(r)\,\mathrm{d}W_{H}(r)\bigg{\|}_{X} \leq\Big{\|}\int_{s}^{t}G(-r)Qg(r)\,\mathrm{d}W_{H}(r)\Big{\|}_{\widetilde{X}} =\|I(t)-I(s)\|_{\widetilde{X}},\]
where \(I(t)\coloneqq\int_{0}^{t}G(-r)Qg(r)\,\mathrm{d}W_{H}(r)\). Then by [54, (2.12) and Theorem 3.2(vi)] we have \(I\in L^{p}(\Omega;C^{|\cdot|^{\alpha}|\log(\cdot)|^{1/2}}([0,T];\widetilde{X}))\) and thus by boundedness of \(|\log(\cdot)|^{1/2}(1+\log(\frac{T}{\cdot}))^{-1/2}\) on \((0,T]\) also \(I\in L^{p}(\Omega;C^{\Psi}([0,T];\widetilde{X}))\). Moreover, there are constants \(c_{\alpha,T},C_{\alpha,p,p_{0},T}\geq 0\) such that
\[\|I\|_{L^{p}(\Omega;C^{\Psi}([0,T];\widetilde{X}))} \leq c_{\alpha,T}\|I\|_{L^{p}(\Omega;B^{\alpha}_{\mathbf{2},\infty }(0,T;\widetilde{X}))}\] \[\leq C_{\alpha,p,p_{0},T}\|G(-r)Qg(r)\|_{L^{p_{0}}(\Omega;L^{q}(0,T; \mathcal{L}_{2}(H,\widetilde{X})))}\] \[\leq C_{\alpha,p,p_{0},T}\|g\|_{L^{p_{0}}(\Omega;L^{q}(0,T; \mathcal{L}_{2}(H,X)))}.\]
It follows that
\[\bigg{\|}\sup_{0\leq s<t\leq T}\frac{T_{2}(t,s)}{\Psi(t-s)}\bigg{\|}_{p}\leq\|I\|_ {L^{p}(\Omega;C^{\Psi}([0,T];\widetilde{X}))}\leq C_{\alpha,p,p_{0},T}\|g\|_{L^{p_{0} }(\Omega;L^{q}(0,T;\mathcal{L}_{2}(H,X)))}.\]
Now the required estimate follows by combining the estimates for \(T_{1}\) and \(T_{2}\).
**Remark 6.7**.: _For analytic semigroups on \(X\), the result of Lemma 6.6 holds if merely \(g\in L^{p_{0}}(\Omega;L^{q}(0,T;\mathcal{L}_{2}(H,X)))\), and one even has \(J_{g}\in L^{p}(\Omega;B^{\alpha}_{\Phi_{2},\infty}(0,T;X))\) (see [54, Theorem 5.1]). In particular, the space \(Y\) and contractivity of \(S\) are not needed. We do not know whether one can take \(p_{0}=p\) in Lemma 6.6, even in the analytic case. We also do not know whether the above Besov regularity of \(J_{g}\) holds in the non-analytic case._
After these preparations we can now prove the required path regularity of the mild solution.
**Proposition 6.8** (Path regularity of the mild solution).: _Suppose that Assumption 6.1 holds for some \(\alpha\in(0,1/2]\) and \(p\in[2,\infty)\). Let \(p_{0}\in(p,\infty)\) and \(q\in(2,\infty]\) be such that \(\frac{1}{2}-\frac{1}{q}=\alpha\), and suppose that \(f,g\), and \(u_{0}\) additionally satisfy_
\[f\in L^{p_{0}}(\Omega;L^{1}(0,T;X)),\ \ g\in L^{p_{0}}(\Omega;L^{q}(0,T; \mathcal{L}_{2}(H,X))),\ \ \text{and}\ \ u_{0}\in L^{p_{0}}_{\mathcal{F}_{0}}(\Omega;X)\cap L^{p}_{\mathcal{F}_{0} }(\Omega;Y).\]
_Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously. Let \(\Psi:(0,T]\to(0,\infty)\) be given by \(\Psi(r)\coloneqq r^{\alpha}(1+\log(\frac{T}{r}))^{1/2}\). Then the mild solution to (6.1) satisfies \(U\in L^{p}(\Omega;C^{\Psi}([0,T];X))\) and there exists a constant \(C\) depending on \((T,p,p_{0},\alpha,X,Y)\) such that_
\[\|U\|_{L^{p}(\Omega;C^{\Psi}([0,T];X))}\leq C\big{(}1+\|u_{0}\|_{ L^{p}(\Omega;Y)}+\|f\|_{L^{p}(\Omega;L^{\infty}(0,T;Y))}+\|g\|_{L^{p}( \Omega;L^{\infty}(0,T;\mathcal{L}_{2}(H,Y)))}\\ +\|u_{0}\|_{L^{p_{0}}(\Omega;X)}+\|f\|_{L^{p_{0}}(\Omega;L^{1}(0, T;X))}+\|g\|_{L^{p_{0}}(\Omega;L^{q}(0,T;\mathcal{L}_{2}(H,X)))}\big{)}.\]
Proof.: The mild solution formula (6.2) yields an initial value term, a difference of deterministic convolutions, and a stochastic version of the latter. The first two can be estimated as in the proof of Lemma 6.2, resulting in an upper bound of the form
\[c(1+\|u_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{p,\infty,Y}+\|g\|_{p,2,Y})\]
for some \(c\geq 0\) depending on \(T\). To the remaining term, we apply Lemma 6.6 and note that
\[\|\!|G(\cdot,U(\cdot))|\!|\!|_{p,2,Y} \leq L_{G,Y}C_{u_{0},f,g,Y}+|\!|\!|g|\!|_{p,\infty,Y},\] \[|\!|\!|G(\cdot,U(\cdot))|\!|\!|_{p_{0},q,X} \leq T^{1/q}|\!|\!|G(\cdot,U(\cdot))|\!|\!|_{p_{0},\infty,X}+|\!|\!|g| \!|_{p_{0},q,X}\leq T^{1/q}C_{G,X}\tilde{C}_{u_{0},f,g,X}+|\!|\!|g|\!|_{p_{0}, q,X}\] \[\lesssim 1+\|u_{0}\|_{L^{p_{0}}(\Omega;X)}+\|f\|_{p_{0},1,X}+|\!| \!|g|\!|_{p_{0},q,X},\]
where \(\tilde{C}_{u_{0},f,g,X}\) is defined as \(C_{u_{0},f,g,X}\) in (6.3) with \(p\) replaced by \(p_{0}\).
Consequently, we can now "upgrade" Theorem 6.3 to an estimate on the full time interval.
**Theorem 6.9** (Uniform error on the full interval for general schemes).: _Suppose that Assumption 6.1 holds for some \(\alpha\in(0,1/2]\) and \(p\in[2,\infty)\). Let \(A\) be the generator of a \(C_{0}\)-contraction semigroup \((S(t))_{t\geq 0}\) on both \(X\) and \(Y\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(X\) and \(Y\). Assume \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). Suppose that \(Y\hookrightarrow D_{A}(\alpha,\infty)\) continuously. Let \(p_{0}\in(p,\infty)\) and \(q\in(2,\infty]\) be such that \(\frac{1}{2}-\frac{1}{q}=\alpha\), and suppose that \(f,g\), and \(u_{0}\) have additional integrability as \(X\)-valued processes_
\[f\in L^{p_{0}}(\Omega;L^{1}(0,T;X)),\ \ g\in L^{p_{0}}(\Omega;L^{q}(0,T;\mathcal{L}_{2} (H,X))),\ \ \text{and}\ \ u_{0}\in L^{p_{0}}_{\mathcal{F}_{0}}(\Omega;X)\cap L^{p}_{\mathcal{F}_{0}}( \Omega;Y).\]
_Denote by \(U\) the mild solution of (6.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Define the piecewise linear extension \(\tilde{U}:[0,T]\to L^{p}(\Omega;X)\) by \(\tilde{U}(t)\coloneqq U^{j}\) for \(t\in[t_{j},t_{j+1})\), \(0\leq j\leq N_{k}-1\), and \(\tilde{U}(T)\coloneqq U^{N_{k}}\). Then for all \(N_{k}\geq 2\) there is a constant \(C\geq 0\) depending on \((T,p_{0},\alpha,u_{0},f,g,X,Y)\) such that_
\[\bigg{\|}\sup_{t\in[0,T]}\|U(t)-\tilde{U}(t)\|\bigg{\|}_{p}\leq C\Big{(}1+\log \Big{(}\frac{T}{k}\Big{)}\Big{)}k^{\alpha}.\]
Proof.: Apply Lemma 6.5 with \(\Phi(r)=r^{\alpha}(1+\log(\frac{T}{r}))^{1/2}\): the first term of the resulting decomposition is controlled by Proposition 6.8 and is of order \(k^{\alpha}(1+\log(\frac{T}{k}))^{1/2}\), while the second term, the error on the grid, is bounded by Theorem 6.3. Combining both bounds yields the assertion.
Invoking Corollary 6.4 instead of Theorem 6.3, we obtain an analogous bound for the splitting scheme with the square root of a logarithmic factor.
**Corollary 6.10** (Uniform error on the interval for splitting).: _Suppose that the conditions of Theorem 6.9 hold and that \(R_{k}=S(k)\). Then for all \(N_{k}\geq 2\) there is a constant \(C\geq 0\) depending on \((T,p,p_{0},\alpha,u_{0},f,g,X,Y)\) such that_
\[\Big{\|}\sup_{t\in[0,T]}\|U(t)-\tilde{U}(t)\|\Big{\|}_{p}\leq C\Big{(}1+\log \Big{(}\frac{T}{k}\Big{)}^{1/2}\Big{)}k^{\alpha}.\]
Thus we can conclude that Theorem 6.3 and Corollary 6.4 can be improved to a uniform error estimate on \([0,T]\) at the price of a slightly more restrictive integrability condition on \(g\) and \(u_{0}\). Moreover, in the splitting scheme an additional logarithmic factor appears. Recall from [53, Theorem 3] that already for SDEs the error has to grow at least as \(\log(T/k)^{1/2}k^{1/2}\) for \(k\to 0\). Therefore, for \(\alpha=1/2\), Corollary 6.10 gives the optimal convergence rate.
In the applications given below we restrict ourselves to the uniform error estimate on the grid points. By the above result, these statements can be extended to the full interval \([0,T]\) with additionally the square root of a logarithmic factor by imposing extra integrability conditions on the data.
### Application to the Schrodinger equation
In this subsection, we reconsider the stochastic Schrodinger equation with a potential from Subsection 3.3, now with linear multiplicative noise
\[\begin{cases}\mathrm{d}u=-\mathrm{i}(\Delta+V)u\;\mathrm{d}t-\mathrm{i}u\; \mathrm{d}W\ \ \text{on}\ [0,T],\\ u(0)=u_{0}\end{cases} \tag{6.20}\]
and its nonlinear variant with \(\phi:\mathbb{C}\to\mathbb{C}\) and \(\psi:\mathbb{C}\to\mathbb{C}\),
\[\begin{cases}\mathrm{d}u=-\mathrm{i}(\Delta u+Vu+\phi(u))\;\mathrm{d}t-\mathrm{ i}\psi(u)\;\mathrm{d}W\ \ \text{on}\ [0,T],\\ u(0)=u_{0}\end{cases} \tag{6.21}\]
in \(\mathbb{C}^{d}\) for \(d\in\mathbb{N}\), with \(Q\)-Wiener process \(\{W(t)\}_{t\geq 0}\), potential \(V\) and initial value \(u_{0}\) as introduced in Subsection 3.3.
Let \(\sigma\geq 0\) and, for this subsection only, write \(L^{2}=L^{2}(\mathbb{R}^{d})\) and \(H^{\sigma}=H^{\sigma}(\mathbb{R}^{d})\). We recall that well-posedness of (3.11) required Assumption 3.5 on \(\sigma\) and \(d\in\mathbb{N}\) to hold so that multiplication by \(V\) is a bounded operator on \(H^{\sigma}\). Based on the combination of the cases of Assumption 3.5 for \(X=H^{\sigma}\) and \(Y=H^{\sigma+2\alpha}\), the following assumption emerges.
**Assumption 6.11**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\), \(\alpha\in\big{(}0,\frac{1}{2}\big{]}\), \(V\in H^{\beta}\) for some \(\beta>0\) such that_
1. \(\sigma>\frac{d}{2}\) _and_ \(\beta=\sigma+2\alpha\)_, or_
2. \(\sigma=0\)_,_ \(d=1\)_,_ \(\alpha>\frac{1}{4}\)_, and_ \(\beta=2\alpha\)_, or_
3. \(\sigma=0\)_,_ \(d>4\alpha\)_,_ \(\alpha<\frac{1}{2}\)_, and_ \(\beta>\frac{d}{2}\)_, or_
4. \(\sigma=0\)_,_ \(d\geq 2\)_,_ \(\alpha=\frac{1}{2}\)_, and_ \(\beta>\frac{d}{2}\)_, or_
5. \(\sigma\in(0,1)\)_,_ \(d>2\sigma\)_,_ \(\alpha>\frac{d}{4}-\frac{\sigma}{2}\)_, and_ \(\beta=\sigma+2\alpha\)_, or_
6. \(\sigma\in(0,1)\)_,_ \(d>2\sigma+4\alpha\)_,_ \(\alpha<\frac{1-\sigma}{2}\)_, and_ \(\beta>\frac{d}{2}\)_, or_
7. \(\sigma\in(0,1)\)_,_ \(d\geq 2\)_,_ \(\alpha=\frac{1-\sigma}{2}\)_, and_ \(\beta>\frac{d}{2}\)_, or_
8. \(\sigma=1\)_,_ \(d\in\{2,3\}\)_,_ \(\alpha>\frac{d}{4}-\frac{1}{2}\)_, and_ \(\beta=1+2\alpha\)_._
For the splitting scheme, we recover the error bound from [2, Thm. 5.5] showing convergence rate \(\frac{1}{2}\) for linear noise in the case of sufficiently regular \(Q^{1/2}\) and \(V\) and \(\sigma>\frac{d}{2}\). Assuming less regularity of \(Q^{1/2}\) and \(V\) we extend their result to fractional convergence rates \(\alpha\in\big{(}0,\frac{1}{2}\big{]}\) as well as the cases (ii)-(viii) of Assumption 6.11.
**Theorem 6.12**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\). Suppose that Assumption 6.11 is satisfied for some \(\alpha\in\left(0,\frac{1}{2}\right]\), \(\beta>0\), and \(p\in[2,\infty)\), and that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma+2\alpha})\) as well as \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\beta})\). Denote by \(U\) the mild solution of the linear stochastic Schrodinger equation with multiplicative noise (6.20) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5) obtained with the splitting scheme \(R\coloneqq S\). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p} \leq C\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}k^{\alpha}.\]
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\frac{1}{2}\) as \(k\to 0\) if \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\sigma+1})\), \(V\in H^{\sigma+1}\), \(\sigma>\frac{d}{2}\), and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma+1})\)._
Proof.: By [2, Lemma 2.1], \(A=-\mathrm{i}\Delta\) generates a contractive semigroup on both Hilbert spaces \(X=H^{\sigma}\) and \(Y=H^{\sigma+2\alpha}\). Furthermore, setting \(F(u)=-\mathrm{i}V\cdot u\) and \(G(u)=-\mathrm{i}M_{u}Q^{1/2}\) for \(u\in H^{\sigma}\) with the multiplication operator \(M_{u}\) allows us to rewrite (6.20) in the form of a stochastic evolution equation (6.1). It remains to verify the mapping, linear growth and Lipschitz continuity conditions from Assumption 6.1.
Note that Assumption 6.11 implies that Assumption 3.5 is satisfied for both \(\sigma\) and \(\sigma+2\alpha\). In particular, this means that \(Vu\in Y=H^{\sigma+2\alpha}\) for any \(u\in H^{\sigma+2\alpha}\) and \(\|Vu\|_{H^{\sigma+2\alpha}}\leq C_{V}\|u\|_{H^{\sigma+2\alpha}}\) for some constant \(C_{V}\geq 0\). More specifically, it can be shown that \(C_{V}\lesssim\|V\|_{H^{\beta}}\), cf. Subsection 3.3. Hence, \(F\) maps both \(X\) and \(Y\) into themselves and it is of linear growth on \(Y\) because of
\[\|F(u)\|_{Y}=\|-\mathrm{i}V\cdot u\|_{H^{\sigma+2\alpha}}\leq C_{V}\|u\|_{H^{ \sigma+2\alpha}}=C_{V}\|u\|_{Y},\ u\in Y.\]
Likewise, Lipschitz continuity on \(X\) is obtained.
Set \(H=L^{2}\). Due to
\[\|G(u)\|_{\mathcal{L}_{2}(H,Y)} =\|-\mathrm{i}M_{u}\cdot Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{ \sigma+2\alpha})}\] \[\leq\|M_{u}\|_{\mathcal{L}(H^{\beta},H^{\sigma+2\alpha})}\|Q^{1/ 2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})} \tag{6.22}\] \[\lesssim\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\|u\|_{H^{ \sigma+2\alpha}}=\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\|u\|_{Y},\ u\in Y,\]
\(G\) is of linear growth on \(Y\). To see this, we estimate the operator norm of \(M_{u}\) from \(H^{\beta}\) to \(H^{\sigma+2\alpha}\) using either the Banach algebra property of \(H^{\beta}\), a combination of Holder's inequality and Sobolev embeddings or an argument analogous to Lemma 3.6 as discussed in Subsection 3.3. Likewise, we check Lipschitz continuity of \(G\) on \(X\) with a multiple of \(\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\) as Lipschitz constant. Measurability and Holder continuity in time are trivially fulfilled due to \(F\) and \(G\) depending only on \(u\in X\). Thus, Corollary 6.4 is applicable with \(X=H^{\sigma}\), \(H=L^{2}\), and \(Y=H^{\sigma+2\alpha}\hookrightarrow(H^{\sigma},D(A))_{\alpha,\infty}\), yielding the desired error bound.
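As an illustration of the scheme covered by Theorem 6.12, the following sketch performs one step of (6.5) with \(R_{k}=S(k)\) for (6.20) on a periodic space truncation, applying the free Schrodinger flow exactly in Fourier space. The periodic setting, the grid, and the externally sampled increment `dW` of the \(Q\)-Wiener process are illustrative assumptions; the theorem itself is set on \(\mathbb{R}^{d}\).

```python
import numpy as np

def schroedinger_splitting_step(u, V, dW, k, L):
    """One step of (6.5) with R_k = S(k) for (6.20) on a periodic grid of n points on [0, L):
    v = u + k*F(u) + G(u)*dW with F(u) = -i*V*u and (G(u) dW)(x) = -i*u(x)*dW(x),
    followed by the exact free flow S(k) = e^{-ik*Delta} applied in Fourier space.
    `dW` is an externally sampled increment of the Q-Wiener process (a real array)."""
    n = u.size
    freq = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)               # Fourier frequencies
    v = u - 1j * k * V * u - 1j * u * dW                           # Euler-type increment, cf. (6.5)
    return np.fft.ifft(np.exp(1j * k * freq**2) * np.fft.fft(v))  # exact semigroup step
```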
Furthermore, Theorem 6.3 enables us to extend [2, Thm. 5.5] to general discretisation schemes \(R\) other than the splitting scheme at the price of an additional logarithmic factor.
**Theorem 6.13**.: _Let \(\sigma\geq 0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\). Suppose that Assumption 6.11 is satisfied for some \(\alpha\in\left(0,\frac{1}{2}\right]\), \(\beta>0\), and \(p\in[2,\infty)\), and that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma+2\alpha})\) as well as \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\beta})\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(H^{\sigma}\) and \(H^{\sigma+2\alpha}\). Assume \(R\) approximates \(S\) to order \(\alpha\) on \(H^{\sigma+2\alpha}\). Denote by \(U\) the mild solution of the linear stochastic Schrodinger equation with multiplicative noise (6.20) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p} \leq C\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\log\left(\frac{T}{k} \right)k^{\alpha}.\]
As in the additive case, the conditions on the dimension \(d\in\mathbb{N}\) are not required in the absence of a potential.
When passing to a nonlinear situation as in (6.21), showing Lipschitz continuity of \(G\) requires estimates of the form
\[\|\psi(u)-\psi(v)\|_{H^{\sigma}}\lesssim\|u-v\|_{H^{\sigma}},\ u,v\in H^{\sigma}\]
and similar for \(\phi\). However, the best estimate known for \(\sigma\in(0,1)\) and \(\psi\in C^{2}\) with bounded first and second derivatives is [57, Prop. 2.7.2],
\[\|\psi(u)-\psi(v)\|_{H^{\sigma}}\lesssim\|u-v\|_{H^{\sigma}}+(1+\|u\|_{H^{ \sigma}}+\|v\|_{H^{\sigma}})\|u-v\|_{L^{\infty}}.\]
Since this estimate is nonlinear in \(u\) and \(v\), showing Lipschitz continuity of \(G\) is currently out of reach for \(\sigma>0\). Another reason to restrict our considerations to \(\sigma=0\) in the following is the negative result from Dahlberg [27]. It states that for \(\sigma+2\alpha\in\left(\frac{3}{2},1+\frac{d}{2}\right)\), the only mappings \(\psi\) such that \(\psi\circ u\in H^{\sigma+2\alpha}\) for all \(u\in H^{\sigma+2\alpha}\) are the affine-linear ones. Hence, in dimension \(d>1\), the optimal rate \(\alpha=\frac{1}{2}\) cannot be expected for all \(\sigma>\frac{1}{2}\) for genuinely nonlinear \(\psi\). For \(\sigma=0\), however, a convergence rate can be obtained.
**Theorem 6.14**.: _Let \(\sigma=0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\). Suppose that one of the cases (ii)-(iv) of Assumption 6.11 is satisfied for some \(\alpha\in\left(0,\frac{1}{2}\right]\), \(\beta>0\), and \(p\in[2,\infty)\), and that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma+2\alpha})\) as well as \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\beta})\). Let \(\phi,\psi:\mathbb{C}\to\mathbb{R}\) be Lipschitz continuous and such that \(\phi(0)=\psi(0)=0\). Denote by \(U\) the mild solution of the nonlinear stochastic Schrodinger equation with multiplicative noise (6.21) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5) obtained with the splitting scheme \(R\coloneqq S\). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p} \leq C\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}k^{\alpha}.\]
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\frac{1}{2}\) as \(k\to 0\) if \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{1})\), \(V\in H^{1}\), and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{1})\) for \(d=1\). In dimension \(d\geq 2\), this is attained for \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\beta})\) and \(V\in H^{\beta}\) for some \(\beta>\frac{d}{2}\), and \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{1})\)._
Proof.: From the linear case, it is already clear that
\[\|G(u)-G(v)\|_{\mathcal{L}_{2}(L^{2},L^{2})}\lesssim\|\psi\circ u-\psi\circ v \|_{L^{2}}\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}.\]
Lipschitz continuity of \(\psi\) with Lipschitz constant \(C_{\psi}\geq 0\) implies Lipschitz continuity of \(G\) on \(X=L^{2}\) via
\[\|\psi\circ u-\psi\circ v\|_{L^{2}}\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{ \beta})}\leq C_{\psi}\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\|u-v\|_{L^ {2}}.\]
Since from (6.22) we know that
\[\|G(u)\|_{\mathcal{L}_{2}(L^{2},H^{2\alpha})}\lesssim\|\psi\circ u\|_{H^{2 \alpha}}\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})},\]
it remains to estimate the norm of the composition \(\|\psi\circ u\|_{H^{2\alpha}}\) by a multiple of \(\|u\|_{H^{2\alpha}}\) to show linear growth of \(G\) on \(H^{2\alpha}\). In the cases (ii) with \(\alpha<\frac{1}{2}\) and (iii), \(2\alpha\in(0,1)\). Hence, by [57, Prop. 2.4.1], \(\|\psi\circ u\|_{H^{2\alpha}}\lesssim\|u\|_{H^{2\alpha}}\). In the remaining cases, \(2\alpha=1\) holds, so that
\[\|\psi\circ u\|_{H^{2\alpha}}^{2}=\|\psi\circ u\|_{L^{2}}^{2}+\|\nabla(\psi \circ u)\|_{L^{2}}^{2}\leq\|\psi\circ u\|_{L^{2}}^{2}+C_{\psi}^{2}\|\nabla u \|_{L^{2}}^{2}\leq C_{\psi}^{2}\|u\|_{H^{1}}^{2},\]
where in the first inequality we have invoked [57, Prop. 2.6.1]. Hence, \(G\) is of linear growth on \(Y=H^{2\alpha}\). In the same way, combined with the estimates for the potential term from the linear case, one sees that the drift \(F\) is Lipschitz on \(X\) and of linear growth on \(Y\). The statement of this theorem follows by an application of Corollary 6.4.
For nonlinear noise, the result can again be extended to general contractive schemes.
**Theorem 6.15**.: _Let \(\sigma=0\), \(d\in\mathbb{N}\), and \(V\in L^{2}\). Suppose that one of the cases (ii)-(iv) of Assumption 6.11 is satisfied for some \(\alpha\in\left(0,\frac{1}{2}\right]\), \(\beta>0\), and \(p\in[2,\infty)\), and that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;H^{\sigma+2\alpha})\) as well as \(Q^{1/2}\in\mathcal{L}_{2}(L^{2},H^{\beta})\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(L^{2}\) and \(H^{2\alpha}\). Assume \(R\) approximates \(S\) to order \(\alpha\) on \(H^{2\alpha}\). Let \(\phi,\psi:\mathbb{C}\to\mathbb{R}\) be Lipschitz continuous and such that \(\phi(0)=\psi(0)=0\). Denote by \(U\) the mild solution of the nonlinear stochastic Schrodinger equation with multiplicative noise (6.21) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{H^{\sigma}}\right\|_{p}\leq C \|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2},H^{\beta})}\log\left(\frac{T}{k}\right)k^{ \alpha}.\]
### Application to Maxwell's equations
As a second example, we consider the stochastic Maxwell's equations
\[\begin{cases}\,\,\mathrm{d}U=[AU+F(U)]\,\,\mathrm{d}t+G(U)\,\,\mathrm{d}W\,\,\, \text{on}\,\,[0,T],\\ U(0)=(\mathbf{E}_{0}^{\top},\mathbf{H}_{0}^{\top})^{\top}\end{cases} \tag{6.23}\]
with boundary conditions of a perfect conductor as in [16]. It describes the behaviour of the electric and magnetic fields \(\mathbf{E}\) and \(\mathbf{H}\), respectively, on a bounded, simply connected domain \(\mathcal{O}\subseteq\mathbb{R}^{3}\) with smooth boundary and unit outward normal vector \(\mathbf{n}\). Here, \(A:D(A)\to X\coloneqq L^{2}(\mathcal{O})^{6}\) is the Maxwell operator defined by
\[A\begin{pmatrix}\mathbf{E}\\ \mathbf{H}\end{pmatrix}\coloneqq\begin{pmatrix}0&\varepsilon^{-1}\nabla\times \\ -\mu^{-1}\nabla\times&0\end{pmatrix}\begin{pmatrix}\mathbf{E}\\ \mathbf{H}\end{pmatrix}=\begin{pmatrix}\varepsilon^{-1}\nabla\times\mathbf{H} \\ -\mu^{-1}\nabla\times\mathbf{E}\end{pmatrix}\]
on \(D(A)\coloneqq H_{0}(\operatorname{curl},\mathcal{O})\times H(\operatorname{ curl},\mathcal{O})\) with \(H(\operatorname{curl},\mathcal{O})\coloneqq\{\mathbf{H}\in(L^{2}(O))^{3}:\, \nabla\times\mathbf{H}\in L^{2}(\mathcal{O})^{3}\}\) and its subspace \(H_{0}(\operatorname{curl},\mathcal{O})\) of those \(\mathbf{H}\) with vanishing tangential trace \(\mathbf{n}\times\mathbf{H}|_{\partial\mathcal{O}}\). The permittivity and permeability \(\varepsilon,\mu\in L^{\infty}(\mathcal{O})\) are assumed to be uniformly positive, i.e. \(\varepsilon,\mu\geq\kappa>0\) for some constant \(\kappa\). We equip the Hilbert space \(X=L^{2}(\mathcal{O})^{6}=L^{2}(\mathcal{O})^{3}\times L^{2}(\mathcal{O})^{3}\) with the weighted scalar product
\[\left\langle\begin{pmatrix}\mathbf{E}_{1}\\ \mathbf{H}_{1}\end{pmatrix},\begin{pmatrix}\mathbf{E}_{2}\\ \mathbf{H}_{2}\end{pmatrix}\right\rangle\coloneqq\int_{\mathcal{O}}\left(\mu \langle\mathbf{H}_{1},\mathbf{H}_{2}\rangle+\varepsilon\langle\mathbf{E}_{1}, \mathbf{E}_{2}\rangle\right)\mathrm{d}x,\]
where \(\langle\cdot,\cdot\rangle\) denotes the standard scalar product in \(L^{2}(\mathcal{O})^{3}\). Furthermore, \(W\) is a \(Q\)-Wiener process for a symmetric, non-negative operator \(Q\) with finite trace such that \(Q^{1/2}\in\mathcal{L}_{2}(H,X)\), where \(H=L^{2}(\mathcal{O})^{6}\) is equipped with the standard norm.
For \(F:\Omega\times[0,T]\times X\to X\) we consider the linear drift term given by
\[(\omega,t,U)\mapsto F(\omega,t,U)=\begin{pmatrix}\sigma_{1}(\cdot,t)\mathbf{E }\\ \sigma_{2}(\cdot,t)\mathbf{H}\end{pmatrix},\quad U=(\mathbf{E}^{\top},\mathbf{H }^{\top})^{\top}, \tag{6.24}\]
for sufficiently smooth \(\sigma_{1},\sigma_{2}:\mathcal{O}\times[0,T]\to\mathbb{R}\). We assume boundedness of \(\sigma_{1},\sigma_{2}\) and their partial derivatives w.r.t. the spatial variables. In particular, let \(\sigma_{j}\) be uniformly Lipschitz continuous in time and let \(\partial_{x_{i}}\sigma_{j},\sigma_{j}\in L^{\infty}(\mathcal{O}\times[0,T])\) for \(i=1,2,3\) and \(j=1,2\). Then \(F\) is Lipschitz on \(X\) due to
\[\|F(t,V)\|_{X}^{2} =\int_{\mathcal{O}}\left(\mu(x)|\sigma_{2}(x,t)\mathbf{H}_{V}(x)|^{2}+\varepsilon(x)|\sigma_{1}(x,t)\mathbf{E}_{V}(x)|^{2}\right)\,\mathrm{d}x\] \[\leq\max\{\|\sigma_{1}\|_{\infty},\|\sigma_{2}\|_{\infty}\}^{2} \|V\|_{X}^{2}\eqqcolon C_{F}^{2}\|V\|_{X}^{2},\quad V=(\mathbf{E}_{V}^{\top}, \mathbf{H}_{V}^{\top})^{\top},\]
and linearity of \(F\). A straightforward explicit calculation of the curl operator shows that
\[\|AF(t,V)\|_{X}^{2} =\left\|\begin{pmatrix}\varepsilon^{-1}\nabla\times(\sigma_{2}( \cdot,t)\mathbf{H}_{V})\\ -\mu^{-1}\nabla\times(\sigma_{1}(\cdot,t)\mathbf{E}_{V})\end{pmatrix}\right\|_{ X}^{2}\] \[\leq\kappa^{-2}\int_{\mathcal{O}}\mu(x)|\nabla\times(\sigma_{1}( \cdot,t)\mathbf{E}_{V})(x)|^{2}+\varepsilon(x)|\nabla\times(\sigma_{2}(\cdot,t)\mathbf{H}_{V})(x)|^{2}\,\mathrm{d}x\] \[\leq 3\kappa^{-2}\left(C_{F}^{2}\|AV\|_{X}^{2}+2\max_{j=1,2}\max_{ i=1,2,3}\|\partial_{x_{i}}\sigma_{j}\|_{\infty}^{2}\|V\|_{X}^{2}\right).\]
We conclude linear growth of \(F\) on \(Y\coloneqq D(A)\) by
\[\|F(t,V)\|_{D(A)}^{2} =\|F(t,V)\|_{X}^{2}+\|AF(t,V)\|_{X}^{2}\] \[\leq\left(\max\{1,3\kappa^{-2}\}C_{F}^{2}+6\kappa^{-2}\max_{j=1,2 }\max_{i=1,2,3}\|\partial_{x_{i}}\sigma_{j}\|_{\infty}^{2}\right)\|V\|_{D(A)}^{2}.\]
As noise \(G(V)\), where \(V=(\mathbf{E}_{V}^{\top},\mathbf{H}_{V}^{\top})^{\top}\in L^{2}(\mathcal{O})^{6}\), we consider the Nemytskij map associated to \(\operatorname{diag}((-\varepsilon^{-1}\mathbf{E}_{V}^{\top},-\mu^{-1}\mathbf{H }_{V}^{\top}))Q^{1/2}\), i.e. for \(h\in L^{2}(\mathcal{O})^{6}\) and \(x\in\mathcal{O}\), we have
\[(G(V)h)(x)=\begin{pmatrix}-\varepsilon^{-1}(x)\operatorname{diag}(\mathbf{E}_{V} (x))&0\\ 0&-\mu^{-1}(x)\operatorname{diag}(\mathbf{H}_{V}(x))\end{pmatrix}(Q^{1/2}h)(x) \in\mathbb{R}^{6}. \tag{6.25}\]
Since \(G\) is linear and, for \(V_{1},V_{2}\in L^{2}(\mathcal{O})^{6}\),
\[\|G(V_{1})-G(V_{2})\|_{\mathcal{L}_{2}(H,X)}=\|G(V_{1}-V_{2})\|_{\mathcal{L}_{2}(H,X)}\leq\kappa^{-1}\|Q^{1/2}\|_{\mathcal{L}_{2}(H,X)}\|V_{1}-V_{2}\|_{X},\]
\(G:X\to\mathcal{L}_{2}(H,X)\) is Lipschitz continuous on \(X\). As discussed in [16, p.5], \(G\) is of linear growth on \(D(A)\) under higher regularity assumptions on \(Q^{1/2}\). To be precise, if \(Q^{1/2}\in\mathcal{L}_{2}(L^{2}(\mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6})\) for some \(\beta>\frac{3}{2}\), then, for some \(C\geq 0\),
\[\|G(V)\|_{\mathcal{L}_{2}(H,D(A))}\leq C\|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2}( \mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6})}(1+\|V\|_{D(A)}).\]
This directly follows from the estimate [16, formula (7)] for \(\mathbb{G}\) defined by \(G=\mathbb{G}Q^{1/2}\) taking into account that for an orthonormal basis \((e_{l})_{l\in\mathbb{N}}\) of \(H\), we have
\[\|G(V)\|_{\mathcal{L}_{2}(H,D(A))}^{2}=\sum_{l\in\mathbb{N}}\|G(V)e_{l}\|_{D(A)}^{2}= \sum_{l\in\mathbb{N}}\|\mathbb{G}(V)Q^{1/2}e_{l}\|_{D(A)}^{2}=\|\mathbb{G}(V)\|_{ \mathcal{L}_{2}(Q^{1/2}H,D(A))}^{2}.\]
The choice of the coefficient \(\beta>\frac{3}{2}\) stems from the fact that the Sobolev embedding \(H^{\beta}(\mathcal{O})\hookrightarrow L^{\infty}(\mathcal{O})\) holds for \(\beta>\frac{d}{2}=\frac{3}{2}\) since \(\mathcal{O}\subseteq\mathbb{R}^{3}\)[36, Ex. 9.3.4]. Thus, for the embedding into \(D(A)\) to hold, \(Q^{1/2}\) is required to map into \(H^{1+\beta}(\mathcal{O})^{6}\).
**Theorem 6.16**.: _Let \(p\in[2,\infty)\) and let \(F,G\) be as introduced in (6.24) and (6.25), respectively. Suppose that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;D(A))\) and \(Q^{1/2}\in\mathcal{L}_{2}(L^{2}(\mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6})\) for some \(\beta>\frac{3}{2}\). Denote by \(U\) the mild solution to the stochastic Maxwell's equations (6.23) with multiplicative noise (6.25) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5) obtained with the splitting scheme \(R\coloneqq S\). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C\|Q ^{1/2}\|_{\mathcal{L}_{2}(L^{2}(\mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6}) }k^{1/2},\]
_i.e. the approximations \((U^{j})_{j}\) converge at rate \(\frac{1}{2}\) as \(k\to 0\)._
Proof.: The theorem follows from Corollary 6.4 with \(\alpha=\frac{1}{2}\) and \(Y=D(A)\). From the above considerations it follows that the conditions on \(F\) and \(G\) are met. It remains to verify that \(Y\) is Hilbert and \((S(t))_{t\geq 0}\) is a contraction semigroup on both \(X\) and \(Y\). Since \(Y=D(A)\) is a Banach space [52, p. 410] and \(\lambda-A\) defines an isomorphism between \(D(A)\) and \(X\) for \(\lambda\in\rho(A)\), it is also a Hilbert space. By [16, Formula (3)], \((S(t))_{t\geq 0}\) is a contraction semigroup on \(X\). By definition of the graph norm, this implies contractivity on \(D(A)\).
We can extend [16, Thm. 3.3] to schemes other than the splitting scheme.
**Theorem 6.17**.: _Let \(p\in[2,\infty)\) and let \(F,G\) be as introduced in (6.24) and (6.25), respectively. Suppose that \(u_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;D(A))\) and \(Q^{1/2}\in\mathcal{L}_{2}(L^{2}(\mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6})\) for some \(\beta>\frac{3}{2}\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(L^{2}(\mathcal{O})^{6}\) and \(D(A)\). Assume \(R\) approximates \(S\) to order \(\alpha\in\left(0,\frac{1}{2}\right]\) on \(D(A)\). Denote by \(U\) the mild solution to the stochastic Maxwell's equations (6.23) with multiplicative noise (6.25) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (6.5). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C \|Q^{1/2}\|_{\mathcal{L}_{2}(L^{2}(\mathcal{O})^{6},H^{1+\beta}(\mathcal{O})^{6 })}\log\left(\frac{T}{k}\right)k^{\alpha},\]
_i.e. the approximations \((U^{j})_{j}\) converge at rate \(\min\left(\alpha,\frac{1}{2}\right)\) up to a logarithmic correction factor as \(k\to 0\)._
## 7. Convergence Rates for abstract Wave Equations
In this section, we shall be concerned with rates of convergence for abstract stochastic wave equations of the form
\[\operatorname{d}\!U=\left(AU+\mathbf{F}(t,U)\right)\operatorname{d}\!t+ \mathbf{G}(t,U)\operatorname{d}\!W_{H}(t),\ U(0)=U_{0}=(u_{0},v_{0})\in L^{p}( \Omega;X) \tag{7.1}\]
on a phase space \(X=V\times V_{-1}\) of product structure to be specified later, which takes different regularities of the first and second component of the mild solution into account. We achieve the following convergence rates for sufficiently regular noise:
* \(\mathrm{E}_{k}^{\infty}\lesssim k^{\alpha}\log(T/k)\) with \(\alpha\) close to one (general contractive schemes, multiplicative noise);
* \(\mathrm{E}_{k}^{\infty}\lesssim k\) (splitting scheme, multiplicative noise).
Up to a logarithmic factor, these rates are optimal for the given problem. They provide an alternative proof of [61, Thm. 3.1] for the splitting scheme under less regularity assumptions on \(\mathbf{F}\) and \(\mathbf{G}\) and without making use of the group structure of the semigroup. The latter is crucial in order to extend the convergence result beyond the splitting scheme. We extend the convergence result to general contractive schemes, which, to the best of our knowledge, is novel.
At the heart of our proof lies the higher Holder continuity of the first component of the mild solution in \(V\) compared to the mild solution vector in \(X\), which emerges from the product structure of the phase space on which the abstract wave equation is considered. This allows for better estimates of those error terms depending on the Holder continuity of the mild solution. Incorporating this into the setting of Section 6 leads to the main Theorem 7.6 in Subsection 7.1. Subsection 7.2 covers the splitting scheme. An extension of the error estimates to the full time interval is presented in Subsection 7.3. The results are illustrated for the stochastic wave equation with trace class noise, space-time white noise, and smooth noise in Subsections 7.4 to 7.6.
Let \(V\) be a separable Hilbert space equipped with the norm \(\|\cdot\|_{V}\). Consider a densely defined, positive self-adjoint invertible operator \(\Lambda:D(\Lambda)\subseteq V\to V\). For \(\beta\geq 0\), denote by \(V_{\beta}\) the domain of \(\Lambda^{\frac{\beta}{2}}\) equipped with the norm \(\|u\|_{V_{\beta}}\coloneqq\|\Lambda^{\beta/2}u\|_{V}\); for negative \(\beta\), we denote by \(V_{\beta}\) the completion of \(V\) with respect to \(\|u\|_{V_{\beta}}\coloneqq\|\Lambda^{\beta/2}u\|_{V}\). We can thus interpret \(\Lambda\) as an operator mapping from \(V_{1}\) to \(V_{-1}\) and it holds that \(V=V_{0}\). In this section, we consider stochastic evolution equations on the phase space \(X\coloneqq V_{0}\times V_{-1}=V\times V_{-1}\). More generally, we introduce the product spaces
\[X_{\beta}\coloneqq V_{\beta}\times V_{\beta-1}=D(\Lambda^{\frac{\beta}{2}}) \times D(\Lambda^{\frac{\beta-1}{2}}) \tag{7.2}\]
for \(\beta\in\mathbb{R}\), equipped with the norm \(\|U\|_{X_{\beta}}\coloneqq(\|u\|_{V_{\beta}}^{2}+\|v\|_{V_{\beta-1}}^{2})^{1/2}\) for \(U=(u,v)\in X_{\beta}\). Clearly, it then holds that \(X=X_{0}\).
The stochastic evolution equation (7.1) depends on the nonlinearity \(\mathbf{F}:\Omega\times[0,T]\times X\to X\) and the multiplicative noise \(\mathbf{G}:\Omega\times[0,T]\times X\to\mathcal{L}_{2}(H,X)\) on the phase space \(X\). However, the product structure of \(X\) considered in this section motivates an interpretation of (7.1) as a system of two evolution equations. Setting
\[A=\begin{pmatrix}0&I\\ -\Lambda&0\end{pmatrix},\quad\mathbf{F}(t,U)=\begin{pmatrix}0\\ F(t,u)\end{pmatrix},\quad\mathbf{G}(t,U)=\begin{pmatrix}0\\ G(t,u)\end{pmatrix}\quad\text{ for }U=\begin{pmatrix}u\\ v\end{pmatrix}\in X \tag{7.3}\]
gives rise to the system of evolution equations
\[\begin{cases}\mathrm{d}u=v\;\mathrm{d}t,\\ \mathrm{d}v=(-\Lambda u+F(t,u))\;\mathrm{d}t+G(t,u)\;\mathrm{d}W_{H}(t).\end{cases}\]
This precisely captures the setting of stochastic wave equations when thinking of \(v(t)\) as the derivative of \(u(t)\), thus yielding a stochastic evolution equation for the derivative \(\dot{u}(t)\) with left hand side \(\mathrm{d}\dot{u}\). The invertibility of \(\Lambda\) does not lead to restrictions, because we can always reduce to this case by writing \(-\Lambda u+F(t,u)=-(\Lambda+\varepsilon)u+\varepsilon u+F(t,u)\) without changing the properties of \(F\).
The operator \(A\) from (7.3) generates a \(C_{0}\)-semigroup \((S(t))_{t\geq 0}\) given by
\[S(t)=\begin{pmatrix}\cos(t\Lambda^{1/2})&\Lambda^{-1/2}\sin(t\Lambda^{1/2})\\ -\Lambda^{1/2}\sin(t\Lambda^{1/2})&\cos(t\Lambda^{1/2})\end{pmatrix}, \tag{7.4}\]
where we use the spectral theorem for self-adjoint operators to define the matrix entries. Indeed,
\[\lim_{t\to 0}\|\cos(t\Lambda^{1/2})x-x\|=\lim_{t\to 0}\Big{\|}\int_{0}^{t}\sin(s \Lambda^{1/2})\Lambda^{1/2}x\;\mathrm{d}s\Big{\|}\leq\lim_{t\to 0}t\|\Lambda^{1/2}x\|=0\]
and, analogously, \(\lim_{t\to 0}\|\pm\Lambda^{\mp 1/2}\sin(t\Lambda^{1/2})x\|=0\) for \(x\in D(\Lambda^{1/2})\). Strong continuity of the semigroup follows by density of \(D(\Lambda^{1/2})\) and the spectral theorem. It is straightforward
to see that \(S\) satisfies the semigroup property and that \(A\) is its infinitesimal generator. Due to \(-\Lambda u\in V_{-1}\) if and only if \(u\in V_{1}\), we find that the domain of \(A\) is given by
\[D(A)=\{U\in X:AU\in X\}=\{(u,v)\in X:(v,-\Lambda u)\in V_{0}\times V_{-1}\}=X_{1}.\]
Let \(\beta\in\mathbb{R}\). Combining the respective one-dimensional statements with the spectral theorem, we obtain that \(\sin(t\Lambda^{1/2})\) and \(\cos(t\Lambda^{1/2})\) are contractive on \(V_{\beta}\), \(\sin(0\cdot\Lambda^{1/2})=0\), and that \(\Lambda\) and powers thereof commute with both \(\sin(t\Lambda^{1/2})\) and \(\cos(t\Lambda^{1/2})\). The trigonometric identity satisfied by \(\sin(t\Lambda^{1/2})\) and \(\cos(t\Lambda^{1/2})\) implies contractivity of the semigroup, that is,
\[\|S(t)U\|_{X_{\beta}}\leq\|U\|_{X_{\beta}}. \tag{7.5}\]
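Concretely, in an eigenbasis of \(\Lambda\) the semigroup (7.4) acts entrywise, which the following minimal sketch illustrates. The Dirichlet Laplacian eigenvalues mentioned in the comment are only an example of a possible choice of \(\Lambda\), and the function name is hypothetical.

```python
import numpy as np

def wave_semigroup_apply(u_hat, v_hat, t, lam):
    """Apply S(t) from (7.4) to (u, v) given by coefficient vectors `u_hat`, `v_hat`
    in an eigenbasis of Lambda with positive eigenvalues `lam` (e.g. lam_n = (n*pi/L)^2
    for the Dirichlet Laplacian on (0, L)); all four matrix entries act diagonally."""
    s = np.sqrt(np.asarray(lam, dtype=float))
    cu = np.cos(t * s) * u_hat + np.sin(t * s) / s * v_hat    # cos(t L^{1/2}) u + L^{-1/2} sin(t L^{1/2}) v
    cv = -s * np.sin(t * s) * u_hat + np.cos(t * s) * v_hat   # -L^{1/2} sin(t L^{1/2}) u + cos(t L^{1/2}) v
    return cu, cv
```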
Our aim is to derive conditions on \(F\) and \(G\) rather than \(\mathbf{F}\) and \(\mathbf{G}\) under which the temporal approximations
\[U^{j}=R_{k}^{j}U_{0}+k\sum_{i=0}^{j-1}R_{k}^{j-i}\mathbf{F}(t_{i},U^{i})+\sum_{i=0}^{j-1} R_{k}^{j-i}\mathbf{G}(t_{i},U^{i})\Delta W_{i+1} \tag{7.6}\]
converge to the mild solution \(U(t)=(u(t),v(t))\in X\) at a certain rate. As will become apparent, rates of convergence \(>1/2\) can be attained up to a logarithmic correction factor even for general contractive schemes. The key ingredient enabling this optimal rate in the application of our main result, Theorem 6.3, is the higher-order Holder continuity of the first component of the mild solution.
### General contractive time discretisation schemes
As will be shown, the following assumptions on \(F\) and \(G\) imply that \(\mathbf{F}\) and \(\mathbf{G}\) fall within the scope of Section 6.
**Assumption 7.1**.: _Let \(V\) be a Hilbert space, \(\Lambda:D(\Lambda)\subseteq V\to V\) a densely defined, positive, self-adjoint, and invertible operator, and \(p\in[2,\infty)\). Let \(F:\Omega\times[0,T]\times V\to V_{-1}\), \(F(\omega,t,x)=\tilde{F}(\omega,t,x)+f(\omega,t)\) and \(G:\Omega\times[0,T]\times V\to\mathcal{L}_{2}(H,V_{-1})\), \(G(\omega,t,x)=\tilde{G}(\omega,t,x)+g(\omega,t)\) be strongly \(\mathcal{P}\otimes\mathcal{B}(V)\)-measurable, and such that \(\tilde{F}(\cdot,\cdot,0)=0\) and \(\tilde{G}(\cdot,\cdot,0)=0\), and suppose that for some \(\delta>0\) and \(\alpha\in(0,1]\),_
1. (Lipschitz continuity from \(V\) to \(V_{-1}\)) _there exist constants_ \(C_{F},C_{G}\geq 0\) _such that for all_ \(\omega\in\Omega,t\in[0,T]\) _and_ \(x,y\in V\)_, it holds that_ \[\|\tilde{F}(\omega,t,x)-\tilde{F}(\omega,t,y)\|_{V_{-1}} \leq C_{F}\|x-y\|_{V},\] \[\|\tilde{G}(\omega,t,x)-\tilde{G}(\omega,t,y)\|_{\mathcal{L}_{2}(H,V_{-1})} \leq C_{G}\|x-y\|_{V},\]
2. (Holder continuity with values in \(V_{-1}\)) _there are constants_ \(C_{\alpha,F},C_{\alpha,G}\geq 0\) _such that_ \[\sup_{\omega\in\Omega,x\in V}[\Lambda^{-\frac{1}{2}}F(\omega,\cdot,x)]_{\alpha} \leq C_{\alpha,F},\ \sup_{\omega\in\Omega,x\in V}[\Lambda^{-\frac{1}{2}}G(\omega,\cdot,x)]_{\alpha }\leq C_{\alpha,G},\]
3. (continuity with values in \(V_{\delta-1}\)) \(f\in L_{\mathcal{P}}^{p}(\Omega;C([0,T];V_{\delta-1}))\) _and_ \(g\in L_{\mathcal{P}}^{p}(\Omega;C([0,T];\mathcal{L}_{2}(H,V_{\delta-1})))\)_,_
4. (invariance) \(F:\Omega\times[0,T]\times V_{\delta}\to V_{\delta-1}\) _and_ \(G:\Omega\times[0,T]\times V_{\delta}\to\mathcal{L}_{2}(H,V_{\delta-1})\) _are_ \(\mathcal{P}\otimes\mathcal{B}(V_{\delta})\)_-measurable,_
5. (linear growth from \(V_{\delta}\) to \(V_{\delta-1}\)) _there exist constants_ \(L_{F},L_{G}\geq 0\) _such that for all_ \(\omega\in\Omega\)_,_ \(t\in[0,T]\) _and_ \(x\in V\)_, it holds that_ \[\|\tilde{F}(\omega,t,x)\|_{V_{\delta-1}} \leq L_{F}(1+\|x\|_{V_{\delta}}),\] \[\|\tilde{G}(\omega,t,x)\|_{\mathcal{L}_{2}(H,V_{\delta-1})} \leq L_{G}(1+\|x\|_{V_{\delta}}).\]
It is important to note that both \(\delta\in(0,1]\) and \(\delta\in(1,2]\) will be considered. Since optimal rates are already obtained for the usual schemes at \(\delta=2\), larger values of \(\delta\) are not considered.
We first show that the required conditions for well-posedness are satisfied, so that (7.1) has a unique mild solution. Adopt the notation of the proof of Theorem 6.3, replacing \(F,\tilde{F},f,G,\tilde{G}\) and \(g\) by \(\mathbf{F},\tilde{\mathbf{F}},\mathbf{f},\mathbf{G},\tilde{\mathbf{G}}\) and \(\mathbf{g}\), respectively.
Setting \(Y:=X_{\delta}\) for some \(\delta\geq\alpha\), it is clear from \(X=X_{0}\), invertibility of \(\Lambda\), and \(D(A^{n})=X_{n}\) that \(Y\hookrightarrow X\) and \(Y\hookrightarrow D_{A}(\beta,\infty)\) for any \(\beta\in(0,\delta)\). Since \(V_{\delta}\) are separable Hilbert spaces for \(\delta\in\mathbb{R}\), so are \(X\) and \(Y\). Contractivity of the semigroup follows from (7.5). Note that strong \(\mathcal{P}\otimes\mathcal{B}(X)\)-measurability of \(\mathbf{F}\) and \(\mathbf{G}\), and that \(\tilde{\mathbf{F}},\tilde{\mathbf{G}}\) vanish in \(0\) immediately follow from the
respective assumptions on \(\tilde{F},\tilde{G}\) due to the structure (7.3). We are left to prove Lipschitz continuity, linear growth, \(Y\)-invariance, and Holder continuity of \(\mathbf{F},\mathbf{G}\), and continuity of \(\mathbf{f}\) and \(\mathbf{g}\). Deducing \(Y\)-invariance from Assumption 7.1 is straightforward noting that
\[\|\mathbf{f}\|_{p,\infty,Y}=\left\|\sup_{t\in[0,T]}\|\mathbf{f}(t)\|_{Y} \right\|_{p}=\left\|\sup_{t\in[0,T]}\|f(t)\|_{V_{\delta-1}}\right\|_{p}=\|f\|_{ p,\infty,V_{\delta-1}} \tag{7.7}\]
and, likewise, \(\|\mathbf{g}\|_{p,\infty,\mathcal{L}_{2}(H,Y)}=\|g\|_{p,\infty,\mathcal{L}_{2}(H,V_{\delta-1})}\).
**Lemma 7.3** (Stability).: _Suppose that Assumption 7.1 holds for some \(\alpha\in(0,1]\), \(\delta\geq\alpha\), and \(p\in[2,\infty)\). Let \(Y\coloneqq X_{\delta}\) as defined in (7.2) and \(U_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(X\) and \(Y\), and let \(N_{k}\geq 2\). Then the temporal approximations \((U^{j})_{j=0,\ldots,N_{k}}\) obtained via (7.6) are stable on both \(X\) and \(Y\). That is, for \(Z\in\{X,Y\}\),_
\[1+\biggl{\|}\max_{0\leq j\leq N_{k}}\|U^{j}\|_{Z}\biggr{\|}_{p}\leq C^{Z}_{stab }c_{U_{0},f,g,T,Z},\]
_where \(C^{Z}_{stab}\coloneqq(1+C^{2}_{Z}T)^{1/2}e^{(1+C^{2}_{Z}T)/2}\) with \(C_{X}\coloneqq C_{F}T^{1/2}+B_{p}C_{G}\), \(C_{Y}\coloneqq L_{F}T^{1/2}+B_{p}L_{G}\),_
\[c_{U_{0},f,g,T,Z}\coloneqq 1+\|U_{0}\|_{L^{p}(\Omega;Z)}+\|f\|_{L^{p}( \Omega;L^{\infty}(0,T;Z_{2}))}T+\|g\|_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{L} _{2}(H,Z_{2})))}B_{p}T^{1/2},\]
\(Z_{2}\coloneqq V_{-1}\) _if \(Z=X\), \(Z_{2}\coloneqq V_{\delta-1}\) if \(Z=Y\), and \(B_{p}\) is the constant from Theorem 2.1._
We denote
\[K_{U_{0},f,g,Y}\coloneqq C^{Y}_{\operatorname{stab}}c_{U_{0},f,g,T,Y}=C^{Y}_{ \operatorname{stab}}(1+\|U_{0}\|_{L^{p}(\Omega;Y)}+\|f\|_{p,\infty,V_{\delta-1} }T+\|g\|_{p,\infty,V_{\delta-1}}B_{p}T^{1/2}) \tag{7.9}\]
so that \(K_{U_{0},f,g,Y}=K_{U_{0},\mathbf{f},\mathbf{g},Y}\) with \(K_{U_{0},\mathbf{f},\mathbf{g},Y}\) as defined in (6.7).
For future estimates, it is useful to know the decay of differences of the sine and cosine operators \(\sin(t\Lambda^{1/2})\) and \(\cos(t\Lambda^{1/2})\). We include a short proof for convenience of the reader.
**Lemma 7.4**.: _Let \(t\in[0,T]\). Then for all \(\alpha\in[0,1]\), we have_
\[\|\Lambda^{-\frac{\alpha}{2}}[\sin(t\Lambda^{1/2})-\sin(s\Lambda^ {1/2})]\|_{\mathcal{L}(V)} \leq 2(t-s)^{\alpha},\] \[\|\Lambda^{-\frac{\alpha}{2}}[\cos(t\Lambda^{1/2})-\cos(s\Lambda^ {1/2})]\|_{\mathcal{L}(V)} \leq 2(t-s)^{\alpha}\]
_for all \(0\leq s\leq t\leq T\)._
Proof.: The statement is trivially fulfilled for \(t=s\). Let \(0\leq s<t\leq T\). We claim that
\[\zeta_{\alpha}(t,s)\coloneqq\frac{|\sin(t)-\sin(s)|}{|t-s|^{\alpha}}\leq 2.\]
Indeed, if \(|t-s|\leq 1\), then by the mean value theorem \(\zeta_{\alpha}(t,s)\leq\zeta_{1}(t,s)\leq 1\). If \(|t-s|>1\), then \(\zeta_{\alpha}(t,s)\leq 2\). Now let \(\lambda>0\). Applying the claim with \(t\lambda^{1/2}\) and \(s\lambda^{1/2}\) gives
\[\lambda^{-\alpha/2}|\sin(t\lambda^{1/2})-\sin(s\lambda^{1/2})|\leq 2|t-s|^{ \alpha}.\]
Thus by the spectral theorem for self-adjoint operators and positivity of \(\Lambda\), we get the desired statement. The statement for the cosine is proven analogously.
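The elementary scalar bound used in the proof can also be confirmed numerically; the following snippet is an illustration only and evaluates \(\zeta_{\alpha}\) on a grid, checking that it stays below \(2\).

```python
import numpy as np

# Check the scalar claim zeta_alpha(t, s) <= 2 on a grid (illustration only).
alpha = 0.7
t = np.linspace(0.0, 10.0, 400)
T, S = np.meshgrid(t, t)
mask = T > S
zeta = np.abs(np.sin(T[mask]) - np.sin(S[mask])) / (T[mask] - S[mask]) ** alpha
print("max of zeta_alpha on the grid:", zeta.max())
```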
While the mild solution \(U\) has at most \(1/2\)-Holder continuous paths as follows from Lemma 6.2, the product structure of the stochastic evolution equation results in higher Holder continuity of the first component \(u\) of \(U\), as the following lemma illustrates. In particular, \(u\) has Lipschitz continuous paths for sufficiently regular \(F\) and \(G\).
**Lemma 7.5**.: _Suppose that Assumption 7.1 holds for some \(\alpha\in(0,1]\), \(\delta\geq\alpha\), and \(p\in[2,\infty)\). Let \(X\coloneqq X_{0}\) and \(Y\coloneqq X_{\delta}\) as defined in (7.2) and \(U_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Then for all \(0\leq s\leq t\leq T\), the first component \(u\) of the mild solution \(U\) of (7.1) satisfies_
\[\|u(t)-u(s)\|_{L^{p}(\Omega;V)}\leq L(t-s)^{\alpha}\]
_with constant_
\[L\coloneqq 2C_{Y}\biggl{[}\sqrt{2}\|U_{0}\|_{L^{p}(\Omega;Y)}+L_{1,F}T\frac{ \alpha+2}{\alpha+1}+B_{p}L_{2,G}T^{1/2}\Bigl{(}1+\frac{1}{\sqrt{2\alpha+1}} \Bigr{)}\biggr{]},\]
_where \(L_{1,F}\coloneqq L_{F}C_{U_{0},\mathbf{f},\mathbf{g},Y}+\|f\|_{L^{p}(\Omega;L ^{\infty}(0,T;V_{\delta-1}))}\), \(L_{2,G}\coloneqq L_{G}C_{U_{0},\mathbf{f},\mathbf{g},Y}+\|g\|_{L^{p}(\Omega;L ^{\infty}(0,T;\mathcal{L}_{2}(H,V_{\delta-1})))}\) with \(C_{U_{0},\mathbf{f},\mathbf{g},Y}\) as in (7.8), \(C_{Y}\) denotes the embedding constant of \(X_{\delta}\) into \(X_{\alpha}\), and \(B_{p}\) is the constant from Theorem 2.1._
Proof.: From the structure (7.4) of the semigroup as well as (7.3) of \(\mathbf{F}\) and \(\mathbf{G}\), we deduce the following variation-of-constants formula for the first component of the mild solution.
\[u(t) =\cos(t\Lambda^{1/2})u_{0}+\Lambda^{-\frac{1}{2}}\sin(t\Lambda^{1/2})v_{0}+ \int_{0}^{t}\Lambda^{-\frac{1}{2}}\sin((t-r)\Lambda^{1/2})F(r,u(r))\,\mathrm{d}r\] \[\qquad+\int_{0}^{t}\Lambda^{-\frac{1}{2}}\sin((t-r)\Lambda^{1/2} )G(r,u(r))\,\mathrm{d}W_{H}(r)\]
Hence, the difference can be split up as
\[\|u(t)-u(s)\|_{L^{p}(\Omega;V)}\leq\left\|[\cos(t\Lambda^{1/2})-\cos(s\Lambda^{1/2})]u_{0}+\Lambda^{-\frac{1}{2}}[\sin(t\Lambda^{1/2})-\sin(s\Lambda^{1/2})]v_{0}\right\|_{L^{p}(\Omega;V)}\] \[+\left\|\int_{0}^{s}\|\Lambda^{-\frac{1}{2}}[\sin((t-r)\Lambda^{1/2})-\sin((s-r)\Lambda^{1/2})]F(r,u(r))\|_{V}\;\mathrm{d}r\right\|_{p}\] \[+\left\|\int_{s}^{t}\|\Lambda^{-\frac{1}{2}}\sin((t-r)\Lambda^{1/2})F(r,u(r))\|_{V}\;\mathrm{d}r\right\|_{p}\] \[+\left\|\int_{0}^{s}\Lambda^{-\frac{1}{2}}[\sin((t-r)\Lambda^{1/2})-\sin((s-r)\Lambda^{1/2})]G(r,u(r))\;\mathrm{d}W_{H}(r)\right\|_{L^{p}(\Omega;V)}\] \[+\left\|\int_{s}^{t}\Lambda^{-\frac{1}{2}}\sin((t-r)\Lambda^{1/2})G(r,u(r))\;\mathrm{d}W_{H}(r)\right\|_{L^{p}(\Omega;V)}\eqqcolon E_{1}+E_{2}+E_{3}+E_{4}+E_{5},\]
where \(E_{\ell}\coloneqq E_{\ell}(t,s)\) for \(1\leq\ell\leq 5\). We proceed to bound these five expressions individually. Lemma 7.4 yields
\[E_{1} \leq\Big{\|}\big{\|}[\cos(t\Lambda^{1/2})-\cos(s\Lambda^{1/2})]\Lambda^{-\frac{\alpha}{2}}\big{\|}_{\mathcal{L}(V)}\|\Lambda^{\frac{\alpha}{2}}u_{0}\|_{V}+\big{\|}[\sin(t\Lambda^{1/2})-\sin(s\Lambda^{1/2})]\Lambda^{-\frac{\alpha}{2}}\big{\|}_{\mathcal{L}(V)}\|\Lambda^{\frac{\alpha-1}{2}}v_{0}\|_{V}\Big{\|}_{p}\] \[\leq 2(t-s)^{\alpha}\big{\|}\|u_{0}\|_{V_{\alpha}}+\|v_{0}\|_{V_{\alpha-1}}\big{\|}_{p}\leq 2\sqrt{2}\|U_{0}\|_{L^{p}(\Omega;X_{\alpha})}\cdot(t-s)^{\alpha}\leq 2\sqrt{2}C_{Y}\|U_{0}\|_{L^{p}(\Omega;Y)}\cdot(t-s)^{\alpha},\]
where we have used the embedding \(Y=X_{\delta}\hookrightarrow X_{\alpha}\) in the last line. Using the same trick of inserting \(\Lambda^{-\frac{\alpha}{2}}\), applying Lemma 7.4, and using the embedding \(V_{\delta}\hookrightarrow V_{\alpha}\) as well as linear growth of \(\tilde{F}\) from \(V\) to \(V_{\delta-1}\), we obtain
\[E_{2} \leq 2s(t-s)^{\alpha}\bigg{\|}\sup_{r\in[0,T]}\|\Lambda^{\frac{ \alpha-1}{2}}F(r,u(r))\|_{V}\bigg{\|}_{p}\leq 2C_{Y}s(t-s)^{\alpha}\bigg{\|} \sup_{r\in[0,T]}\|F(r,u(r))\|_{V_{\delta-1}}\bigg{\|}_{p}\] \[\leq 2C_{Y}s(t-s)^{\alpha}\bigg{(}L_{F}\bigg{(}1+\bigg{\|}\sup_{r \in[0,T]}\|u(r)\|_{V_{\delta}}\bigg{\|}_{p}\bigg{)}+\|f\|_{p,\infty,V_{\delta-1 }}\bigg{)}\leq 2C_{Y}L_{1,F}T(t-s)^{\alpha}.\]
Likewise, for the stochastic integral we conclude
\[E_{4}\leq 2C_{Y}B_{p}(L_{G}C_{U_{0},\mathbf{f},\mathbf{g},Y}+\|g\|_{p,\infty,\mathcal{L}_{2}(H,V_{\delta-1})})s^{\frac{1}{2}}(t-s)^{\alpha}\leq 2C_{Y}B_{p}L_{2,G}T^{\frac{1}{2}}(t-s)^{\alpha}.\]
Recalling that \(\sin(0\cdot\Lambda^{1/2})=0\), we can estimate
\[E_{3} \leq\left\|\int_{s}^{t}\|[\sin((t-r)\Lambda^{1/2})-\sin(0\cdot \Lambda^{1/2})]\Lambda^{-\frac{\alpha}{2}}\|_{\mathcal{L}(V)}\|\Lambda^{\frac{ \alpha-1}{2}}F(r,u(r))\|_{V}\,\,\mathrm{d}r\right\|_{p}\] \[\leq 2C_{Y}\int_{s}^{t}(t-r)^{\alpha}\,\,\mathrm{d}r\left\|\sup_{r \in[0,T]}\|F(r,u(r))\|_{V_{\delta-1}}\right\|_{p}\] \[\leq\frac{2C_{Y}L_{1,F}}{\alpha+1}(t-s)^{\alpha+1}\leq\frac{2C_{Y} L_{1,F}T}{\alpha+1}(t-s)^{\alpha}\]
and, analogously,
\[E_{5}\leq\frac{2C_{Y}B_{p}L_{2,G}}{\sqrt{2\alpha+1}}(t-s)^{\alpha+\frac{1}{2}} \leq\frac{2C_{Y}B_{p}L_{2,G}T^{1/2}}{\sqrt{2\alpha+1}}(t-s)^{\alpha}.\]
Adding the bounds for \(E_{1}\) to \(E_{5}\) results in the desired statement.
Having established Holder continuity of \(u\) of order up to \(1\), we can derive an error bound attaining the optimal order \(1\) for sufficiently good schemes and regular nonlinearity, noise and initial values. The following main theorem of this section generalises [61, Thm. 3.1] from the splitting scheme to general contractive schemes as well as more general \(F\) and \(G\).
**Theorem 7.6**.: _Suppose that Assumption 7.1 holds for some \(\alpha\in(0,1]\), \(\delta\geq\alpha\), and \(p\in[2,\infty)\). Let \(X\coloneqq X_{0}\) and \(Y\coloneqq X_{\delta}\) as defined in (7.2) and \(U_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Let \((R_{k})_{k>0}\) be a contractive time discretisation scheme on \(X\) which commutes with the resolvent of \(A\). Assume \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). Denote by \(U\) the mild solution of (7.1) and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (7.6). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|\right\|_{p}\leq C_{\rm e} \left(C_{1}+C_{2}\log\left(\frac{T}{k}\right)\right)k^{\alpha}\]
_with \(C_{\rm e}\coloneqq(1+C^{2}T)^{1/2}\exp((1+C^{2}T)/2)\), \(C\coloneqq C_{F}\sqrt{T}+B_{p}C_{G}\), \(C_{2}\coloneqq K_{p}C_{\alpha}K_{G}\sqrt{T}\), and_
\[C_{1}\coloneqq C_{\alpha}\|U_{0}\|_{L^{p}(\Omega;Y)}+\Big{(}\frac{1}{\alpha+1}(C_{F}L+C_{\alpha,F}+2C_{Y}K_{F})+C_{\alpha}K_{F}\Big{)}T+\frac{B_{p}\sqrt{T}}{\sqrt{2\alpha+1}}(C_{G}L+C_{\alpha,G}+2C_{Y}K_{G}),\]
\(K_{F}\coloneqq L_{F}K_{U_{0},f,g,Y}+\|f\|_{L^{p}(\Omega;L^{\infty}(0,T;V_{\delta -1}))}\)_, \(K_{G}\coloneqq L_{G}K_{U_{0},f,g,Y}+\|g\|_{L^{p}(\Omega;L^{\infty}(0,T;\mathcal{ L}_{2}(H,V_{\delta-1})))}\), \(L\) as defined in Lemma 7.5, \(K_{U_{0},f,g,Y}\) as in (7.9), \(K_{p}=10{\rm e}\sqrt{p}\), \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\), and \(B_{p}\) is the constant from Theorem 2.1._
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\min\{\alpha,1\}\) up to a logarithmic correction factor as \(k\to 0\)._
Proof.: By the discussion before Lemma 7.2, the conditions of Theorem 6.3 follow from Assumption 7.1. Second, we make use of Lemma 7.5 to obtain decay of rate \(\alpha\) for those terms limiting the rate of convergence in Theorem 6.3 to \(\frac{1}{2}\).
Contractivity of \(S\), Lipschitz continuity of \(\tilde{F}\) from \(V\) to \(V_{-1}\) and Lemma 7.5 together yield
\[M_{2,1} \leq\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\|{\bf F}(s,U(s))-{\bf F }(s,U(t_{i}))\|_{L^{p}(\Omega;X)}\,{\rm d}s\] \[=\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\|\tilde{F}(s,u(s))-\tilde{ F}(s,u(t_{i}))\|_{L^{p}(\Omega;V_{-1})}\,{\rm d}s\] \[\leq C_{F}\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\|u(s)-u(t_{i})\| _{L^{p}(\Omega;V)}\,{\rm d}s\leq C_{F}L\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}} (s-t_{i})^{\alpha}\,{\rm d}s=\frac{C_{F}L}{\alpha+1}t_{N}k^{\alpha}.\]
Combining this with the bounds for \(M_{2,2}\) to \(M_{2,4}\) from Theorem 6.3 leads to
\[M_{2}\leq\left(\frac{C_{F}L+C_{\alpha,F}+2C_{Y}K_{F}}{\alpha+1}+C_{\alpha}K_{F }\right)t_{N}k^{\alpha}+C_{F}\sqrt{t_{N}}\left(k\sum_{i=0}^{N-1}E(i)^{2}\right) ^{1/2}.\]
Here, we have used (7.7) to pass from the \(Y\)-norm of \({\bf f}\) to the \(V_{\delta-1}\)-norm of \(f\) appearing in \(K_{F}\). For the term \(M_{3,1}\), an application of the maximal inequality is required additionally. By the same reasoning as for \(M_{2,1}\), we then deduce
\[M_{3,1}\leq B_{p}C_{G}\bigg{(}\sum_{i=0}^{N-1}\int_{t_{i}}^{t_{i+1}}\|u(s)-u(t _{i})\|_{L^{p}(\Omega;V)}^{2}\,{\rm d}s\bigg{)}^{1/2}\leq\frac{B_{p}C_{G}L}{ \sqrt{2\alpha+1}}\sqrt{t_{N}}k^{\alpha}.\]
In conclusion from the bounds for \(M_{3,1}\) to \(M_{3,5}\),
\[M_{3}\leq\frac{B_{p}(C_{G}L+C_{\alpha,G}+2C_{Y}K_{G})}{\sqrt{2\alpha+1}}\sqrt{ t_{N}}k^{\alpha}+K_{p}C_{\alpha}K_{G}\sqrt{t_{N}}\cdot(\log N)k^{\alpha}+B_{p}C_{G} \left(k\sum_{l=0}^{N-1}E(l)^{2}\right)^{1/2}.\]
The final statement follows by summing the estimates for \(M_{1},M_{2}\) and \(M_{3}\) and then applying Gronwall's inequality from Lemma 2.6.
### Splitting scheme
Also for the abstract stochastic wave equation, the logarithmic correction factor vanishes when using the splitting scheme. Hence, we obtain convergence at the optimal rate.
**Corollary 7.7**.: _Suppose that Assumption 7.1 holds for some \(\alpha\in(0,1]\), \(\delta\geq\alpha\), and \(p\in[2,\infty)\). Let \(X\coloneqq X_{0}\) and \(Y\coloneqq X_{\delta}\) as defined in (7.2) and \(U_{0}\in L^{p}_{\mathcal{F}_{0}}(\Omega;Y)\). Consider the splitting scheme \(R\coloneqq S\) for time discretisation. Denote by \(U\) the mild solution of (7.1) and by \((U^{j})_{j=0,\dots,N_{k}}\) the temporal approximations as defined in (7.6). Then for \(N_{k}\geq 2\)_
\[\left\|\max_{j=0,\dots,N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C_{\rm S,e} C_{\rm S}\cdot k^{\alpha}\]
_with constants \(C_{\rm S,e}\coloneqq C_{\rm e}\) as in Theorem 7.6 and_
\[C_{\rm S}\coloneqq\frac{C_{F}L+C_{\alpha,F}+2C_{Y}K_{F}}{\alpha+1}T+\frac{B_{ p}\sqrt{T}}{\sqrt{2\alpha+1}}(C_{G}L+C_{\alpha,G}+2C_{Y}K_{G}),\]
_where \(L\) is as defined in Lemma 7.5, \(K_{F}\) and \(K_{G}\) are as in Theorem 7.6, \(C_{Y}\) denotes the embedding constant of \(Y\) into \(D_{A}(\alpha,\infty)\), and \(B_{p}\) is the constant from Theorem 2.1._
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(\min\left\{\alpha,1\right\}\) as \(k\to 0\)._
### Error estimates on the full time interval
In the same way as in the proof of Theorem 6.9, we see that the next result follows from Theorem 7.6.
**Corollary 7.8**.: _Suppose that the conditions of Theorem 7.6 hold for \(\alpha\in(0,1/2]\). Let \(p_{0}\in(p,\infty)\) and \(q\in(2,\infty]\) be such that \(\frac{1}{2}-\frac{1}{q}=\alpha\), and suppose that \(f,g\), and \(U_{0}\) have additional integrability_
\[f\in L^{p_{0}}(\Omega;L^{1}(0,T;V)),\quad g\in L^{p_{0}}(\Omega;L^{q}(0,T; \mathcal{L}_{2}(H,V))),\quad\text{ and }U_{0}\in L^{p_{0}}_{\mathcal{F}_{0}}(\Omega;X)\cap L^{p}_{ \mathcal{F}_{0}}(\Omega;X_{\delta}).\]
_Denote by \(U\) the mild solution of (7.1) and by \((U^{j})_{j=0,\dots,N_{k}}\) the temporal approximations as defined in (7.6). Define the piecewise constant extension \(\tilde{U}:[0,T]\to L^{p}(\Omega;X)\) of \((U^{j})_{j=0,\dots,N_{k}}\) by \(\tilde{U}(t)\coloneqq U^{j}\) for \(t\in[t_{j},t_{j+1})\), \(0\leq j\leq N_{k}-1\), and \(\tilde{U}(T)\coloneqq U^{N_{k}}\). Then for all \(N_{k}\geq 2\) there is a constant \(C\geq 0\) depending on \((T,p,p_{0},\alpha,u_{0},f,g,X,\delta)\) such that_
\[\left\|\sup_{t\in[0,T]}\|U(t)-\tilde{U}(t)\|_{X}\right\|_{p}\leq C\Big{(}1+ \log\Big{(}\frac{T}{k}\Big{)}\Big{)}k^{\alpha}.\]
In case we only estimate the first component \(u\), more can be said about the convergence rate on the full time interval. Under weaker integrability conditions and for general \(\alpha\in(0,1]\) we obtain the following.
**Corollary 7.9**.: _Suppose that the conditions of Theorem 7.6 hold. Define the piecewise constant extension \(\tilde{U}=(\tilde{u},\tilde{v}):[0,T]\to L^{p}(\Omega;X)\) of \((U^{j})_{j=0,\dots,N_{k}}\) by \(\tilde{U}(t)\coloneqq U^{j}\) for \(t\in[t_{j},t_{j+1})\), \(0\leq j\leq N_{k}-1\), and \(\tilde{U}(T)\coloneqq U^{N_{k}}\). Let \(\delta_{1}\coloneqq\min\{\delta,1\}\). Then the following two error estimates hold._
1. (general schemes) _It holds that_ \[\left\|\sup_{t\in[0,T]}\|u(t)-\tilde{u}(t)\|_{V}\right\|_{p}\leq 2C_{U_{0},{\bf f },{\bf g},X_{\delta_{1}}}k^{\delta_{1}}+C_{\rm e}\left(C_{1}+C_{2}\log\left( \frac{T}{k}\right)\right)k^{\alpha}.\]
2. (splitting scheme) _If_ \(R_{k}=S(k)\) _then_ \[\left\|\sup_{t\in[0,T]}\|u(t)-\tilde{u}(t)\|_{V}\right\|_{p}\leq 2C_{U_{0},{\bf f },{\bf g},X_{\delta_{1}}}k^{\delta_{1}}+C_{\rm S,e}C_{\rm S}\cdot k^{\alpha}.\]
Proof.: Since the mild solution is also a weak solution to (7.1), writing \(U=(u,v)\in L^{p}(\Omega;C([0,T];V\times V_{-1}))\) we see that \((u(t),\varphi)-(u_{0},\varphi)=\int_{0}^{t}(v(s),\varphi)\ {\rm d}s\) for all \(\varphi\in V_{-1}\). Therefore, \(u\) is continuously differentiable as a \(V_{-1}\)-valued function. By (6.4),
\[\max\{\|u\|_{L^{p}(\Omega;C([0,T];V_{\delta_{1}}))},\|u^{\prime}\|_{L^{p}( \Omega;C([0,T];V_{\delta_{1}}-1))}\}\leq\|U\|_{L^{p}(\Omega;C([0,T];X_{\delta _{1}}))}\leq C_{U_{0},{\bf f},{\bf g},X_{\delta_{1}}}. \tag{7.10}\]
Using the above and the interpolation estimate \(\|x\|_{V}\leq\|x\|_{V_{\delta_{1}-1}}^{\delta_{1}}\|x\|_{V_{\delta_{1}}}^{1- \delta_{1}}\) we find that
\[\|u(t)-u(s)\|_{V}\leq\|u(t)-u(s)\|_{V_{\delta_{1}-1}}^{\delta_{1}}\|u(t)-u(s)\|_{V_{\delta_{1}}}^{1-\delta_{1}}\leq 2|t-s|^{\delta_{1}}\|u^{\prime}\|_{C([0,T];V_{\delta_{1}-1})}^{\delta_{1}}\|u\|_{C([0,T];V_{\delta_{1}})}^{1-\delta_{1}}.\]
Therefore, by Holder's inequality and (7.10) we find that
\[[u]_{L^{p}(\Omega;C^{\delta_{1}}([0,T];V))}\leq\|u^{\prime}\|_{L^{p}(\Omega;C( [0,T];V_{\delta_{1}-1}))}^{\delta_{1}}\|u\|_{L^{p}(\Omega;C([0,T];V_{\delta_{1} }))}^{1-\delta_{1}}\leq 2C_{U_{0},\mathbf{f},\mathbf{g},X_{\delta_{1}}}.\]
By Lemma 6.5, we find that for \(U^{j}=(u^{j},v^{j})\),
\[\sup_{t\in[0,T]}\|u(t)-\tilde{u}(t)\|_{V}\leq k^{\delta_{1}}\|u\|_{C^{\delta_{ 1}}([0,T];V)}+\max_{j=0,\ldots,N_{k}}\|u(t_{j})-u^{j}\|_{V}.\]
Therefore, taking \(L^{p}\)-norms and using the error estimate of Theorem 7.6 we find that
\[\Big{\|}\sup_{t\in[0,T]}\|u(t)-\tilde{u}(t)\|_{V}\Big{\|}_{p} \leq 2C_{U_{0},\mathbf{f},\mathbf{g},X_{\delta_{1}}}k^{\delta_{1}} +\Big{\|}\max_{j=0,\ldots,N_{k}}\|u(t_{j})-u^{j}\|_{V}\Big{\|}_{p}\] \[\leq 2C_{U_{0},\mathbf{f},\mathbf{g},X_{\delta_{1}}}k^{\delta_{1}} +C_{\mathrm{e}}\left(C_{1}+C_{2}\log\left(\frac{T}{k}\right)\right)k^{\alpha}.\]
The second estimate is obtained from Corollary 7.7 in place of Theorem 7.6 in the last step.
### Application to the stochastic wave equation with trace class noise
As an example, we consider the classical stochastic wave equation on an open and bounded subset \(\mathcal{O}\subseteq\mathbb{R}^{d}\):
\[\begin{cases}\mathrm{d}\dot{u}=\left(\Delta u+F(u)\right)\mathrm{d}t+G(u)\; \mathrm{d}W(t)\quad\text{on }[0,T],\\ u(0)=u_{0},\;\dot{u}(0)=v_{0},\end{cases} \tag{7.11}\]
with Dirichlet boundary conditions. In the current subsection we consider trace class noise in \(L^{2}\) for any \(d\in\mathbb{N}\), and in Subsection 7.5 space-time white noise in case \(d=1\).
It is well-known that \(\Lambda=-\Delta\) is a positive and self-adjoint operator on \(L^{2}(\mathcal{O})\), which is invertible. The semigroup associated to (7.11) is the wave semigroup \((S(t))_{t\geq 0}\). Let \(\{W(t)\}_{t\in[0,T]}\) be a \(Q\)-Wiener process with \(Q\in\mathcal{L}(L^{2}(\mathcal{O}))\) so that \(Q\) is positive and self-adjoint. Assume
\[Q^{1/2}\in\mathcal{L}(L^{2}(\mathcal{O}),L^{\infty}(\mathcal{O})). \tag{7.12}\]
In particular, this implies \(Q^{1/2}\in\mathcal{L}_{2}(L^{2}(\mathcal{O}),L^{2}(\mathcal{O}))\) and that \(Q\) is trace class.
We consider the stochastic wave equation (7.11) on \(V\coloneqq L^{2}(\mathcal{O})\) and set \(H\coloneqq L^{2}(\mathcal{O})\). For the nonlinearity and the multiplicative noise, we choose Nemytskij operators \(F:V\to V\) and \(G:V\to\mathcal{L}_{2}(H,V)=\mathcal{L}_{2}(L^{2}(\mathcal{O}))\) determined by
\[F(u)(\xi)=\phi(\xi,u(\xi)),\quad(G(u)(h))(\xi)=\psi(\xi,u(\xi))Q^{1/2}h(\xi), \quad\xi\in\mathcal{O}. \tag{7.13}\]
Here, the measurable functions \(\phi,\psi:\mathcal{O}\times\mathbb{R}\to\mathbb{R}\) are Lipschitz and of linear growth in the second coordinate, i.e. there is a constant \(L\geq 0\) such that for all \(u,u_{1},u_{2}\in\mathbb{R}\), \(\xi\in\mathcal{O}\) it holds that
\[|\phi(\xi,u)|+|\psi(\xi,u)|\leq L(1+|u|),\quad|\phi(\xi,u_{1})-\phi(\xi,u_{2}) |+|\psi(\xi,u_{1})-\psi(\xi,u_{2})|\leq L|u_{1}-u_{2}|. \tag{7.14}\]
It is clear that \(F\) is Lipschitz from \(V\) to \(V\). To see that the same holds for \(G\), note that by (7.12)
\[|G(u)h(\xi)|=|\psi(\xi,u(\xi))||Q^{1/2}h(\xi)|\leq C_{\psi,Q}(1+|u(\xi)|)\|h \|_{H},\]
where \(C_{\psi,Q}\coloneqq L\|Q^{1/2}\|_{\mathcal{L}(L^{2}(\mathcal{O}),L^{\infty}( \mathcal{O}))}\). Therefore, arguing as in [36, Theorem 9.3.6 (3)\(\Rightarrow\)(4)] by Riesz' theorem we can find \(k_{u}:\mathcal{O}\to H\) such that for a.e. \(\xi\in\mathcal{O}\) for all \(h\in H\), \((k_{u}(\xi),h)_{H}=(G(u)h)(\xi)\), and \(\|k_{u}(\xi)\|_{H}\leq C_{\psi,Q}(1+|u(\xi)|)\). Therefore, for an orthonormal basis \((h_{n})_{n\geq 1}\) of \(H\), we find that
\[\|G(u)\|_{\mathcal{L}_{2}(H,V)}^{2} =\sum_{n\geq 1}\|G(u)h_{n}\|_{V}^{2}=\int_{\mathcal{O}}\sum_{n\geq 1}|(k_{u}( \xi),h_{n})|^{2}d\xi=\int_{\mathcal{O}}\|k_{u}(\xi)\|_{H}^{2}d\xi\] \[\leq C_{\psi,Q}^{2}\|1+|u|\|_{V}^{2}\leq C_{\psi,Q}^{2}(|\mathcal{O }|^{1/2}+\|u\|_{V})^{2}.\]
with \(|\mathcal{O}|\) denoting the Lebesgue measure of the set \(\mathcal{O}\). Likewise, we obtain Lipschitz continuity of \(G\). In particular, \(F\) and \(G\) satisfy the mapping properties of Assumption 7.1 for any \(\delta\in(0,1]\).
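As a numerical illustration of the Hilbert-Schmidt bound just derived, the following sketch discretises \(\mathcal{O}=(0,1)\), takes \(Q\) diagonal in the sine basis with summable eigenvalues (an arbitrary choice of ours ensuring (7.12)), chooses a bounded Lipschitz \(\psi\), and compares \(\|G(u)\|_{\mathcal{L}_{2}(H,V)}\) with \(1+\|u\|_{V}\).

```python
import numpy as np

# Grid and sine basis on O = (0, 1); all concrete choices are illustrative.
m, n_basis = 512, 64
xi = (np.arange(m) + 0.5) / m
w = 1.0 / m                                   # midpoint quadrature weight
basis = np.sqrt(2) * np.sin(np.outer(np.arange(1, n_basis + 1), np.pi * xi))

q = np.arange(1, n_basis + 1) ** -2.0          # eigenvalues of Q; trace class, and
                                               # Q^{1/2}: L^2 -> L^inf since the basis is bounded
psi = lambda xi_, u_: np.cos(u_)               # Lipschitz with linear growth (L = 1)

def hs_norm_G(u_vals):
    """||G(u)||_{L_2(H, L^2)} with (G(u)h)(xi) = psi(xi, u(xi)) (Q^{1/2} h)(xi)."""
    total = 0.0
    for n in range(n_basis):
        g_n = psi(xi, u_vals) * np.sqrt(q[n]) * basis[n]   # G(u) applied to h_n
        total += w * np.sum(g_n ** 2)
    return np.sqrt(total)

for scale in (0.0, 1.0, 10.0):
    u_vals = scale * np.sin(3 * np.pi * xi)
    u_norm = np.sqrt(w * np.sum(u_vals ** 2))
    print(f"||G(u)|| = {hs_norm_G(u_vals):.3f}  vs  1 + ||u||_V = {1 + u_norm:.3f}")
```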
As an immediate consequence of Theorem 7.6 and Corollary 7.7, this yields the following convergence estimate generalising [61, Cor. 4.2] to arbitrary contractive schemes and slightly more general \(Q\)-Wiener processes \(W\).
**Theorem 7.10** (Wave equation with trace class noise in \(L^{2}\)).: _Let \(\mathcal{O}\subseteq\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be a bounded and open set, \(V\coloneqq L^{2}(\mathcal{O})\), \(X\coloneqq V\times V_{-1}\), \(p\in[2,\infty)\), and \(0<\alpha\leq\delta\leq 1\). Suppose that \((u_{0},v_{0})\in L^{p}_{\mathcal{F}_{0}}(\Omega;X_{\delta})\). Let \(F\) and \(G\) be the Nemytskij operators as in (7.13) with \(\phi\) and \(\psi\) satisfying (7.14). Suppose the covariance operator \(Q\in\mathcal{L}(L^{2}(\mathcal{O}))\) satisfies (7.12). Let \(Y\coloneqq X_{\delta}\) be as defined in (7.2). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on both \(X\) and \(Y\). Suppose that \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). Denote by \(U\) the mild solution of (7.1) with trace class noise and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (7.6). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C\log \left(\frac{T}{k}\right)k^{\alpha}.\]
_In particular, the approximations \((U^{j})_{j}\) converge at rate \(1\) if \((u_{0},v_{0})\in L^{p}_{\mathcal{F}_{0}}(\Omega;X_{1})\) and the splitting scheme \(R=S\) is used. The logarithmic factor can be omitted in this case._
In case \(\delta=1\), for implicit Euler and Crank-Nicolson we can take \(\alpha=1/2\) and \(\alpha=2/3\), respectively. This is due to convergence at rate \(\alpha\) on \(D((-A)^{2\alpha})\) and \(D((-A)^{3\alpha/2})\), respectively. Using higher order schemes, we can come as close to rate \(1\) as we want. In Theorem 7.12 we show that for smoother noise \(\alpha=1\) can be reached even for implicit Euler.
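For intuition on why these schemes are admissible, note that on each eigenmode the wave group acts through the scalar factors \(e^{\pm ik\sqrt{\lambda}}\), which a one-step rational scheme replaces by its stability function. The sketch below, with purely illustrative parameters, verifies contractivity of the stability functions of implicit Euler and Crank-Nicolson on the imaginary axis and compares their one-step accuracy with the exact factor.

```python
import numpy as np

# On each eigenmode the generator has eigenvalues +- i sqrt(lam); a one-step
# rational scheme replaces exp(z) by its stability function r(z) at z = i k sqrt(lam).
k = 0.01
z = 1j * k * np.pi * np.arange(1, 101)          # illustrative spectrum

r_ie = 1.0 / (1.0 - z)                          # implicit Euler
r_cn = (1.0 + z / 2.0) / (1.0 - z / 2.0)        # Crank-Nicolson
exact = np.exp(z)

print("max |r_IE|:", np.abs(r_ie).max())        # <= 1, so the scheme is contractive
print("max |r_CN|:", np.abs(r_cn).max())        # == 1, so the scheme is contractive
print("max one-step error, implicit Euler:", np.abs(r_ie - exact).max())
print("max one-step error, Crank-Nicolson:", np.abs(r_cn - exact).max())
```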
### Application to the stochastic wave equation with space-time white noise
We use the same notation as in Subsection 7.4, but this time with \(\mathcal{O}=(0,1)\) and \(Q=I\), so that (7.11) is the classical wave equation with space-time white noise. The required mapping properties can be checked as in [61, Cor. 4.3]. For convenience of the reader we include the details. The functions \(F\) and \(G\) are defined via (7.13), but this time we have to consider \(G\) as a mapping \(G:V\to\mathcal{L}_{2}(H,V_{-1})\).
The eigenvalues of the negative Dirichlet Laplacian \(\Lambda=-\Delta\) are \(\lambda_{i}=\pi^{2}i^{2}\), \(i\in\mathbb{N}\), with the corresponding orthonormal basis \(\{e_{i}=\sqrt{2}\sin(i\pi\cdot)\,:\,i\in\mathbb{N}\}\) of \(V\) consisting of eigenfunctions of \(\Lambda\). Clearly,
\[\sup_{i\in\mathbb{N}}\sup_{\xi\in[0,1]}|e_{i}(\xi)|\leq\sqrt{2},\quad\text{ and }\|\Lambda^{-\frac{\varepsilon+1}{4}}\|_{\mathcal{L}(V)}^{2}=\pi^{-( \varepsilon+1)}\sum_{i=1}^{\infty}i^{-(\varepsilon+1)}\eqqcolon c_{\varepsilon }<\infty\]
then hold for every \(\varepsilon>0\). Now let \(\varepsilon\in(0,1]\). Using the properties above, we conclude that
\[\|\Lambda^{-\frac{\varepsilon+1}{4}}G(u)\|_{\mathcal{L}_{2}(H,V)}^ {2} =\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}|\langle G(u)e_{i},\Lambda^{ -\frac{\varepsilon+1}{4}}e_{j}\rangle_{V}|^{2}=\sum_{i=1}^{\infty}\sum_{j=1}^{ \infty}\lambda_{j}^{-\frac{\varepsilon+1}{2}}\left|\int_{\mathcal{O}}g(\xi,u( \xi))e_{i}(\xi)e_{j}(\xi)\;\mathrm{d}\xi\right|^{2}\] \[\leq 2\left(\sum_{j=1}^{\infty}\lambda_{j}^{-\frac{\varepsilon+1}{ 2}}\right)\|g(\cdot,u(\cdot))\|_{V}^{2}\leq 2L^{2}c_{\varepsilon}(|\mathcal{O}|^{1/2}+\|u \|_{V})^{2}.\]
Hence, \(G\) satisfies the linear growth condition of Assumption 7.1 with \(\delta=\frac{1-\varepsilon}{2}\). Repeating the arguments for \(\Lambda^{-1/2}[G(u_{1})-G(u_{2})]\) and using \(c_{1}=\pi^{-2}\sum_{j=1}^{\infty}j^{-2}=1/6\) results in
\[\|\Lambda^{-1/2}[G(u_{1})-G(u_{2})]\|_{\mathcal{L}_{2}(V)}^{2}\leq 2\left(\sum_{j=1} ^{\infty}\frac{1}{\pi^{2}j^{2}}\right)\|g(\cdot,u_{1}(\cdot))-g(\cdot,u_{2}( \cdot))\|_{V}^{2}\leq\frac{L^{2}}{3}\|u_{1}-u_{2}\|_{V}^{2}.\]
The nonlinearity \(F\) was already considered in Subsection 7.4. In conclusion, we obtain the following generalisation of [61, Cor. 4.3] to contractive time discretisation schemes.
**Theorem 7.11** (Wave equation with white noise).: _Let \(\mathcal{O}=(0,1)\), \(V\coloneqq L^{2}(\mathcal{O})\), \(X\coloneqq V\times V_{-1}\), \(p\in[2,\infty)\), and \(0<\alpha\leq\delta<1/2\). Suppose that \((u_{0},v_{0})\in L^{p}_{\mathcal{F}_{0}}(\Omega;X_{\delta})\). Let \(F\) and \(G\) be Nemytskij operators as above with \(\phi\) and \(\psi\) satisfying (7.14). Suppose the covariance operator \(Q=I\) on \(L^{2}(\mathcal{O})\). Let \(Y=X_{\delta}\). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on \(X\) and
\(Y\). Assume that \(R\) approximates \(S\) on \(Y\) to order \(\alpha\). Denote by \(U\) the mild solution of (7.1) with space-time white noise and by \((U^{j})_{j=0,\ldots,N_{k}}\) the temporal approximations as defined in (7.6). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C\log\left( \frac{T}{k}\right)k^{\alpha}.\]
_In particular, the approximations \((U^{j})_{j}\) converge at rate arbitrarily close to \(\frac{1}{2}\) if \((u_{0},v_{0})\in L^{p}_{\mathcal{F}_{0}}(\Omega;X_{1})\) and the splitting scheme \(R=S\) is used. The logarithmic factor can be omitted in this case._
For implicit Euler and Crank-Nicolson we can take \(\alpha=\delta/2\) and \(\alpha=2\delta/3\), respectively. Since we can choose \(\delta\) arbitrary close to \(1/2\) this leads to rates which are almost \(1/4\) and \(1/3\), respectively.
### Application to the stochastic wave equation with smooth noise
We have already seen that the splitting scheme leads to convergence rates of any order \(\alpha\in(0,1]\) depending on the given data. In this section we show that this can also be attained for other schemes such as implicit Euler and Crank-Nicolson under some smoothness conditions on the noise. To avoid problems with boundary conditions we only consider periodic boundary conditions. Consider
\[\begin{cases}\mathrm{d}\dot{u}=((\Delta-1)u+F(u))\;\mathrm{d}t+G(u)\;\mathrm{d }W(t)\quad\text{on }[0,T],\\ u(0)=u_{0},\ \dot{u}(0)=v_{0},\end{cases} \tag{7.15}\]
with \(\Lambda=1-\Delta\) and periodic boundary conditions on the \(d\)-dimensional torus \(\mathbb{T}^{d}=[0,1]^{d}\). For notational convenience we will write \(H^{\beta}=H^{\beta}(\mathbb{T}^{d})=V_{\beta}\). Note that \(\|\Lambda^{-\beta}\|_{\mathcal{L}(L^{2})}\leq 1\) for all \(\beta>0\). The additional \(+1\) in the definition of \(\Lambda\) is in order to ensure invertibility. Of course, \(F\) can be suitably redefined so that this is without loss of generality.
Let \(\delta\in(1,2]\) and write \(s=\delta-1\). Let
\[F(u)(\xi)=\phi(u(\xi)),\quad(G(u)(h))(\xi)=\psi(u(\xi))Q^{1/2}h(\xi),\quad\xi \in\mathbb{T}^{d}.\]
Here, the measurable functions \(\phi,\psi:\mathbb{R}\to\mathbb{R}\) are Lipschitz with Lipschitz constants \(L_{\phi}\) and \(L_{\psi}\), respectively. The Lipschitz estimates for \(F\) and \(G\) follow as in Subsection 7.4 since we will assume even more restrictive conditions on \(Q\). The growth estimates for \(F\) and \(G\) as in Assumption 7.1 (e) are more complicated. In case \(\delta=2\) the paraproduct constructions from [57] can be avoided, but we will consider the general case.
By the torus version of [57, Prop. 2.4.1] for \(u\in V_{\delta}\), there is a constant \(C_{s,\phi}\geq 0\) such that
\[\|F(u)\|_{V_{\delta-1}}=\|\phi(u)\|_{H^{\delta-1}}\leq C_{s,\phi}(\|u\|_{H^{ \delta-1}}+1)\leq C_{s,\phi}(\|u\|_{H^{\delta}}+1)=C_{s,\phi}(\|u\|_{V_{ \delta}}+1).\]
For \(G\) the estimate is still more complicated. For a Banach space \(E\), let \(\gamma(H,E)\) denote the space of \(\gamma\)-radonifying operators. Let \((\gamma_{n})_{n\geq 1}\) be an i.i.d. sequence of standard Gaussian random variables taking values in \(\mathbb{R}\). Suppose that \(\Lambda^{\frac{\delta-1}{2}}Q^{1/2}\) is bounded from \(L^{2}\) to \(L^{\infty}\). Then by [36, Corollary 9.3.3], \(Q^{1/2}\in\gamma(H,H^{\beta,q})\) for all \(q\in[1,\infty)\) and all \(\beta\leq\delta-1\), and
\[C_{q,\beta}\coloneqq\|Q^{1/2}\|_{\gamma(H,H^{\beta,q})}\leq\|Q^{1/2}\|_{ \gamma(H,H^{\delta-1,q})}\leq c_{q}\|\Lambda^{\frac{\delta-1}{2}}Q^{1/2}\|_{ \mathcal{L}(L^{2},L^{\infty})}, \tag{7.16}\]
where \(c_{q}=\|\gamma_{1}\|_{L^{q}(\Omega)}\). Let \((h_{n})_{n\geq 1}\) be an orthonormal basis for \(H\) and fix \(N\geq 1\). Let \(\eta_{N}\coloneqq\sum_{n=1}^{N}\gamma_{n}Q^{1/2}h_{n}\in L^{2}(\Omega;V_{ \delta-1})\). Then \(\|\eta_{N}\|_{L^{2}(\Omega;V_{\beta})}\leq\|Q^{1/2}\|_{\gamma(H,H^{\beta,q})}\) for all \(\beta\leq\delta-1\). Hence,
\[\sum_{n=1}^{N}\|G(u)h_{n}\|_{V_{\delta-1}}^{2}=\|\psi(u)\eta_{N}\|_{L^{2}( \Omega;V_{\delta-1})}^{2}.\]
Next, we estimate \(\|\psi(u)\eta_{N}\|_{V_{\delta-1}}\) pointwise in \(\Omega\). By the torus version of [57, Proposition 2.1.1] (see [1, Proposition 4.1(1)]) and [57, Prop. 2.4.1], there is a constant \(C_{\delta,d,1}\geq 0\) such that
\[\|\psi(u)\eta_{N}\|_{V_{\delta-1}} =\|\psi(u)\eta_{N}\|_{H^{-1}}\] \[\leq\|\psi(u)\|_{L^{q_{1}}}\|\eta_{N}\|_{H^{\delta-1,q_{2}}}+\| \psi(u)\|_{H^{\delta-1,r_{2}}}\|\eta_{N}\|_{L^{r_{1}}}\] \[\leq L_{\psi}(\|u\|_{L^{q_{1}}}+1)\|\eta_{N}\|_{H^{\delta-1,q_{2}} }+L_{\psi}C_{\delta,d,1}(\|u\|_{H^{\delta-1,r_{2}}}+1)\|\eta_{N}\|_{H^{\delta-1, r_{1}}},\]
where \(\frac{1}{q_{1}}+\frac{1}{q_{2}}=\frac{1}{r_{1}}+\frac{1}{r_{2}}=\frac{1}{2}\) and \(q_{1},r_{1}\in(2,\infty]\) and \(q_{2},r_{2}\in[2,\infty)\). Taking \(r_{1}<\infty\) and using (7.16), we find that
\[\|\psi(u)\eta_{N}\|_{L^{2}(\Omega;V_{\delta-1})}\leq L_{\psi}C_{q_{2},\delta-1 }(\|u\|_{L^{q_{1}}}+1)+L_{\psi}C_{\delta,d,1}C_{r_{1},\delta-1}(\|u\|_{H^{ \delta-1,r_{2}}}+1)\]
for suitable constants \(C_{q_{2},\delta-1},C_{r_{1},\delta-1}\geq 0\). It remains to estimate \(\|u\|_{L^{q_{1}}}\) and \(\|u\|_{H^{\delta-1,r_{2}}}\) by \(\|u\|_{H^{\delta}}=\|u\|_{V_{\delta}}\) using suitable Sobolev embeddings and choosing \(q_{1}\in(2,\infty]\) and \(r_{2}\in(2,\infty)\) suitably. As soon as we have done that we can let \(N\to\infty\) and conclude the required estimate
\[\|G(u)\|_{\mathcal{L}_{2}(H,V_{\delta-1})}\leq K(1+\|u\|)_{V_{\delta}}.\]
To obtain \(H^{\delta}\hookrightarrow L^{q_{1}}\) we consider two cases. If \(\delta\geq d/2\) (e.g. \(d\in\{1,2\}\)) we can take \(q_{1}<\infty\) arbitrary. If \(\delta<d/2\), then we take \(q_{1}=\frac{2d}{d-2\delta}\), and thus \(q_{2}=\frac{d}{\delta}\).
To obtain \(H^{\delta}\hookrightarrow H^{\delta-1,r_{2}}\) we consider two cases. If \(d\in\{1,2\}\), then we can take \(r_{2}\in(2,\infty)\) arbitrary. If \(d\geq 3\), then we set \(r_{2}=\frac{2d}{d-2}\), and thus \(r_{1}=d\).
**Theorem 7.12** (Wave equation with smooth noise).: _Let \(V\coloneqq L^{2}(\mathbb{T}^{d})\), \(X\coloneqq V\times V_{-1}\), \(p\in[2,\infty)\), and \(0<\alpha\leq 1<\delta\leq 2\). Suppose that \((u_{0},v_{0})\in L^{p}_{\mathcal{F}_{0}}(\Omega;X_{\delta})\). Let \(F\) and \(G\) be Nemytskij operators as above with Lipschitz functions \(\phi\) and \(\psi\). Suppose the covariance operator \(Q\) on \(L^{2}(\mathbb{T}^{d})\) satisfies \(\Lambda^{\frac{\delta-1}{2}}Q^{1/2}\in\mathcal{L}(L^{2}(\mathbb{T}^{d}),L^{\infty}(\mathbb{T}^{d}))\). Let \(Y\coloneqq X_{\delta}\) be as defined in (7.2). Let \((R_{k})_{k>0}\) be a time discretisation scheme which is contractive on both \(X\) and \(Y\). Assume that \(R\) approximates \(S\) to order \(\alpha\) on \(Y\). Denote by \(U\) the mild solution of (7.15) driven by a \(Q\)-Wiener process \(W\) and by \((U^{j})_{j=0,\dots,N_{k}}\) the temporal approximations as defined in (7.6). Then there exists a constant \(C\geq 0\) depending on \(T\) such that for \(N_{k}\geq 2\)_
\[\left\|\max_{0\leq j\leq N_{k}}\|U(t_{j})-U^{j}\|_{X}\right\|_{p}\leq C\log \left(\frac{T}{k}\right)k^{\alpha}.\]
The above result is not useful for the splitting scheme, since Theorem 7.10 is better in that case. However, if we specialize to implicit Euler and Crank-Nicolson, then we obtain rates \(\alpha=\frac{\delta}{2}\) and \(\alpha=\min\{\frac{2}{3}\delta,1\}\), respectively. In particular this leads to convergence of order one if \(\delta=2\) for many numerical schemes. Note that \(\delta=2\) more or less corresponds to a noise \(W\) which is in \(H^{1,q}(\mathbb{T}^{d})\) for all \(q<\infty\).
**Remark 7.13**.: _Theorem 7.12 gives an explanation for the numerical convergence rates obtained in [61, Fig. 6.1, right figure]. There, trace class noise determined by \(\psi(u)=u\) and \(Q\) with eigenvalues \(q_{j}=j^{-\beta}\), \(j\in\mathbb{N}\), \(\beta=1.1\) has been investigated. Denote by \((e_{j})_{j\in\mathbb{N}}\) the orthonormal basis of \(V\) and by \(\lambda_{j}=Cj^{2}\) the eigenvalues of \(\Lambda\) as in Subsection 7.5 for some constant \(C>0\). We calculate that_
\[\Lambda^{\frac{\delta-1}{2}}Q^{\frac{1}{2}}e_{j}=q_{j}^{\frac{1}{2}}\Lambda^{ \frac{\delta-1}{2}}e_{j}=j^{-\frac{\beta}{2}}\lambda_{j}^{\frac{\delta-1}{2} }e_{j}=C^{\frac{\delta-1}{2}}j^{\delta-1-\frac{\beta}{2}}e_{j}\]
_for \(j\in\mathbb{N}\). Thus, \(\Lambda^{\frac{\delta-1}{2}}Q^{\frac{1}{2}}\) maps \(L^{2}\) into \(L^{\infty}\) if \(\delta\leq 1+\frac{\beta}{2}\). Setting \(\delta\coloneqq\min\{1+\frac{\beta}{2},2\}=1+\frac{1.1}{2}=1.55\), we derive convergence of rate \(\frac{\delta}{2}=0.775\) for implicit Euler and \(\min\{\frac{2}{3}\delta,1\}=1\) for Crank-Nicolson. Taking numerical errors into account, this corresponds exactly to the numerical convergence rates obtained in [61, Fig. 6.1, right figure]._
|
2310.18366 | A Multilingual Virtual Guide for Self-Attachment Technique | In this work, we propose a computational framework that leverages existing
out-of-language data to create a conversational agent for the delivery of
Self-Attachment Technique (SAT) in Mandarin. Our framework does not require
large-scale human translations, yet it achieves a comparable performance whilst
also maintaining safety and reliability. We propose two different methods of
augmenting available response data through empathetic rewriting. We evaluate
our chatbot against a previous, English-only SAT chatbot through non-clinical
human trials (N=42), each lasting five days, and quantitatively show that we
are able to attain a comparable level of performance to the English SAT
chatbot. We provide qualitative analysis on the limitations of our study and
suggestions with the aim of guiding future improvements. | Alicia Jiayun Law, Ruoyu Hu, Lisa Alazraki, Anandha Gopalan, Neophytos Polydorou, Abbas Edalat | 2023-10-25T10:50:18Z | http://arxiv.org/abs/2310.18366v1 | # A Multilingual Virtual Guide for Self-Attachment Technique
###### Abstract
In this work, we propose a computational framework that leverages existing out-of-language data to create a conversational agent for the delivery of Self-Attachment Technique (SAT) in Mandarin. Our framework does not require large-scale human translations, yet it achieves a comparable performance whilst also maintaining safety and reliability. We propose two different methods of augmenting available response data through empathetic rewriting. We evaluate our chatbot against a previous, English-only SAT chatbot through non-clinical human trials (\(N=42\)), each lasting five days, and quantitatively show that we are able to attain a comparable level of performance to the English SAT chatbot. We provide qualitative analysis on the limitations of our study and suggestions with the aim of guiding future improvements.
digital psychotherapy, chatbots, attachment theory, Mandarin
## I Introduction
According to the 2022 Global Burden of Disease study, mental disorders have been ranked amongst the top ten leading causes of burden1 worldwide since 1990 [1]. With the onset of the COVID-19 pandemic, there has been significant negative impact on the mental health condition of the global population from a variety of environmental stimuli [2], with, for example, the effect in the UK most severe among the 18-34 demographic group but visible in all age demographics [3]. Cases of patients suffering mental health issues associated with a range of negative emotions such as defeat, entrapment and loneliness increased significantly from pre-pandemic levels [4].
Footnote 1: Burden is defined according to a disease’s prevalence and harm [1].
As such, provision of mental health support has become more imperative in addressing mental health concerns arising as a result of public health emergencies. Yet, there persists a "mental health treatment gap", which describes the large disparity between the need for and availability of mental healthcare services [1]. This can be attributed to the following reasons: (i) stigma on mental health [5], (ii) unaffordable treatment and (iii) limited and unequal distribution of mental healthcare resources [6]. It is therefore desirable to incorporate and supplement existing methods with digital technologies and novel techniques.
The Self-Attachment Technique (SAT) is a self-administrable intervention introduced in [7, 8] and [9]. In SAT, the user enacts both the role of the care-seeker, conceptualised as their childhood or emotional self and represented by the user's favourite childhood photo or VR avatar created from the photo, and that of the care-giver, conceptualised as their adult or thinking self. The adult self establishes an imaginative compassionate relation and then an affectional bonding with the childhood self using the photo or the avatar and their favourite jolly and love songs. Subsequently, for the bulk of the SAT intervention, the adult self re-parents the childhood self to emotional and social maturity by emulating the optimal parent-child interactions whenever the user experiences strong negative emotions, which are projected and externalised onto the childhood self. SAT has had promising results in its pilot study [10].
Prior works have incorporated technologies into the delivery of SAT protocols, with the most recent producing a chatbot assistant [11] aimed at guiding users proficient with practising SAT protocols through protocol recommendation. Conversational agents [12] have significant potential in their application to psychotherapy [13, 14], as recent advancements in the field of Natural Language Processing with large neural pretrained language models [15, 16] using a transformer-based architecture [17] have achieved state-of-the-art results in a variety of tasks that facilitate greater capability of human-computer interaction.
However, prior works are limited only to English, a situation emblematic of much of the recent progress in the application of machine learning models to Natural Language Processing [18, 19]. Monolingual NLP for certain languages can encounter the problem of resource availability, where there is a lower volume of available task-specific data to train a model to the same level of performance as higher-resource languages such as English.
In this paper, we present a computational framework for the delivery of SAT protocols in a Mandarin setting in order to gauge the feasibility of deploying existing English psychotherapeutic intervention into non-English languages, with the aim to contribute to achieving equitable access to mental healthcare for non-English speaking communities in the future.
We summarise our contributions as follows:
* We introduce a translation pipeline, leveraging machine translation and post-editing to produce language-specific data from existing task-specific English data.
* We introduce transformer reinforcement learning via Proximal Policy Optimisation (PPO) to train an empathetic, fluent and accurate generation model to produce
quality responses via empathetic rewriting.
* We introduce an alternate, supervised learning method for empathetic rewriting and provide quantitative comparison against the previous methods.
* We introduce a multilingual emotion recognition component, and apply knowledge distillation to reduce inference latency.
* We fully integrate the Mandarin version of the chatbot with previous [11] English versions to deploy a fully bilingual application.
* We formally evaluate the chatbot performance in multiple non-clinical trials, and provide qualitative analysis aimed at guiding future work.
## II Background
### _Self-Attachment Technique_
Self-Attachment Technique (SAT) [7] is a new psychotherapeutic treatment informed by John Bowlby's Attachment Theory and has shown promise in early pilot studies [10]. It attributes affect dysregulation2 disorders to sub-optimal emotional attachments formed between an individual and their primary caregivers during their early childhood. For instance, individuals who experienced secure attachment (i.e., had available and responsive caregivers) in their childhood tend to exhibit stronger self-esteem and self-reliance, and hence healthier mental states as adults [7].
Footnote 2: Affect dysregulation is defined as the “impaired ability to regulate and/or tolerate negative emotional states” [20].
SAT is comprised of 20 self-administered protocols aimed at developing new secure attachment. The protocols invite individuals to envisage their current self caring and attending to their inner childhood self. This stimulates optimal neural growth, allowing individuals to better navigate and regulate their negative emotions, thereby tackling mental disorders stemming from insecure attachment [8]. The aims of the 20 protocols can be collated into eight groups:
* Compassion toward the childhood self.
* Affectional bonding with the childhood self; Vowing to care for the childhood self.
* Rebuilding the childhood self's emotional world; Loving the childhood self, zest for life; Bonding with Nature.
* Self-regulation of strong emotions; Reducing negative emotions.
* (Re)-learning to laugh and being playful.
* Learning to change perspective and laugh.
* Socialising the childhood self.
* Enhancing tolerance and resilience.
The previous English SAT chatbot [11] deduces the user's emotional state from open conversation, yet allows the user to select a different emotion whenever they feel that the one inferred is inaccurate. The chatbot then pursues a series of questions depending on the user's emotion to further refine protocol recommendation based on the user's past experience and current state. Protocols deemed unsuitable or those with which the user had previous adverse reactions, particularly protocols aimed at tackling negative emotions, are eliminated from recommendation. Users are encouraged to select and practise a protocol from a list of recommendations. Afterwards, they are prompted to give feedback on changes to their emotional state and undertake further protocols that are selected based on the feedback.
### _Empathy in Digital Psychotherapy_
According to psychotherapy research, an important component in the efficacy of psychotherapeutic interventions is the capability of the therapist to engage in an empathetic manner with the patient [21]. Similarly to prior works [11, 22], we focus on Godfrey T. Barrett-Lennard's second phase of empathetic dialogue with the aim of producing empathetic responses demonstrating compassion towards the user.
Prior works on empathetic dialogue systems [14, 22, 23] have highlighted the importance of empathy in digital mental health support. However, most prior works focus on English deployment, and at the time of writing, there is little open-domain, language-specific and task-specific data for open empathetic response generation in Mandarin.
### _Related works_
Applications of digital psychotherapeutic interventions, such as _Cognitive-Behavioural Therapy_ (CBT) [24] remain largely monolingual, though with increasing monolingual adoption in non-English languages in recent years, such as French in the case of Lopez et al. [25]. Works such as those by Bakker et al. [26] and Weaver et al. [27] present English-only digital platforms for CBT and cite the adaptation to more languages as a future research direction to increase impact.
Previous SAT chatbots [11] make use of crowd-sourced English data for their responses, and a fixed conversation flow that determines the appropriate response type at each conversation step. Responses are retrieved from a pool of possible responses ranked by a weighted metric combining sentence fluency, novelty to previous conversation, and perceived empathy. We aim to leverage the existing task-specific data produced by [11] through translation.
Several approaches exist to facilitate multilinguality in chatbot response generation, though we focus broadly on two translation approaches:
* **Inference-time** translations [28, 29], wherein the semantics of the output utterance is determined prior to performing translation on a selected response. This has a relatively low data footprint, and changes to the translation system can be deployed immediately. However, it requires higher computational demand at inference-time. Translation mechanisms can be embedded within the response generation step such as in Graca et al. [30] and Dimitra et al. [31] to provide customer support and public administration functionalities respectively. Ralston et al. [28] apply this approach to the provision of mental health support to students by wrapping conversation logic within source-target and target-source translation steps using multiple external APIs. Lin et al. [29] identify the high
cost of this approach, as well as the susceptibility to noisy data at inference-time, and propose learning language-agnostic representations to allow for better multilingual adaptation through training on translated data.
* **Pre-computed**[32] translations, wherein the responses are saved and retrieved at the relevant stages of the conversation. This approach allows for all the responses to be known prior to the retrieval step, allowing for finer control over the responses presented to the user. Due to the sensitive nature of conversations in mental health contexts, this feature is inherently beneficial for ensuring the safety of the produced responses. However, in order to deploy to multiple languages, this approach requires the creation of language files for each supported language, as seen in the mental health support chatbot from Nieminen et al. [32]. _Retrieval-based_ methods are nonetheless commonly used in conversational agents for mental health in a monolingual context, such as in Alazraki et al. [11], where a retrieval model allows the English SAT chatbot to guide users through carrying out self-attachment therapy protocols, and in Vaira et al. [33], where a similar model is used to provide support to new mothers.
For the purpose of this paper, we focus on pre-computing responses in order to ensure the safety and reliability of the conversation provided to the user, as well as to prevent translation errors that may negatively impact user experience.
### _Patient Safety_
The SAT chatbot is a mental health application targeted at patients suffering from mental health conditions. In the interest of their safety, we take the following measures:
1. _Safe & Non-Toxic Chatbot Conversations_ Empathetic rewritings are produced from generative language models trained using a controlled dataset that has been vetted for safety. Utterances are also scored for empathy via an empathy classifier (the scores assigned are discrete labels) and only those that have attained a high empathy score are selected. Finally, as an additional precaution, all utterances are manually vetted for toxic content before being included in the dataset.
2. _Terminating Therapy_ SAT Protocols involve users interacting with their childhood self, which can inadvertently trigger strong emotions in patients suffering from childhood trauma. Should patients be uncomfortable with the suggested protocol at any point, they are given the option to decline treatment. The application will also take note of the protocol and omit it in the remainder of the session.
It should also be stressed that the SAT chatbot, while a mental health application, is not equipped, without a concurrent intervention by a human psychotherapist, to treat patients suffering from serious mental health conditions such as severe depression. Participants are only permitted to take part in the human evaluation trials once they have been thoroughly informed of the associated risks, and clear consent has been received.
### _Data Protection_
While patients do not need to provide personal information to interact with the chatbot, the _contents_ that patients discuss with the chatbot are themselves considered personal data under the UK's Data Protection Act (DPA) [34] and General Data Protection Regulations (GDPR) [35]. To ensure adherence to data protection laws, the SAT chatbot does not store user interactions beyond the treatment sessions. We also do not store metadata from user's devices (e.g. geolocation, IP/MAC addresses, IMEI codes etc.).
Data compliance was also ensured during the human trial. The human trial conducted in this paper has been approved by the Imperial College Research Ethics Committee. Prior to the trial, participants were informed on how their personal information would be handled, and were required to provide consent before participating. Moreover, participant responses are anonymised, and all responses collected are used strictly for the purposes of the current study.
## III Dataset
### _Data Analysis_
The EmpatheticPersonas dataset [11] (EP) is a crowdsourced dataset intended for the development of the SAT chatbot. This dataset is comprised of two main components. Firstly, it contains 1,181 written expressions of **emotion**, aimed at training an emotion classifier. The examples in the dataset are approximately evenly distributed across four emotion classes: there are 284 examples relating to Fear/Anxiety, 297 relating to Anger, 300 relating to Sadness and 300 relating to Joy/Contentment. Secondly, the dataset contains 2,144 **empathetic rewritings** of 45 base utterances. 1,100 of these have also been annotated for empathy using a discrete scale from 0 to 2 (where 0 represents non-empathetic utterances, 1 represents slightly empathetic ones and 2 corresponds to highly empathetic utterances). The annotated rewritings are aimed at training an empathy classifier.
We produced an additional native Mandarin dataset consisting of 120 emotional utterances balanced across four emotion classes for testing purposes.
### _Dataset Translation_
As crowd-sourcing data is a time consuming and costly process, we leveraged publicly available machine translation tools to aid the translation process of the existing EP dataset into Mandarin. We formulated our translation pipeline as follows:
1. We used a publicly available machine translation tool (Google Translate) to obtain a base English(EN)-Mandarin(ZH) translation of the EP dataset.
2. We performed post-editing (v1) on the translated dataset to remedy major translation errors affecting sentence semantics (Fig. 1).
3. An additional post-editing step (v2) was introduced after early trials identified a need to inject language-specific terms and colloquialisms to further improve the localisation quality of the translations (Fig. 2).
It is worth noting that the introduction of post-editing steps allows screening of candidate responses for potentially harmful or dangerous utterances in addition to remedying errors.
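For clarity, the pipeline can be summarised by the following minimal sketch; `machine_translate` stands in for the MT service used (Google Translate in our case), and the post-edit tables shown are hypothetical illustrations rather than the actual revision lists.

```python
# Minimal sketch of the EN->ZH dataset translation pipeline (illustrative only).
# `machine_translate` is a placeholder for the MT service; the post-edit tables
# below are hypothetical examples, not the actual revision lists used.

def machine_translate(text_en: str) -> str:
    raise NotImplementedError("call the machine translation service of choice here")

POST_EDITS_V1 = {"literal idiom": "context-appropriate idiom"}   # hypothetical: fix semantic errors
POST_EDITS_V2 = {"generic phrasing": "colloquial phrasing"}      # hypothetical: localisation terms

def apply_edits(text_zh: str, table: dict) -> str:
    for src, tgt in table.items():
        text_zh = text_zh.replace(src, tgt)
    return text_zh

def translate_dataset(utterances_en):
    base = [machine_translate(u) for u in utterances_en]    # step 1: base MT
    v1 = [apply_edits(u, POST_EDITS_V1) for u in base]      # step 2: post-edit v1
    v2 = [apply_edits(u, POST_EDITS_V2) for u in v1]        # step 3: post-edit v2
    return base, v1, v2
```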
Reference-based sentence evaluation metrics such as BLEU [36] and ROUGE [37] evaluate the quality of a translation against reference target sentences. As human-translated target sentences were not available, we instead evaluated the efficacy of our translation using the reference-free [38] sentence fluency metrics SLOR [39] and PRISM-SRC [40], along with sentence perplexity (PPL), with results shown in Table I.
Footnote 3: We show NLL in our work as opposed to the Log-likelihood shown in the original work [40].
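As a concrete illustration of how these reference-free scores can be obtained, the following minimal sketch computes perplexity and SLOR for a single sentence from a HuggingFace causal language model; the model name, the unigram log-probability table and the back-off constant are placeholders, and the token count is used as an approximation of the LM's prediction count.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: sentence perplexity (PPL) and SLOR from a causal LM.
# Model choice, unigram table and back-off constant are assumptions.
MODEL = "uer/gpt2-chinese-cluecorpussmall"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def ppl_and_slor(sentence: str, unigram_logprob: dict) -> tuple:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = model(ids, labels=ids).loss.item()     # mean negative log-likelihood per token
    n = ids.shape[1]
    ppl = math.exp(nll)
    lm_logprob = -nll * n                            # approximate total sentence log-likelihood
    uni_logprob = sum(unigram_logprob.get(t, -10.0)  # -10.0: back-off for unseen tokens
                      for t in tokenizer.convert_ids_to_tokens(ids[0].tolist()))
    slor = (lm_logprob - uni_logprob) / n            # SLOR as defined in [39]
    return ppl, slor
```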
We observe from Table I that both post-edit revisions yield better fluency scores across all three metrics, suggesting that the inclusion of post-edits did improve the fluency of the utterances over base machine translated text, and improved translation quality with respect to the source sentence. We note that v2 yields a slightly higher improvement over the base version in SLOR (3.92 vs 3.84) than v1 (3.87 vs 3.84), whilst v1 achieves the better PRISM-SRC (34.71 vs 35.04) and perplexity (18.08 vs 18.83) scores, lower being better for both. We hypothesise that this may be due to the fact that the edits made in v2 are of a finer-grain nature, using 'rarer' tokens that are more commonly associated with colloquialisms.
We also compare the quality of our utterances to the English version through human evaluation in Section V-C, where we show that our post-edited utterances attain a comparable level of user experience to the English version.
## IV Implementation
Our chatbot uses a rule-based conversation flow as in [11], where the chatbot first works to recognise the user's emotional state and from this it guides the next stages of the conversation to establish a context for protocol recommendation, with previously ineffective protocols removed from the set of potential recommendations. We introduce an addition to the conversation flow as shown in Fig. 3, to account for the user's change in emotional state after carrying out a protocol and produce appropriate responses. The chatbot contains two core components: _emotion recognition_ and _empathetic rewritings_.
### _Emotion Recognition_
As the conversation flow is dictated by the user's emotional state, the chatbot needs to correctly identify the user's emotions. We developed an emotion classifier capable of identifying four emotions: sadness, anger, fear/anxiety and joy/contentment. We double finetuned the pretrained language model (PLM) XLM-R [41] using an emotion dataset in native Mandarin (NLPCC) [42], followed by the EP emotion data. Our model's results are shown in Table II, where at least 90% accuracy and F1-scores are attained across all test sets. The first finetuning was introduced to enhance performance in native Mandarin. However, our findings show that the model performs sufficiently well even without finetuning on the NLPCC dataset (see 'single' results in Table II). This is beneficial for low resource languages where there may be a lack of native in-domain data.
Footnote 4: Available at [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
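A minimal sketch of this double finetuning, assuming HuggingFace `transformers` and pre-tokenised NLPCC and EP emotion datasets (the dataset objects and hyperparameters below are placeholders), is:

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Minimal sketch of the two-stage ("double") finetuning of XLM-R for 4-class emotion
# recognition. Datasets are assumed to be pre-tokenised with 'input_ids',
# 'attention_mask' and 'labels' columns; hyperparameters are placeholders.

def finetune(model, train_dataset, output_dir, epochs=3):
    args = TrainingArguments(output_dir=output_dir,
                             num_train_epochs=epochs,
                             per_device_train_batch_size=16,
                             learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

def double_finetune(nlpcc_train, ep_train):
    model = AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=4)   # fear/anxiety, anger, sadness, joy/contentment
    model = finetune(model, nlpcc_train, "out/stage1_nlpcc")   # stage 1: native Mandarin data
    model = finetune(model, ep_train, "out/stage2_ep")         # stage 2: translated EP data
    return model
```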
### _Knowledge Distillation_
While large PLMs have allowed for state-of-the-art performance in various NLP tasks, their size makes them computationally expensive and memory intensive to operate. Hence, adopting such models in real time applications becomes highly impractical due to cost and latency issues [43].
Fig. 1: Example of major translation errors targeted in post-editing. Most translation errors stemmed from literal translations of EN colloquialisms into ZH.
Fig. 3: Updated conversation flow component for producing empathetic responses based on the user’s change in emotion (after practising a protocol) and their original emotional state.
Fig. 2: Example of minor translation errors edited to increase localisation accuracy. Note that the original translation maintained the coherence of the source sentence, but used words that were not the most appropriate to the context.
To optimise the emotion classifier for runtime efficiency, we performed Knowledge Distillation [44] as a compression technique to reduce the size of the model while maintaining its performance. Using the double finetuned model (Section IV-A) as the teacher model, we performed Knowledge Distillation on a L6xH384 mMiniLMv2 student model [45], a task-agnostic model distilled from an XLM-R-large model. We performed double finetuning, with distillation occurring at each stage, inspired by the multi-stage distillation framework shown in [46].
Footnote 5: Available at [https://github.com/microsoft/unilm/tree/master/minilm](https://github.com/microsoft/unilm/tree/master/minilm)
Distillation was performed using the Triple Loss method [43], which incorporates distillation loss (\(L_{dist}\)) and cosine embedding loss (\(L_{cos}\)), in addition to the classic supervised training loss (\(L_{ce}\)), during training.
1. **Classic Supervised Training Loss, \(L_{ce}\)** This is the cross-entropy loss between the student model's predicted distribution (\(c_{i}\)) and the target training labels (\(q_{i}\)) which is in the form of a one-hot vector. \[L_{ce}=\sum_{i}q_{i}*log(c_{i})\] (1)
2. **Distillation Loss, \(L_{dist}\)** This is the cross-entropy loss between the student model's _softened_ predicted distribution (\(s_{i}\)) and the teacher's _softened_ predicted distribution (\(t_{i}\)) [47]. \[L_{dist}=\sum_{i}t_{i}*log(s_{i})\] (2) These softened predictions are also known as the softmax-temperature probability distribution, given by: \[p_{i}=\frac{exp(z_{i}/T)}{\sum_{j}exp(z_{j}/T)}\] (3) where \(T\) denotes temperature and \(z_{i}\) denotes the probability of class \(i\).
3. **Cosine Embedding Loss, \(L_{cos}\)** While most Knowledge Distillation methods use only losses 1 and 2, the cosine embedding loss is specific to Triple Loss. It aims to align the student's and teacher's hidden vector representations and is noted to improve performance [43]. The loss is as follows: \[L_{cos}=1-cos(T(x),S(x))\] (4)
Thus, the final training loss is taken as the average of the three losses:
\[L_{total}=\frac{L_{ce}+L_{dist}+L_{cos}}{3} \tag{5}\]
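For reference, a minimal PyTorch sketch of this combined objective is given below; it writes the two cross-entropy terms in their minimisable (negated) form, assumes the student and teacher hidden states have already been projected to a common dimension, and uses an arbitrary temperature.

```python
import torch.nn.functional as F

def triple_loss(student_logits, teacher_logits, labels,
                student_hidden, teacher_hidden, T=2.0):
    """Average of supervised CE, distillation CE and cosine embedding loss (Eqs. 1-5).
    student_logits/teacher_logits: (batch, classes); labels: (batch,);
    *_hidden: (batch, dim) pooled states projected to a common dimension (assumption)."""
    teacher_logits = teacher_logits.detach()          # no gradients through the teacher
    # (1) supervised loss against the hard labels
    l_ce = F.cross_entropy(student_logits, labels)
    # (2) distillation loss between temperature-softened distributions (Eqs. 2-3)
    t_soft = F.softmax(teacher_logits / T, dim=-1)
    s_log_soft = F.log_softmax(student_logits / T, dim=-1)
    l_dist = -(t_soft * s_log_soft).sum(dim=-1).mean()
    # (3) cosine embedding loss aligning hidden representations (Eq. 4)
    l_cos = (1.0 - F.cosine_similarity(teacher_hidden.detach(), student_hidden, dim=-1)).mean()
    # (Eq. 5) simple average of the three terms
    return (l_ce + l_dist + l_cos) / 3.0
```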
The emotion classifier training pipeline is illustrated in Fig. 4. Following hyperparameter tuning, we obtained a performance (accuracy and F1) of \(\sim\)81% and \(\sim\)85% on the native Mandarin and English test sets respectively (Table III). Considering that the mMiniLMv2 has only 40% of the XLM-R-base teacher model's capacity (see Table IV), the model performs extremely well, retaining a significant proportion of its teacher model's performance (approx. 90% in the worst case and up to 97% in the best case). We also note a significant reduction in average inference time using the distilled model (Table V) compared to the base XLM-R model. This had a noticeable impact on trial participant feedback (see Section V-C).
Additionally, we compared the distilled model's performance against the previous SAT chatbot emotion classifier (see Table III). When comparing against the English RoBERTa-base classifier deployed in [11], the distilled model has 90% of its performance at 40% of its capacity (Table IV). On the other hand, the non-clinical trial results point to a significant improvement in participant sentiment toward our emotion classifier compared to the one from [11].
Overall, considering the computational advantages and minimal performance trade-off, the above results illustrate the potential of performing Knowledge Distillation, whereby classification performance can be largely recreated with a significantly smaller and more efficient model.
Fig. 4: Model training pipeline for the emotion classifier. It involves a two stage finetuning, with distillation occurring at each stage. Teacher models (XLM-R) are represented in yellow and student models (mMiniLMv2) in blue.
### _Empathetic Rewriting_
In order to increase the level of empathy expressed in the chatbot's responses, we augmented the existing translated responses by having lower-empathy utterances rewritten to be more empathetic. As the chatbot has a rule-based conversational flow, rewriting has the additional benefit of boosting diversity in its conversation, thus potentially leading to greater user engagement.
We adopted the generative language model Chinese GPT-2 to generate the empathetic rewritings in Mandarin. This model was trained using reinforcement learning (RL) with proximal policy optimisation [48], based on [49]. Prior to training, we performed a supervised warm-start since literature has shown that it leads to more effective learning [23, 49].
Footnote 6: Available at [https://huggingface.co/uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall)
The training setup is illustrated in Fig. 5. To facilitate training, we devised a reward model to reward utterances that are first and foremost empathetic, but also fluent and semantically relevant.
The **empathy reward**\(r_{e}\) is the key component of the empathetic rewriting task. This component aims to reward rewritings that convey a high degree of empathy, and penalise rewritings conveying low empathy. To quantify the degree of empathy conveyed by an utterance, we developed an empathy classifier using an XLM-R model trained and evaluated on the empathy-annotated EP data, obtaining an overall accuracy and F1-score of 90%. The logit of the highly empathetic class computed by the classifier is then taken as the empathy reward.
The **semantic reward**\(r_{s}\) aims to reward rewritings that deliver the same semantic meaning as the base utterance. Without this component, utterances that are highly empathetic but do not carry the correct semantic information may be generated as the model seeks to exploit the empathy reward. To measure the semantic similarity of a rewriting to its base utterance, we trained and evaluated an XLM-R model on the empathetic rewritings in the EP dataset, obtaining an overall accuracy and F1-score of 96%. The semantic reward is the logit of the semantic class corresponding to the base utterance.
The **fluency reward**\(r_{f}\) was adapted from the fluency function in [11] and included to prevent rewritings that are highly empathetic but are incoherent/grammatically incorrect. This is computed as:
\[r_{f}(er)=\frac{1}{PPL(er)}-RP(er) \tag{6}\]
where \(er\) denotes the empathetic rewriting, \(\frac{1}{PPL(er)}\) is the inverse of the perplexity (computed by a GPT-2 model) and \(RP(er)\) denotes a cumulative penalty for every repeated word within that rewriting (excluding stop words). Attempting to remove the repetition penalty term resulted in the model seeking to exploit the semantic and empathy reward by repeating keywords/empathetic terms.
The final reward was implemented as a multi-objective function comprised of the weighted sum of the empathy, fluency and semantic rewards, written as:
\[r=w_{e}r_{e}+w_{f}r_{f}+w_{s}r_{s} \tag{7}\]
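A minimal sketch of this reward computation is shown below; the classifiers and the perplexity model are assumed to be already loaded and wrapped as simple callables, the index of the 'highly empathetic' class follows the 0-2 empathy scale of the EP annotations, and the stop-word set and weights are placeholders.

```python
from collections import Counter

HIGH_EMPATHY_CLASS = 2            # index of the 'highly empathetic' label on the 0-2 scale
STOP_WORDS = {"的", "了", "是"}    # illustrative stop-word set, not the actual list

def repetition_penalty(tokens) -> float:
    """Cumulative penalty for repeated non-stop-word tokens, RP(er)."""
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return float(sum(c - 1 for c in counts.values() if c > 1))

def reward(er_tokens, base_class, empathy_logits, semantic_logits, ppl,
           w_e=1.0, w_f=0.5, w_s=1.0):
    """Weighted reward of Eq. 7. empathy_logits/semantic_logits are the classifier
    outputs for the rewriting; base_class is the semantic class of the base utterance;
    ppl is the rewriting's GPT-2 perplexity. Weights are placeholders."""
    r_e = empathy_logits[HIGH_EMPATHY_CLASS]
    r_s = semantic_logits[base_class]
    r_f = 1.0 / ppl - repetition_penalty(er_tokens)   # Eq. 6
    return w_e * r_e + w_f * r_f + w_s * r_s          # Eq. 7
```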
Similarly to related works [32], we pre-generated and manually inspected empathetic responses for any toxic speech or distressing content (e.g. relating to violence or self harm) before approving them to be used by the chatbot. It is worth noting, however, that no problematic content was found in the utterances generated by the final trained model. In the future, a hate-speech detector could be devised to automate this inspection process.
### _Supervised Empathetic Rewriting_
While the RL-based methodology yielded overall quality responses, we should note that this method can be extremely sensitive. As PPO is a stochastic policy method, its actions are drawn from a probability distribution. This means that actions vary each time, resulting in starkly different outcomes between different runs of training. Moreover, performance varies significantly based on the weights attached to the reward components (i.e., \(w_{e}\), \(w_{s}\) and \(w_{f}\)), which makes hyperparameter tuning difficult.
In response to this, we also introduce a simpler, supervised learning (SL) approach to empathetic rewriting. We fine-tuned a GPT-2 model by prompting it with the user's emotional state \(\mathcal{S}_{e}\) and a basic, low-empathy utterance \(\mathcal{S}_{L}\), and used a high-empathy utterance \(\mathcal{S}_{H}\) as the learning target. We employed the empathy classifier (EC) used in the RL method to form an additional binary classification learning objective over all the utterances \(X_{g}\) generated at each training step:
\[L_{EC}=CrossEntropyLoss(EC(X_{g}),1) \tag{8}\]
where \(1\) is the label for highly empathetic sentences. We then updated the model using the combined loss
\[L_{Total}=L_{LM}+L_{EC} \tag{9}\]
where \(L_{LM}\) is the language modelling loss produced by the GPT-2 model.
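The following minimal sketch illustrates one training step under this combined objective, assuming a HuggingFace GPT-2 generator and the empathy classifier wrapped as a callable returning class logits; the prompt format and decoding settings are placeholders, and, as written, gradients do not flow through the discrete generation used for the classifier term.

```python
import torch
import torch.nn.functional as F

def sl_training_step(gen_model, gen_tokenizer, empathy_clf, emotion, low_emp, high_emp):
    """One step of the supervised rewriting objective (Eqs. 8-9), sketched.
    Prompt format and generation settings are assumptions."""
    prompt = f"{emotion} [SEP] {low_emp} [SEP] "
    enc = gen_tokenizer(prompt + high_emp, return_tensors="pt")
    # L_LM: language-modelling loss with the high-empathy utterance as supervision
    l_lm = gen_model(**enc, labels=enc["input_ids"]).loss
    # L_EC: classify a generated rewriting and push it towards the highly-empathetic label (1)
    with torch.no_grad():  # sampling is discrete; no gradient flows back here in this sketch
        gen_ids = gen_model.generate(
            gen_tokenizer(prompt, return_tensors="pt").input_ids, max_new_tokens=60)
    generated = gen_tokenizer.decode(gen_ids[0], skip_special_tokens=True)
    ec_logits = empathy_clf(generated)                # assumed to return a (num_classes,) tensor
    l_ec = F.cross_entropy(ec_logits.unsqueeze(0), torch.tensor([1]))
    return l_lm + l_ec                                # Eq. 9
```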
We then pre-generated and manually inspected responses in the same manner as the RL approach. We compare the responses generated by the two approaches through non-clinical human trials against the English SAT chatbot [11] in Section V-C.
## V Non-Clinical Trial
### _Study Design_
Formal evaluation of the SAT chatbot was carried out through non-clinical human trials. Separate trials were conducted on chatbots using responses generated via the reinforcement learning approach and the supervised learning method. For the purpose of the trial, participants were required to be fluent in both Simplified Mandarin and English in order to fully experience the bilingual chatbot. We note that users fluent in either language are nonetheless able to utilise the chatbot to practise SAT. Given the limited participant pool, knowledge of SAT protocols or psychotherapy was beneficial but not required. Participants were nonetheless provided with information detailing the SAT protocols prior to the trials. In total, 42 participants (20 female, 22 male) aged 25 to 60 consented to and participated across three trials.
Throughout each trial, participants were instructed to interact with the application once per day over a period of five days. There had to be a minimum of three interactions in Mandarin and one in English. Participants were also asked to note down any unnatural sounding utterances generated by the chatbot when using the Mandarin setting.
At the end of each trial, an anonymous feedback questionnaire was issued to each participant, aimed at evaluating their experience using the chatbot. The questionnaire sought to collect user feedback on: (i) the chatbot's emotion recognition capabilities, (ii) the quality of and the empathy conveyed by the chatbot's responses, (iii) the overall experience of using the chatbot and (iv) the perceived usefulness of the chatbot. Participants were asked to evaluate each aspect by providing their level of agreement with a particular statement on a Likert scale.
### _User Interface_
Fig. 6 shows the user interface of the web platform that was deployed for the non-clinical trial. Protocols were available to view on the platform upon selection in both English and Mandarin.
### _Evaluation_
Participants were first asked to evaluate the emotion classifier's capabilities. When assessing whether the chatbot was good at guessing emotions, 89% and 93% of participants agreed with this statement for the emotion classification in English and Mandarin respectively compared to previous works [11], where only 63% of participants agreed (see Fig. 7). Taking into account that the distilled model is half the size of the one used in [11], this highlights the success of Knowledge Distillation at achieving performant yet compact models.
Participants were also asked to evaluate the quality of the chatbot's utterances. When asked to gauge whether the chatbot came across as highly empathetic throughout the conversation, 85% of the participants that had interacted with the RL chatbot agreed, while this proportion was 86% for the participants that had interacted with the SL chatbot (see Fig. 8). This result is consistent with the previous English-only implementation, where 88% of participants had agreed with this statement.
When asked if they found that the chatbot provided fluent and natural-sounding responses, 96% of participants that had used the RL chatbot agreed, while this proportion was 77% for the participants that had engaged with the SL chatbot (see Fig. 9).
Fig. 5: Reinforcement learning setup for empathetic rewriting.
Fig. 6: Trial platform web interface.
With regards to the level of engagement when conversing with the chatbot, 85% of participants agreed that they were engaged when the chatbot used RL-trained utterances, while 93% agreed when the chatbot used SL-trained utterances (see Fig. 10). The perceived user engagement of our platform is thus significantly higher than the previous English-only implementation, where only 69% of trial participants had agreed with the above statement.
Finally, participants were asked to evaluate the chatbot's usefulness. 89% of participants agreed that the platform was useful. This is roughly consistent with the proportion of users who had agreed that the English-only platform was useful, which was 92% (see Fig. 11).
feedback. If possible, having Mandarin-speaking clinicians participating in the trials would also be extremely valuable.
Another notable limitation was the study size. Whilst our trial recruited more participants than the previous study [11], the participant sample was still relatively small. Moreover, the study sizes across the three trials conducted were inconsistent (13, 14 and 27), with imbalanced demographics across the sexes. This is once again due to the stringent requirements of the trial screening which limited the participant pool. In future trials, recruitment should continue to increase the trial sample size and focus on balancing demographics.
### _Future Work_
We note that the human trial was conducted with the purpose of quantifying the efficacy of using the multilingual chatbot, and was not aimed at determining the therapeutic effects of SAT on a Chinese-speaking population. An 8-week psychological intervention can be conducted in the future, where participants are exposed, step by step, to the SAT protocols through weekly sessions, and in which a Mandarin-capable SAT chatbot can be used in guiding users through carrying out SAT protocols on a daily basis. Future work could also investigate the application of this translation-based method to the delivery of other rule-based psychotherapy methods, such as CBT, on a Chinese-speaking population.
Code-switching, sometimes referred to as code-mixing, is a phenomenon prevalent in multilingual communities, whereby individuals alternate between two or more languages within the same conversation [50]. Since a potential input to the chatbot could contain code-switched text, it would be interesting to see how model performance can be optimised for such inputs.
Moreover, it would be worth investigating the performance of the same model used in this paper on different languages, especially those with low resource availability. An extension could also be designed for expanding the range of emotions recognised by the classifier, and to assess the efficacy of formulating the emotion classification task as a multi-label problem, as human emotions can be complex and are typically not mutually exclusive.
A common reflection amongst trial participants focuses on the inherent rigidity of rule-based conversation. Therefore, future work could investigate the incorporation of open-dialogue to facilitate more natural conversations.
## VII Acknowledgement
Students Ruoyu Hu and Neophytos Polydorou were supported by UK Research and Innovation [UKRI Centre for Doctoral Training in AI for Healthcare grant number EP/S023283/1]. We appreciate the general support from the Empowered Human Foundation in Canada and the UK.
|
2301.11365 | Open RAN Testbeds with Controlled Air Mobility | With its promise of increasing softwarization, improving disaggregability,
and creating an open-source based ecosystem in the area of Radio Access
Networks, the idea of Open RAN has generated rising interest in the community.
Even as the community races to provide and verify complete Open RAN systems,
the importance of verification of systems based on Open RAN under real-world
conditions has become clear, and testbed facilities for general use have been
envisioned, in addition to private testing facilities. Aerial robots, including
autonomous ones, are among the increasingly important and interesting clients
of RAN systems, but also present a challenge for testbeds. Based on our
experience in architecting and operating an advanced wireless testbed with
aerial robots as a primary citizen, we present considerations relevant to the
design of Open RAN testbeds, with particular attention to making such a testbed
capable of controlled experimentation with aerial clients. We also present
representative results from the NSF AERPAW testbed on Open RAN slicing,
programmable vehicles, and programmable radios. | Magreth Mushi, Yuchen Liu, Shreyas Sreenivasa, Ozgur Ozdemir, Ismail Guvenc, Mihail Sichitiu, Rudra Dutta, Russ Gyurek | 2023-01-26T19:19:42Z | http://arxiv.org/abs/2301.11365v1 | # Open RAN Testbeds with Controlled Air Mobility
###### Abstract
With its promise of increasing softwarization, improving disaggregability, and creating an open-source based ecosystem in the area of Radio Access Networks, the idea of Open RAN has generated rising interest in the community. Even as the community races to provide and verify complete Open RAN systems, the importance of verification of systems based on Open RAN under real-world conditions has become clear, and testbed facilities for general use have been envisioned, in addition to private testing facilities. Aerial robots, including autonomous ones, are among the increasingly important and interesting clients of RAN systems, but also present a challenge for testbeds. Based on our experience in architecting and operating an advanced wireless testbed with aerial robots as a primary citizen, we present considerations relevant to the design of Open RAN testbeds, with particular attention to making such a testbed capable of controlled experimentation with aerial clients. We also present representative results from the NSF AERPAW testbed on Open RAN slicing, programmable vehicles, and programmable radios.
Open RAN, Interoperability Testing, IOT, Testbed, open-source, eNB, gNB, aerial, UAV, drone.
## I Introduction
Open Radio Access Network (specifications defined in the O-RAN Alliance) has emerged as a serious and perhaps critically necessary alternative to the proprietary radio access network (RAN) solutions that have characterized cellular networks. In particular, Open RAN provides a richer ecosystem based on the virtualization of network functions, providing greater economies of scale and reduced cost. The open architecture of Open RAN, and the definition of interfaces among modules that have been thus far treated as essentially monolithic, are expected to ensure inter-operation between products from different providers, and a competitive market, leading to improved quality and lower cost of ownership. It also enables the inclusion of commodity controllers, and the ability of operators to develop their custom control applications on top of those controllers, bringing the power of software-defined networking to RANs on an open-interface basis.
Such disaggregation comes at the cost of increased overhead, and early Open RAN systems are widely expected to have higher overheads and lower efficiency compared to extant single-vendor systems that, after all, have evolved and been integrated for decades. Optimistic views consist of expectations of workable, if inefficient, implementations soon, followed by rapid improvements in performance. Pessimistic views incline to doubts regarding how long such a process might take, or whether such systems can approach the efficiency of proprietary monolithic systems, or even be workable at scale. However, there are significant gains in terms of economies of scale through virtualization as well as additional functionality that provides a much richer set of capabilities (e.g., RIC apps, etc).
To dispassionately and pragmatically assess the workability of Open RAN, the community must move beyond early experiments and greenfield deployments to demonstrable repeatability of predictable system performance. Designing dependable test facilities for Open RAN components and systems, therefore, is among the most important outstanding tasks of the Open RAN community at this time. A key promise of Open RAN is interoperability (multi-vendor), and the key to verifying such claims is through interoperability testing (IOT). Recognizing the importance of IOT, the O-RAN Alliance has dedicated two entire work groups (WG4 and WG5) to specifying interfaces, and both groups have published specifications on interoperability testing and profiles in addition to unit test specifications (see [1, 2] and other specifications of WG4 and WG5). Such profiles allow the interoperability of any set of components to be tested in test configurations that can be realized in lab-environment test benches. However, to engender the above-mentioned growing confidence, Open RAN ecosystem players (contributors, as well as vendors, operators, and users) need to be able to test components in a comprehensive end-to-end test facility - one that is embedded in a realistic setting and span in the real world, including at least in part an outdoor setting, with a non-trivial number of UEs interacting with a non-trivial number of base stations. In the rest of this paper, we reserve the term "testbed" to indicate facilities capable of such complete RAN system tests.
Unmanned Aerial Vehicles (UAVs) have long been generally acknowledged as important clients of any future wide-area wireless communications system. However, the full scope of such devices as denizens of the wireless communication world is only coming to be appreciated recently. A key observation is that UAVs are not only wireless communications clients for command-and-control (the most obvious use case), but play roles in at least two other ways in the wireless ecosystem. First, trivially, as such devices increase in intelligence, and are tasked with increasingly more sophisticated missions, these missions are likely to pose additional - and likely much heavier - communication requirements; for example streaming live on-site video back to the cloud, or engaging in other data-heavy cloud-assisted distributed computation tasks. More importantly, and more significantly in the current context, with increasing on-board compute intelligence, such devices are capable of engaging not just as clients, but as crucial parts
of the wireless communication infrastructure itself. This is especially important in an open interoperable ecosystem such as Open RAN aspires to be, as open competition spurs innovative contributors to explore previously unoccupied ecosystem roles.
The visioning and design exercise for an Open RAN testbed that aspires to provide interoperability and system testing capabilities, if such a facility expects to support the full evolutionary arc of aerial devices, must include reflection specific to these considerations. In this paper, we leverage our joint experience in (i) architecting and operating an advanced wireless testbed with aerial robots as primary citizens, and (ii) industry Open RAN testing and dependability expectations, to provide a starting point that we hope will be useful to such architects and designers. In the next section, we briefly review some existing test facilities with the capability (or potential near capability) of acting as Open RAN testing resources and juxtapose them with industry Open RAN testing norms, as well as basic support requirements for UAVs. In Section III, we discuss in further detail the class of use cases that represent the potential synergistic use of UAVs in Open RAN systems. Finally, we provide a deep consideration of one extant testbed - our own NSF AERPAW platform at NC State University - to showcase the process of reviewing testbed capabilities to articulate both strengths and shortcomings in light of an ideal Open RAN testbed with native UAV support.
## II Visioning an Open RAN/UAV Testbed
### _Existing Wireless Testbeds and System Testing_
There are numerous testbeds that are accessible to researchers to experiment with wireless technologies including 5G, Open RAN, and UAVs. In Table I, we list a few of these testbed facilities that are accessible to researchers from academia, government, and industry. Note that we do not intend to present Table I as either comprehensive or authoritative. There are likely many facilities that we are unaware of, or for which no information is publicly available to us. Even for those we have surveyed, Table I represents our best knowledge as obtained from publicly available sources (as cited); we regret any unintended mischaracterization. Our survey was also heavily biased toward facilities in the USA.
Nevertheless, since our focus is on test facilities publicly or generally available to researchers and practitioners in the US, and on facilities sizeable enough for UAVs to be practically a part of the test ecosystem, we believe that Table I provides representative, and meaningfully extensive, information for the Open RAN testbed designer of the near future. We have chosen to characterize each facility listed by means of a few high-level considerations. Obviously, explicit currently stated support of Open RAN testing, and UAV support/integration, are features we looked for. Related to Open RAN, we also looked at the RF spectrum the facility is capable of and allowed to operate in, by noting if it lists an Innovation Zone (IZ) license from the Federal Communications Commission (see for example [3]), and also its deployment context (indoor facilities may be able to use isolation such as Faraday cages and operate without an FCC Innovation Zone or experimental licenses).
Related to UAV support, we also looked at whether such UAVs (or any component of the testbed, for those without UAVs) support controlled mobility. We consider this feature an important one for future Open RAN testbeds. A significant proportion of wireless communications system complexity arises from (or is exacerbated by) the mobility of system components, most usually that of User Equipment (UE); therefore it is important for the testbed to support experiments involving mobility, hand-over, and disconnect-reconnect events. However, the core of the scientific method is the repeatability of experiments and the reproduction of experimental results. To provide this for experiments related to mobility, the relative motion of various system components must be possible to precisely reproduce on demand, for as many runs of an experiment as necessary.
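To make the notion of precisely reproducible motion concrete, the sketch below scripts a fixed waypoint trajectory for a MAVLink-compatible UAV so that the same flight can be replayed for every run of an experiment; it is a minimal sketch assuming the `dronekit` library and a SITL or telemetry endpoint, with the connection string, waypoints, altitudes and dwell times as placeholders.

```python
import time
from dronekit import LocationGlobalRelative, VehicleMode, connect

# Minimal sketch: a scripted, repeatable trajectory for a MAVLink-compatible UAV.
# Connection string, waypoints, altitude, speed and dwell times are placeholders.
WAYPOINTS = [            # (lat, lon, alt_m), flown identically on every experiment run
    (35.7274, -78.6962, 30.0),
    (35.7280, -78.6950, 30.0),
    (35.7268, -78.6945, 30.0),
]

def fly_mission(connection="udp:127.0.0.1:14550", groundspeed=5.0):
    vehicle = connect(connection, wait_ready=True)
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    vehicle.simple_takeoff(WAYPOINTS[0][2])
    time.sleep(15)                                  # crude wait; a real script checks altitude
    vehicle.groundspeed = groundspeed
    for lat, lon, alt in WAYPOINTS:
        vehicle.simple_goto(LocationGlobalRelative(lat, lon, alt))
        time.sleep(30)                              # crude dwell; a real script checks position
    vehicle.mode = VehicleMode("RTL")               # return to launch to end the run
    vehicle.close()
```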
Another key feature we looked for was _emulation support_. The single most valuable characteristic of actual wireless test facilities is the availability of a real Radio Frequency (RF) environment, providing real-world challenges such as fading, multi-path, and statistical uncertainty, simultaneously with the experiment repeatability. The _simulation_ of RF environments by means of mathematical models, no matter how sophisticated, abstracts a measure of realism from test results; further, the experimenter has no need of an experimental facility (or even actual radios) for simulation exercises, which are an appropriate earlier stage in proving research before considering testbed validation. The exercise of _emulation_, on the other hand, provides an important added value to a testbed, in that it is a digital twin of a real RF system, capable of operating in real-time, in which physical radio equipment can actually be immersed. Emulation systems are driven by calibration (to some real RF environment) rather than modeling and may be realized by digital twinning, or more often by analog RF circuitry. In extreme cases, a test facility may be entirely based on emulation, as in the case of the Colosseum system (originally created by DARPA and currently operated at Northeastern University under NSF aegis; see Table I). More typically, emulation support is an adjunct part of a physical test facility that can serve as an early and less costly stage of full testbed validation.
Even before moving on to discussions of testing requirements specific to Open RAN or UAVs, we can note a few points from Table I. Naturally, those we were able to survey were largely public-use testbeds, since those are the ones that are most likely to provide information publicly about themselves. This dovetails with our focus since the interoperability focus of Open RAN implies that for engendering maximum confidence, the testbed facility should be open to anybody that is interested in repeating experiments and verifying results.
Unsurprisingly, there is no testbed on the list that provides full Open RAN as well as UAV support today, even without considering controlled mobility. Less obviously, we find that the combination of UAV support and controlled mobility is rather rare; only a handful of testbeds on our list provide even partial mobility control in conjunction with UAV support.
Interestingly, we note that a number of testbeds provide emulation support, in keeping with our expectation that this is a key required feature of wireless testbeds. However, when emulation is considered jointly with mobility control, a non-obvious consideration may be worth mentioning. For a testbed
that provides mobile airborne components, any emulation system must not only emulate the physical RF environment, but also the physics of airflow and aerial navigation, including wind gusts and other disturbing factors (analogous to noise and interference in the RF environment), as well as the dynamics, features, and constraints of a specific UAV. The ability to autonomously navigate one or more UAVs in the 3D space based on RF observations in the environment is also an important capability with various use cases. Furthermore, subtle moves of the UAVs (e.g., a multicopter pitching to move forward) can change the orientation of highly directional RF antennas (especially relevant for mmWave transmissions). With this in mind, it is perhaps unsurprising that the _combination_ of emulation support and mobility control is quite rare in the extant testbeds.
\begin{table}
\begin{tabular}{p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}} \hline
**Testbeds (alphabeti-cal)** & **Location** & **Emulation Support** & **Open RAN** & **UAV Support** & **Controlled Mobility** & **FCC-IZ** & **Main Focus** & **Deployment Environment** & **Access** \\ \hline AERPAW [4] & Raleigh, NC & ✓ & Partial & ✓ & ✓ & ✓ & UAVs, SDRs & Rural, Urban & Public \\ \hline ARA [5] & Central Iowa, IA & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & Rural wireless & Rural & Public \\ \hline Arena [6] & Boston, MA & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & ✓ & SDRs & Indoor grid & Public\({}^{\ast}\) \\ \hline ARLIS [7] & College Park, MD & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 5G security & Virtual & Public\({}^{\ast}\) \\ \hline ARM / Tech Mahindra 5G Lab [8] & NK & NK & ✓ & \(\times\) & \(\times\) & \(\times\) & 5G testing & NK & Private \\ \hline Booz Allen 5G & Annapolis Junction, MD & & NK & NK & \(\times\) & \(\times\) & \(\times\) & Mission critical & NK & Private \\ \hline CCI xG Testbed [10] & Arlington, VA & NK & ✓ & \(\times\) & \(\times\) & \(\times\) & SDRs, AI & Indoor & Public\({}^{\ast}\) \\ \hline Colosseum [11] & Burlington, MA & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) & ✓ & Emulation, SDRs & Cloud & Public\({}^{\ast}\) \\ \hline CORNET [12] & Blacksburg, VA & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & SDRs & Indoor, Rooftop & Public \\ \hline COSMOS [13] & Manhattan, NY & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & mmWave, backhaul & Urban & Public \\ \hline Drexel Grid [14] & Philadelphia, PA & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & Emulation, SDRs & Indoor grid & Public\({}^{\ast}\) \\ \hline Ericsson Open Lab [15] & NK & ✓ & ✓ & \(\times\) & \(\times\) & CloudRAN, virtualized 5G & Indoor & Private \\ \hline INL Wireless Testbed [16] & Idaho Falls, ID & \(\times\) & \(\times\) & ✓ & Partial & \(\times\) & Wireless security & Rural & Private \\ \hline IRIS [17] & Los Angeles, CA & \(\times\) & \(\times\) & \(\times\) & ✓ & \(\times\) & Robotic wireless networks & Indoor & Public\({}^{\ast}\) \\ \hline LinQuest Labs [18] & Chantilly, VA & ✓ & NK & ✓ & NK & \(\times\) & 5G security, UAV, NTN & Cloud, indoor & Public\({}^{\ast}\) \\ \hline NASA MTBs [19] & \(\times\) & \(\times\) & \(\times\) & ✓ & \(\times\) & \(\times\) & Multirotor UAV testing & Indoor & Public\({}^{\ast}\) \\ \hline New York UAS Test Site [20] & Rome, NY & \(\times\) & \(\times\) & ✓ & Partial & \(\times\) & BVLOS UAV testing & Rural, Urban & Public\({}^{\ast}\) \\ \hline NIST 5G Coexistence Testbed [21] & Boulder, CO & ✓ & NK & \(\times\) & \(\times\) & \(\times\) & 5G coexistence testing & Indoor & Public\({}^{\ast}\) \\ \hline NIST Testbed [22] & NBIT & NK & \(\times\) & & \(\times\) & \(\times\) & \(\times\) & Spectrum sharing & Indoor & Public\({}^{\ast}\) \\ \hline NITOS [23] & Volos, Greece & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & Cloud-based wireless services & Rooftop & Public \\ \hline Northeastern UAS Chamber [24] & Burlington, MA & \(\times\) & \(\times\) & ✓ & NK & \(\times\) & Drone flights & Drone cage, anechoic chamber & Public\({}^{\ast}\) \\ \hline ORBIT [25] & N. 
Brunswick, NJ & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & SDRs & Indoor grid & Public \\ \hline PNNL 5G Innovation Studio [26] & Richland, WA & \(\times\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) & Commercial 5G & Indoor & Private \\ \hline POWDER-RENEW [27] & Salt Lake City, UT & ✓ & ✓ & \(\times\) & \(\times\) & ✓ & SDRs, massive MIMO & Urban & Public \\ \hline RELLIS 5G testbed [28] & Bryan, TX & \(\times\) & NK & NK & NK & 5G (AT&T) & Outdoor & Public\({}^{\ast}\) \\ \hline Cyber Living Innovation Lab [29] & Fairfax, VA & NK & ✓ & NK & NK & \(\times\) & 5G security, robotics & Indoor & Public\({}^{\ast}\) \\ \hline SOAR [30] & Buffalo, NY & \(\times\) & \(\times\) & ✓ & Partial & \(\times\) & Drone flights & Drone cage & Public\({}^{\ast}\) \\ \hline TIP Community Lab [31] & Overland Park, Kansas & NK & ✓ & \(\times\) & \(\times\) & \(\times\) & \(\times\) & O-RAN 5G NR & NK & Private \\ \hline UNH Interoperability Lab [32] & Durham, NH & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) & Interoperability testing & Indoor & Public\({}^{\ast}\) \\ \hline Virginia Tech Drone Park [33] & Blacksburg, VA & \(\times\) & \(\times\) & ✓ & Partial & \(\times\) & Drone flights & Drone cage & Public\({}^{\ast}\) \\ \hline \end{tabular}
\end{table} TABLE I: Existing testbeds with advanced wireless and UAV experimentation capabilities. Public testbeds indicated with an asterisk (*) may be open only to partners or require contacting testbed operators rather than being generally available through an experimentation portal. Features for which public information could not be found are marked as Not Known (NK).
### _Extant Industry Open RAN Testing Practices_
The facilities listed in Table I are largely those focused on system testing, some of which currently already support deploying some particular Open RAN system in part or in full. Researchers or ecosystem developers may find this sufficient since it is possible for them to test or study their products or innovations in contiguous areas supported by "some" Open RAN implementation. However, vendors, carriers, and other ecosystem players who are involved in the business of actually building or operating a data network as a service need to focus far more deeply on component testing, and (critically for Open RAN) cross-vendor interoperability testing - especially the large swathes of new interoperability modes enabled by Open RAN's disaggregation modes.
Such testing proceeds by identifying Key Performance Indicators (KPIs) of interest, and then measuring them for Devices Under Test (DUT) or System Under Test (SUT) for comparison purposes, as well as possible absolute acceptance criteria. It would seem a reasonable expectation that an Open RAN system testbed should enable such KPIs to be measured, not just end-to-end, but at interoperation points or interfaces (and for specific O-RAN alliance defined interfaces, including F1/W1/E1/X2/Xn).
However, once one enters the domain of detailed KPIs, there is little standardization of what to measure. To an extent, the detailed definition of KPIs is part of the specialized knowledge of vendors, operators, and testing service providers that are perceived to provide a competitive advantage, and hence considered confidential. Because many of the KPIs may be specific to specific vendors, there are also a very large number of them. Commercial 5G networks test and validate literally thousands of KPIs; the testing regime of well-known mobile operators actually includes over ten thousand KPIs. Many KPIs have sub-KPIs and the RF optimization KPIs are substantial. This will only increase further with the greater use of disaggregation in Open RAN networks. There are numerous Open RAN interoperability and validation labs today. There are private and public testbeds supported by vendors, consortia, universities, and the government. Not all labs concentrate on all parts of the toolchain and ecosystem, most focus on specific aspects; validation testing will be greatly dependent on the use case and focus of the lab. In the Open RAN ecosystem, the RAN Intelligent Controllers (RICs) allow for x-Apps and r-Apps to use the RIC framework as an engine, but with custom functionality. This implies that every such app can be expected to have a fairly large number of KPIs associated with it depending on its particular functionality. There is the potential for cross-KPIs between the different apps as well.
In light of this, we are forced to go back to fundamentals in recommending KPI capabilities for Open RAN testbeds. At the highest level of abstraction, there are certain priority KPIs that are foundational for a validation environment, and detailed consideration of many custom KPIs for various operators and vendors (although we are not in a position to list them here) can be seen to trace back to one or the other of these few foundational KPIs:
* Ability for UE to attach to the network;
* uplink and downlink;
* uplink and downlink;
* Latency;
* Retainability;
* Accessibility; and
* Optimization
Each of these KPIs drives multiple other test parameters and features such as performance, load testing, and RF design and optimization. At this time, practical Open RAN testing in the real world is largely confined to component testing and using KPIs related to the top few items in the above list; in the future, more testing related to the Accessibility and Optimization KPIs is likely to proceed.
Finally, an Open RAN testbed must include at least one complete reference Open RAN implementation, both to serve as a benchmark for other components to be tested against, and also to enable system tests to proceed for experimenters who wish to innovate in some, but not all, parts of the Open RAN ecosystem. While Open RAN provides for a multi-vendor environment in building a network from radios, vRAN software, hardware servers, and related software and services, it is important to note that "open" does not automatically or necessarily equate to "interoperable". The same need for system integration of multi-vendor Open RAN networks that has driven the need for open test environments must inform the testbed designer in choosing such reference implementations that are actually workable, and hopefully as compliant with O-RAN interface definitions as possible, so as to be broadly compatible with components and devices that testbed clients may bring in the future. In Table II we have summarized what we perceive to be key high-level components for an Open RAN validation environment testbed.
\begin{table}
\begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{**Open RAN Components**} & \multicolumn{1}{|c|}{**Test/Evaluation Components**} \\ \hline \(\bullet\) 5G core access and/or edge & \(\bullet\) A Faraday cage / environment \\ \(\bullet\) O-RAN Radios: \(\beta\)BN/eNB & \(\bullet\) 5G signal analyzer – test and validate \\ (some at controllable UAVs) & measurements \\ \(\bullet\) VRAN SW & \(\bullet\) RTSA: Real-time spectrum analyzer \\ \(\bullet\) GPS system (s)/Antenna- for synchronization & \(\bullet\) Network analyzer- antenna system and cable measurements \\ \(\bullet\) Forward Error Correction (FEC) & \(\bullet\) Antenna testing: anechoic chamber-measure patterns \\ \(\bullet\) EdgeServer, part of the core network in a box & \(\bullet\) Smaller Shielded enclosures, Faraday \\ \(\bullet\) (Open) RIC platform & \(\bullet\) Traffic generator \\ \(\bullet\) APps, Aypps & \(\bullet\) Interferers – for testing purposes \\ \(\bullet\) UEs (some at controllable UAV for certain use cases) & \(\bullet\) Various Adapters: need for every type of connector \\ \(\bullet\) TotR switch & \(\bullet\) Jumper cables \\ \(\bullet\) Cell site routers (CSR) & \(\bullet\) Attaneuors \\ \(\bullet\) Acceleration for Open RAN & \(\bullet\) Power splitters / power dividers \\ \hline \end{tabular}
\end{table} TABLE II: Example components for an Open RAN validation environment testbed.
### _Supporting UAVs in a Testbed_
In its simplest form, any aerial robot (i.e. an airborne device that stays aloft for significant periods of time and is capable of directed motion) can be considered a UAV, but the term is usually reserved for devices that are capable of full (or at least a high degree of) autonomous operation. A UAV can therefore exhibit not only primitive autonomous behavior (pre-programmed/way-point trajectory, heat-seeking, collision avoidance, auto-return-to-launch on predetermined conditions such as GPS-lock-loss), but also more complex operations such as computed conditional sensor-driven on-the-fly trajectory control (such as search-and-rescue), participation in coordinated trajectory control locally (platoon or swarm behavior) or globally (such as UTM - the US Federal Aviation Administration's Unmanned Aircraft System Traffic Management - or similar), or dynamic self-aware re-tasking (such as degrading mission parameters for safety if battery reserves fall to risky levels).
In distinguishing between testbed support of UAVs, it is important to realize that a UAV implies close integration of the onboard computing and communication equipment with the vehicle's command and control. It is helpful to think of two extreme cases as representative of the two classes. On the one hand, we can mount a computing/communication device (such as an ordinary smartphone) on a UAV. The UAV's autonomy, trajectory computation, or command and control, remain completely as before. The coupling between the UAV and the cellphone it carries as a payload is simply mechanical (but may include antenna mounts or high-gain antennas custom-positioned for the UAV, and common power supply). At the other extreme, the UAV contains only a single computing/communication device, which is capable of being tasked with complex missions (such as air quality analysis, image analysis based search-and-rescue), and also subsumes the trajectory computation (whether autonomous, command-and-control-based, or based on some coordination) for the UAV; in this case, the vehicle becomes in effect a peripheral of the onboard computer.
First, we consider the task of integrating support in a wireless testbed for UAVs only, used as vehicles for an airborne UE. This includes the case where the air vehicle has no autonomy and is controlled by a ground-based operator using a handheld or other radio remote control equipment; and even the case where the air vehicle does not have any controlled mobility (such as free-floating balloons) or any mobility at all (such as tethered aerostats or helikites). The basic challenge for a wireless testbed to support UAVs is posed not only by the fact that they are mobile (which, after all, ground UEs also exhibit, when users walk or drive), but also by the fact that they have a widely varied altitude as well as azimuth compared to traditional UEs on the ground. Both spectrum and latency are KPIs of interest for a UE. The front-haul and mid-haul links must provide very low latency to maintain system synchronization and function under a varying altitude of the UE, and the spectrum used for communication can significantly affect the achievable coverage and throughput. A further challenge is that of antenna occlusion, which some UAVs attempt to mitigate by multiple antenna locations around their bodies. Some UAVs mount antennas on gimbals in an effort to maintain constant directional properties, while others use servos to allow controlled pointing of antennas. These challenges are exacerbated by the fact that most base stations, whether commercial or built out of commodity open technology, exhibit their own antenna coverage patterns, which are optimized for ground coverage. Studies have shown that the consequence of this optimization is the formation of multiple lobes at increasing altitudes, in complex patterns, that cannot be predicted easily as a function of the altitude of the UAV.
The UAV will have to be tested in a controlled environment to ensure that the network functions and meets O-RAN specifications. Creating a Faraday environment to do the controlled validation testing will pose challenges compared to traditional Open RAN lab Faraday environments. Then the testing will need to be expanded to an open environment and optimized based on interferers, physical obstacles, and spectrum bands used - as the propagation and throughput are connected to the spectrum band used for communications. In Fig. 1, we summarize six proposed stages for Open RAN UAV validation.
While UAVs allow intelligent control of position and trajectory jointly with RAN intelligence (Apps executing at the RICs), the softwarized character of Open RAN also opens up exciting possibilities of allowing the onboard computer to take part in the Open RAN ecosystem in ways other than just as a UE. We devote the next section to these considerations.
## III Use Cases for UAVs in Open RAN
Considering the aerial controlled mobility and communication among fixed and portable nodes, UAVs will facilitate enhancements to Open RAN with flexible deployments and on-demand, on-time network access. Several use case examples on Open RAN-based air mobility scenarios are provided as follows (see Fig. 2).
Fig. 1: Proposed stages for Open RAN UAV validation.
_Scenario 1. UAVs serve as UEs_: This use case focuses on exploring the functionalities of O-RAN RICs for managing and orchestrating network components aimed at 3D critical mission operations (e.g., secure, search and rescue) assisted by UAVs, as they are able to exhibit agile, fast, and autonomous behavior by organizing themselves to exchange information. Considering a scenario involving UAVs connected to an Open RAN ground BS, UAVs as UEs can carry high-resolution cameras and/or sensors, collecting real-time video and transmitting it back to the ground BS, e.g., to be used to identify possible targets of interest through a deep neural network object detection model, and in addition report information about application performance to _rApps_. In the meantime, the E2 nodes of O-RAN are responsible for updating UAV control with insights produced by their applications (_xApps_ and _rApps_) to support the RAN optimization process. In this context, Open RAN is able to support the demands of highly dynamic scenarios of critical-mission operations integrated with UAVs due to its flexibility and characteristics of component dissociation.
_Scenario 2. UAVs act as O-RUs_: As described in O-RAN specifications [34, 35], UAVs can play a role as O-RUs and process several simple tasks. As an extension, this scenario focuses on the use of UAVs as O-RUs to handle more complicated network tasks, e.g., to quickly deploy an aerial network to assist or extend the terrestrial network where communication and computing resources can move closer to users to meet diverse and stringent 5G application requirements, such as ultra-low latency and ultra-high reliable connectivity. Considering a scenario in which each UAV-BS is equipped with an O-RU to serve ground mobile users, the objective is to optimize the performance of serving offloading tasks via both controlling UAV-BSs to guarantee the quality of communication channels to ground users and efficiently distributing offloading tasks to appropriate Open RAN elements according to the current association. Because of the 3D air mobility capability of UAVs and disaggregation of Open RAN architecture, they may potentially deliver better data offloading capabilities and better resource utilization.
_Scenario 3. UAVs act as O-DUs and O-CUs_: 1) Using UAVs as O-DUs allows for flexibly hosting RLC/MAC/High-PHY layers based on a lower layer functional split, where UAVs can dynamically connect to multiple O-RUs allowing on-demand resource pooling for virtual baseband functions of high PHY layer, MAC, RLC, and synchronization; 2) using UAVs as O-CUs helps to easily control the operation of multiple O-DUs within/beyond the coverage area, e.g., the radio resource control for flexibly managing the life cycle of the connection, routing or duplication for split bearers, and the service data adaptation for managing the QoS of the traffic flows through autonomous 3D air mobility capability of UAVs.
_Scenario 4. Drone swarm based Open RAN_: This use case envisions multi-role drones without ground facilities that form an ad-hoc/swarm based Open RAN. Based on _Scenarios 2-3_, we can consider a set of containers to virtualize different O-RAN elements such as O-RUs, O-DUs, and O-CUs deployed in drones and distributed computing nodes of the network. Given these containers with different functions, the objective is to create a robust Open RAN testbed in a swarm of drones towards full decentralization and controlled air mobility.
_Scenario 5. Flying wireless backhaul in Open RAN_: Wireless backhaul as an economically sustainable solution has been included by 3GPP as part of the integrated access and backhaul study item [36, 37] for the 5G NR standard. As an extension in Open RAN architecture, this scenario focuses on building a large-scale, self-organizing network of drones that are connected using a wireless mesh backhaul, which caters to dynamic bandwidth-hungry and latency-sensitive applications. Based on _Scenario 4_ with role-specific operations, drones can hover above or close to the O-RU and serve as an airborne last-hop link connecting RAN to the core network. Additionally, they can act as relays between two O-RUs separated by a longer distance to extend coverage forming a multi-hop mesh network for communications and control. Multi-drone backhaul in Open RAN is capable of flexibly adapting itself to cater to highly dynamic applications and events, and easily being scaled up to cover urban scenarios using long-range radios.
_Scenario 6. D2D communications underlaying drone-assisted Open RAN_: Implementation of device-to-device (D2D) communication such as sidelink can be an extension of the network into areas that traditional propagation of the fixed O-RU cannot reach. Particularly, drones can serve as UEs or relays deployed much more swiftly and improve the network throughput performance by dynamically adjusting their locations to provide direct or relayed D2D links to any out-of-coverage users. Additional sidelink capabilities such as multi-hop [38] and multi-link (in 3GPP Rel. 19) can provide higher resiliency in this mode, especially offering a valuable set of capabilities for mission-critical services such as disaster response rescue and operation.
_Testbed Considerations_: The above poses a rich and variegated set of potential operational scenarios, and it is impractical to attempt to enumerate specific design issues. Instead, we again propose foundational considerations and hark back to our discussion in Section II-A. The general capabilities of the testbed that we can identify in order to support such innovative scenarios are:
* The capability of mobility control of custom air vehicles,
* The ability to emulate not only the RF environment, but also airflow and UAV flight, and
* The inclusion of onboard computers, suitable for integration into UAVs, that can support user programming to create software components of the Open RAN ecosystem.
## IV AERPAW Testbed Review for Open RAN
Thus far, we have reflected on general requirements of an Open RAN testbed that is able to integrate UAVs with controlled mobility. In the remainder of this paper, we take a deep dive into the AERPAW testbed, reviewing it in light of the considerations we have derived above. We choose AERPAW because we are intimately familiar with it; the authors of this paper include the PIs of the AERPAW project, and key
architects and DevOps personnel working on the AERPAW facility. However, it is also true that AERPAW was conceived and built to support controlled air mobility in a testbed for use by a national community of researchers. Thus, it is a reasonable facility in which to conduct such a thought exercise of how a fully-featured Open RAN testbed may be built up along the same lines. AERPAW has the foundation for becoming a highly valuable Open RAN UAS testbed.
AERPAW is the third testbed funded under the PAWR initiative to support advanced and emerging wireless research. It is a multi-year, multi-phase project that started in September 2019, and it is expected to be finalized by 2025. AERPAW experimentation capabilities became generally available with an initial set of resources and features in November 2021. Additional platform resources, sample experiments, and experimentation capabilities are expected to be released at the end of Phase-2 (by May 2023) and Phase-3 (by May 2024). AERPAW is primarily and essentially a testbed of physical resources, not computing resources. The crucial parts of these physical resources are: (i) the RF environment and the airspace that the AERPAW operating areas represent; (ii) the physical equipment (SDRs, commercial RF equipment, UAVs, and UGVs) that AERPAW provides to leverage those environments for experimental studies; and (iii) the expertise (and consequent exemptions) in conducting such studies in compliance with FCC and FAA regulations that AERPAW represents.
Physically, the testbed is hosted at sites in and around the NC State campus in Raleigh, NC. Central to AERPAW's unique characteristic is the availability of UAVs and UGVs in the testbed that can be placed under the direct programmatic control (of trajectories) of the researcher. In conjunction with the programmable USRPs that are also available for direct programming by the researchers, as well as other real-world, commercial radio equipment, this provides the NextG wireless researcher a facility for research experiments not practicable in any other facility at this time.
_Fixed Nodes, Portable Nodes, and Vehicles:_ At a very high level, the facility includes a number of tower locations (fixed nodes), at each of which some combination of AERPAW programmable SDRs and commercial radio equipment is permanently installed. The SDRs are controlled by servers, or companion computers (CCs), installed in each location that also represent edge-computing capabilities. These fixed node locations are distributed over the extensive Lake Wheeler Agricultural Fields of NC State (see Fig. 3(a)), and some nodes are also installed in the Centennial Campus (see Fig. 3(b)). The complement of these fixed nodes are AERPAW's portable nodes, also consisting of a computer and SDR(s), but smaller ones so that an AERPAW portable node can be mounted on a UAV/UGV. The CC on a portable node, an Intel NUC, also controls the UAV/UGV itself. A smaller version of the portable node, which can be carried by the smaller UAVs, is also available for experiments with mobile phones and LoRa sensors connected to a LattePanda as the CC.
More information on AERPAW is available at the AERPAW Facility website and User Manual linked therefrom, and previous publications (also listed on the website). In what follows, we attempt not a comprehensive overview of AERPAW, but rather a review in light of the desirable characteristics we identified above.
### _Span, Scale, Access_
Fig. 3(a) and Fig. 3(b) show the outdoor deployment footprint of AERPAW's fixed nodes in NC State Lake Wheeler and NC State Centennial Campus, respectively. The equipment expected to be publicly available for experimentation by the end of AERPAW's Phase-2 (expected May 2023) is also illustrated. Currently, it is possible to experiment with UAVs
Fig. 2: Use case examples for Open RAN-based air mobility: (a) UAVs as UEs; (b) UAVs as O-RUs; (c) UAVs as O-DUs and O-CUs; (d) UAV swarms in O-RAN; (e) Flying wireless backhaul in O-RAN; (f) D2D communications underlaying UAV-assisted O-RAN.
at Lake Wheeler Field Labs; AERPAW does not currently support UAV operation by experimenters in Centennial Campus but supports UGV operation there, and UAV operation will likely become available to experimenters in the future.
This geographical span is reasonable for an Open RAN testbed, even with experiments including UAVs. However, scale is a different matter. With nine fixed nodes, six portable nodes, eight programmable UAVs, and some non-programmable commercial radio systems such as an Ericsson base station and five Keysight RF sensors, AERPAW can support a large variety of meaningful advanced wireless research - including proof-of-concept Open RAN experiments at small scales. But to support the full gamut of Open RAN testing and Open RAN related research experiments, AERPAW would need to add a large number and variety of commercial or stock UEs, and a larger number of programmable UAVs; a few more programmable fixed and portable nodes would also likely be useful.
In Open RAN, the potential softwarization or virtualization of various system components is a particularly attractive feature for innovators. This requires allowing experimenters direct programming access to all parts of the facility, and at the highest levels of access. Managing such access while ensuring the safety and regulatory compliance of the facility is a distinct challenge for any testbed that aspires to achieve this.
On this front, AERPAW is already well positioned, having been designed from the outset as a _batch-mode facility_. Experimenters develop experiments in a virtual environment and submit experiments for execution on the physical testbed once development is complete. AERPAW Operations personnel (Ops) then execute these submitted experiments in the physical testbed environment and collect the output of the experiments as designed by the Experimenters, which are available for Experimenters to view and analyze back in the virtual environment.
This is not an arbitrarily decided constraint, but a considered architectural choice. In operating a facility with programmable radios and programmable air vehicles, we are obligated to make, and uphold, certain guarantees to the FCC and FAA. However, we also want to allow Experimenters the ability to program those radios and air vehicles, ideally without needing to become fully conversant with FCC and FAA regulation details, obtain exemptions, or expertise in techniques for ensuring compliance. Batch mode operation allows us to interpose critical filters and monitors into the Experiment code execution flow that allow us to guarantee safe and compliant operation. It is one of the most valuable features of the AERPAW platform that we assume this guarantee ourselves, rather than passing
Fig. 4: Experiment workflow for users of AERPAW.
Fig. 3: AERPAW fixed node deployments at (a) NC State University Lake Wheeler Field Labs, Raleigh, NC; and (b) NC State University Centennial Campus, Raleigh, NC.
on the responsibility for compliant operations (and liability for non-compliance) to the Experimenter.
Figures 4(a) and 4(b) show the entity relationships in AERPAW and the experimenter's experiment design workflow. Experimenters request "Development Sessions" in which they program a virtual environment that is programmatically indistinguishable from the computing environment in the physical testbed. Once completed, they submit such experiments for "Testbed Execution Sessions". The containers housing the experimenter's code are bodily moved to the corresponding nodes in the physical testbed, where they are executed as before, but with additional supervisory containers monitoring for any RF violation or unsafe air-vehicle operating conditions, overriding as necessary. As an additional line of defense, human operators in the field are able to issue aborts if the automated system should fail to override.
### _Spectrum and Licenses_
AERPAW supports multiple frequencies for experimentation with its fixed and portable nodes and vehicles. In particular, AERPAW is one of the few FCC Innovation Zones (FCC-IZs) in the United States [39, §1.6], with the frequency bands highlighted in Table III. The maximum effective isotropically radiated power (EIRP) limits for fixed stations (FSs) and mobile stations (MSs) are also specified in the table. The FCC-IZ for the Lake Wheeler Field Labs site of AERPAW covers an area of approximately 10.5 square miles, while the Centennial Campus FCC-IZ covers an area of approximately 3 square miles. Experimenters can also port their FCC experimental licenses to AERPAW's FCC Innovation Zone. As noted in Table III, due to the sensitivities of certain bands and the wide interference footprint of transmissions from an aerial vehicle, the FCC does not allow airborne use in certain bands [40].
AERPAW currently supports a subset of the frequency bands through additional FCC experimental licenses (FCC Call Sign: WK2XQH [41]), which are offered to AERPAW's users to carry out over-the-air experiments on the platform. In particular, for SDR experiments, AERPAW has experimental licenses at 3.3-3.55 GHz and 902-928 MHz, with plans to incorporate this band into the AERPAW FCC-IZ in the future. The experimental licenses for the Ericsson network include 1.7/2.1 GHz for the LTE system and 3.4 GHz for the 5G system. AERPAW also has plans to support generally available experiments using its mmWave SDR framework by the end of Phase-3 using Sivers phased arrays operating at 28 GHz. Spectrum monitoring and passive I/Q data collection experiments can be supported using USRPs and Keysight RF sensors between 100 MHz to 6 GHz.
A particular spectrum band that is of recent interest to safety and navigation related command-and-control communications for UAVs, and that AERPAW will explore experimental licenses in the future, is 5030-5091 MHz for which FCC recently released a Notice of Proposed Rule Making (NPRM) [42]. Another band that may potentially be used for ensuring vehicle-to-vehicle (V2V) separation with cooperative surveillance in the future for urban air mobility (UAM) scenarios is 1104 MHz (also known as UAT2) [43, 44, 45]. Additional spectrum bands that are specifically of interest for UAV/UAM scenarios can be found in [40].
### _Mobility Control_
AERPAW is also, by its original design, already adequate in providing controlled mobility, both for repeatability of experiments and for experimentation with programmatic trajectory control by experimenters, and both for aerial vehicles as well as ground vehicles. Figure 5 shows the AERPAW vehicle control stack. In AERPAW, the main autopilot we support at this time is ArduPilot [46], as it is open source and well-trusted. ArduPilot supports MAVLink [47] as a communication protocol, and, therefore, all AERPAW vehicle software sends and receives MAVLink commands. For the safety of the testbed and of the AERPAW operators, only a reduced subset of MAVLink commands is allowed to pass through the MAVLink Filter and reach the autopilot.
Keeping in mind the caveat on the reduced subset of MAVLink commands allowed to pass to the autopilot, at one extreme, an experienced AERPAW user can discard the entire stack shown at the top of Fig. 5 and write their own MAVLink application using any other framework they wish (e.g., they could use MAVSDK [48] if they prefer a C++ based library).
However, to smooth the learning curve, we implemented a vehicle library named aerpawlib [49], which features a finite state machine model, with hooks for vehicle (and/or radio) actions at each state. Several examples are available either to
| **Frequency Band** | **Type of Operation** | **Allocation** | **FS Max EIRP** | **MS Max EIRP** |
|---|---|---|---|---|
| 617-634.5 MHz (DL) | Fixed | Non-federal | 65 dBm | - |
| 663-698 MHz (UL) | Mobile | Non-federal | - | 20 dBm |
| 907-5912.5 MHz (IL) | Fixed and Mobile | Shared | 65 dBm | 20 dBm |
| 1755-1760 MHz (UL) | Mobile | Shared | - | 20 dBm |
| 2155-2160 MHz (DL) | Fixed | Non-federal | 65 dBm | - |
| 2390-2483.5 MHz | Fixed and Mobile | Shared | 65 dBm | 20 dBm |
| 2500-2690 MHz\({}^{2}\) | Fixed and Mobile | Non-federal | 65 dBm | 20 dBm |
| 3550-3700 MHz\({}^{1,2,3}\) | Fixed and Mobile | Shared | 65 dBm | 20 dBm |
| 3700-5390 MHz\({}^{2}\) | Mobile | Non-federal | - | 20 dBm |
| 38.6-40.0 GHz | Fixed and Mobile | Non-federal | 65 dBm | 20 dBm |

TABLE III: AERPAW's FCC Innovation Zone frequencies. Footnotes: 1) Commission rules do not permit airborne use on all or portions of these bands. 2) Any experimental use must be coordinated with authorized users and registered receive-only fixed satellite earth stations. 3) Operations must be coordinated with a Spectrum Access System administrator.
be used as-is or to be modified by experimenters to fit their needs. The most popular example at the moment is the pre-determined trajectory sample application, where users specify a series of 3D waypoints to be traversed in order, including choices of the speed and wait times at each waypoint.
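To make the waypoint/state-machine abstraction concrete, the sketch below shows how a pre-determined trajectory mission could be structured. It is a minimal illustration only: the `Vehicle` class and its `takeoff`/`goto`/`wait`/`land` methods are hypothetical stand-ins and do not reproduce the actual aerpawlib API.

```python
# Minimal waypoint-mission sketch (hypothetical vehicle interface,
# not the real aerpawlib API).
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float      # degrees
    lon: float      # degrees
    alt: float      # meters above ground
    speed: float    # commanded ground speed, m/s
    wait_s: float   # hover time at the waypoint, s

class Vehicle:
    """Hypothetical stand-in for the vehicle control interface."""
    def takeoff(self, alt: float): ...
    def goto(self, wp: Waypoint): ...   # assumed to block until the waypoint is reached
    def wait(self, seconds: float): ...
    def land(self): ...

def run_mission(vehicle: Vehicle, waypoints: list) -> None:
    """States: TAKEOFF -> (GOTO -> WAIT)* -> LAND."""
    vehicle.takeoff(alt=waypoints[0].alt)
    for wp in waypoints:
        vehicle.goto(wp)            # GOTO state
        vehicle.wait(wp.wait_s)     # WAIT state; radio actions can hook in here
    vehicle.land()                  # LAND state
```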
The AERPAW framework also allows the experimenter's programs to take decisions on the fly, thus enabling autonomous applications, such as a radio-based search and rescue (SAR), where the next direction of movement can be chosen based on the current radio measurements.
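As an illustration of such an on-the-fly decision, the sketch below selects the next heading greedily from radio measurements; `measure_rssi` and `move` are hypothetical callbacks standing in for the experimenter's radio and vehicle hooks, not part of any AERPAW API.

```python
import math

def choose_next_heading(measure_rssi, move, step_m=20.0, n_bearings=8):
    """Greedy step for a radio-based search: probe evenly spaced bearings and
    move one step along the bearing with the strongest measured signal."""
    best_bearing, best_rssi = 0.0, -math.inf
    for k in range(n_bearings):
        bearing = 360.0 * k / n_bearings
        rssi = measure_rssi(bearing)   # e.g., a short probe or an antenna sweep
        if rssi > best_rssi:
            best_bearing, best_rssi = bearing, rssi
    move(best_bearing, step_m)         # command one step along the best bearing
    return best_bearing, best_rssi
```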
#### Autonomous Coordinated Multi-UAV Experiments
An additional feature supported by the application programming library provided by AERPAW is the ability of applications to synchronize the control of multiple vehicles. This is achieved either by using centralized control (where a coordinator program sends synchronized commands to multiple vehicles) or decentralized applications (where programs on the companion computer of each of the vehicles coordinate without a centralized conductor). This ability can be leveraged to allow for swarm control. Fig. 6 shows the traces followed by two drones in a coordinated drone experiment, where one drone (the tracer) follows a list of waypoints, while the second drone (the orbiter) shadows the tracer by moving at the same time in the same direction, and upon reaching the target waypoint, it orbits around the tracer once before they both move to the next waypoint.
This experiment is initially designed and tested in the emulation environment and subsequently executed in the testbed environment. More complicated swarm experiments with a larger number of drones and including communication links with SDRs can be easily carried out using the same workflow. Autonomous decisions can be integrated into the experiment, where the drones can make next waypoint decisions based on the observations of wireless signals.
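On the orbiter side, the coordination logic reduces to a small amount of geometry once the tracer's position is known; the sketch below computes one orbit of offset waypoints in a local ENU frame (all names and values are illustrative, not the code used in the experiment).

```python
import math

def orbit_waypoints(tracer_xy, radius_m=10.0, n_points=12, alt_m=30.0):
    """Return one loop of (x, y, z) points circling the tracer position,
    expressed in a local east-north-up (ENU) frame."""
    cx, cy = tracer_xy
    return [(cx + radius_m * math.cos(2.0 * math.pi * k / n_points),
             cy + radius_m * math.sin(2.0 * math.pi * k / n_points),
             alt_m)
            for k in range(n_points)]

# Example: one orbit around a tracer reported at (120 m, -45 m)
loop = orbit_waypoints((120.0, -45.0))
```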
Other testbeds can, of course, use alternate methodologies for providing programmatic online trajectory control to experimenters, and repeatability of mobility profiles for experiments. We have described AERPAW's approach above not to advocate it as the only way, but rather to articulate the level of programmability and repeatability that experimenters should be able to expect from a testbed facility.
### _Emulation Support_
AERPAW has well-articulated emulation support for both RF and air/mobility aspects of experiments. In the "Development session" mentioned earlier, users can prepare their experiments with perfectly repeatable trajectories and wireless propagation. The main goal of providing the emulation environment is to allow users to develop their experiments in a safe and fully repeatable environment.
Fig. 7(a) depicts an example experiment comprising a portable node on the left and a fixed node on the right while deployed in the emulation environment. In emulation mode, the experimenters' code (encapsulated in the two E-VMs and shown in green in the picture) runs with no modifications in comparison with an experiment in testbed mode. In contrast, in emulation mode, the vehicle and the wireless channel are emulated, thus allowing for a full software emulation, amenable to cloud deployment.
For vehicle emulation, we use an open-source emulator developed by the ArduPilot community, whose main characteristic is the use of the _same_ firmware as the autopilot we use on all our vehicles (at this time, drones, rovers, a helikite, and a push-cart). Careful comparisons between the performance of the emulated vehicles and the testbed vehicles show that the vehicle emulator performs very realistically.
In contrast, for the wireless channel emulator (CHEM), to the best of our knowledge, there is no open-source solution that satisfies all our requirements; therefore, we developed our own solution. Fig. 7(b) shows the main components involved in the CHEM. In general, each radio-enabled node in the testbed is capable of both transmitting and receiving radio signals, which we capture at baseband, IQ level. The IQ samples are sent to the channel emulator, which then "propagates" them to the corresponding receivers. The propagation in CHEM is
Fig. 5: AERPAW vehicle control stack.
Fig. 6: Sample vehicle experiment with two coordinated drones: the tracer (red) goes through a list of waypoints, while the orbiter (yellow) orbits around the tracer while at the waypoint.
controlled by the channel control module, which dynamically computes a channel matrix based on both dynamic information (e.g., the current mobile node positions and orientations), as well as static information (e.g., position of the fixed nodes, antenna patterns, transmitter gains, etc.).
The CHEM supports several features, including free space and two-ray ground propagation models, two noise models, MIMO channels, up to 100 MHz of instantaneous bandwidth, multi-rate processing, different antenna patterns, multiple frequencies, and, importantly for efficiency, suppressing silences for bursty traffic.
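To give a flavour of what the channel control module computes, the sketch below applies a single free-space tap to a block of baseband IQ samples for one transmitter-receiver pair. It is a deliberately simplified stand-in for the CHEM: delay, Doppler, MIMO, noise, and multi-rate processing are all omitted.

```python
import numpy as np

def free_space_gain(distance_m, freq_hz, tx_gain_dbi=0.0, rx_gain_dbi=0.0):
    """Amplitude gain of a single free-space tap (Friis path loss)."""
    wavelength = 3.0e8 / freq_hz
    path_loss_db = 20.0 * np.log10(4.0 * np.pi * distance_m / wavelength)
    gain_db = tx_gain_dbi + rx_gain_dbi - path_loss_db
    return 10.0 ** (gain_db / 20.0)

def propagate(iq_tx, distance_m, freq_hz):
    """Scale a transmitted IQ block by the channel gain (single tap, no delay)."""
    return free_space_gain(distance_m, freq_hz) * iq_tx

# Example: 1 ms of a complex tone at a 3.51 GHz carrier over a 500 m link (illustrative values)
fs = 2e6
t = np.arange(int(1e-3 * fs)) / fs
iq_rx = propagate(np.exp(2j * np.pi * 1e5 * t), distance_m=500.0, freq_hz=3.51e9)
```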
Once again, we have described AERPAW's approach above not to advocate it as the only way, but to articulate the level of emulation support we find required for an Open RAN testbed. Regarding AERPAW itself, while it has a good base from which to provide emulation support for Open RAN experiments, it would remain a non-trivial task to develop/procure and incorporate the large volume of software modules that would be required to be integrated into this framework in order to provide emulation support for a comprehensive complement of Open RAN experiments. In the next section, we return to this topic briefly.
### _Programmability, Radios, Software Stack_
AERPAW does not currently incorporate a full reference O-RAN implementation, although some component parts exist. The edge-cloud model of companion computers at every AERPAW Radio Node (including both fixed and portable nodes) allows for an easy transition into Open RAN softwarized radio modules, as such modules become available and integrated into the testbed.
The Software Defined Radios of AERPAW represent a potential strength in a possible transition path to full Open RAN support since experimenting with evolving or innovative radio protocols is reduced to an exercise of software development and integration.
The AERPAW team provides a variety of SDR sample experiments for experimenters to work with using open-source software and USRP SDRs from NI. Any AERPAW user can start with one of these experiments and develop their code further to research, e.g., different protocols and waveforms. AERPAW presently supports four different sets of open-source software for SDR experiments: srsRAN [39, §4.1.1], OpenAirInterface [39, §4.1.2], GNURadio [39, §4.1.3], and Python scripts [39, §4.1.4]. A variety of sample experiments are provided in AERPAW's user manual for each case under Section 4.1 [39, §4.1].
In Table IV, we provide a list of SDR sample experiments that are currently available or to be available by the end of AERPAW's Phase-2 (May 2023). An additional set of SDR experiments is expected to be added for general availability by the end of Phase-3 (expected May 2024). All these experiments are tested both in the development environment and the testbed environment of AERPAW. While experimenters can
| **Software** | **Sample Experiment** | **Comments** |
|---|---|---|
| srsRAN | SE1: Multi-node LTE SISO | Complete end-to-end LTE network with multiple srsUE nodes, one srsENB, and one srsEPC |
| | SE2: LTE Cell Scan | Search for LTE cells and capture key parameters of interest |
| | SE3: Two-Node LTE MIMO | Complete end-to-end 2x2 MIMO LTE network, using srsUE with srsENB and srsEPC |
| | SE4: Multi-Node IoT | Basic NB-IoT signaling between the eNB and UE nodes |
| | SE5: LTE Handover | Complete end-to-end LTE network with S1 handover, using srsUE with srsENB and open5GS |
| | SE6: Single-Node 5G SA | Complete end-to-end 5G SA network, using srsUE with srsENB and open5GS |
| OAI | OE1: Two-Node LTE SISO | Complete end-to-end LTE network, using the OAI eNB and srsUE |
| | OE2: Single-Node 5G SA | Complete end-to-end 5G SA network, using the OAI gNB and srsUE |
| GNU Radio | GE1: OFDM TX-RX | Send and receive data using an OFDM waveform |
| | GE2: Channel Sounder | A pseudo-random sequence of bits is transmitted/received for channel sounding |
| | GE3: LoRa PHY TX-RX | LoRa transceiver with all the necessary receiver components |
| UHD Python API | UHD1: Spectrum Monitoring | Sweep-based spectrum monitoring between 87 MHz and 6 GHz |
| | UHD2: IQ Collection | IQ samples are collected at the desired center frequencies, with a given sampling rate, for a specified duration |

TABLE IV: AERPAW example experiments with SDRs.
Fig. 7: AERPAW emulation environment overview.
also bring their own software to the platform, AERPAW cannot guarantee that it will work smoothly with the existing AERPAW hardware and software and the development environment. For further details, readers are referred to AERPAW's user manual [39, §4.1].
AERPAW also includes similar prepared experiment profiles for commercial radio equipment available in the testbed (see Table V), but they are relevant in the Open RAN context mainly as potential support equipment, so we do not discuss them further here.
### _Summary - Open RAN Related Components of AERPAW_
While AERPAW was not initially designed as an Open RAN testbed, its open, modular, and flexible design allows expanded support for Open RAN use cases as a living lab for UAVs with comparative ease. The AERPAW team filled out, upon request, a survey in November 2022 developed by the recently established Open RAN working group of the National Spectrum Consortium (NSC) [51]. This survey was shared by NSC members with existing testbed platforms that may potentially support Open RAN experiments in the future. In Table VI, we present a revised version of NSC's Open RAN survey and include comments on AERPAW's features and capabilities that can support Open RAN experiments with controlled aerial mobility. In particular, we highlight open and programmable end-to-end network capabilities as well as commercial 5G equipment deployments in AERPAW, on-site access to wireless spectrum, the different experimentation capabilities supported, compute nodes, unique use case testing scenarios, and testing types, among other related platform features.
The information provided in Table VI relates specifically to the match and extensibility of AERPAW as a meaningful Open RAN testbed for use cases with controlled air mobility. However, the exercise of preparing this table affords us practical insights into designing and building such an Open RAN testbed, to complement our observations in Section II, and we pass these on to the community here.
## V Representative Results Related to Open RAN and Controlled Air Mobility
In this section, we present two early representative experiments from AERPAW that are of relevance for Open RAN experiments. We also elaborate on other possible experiments of relevance to Open RAN that may be supported in AERPAW in the future.
### _RAN Slicing xApp Experiments_
In this section, we provide representative results using the RAN slicing xApp and srsRAN, using the framework by the NSF POWDER Wireless platform [52], executed at the AERPAW testbed. (Note that these features have not yet been integrated into AERPAW's development and transition-to-testbed environments; we are exploring integration options at this time.) The goal is to dynamically create network slices and observe the effects of slice reconfiguration with a TCP stream on the performance of a UE. A near Real-time RIC is deployed as part of two separate Kubernetes clusters. Detailed steps are provided in [53]; here we provide a high-level overview of the architecture. The _RIC cluster_ is used for deploying the platform and applications which are part of the RIC, whereas the _Aux cluster_ is used to deploy other auxiliary functions. The RIC Kubernetes cluster installation is done through configuration scripts and pre-generated helm charts for each of the RIC components. Once this process is done, we create a persistent volume through a storage class for the InfluxDB in the RIC platform namespace. Once the RIC platform is deployed, a modified E2 termination is created, which has a few services enabled to communicate and exchange messages between the RIC and the E2 Agent [53].
Once the Kubernetes clusters are deployed, we can deploy the Near Real-time RIC using a RECIPE file, which provides customized parameters for the configuration of a particular deployment group. This Recipe file can be tinkered with if we want to change any configuration to suit our requirements. Next is the installation of srsRAN components such as srsUE, srsENB, and srsEPC, which use the ZeroMQ networking libraries. Since we use the ZeroMQ mode, the 4G/5G network can be set up using a single machine that hosts both the RIC and srsRAN components. Finally, the xApp is onboarded and deployed on top of the Near Real-time RIC, and the full integration is completed.
Using this setup, we create two network slices in a work-conserving mode and bind two srsUEs to these network slices. Some representative results are presented in Fig. 8 for two
| **Hardware** | **Sample Experiment** | **Comments** |
|---|---|---|
| Ericsson | EE1: 5G Modem RF Logging and Throughput | Quectel modem logs various KPIs from the 4G/5G Ericsson network |
| Keysight RF Sensors | KE1: Spectrum Monitoring | Monitor and record spectrum up to 6 GHz |
| | KE2: Signal Classification | Classify and detect a variety of signals based on their RF signature |
| | KE3: Signal Source Tracking | TDOA-based localization of a signal source by passive monitoring of its RF signature |

TABLE V: AERPAW example experiments with commercial RF hardware.
Fig. 8: Representative results on O-RAN slicing xApp using srsRAN with two UEs.
different bandwidths, which show the throughput of one of the UEs. We configure the slice scheduler in steps to alter the proportionate scheduling in different ways and observe the effects on the TCP stream for the UE [54, 55]. An iPerf server is created in the UE namespace, with a corresponding iPerf client on the other end, to observe the effects of dynamic RAN slicing [56]. We create two slices, referred to as _fast_ and _slow_, where each slice can be dynamically configured to share the bandwidth. For the baseline scenario, the full bandwidth of 15 PRBs (100 PRBs) is initially allocated to the unsliced UE, which gives a throughput of around 35-40 MBps (170 MBps), as illustrated in Fig. 8.
After this, the resources are distributed with the 80:20 configuration among the two UEs. The results in Fig. 8 show that the UE's throughput falls to 27 MBps (140 MBps) for this configuration, and when the priorities are inverted between the fast and slow slices to 20:80, the throughput further reduces to 6-7 MBps (40 MBps). Finally, when the priorities are equalized to 50:50 configuration, the throughput increases to 16-17 MBps (70 MBps) for the first UE. The results can be easily extended to a larger number of UEs and more complicated resource configurations.
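The measured values are broadly consistent with a simple work-conserving proportional share of the unsliced baseline; the sketch below reproduces that first-order expectation for the configurations above. It ignores TCP dynamics and scheduler overheads, so the numbers are indicative only.

```python
def slice_share(baseline_rate, fast_weight, slow_weight, ue_slice="fast"):
    """First-order estimate: each slice receives its proportional share of the
    unsliced baseline rate (work-conserving scheduler, both slices backlogged)."""
    weight = fast_weight if ue_slice == "fast" else slow_weight
    return baseline_rate * weight / (fast_weight + slow_weight)

# Baseline of ~40 (same units as the measured unsliced throughput at 15 PRBs)
for fast, slow in [(80, 20), (20, 80), (50, 50)]:
    print(f"{fast}:{slow} -> {slice_share(40.0, fast, slow):.1f} for the fast-slice UE")
```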
Our future work includes implementing this same scenario in AERPAW's development and testbed environments with multiple controllable vehicles. The throughput needs and the link qualities of UEs will change dynamically over time as the vehicles move around, and there is a need to have a dynamic slicing mechanism that satisfies the requirements of individual network slices. AERPAW can support development and testing in such dynamic RAN slicing scenarios, first in the emulation environment, and then in the testbed mode with realistic propagation conditions. Programmable mobility
| **Capability** | **O-RAN Related Components** | **AERPAW Availability** |
|---|---|---|
| Open and Programmable End-to-End Network | Multiple SDRs connected to power and network backhaul | USRPs, Keysight RF sensors |
| | Indoor wireless operations in a lab | N/A |
| | Outdoor wireless operations | Rural farm and urban campus |
| | Open 5G mobile cores | Open5GS |
| | Open fronthaul interface for testing open RUs | Not currently available |
| | Open source software stacks ready to use with or without additional software development | srsRAN, OAI, GNURadio, I/Q collection with sample experiments [39] |
| | Open source RIC implementation | Not currently available |
| | BYOD operation | Yes (on a case-by-case basis) |
| | BYOS operation | Yes (on a case-by-case basis) |
| | Bare metal for software installations | Not currently available |
| | Containers for software installations | Yes - both in emulation and testbed modes |
| | Remote access to network resources | Yes during development (emulation) mode, not normally during testbed mode |
| End-to-End Network with Commercial Equipment and Swappable Components | Commercial equipment | Ericsson 4G/5G network |
| | Indoor wireless operations | N/A |
| | Outdoor wireless operations | Rural farm area |
| | Commercial 5G mobile cores | Ericsson NSA core network (Release-15) |
| | Includes one or more of a commercial RIC, CU, DU, and RU | Not currently available |
| | Open fronthaul interface enabling testing of open RUs to support different physical layers | Not currently supported |
| On-site Access to Spectrum | Unlicensed or ISM band | 900 MHz for aerial communications with SDR front ends |
| | CBRS spectrum and CBRS SAS features | N/A |
| | Licensed spectrum from a spectrum owner | N/A |
| | Experimental or Innovation Zone licensed spectrum | Yes - FCC Innovation Zone with 13 bands in 0.6-40 GHz [39] |
| Techniques | Channel emulation systems | Software emulation available now [39], Keysight Propsim (32 ports) |
| | Multiple modes of massive MIMO | Not presently available - mmWave UAV capabilities with 4x4 Sivers |
| | Emulation capabilities for the RIC, CU, DU, RU, and UE | Presently not available |
| Compute Capacity | One optical hop | Yes |
| | Edge compute | Yes - Dell 5820 Server at fixed nodes, Intel NUC (9) at portable nodes carried by AERPAW vehicles |
| | Public cloud computing | Not presently supported |
| Unique Use Case Testing | Drone support | Multiple different custom drones for different use cases |
| | Rural and urban environment | Yes (autonomous drone experiments available only in rural) |
| | Military base | N/A |
| | Smart agriculture | Deployment in the Lake Wheeler agricultural farm of NC State [39] |
| Testing Types | Research and development | Free access by NSF-funded academic researchers, charge-based access for other researchers |
| | Compliance (3GPP, ETSI, O-RAN, etc.) | 3GPP compliant open-source and commercial 4G/5G hardware/software |
| | Interoperability | Partial |
| | Security | Partial |
| | Performance/stress testing | Partial |
| Others | Research staff availability | Yes (multiple research associates/students for research support) |
| | Operational staff availability | Yes (multiple research associates/students to support experiments) |
| | Wireless certification program | Not presently supported |
| | Established connections to standards/specifications organizations | NextG Alliance, Open Generation Alliance, GUTMA, Linux Foundation InterUSS Platform [50] |

TABLE VI: AERPAW features and capabilities related to Open RAN.
with multiple vehicles in both environments will make it possible to have a testing environment that provides repeatable measurements involving precise mobility control for the UEs and, in some cases, mobile relays and mobile base stations with wireless backhaul.
### _I/Q Sample Collection Experiments_
In Fig. 9, we provide representative results for the UHD2: IQ Collection sample experiment shown in Table IV. The UAV is programmed to fly at five different altitudes, and the USRP B205mini on the UAV collects IQ samples centered at 3.51 GHz with a sampling rate of 2 MHz. The only signal that can be observed in the spectrogram in this band is an LTE signal of 1.4 MHz bandwidth, transmitted from a USRP B205mini that runs srsRAN at our LW1 fixed node. We post-process the collected I/Q samples using Matlab's 4G toolbox, obtain the RSRP for each I/Q sample location, and plot the RSRP over the trajectory. Additional details of the measurement setup and representative results are available in [57], including further post-processing with Matlab's 4G toolbox, such as coherence time and coherence bandwidth with respect to the distance between the UAV and the fixed node, kriging interpolation of the received signal across the whole 3D volume, channel estimation, and synchronization procedures, among others.
A similar experiment can be carried out to capture I/Q samples and evaluate the KPIs for any Open RAN based 5G system with varying locations of UAVs and UGVs. One or more of the SDR, commercial wireless, or vehicle control sample experiments from AERPAW's sample vehicle experiment repository, such as the one illustrated in Figure 6 above, can be used simultaneously with the I/Q sample collection experiment, to collect the raw I/Q data at the finest granularity and post-process them in Matlab's 4G and 5G toolboxes to generate desired KPIs. Such data collected in realistic propagation conditions can be made publicly available to the research community for furthering the research in controlled aerial mobility technologies.
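For a quick sanity check on such captures without the 4G toolbox, the average received power of an I/Q file can be computed directly in Python, and it tracks RSRP up to a calibration offset. The binary layout assumed below (interleaved float32 I/Q) and the file name are assumptions to be matched to the actual recorder settings.

```python
import numpy as np

def load_iq(path):
    """Load interleaved float32 I/Q samples (assumed layout) as a complex array."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]

def average_power_db(iq):
    """Mean received power in dB (relative units); tracks RSRP up to an offset."""
    return 10.0 * np.log10(np.mean(np.abs(iq) ** 2))

# Example usage with a hypothetical capture from one altitude segment:
# iq = load_iq("lw1_3p51GHz_alt50m.iq")
# print(average_power_db(iq))
```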
## VI Conclusion
Open RAN expands the capabilities of 5G to support features and functions tied directly to use cases. Disaggregation and virtualization are well suited to UAVs/drones, which will continue to grow and become a much greater part of the 5G network, whether as a UE or acting as an O-RU, O-DU, or O-CU component of the network architecture. However, testing and validation are critical to successful integration into 5G and the expansion of Open RAN network capabilities.
Creating a testbed that supports UAVs poses challenges to meeting all the demands, from the physical network to Open RAN interoperability needs. For the UAV market to grow and flourish, testing and validation are necessary. As rules and regulations remain volatile in the immediate future, a UAV Open RAN lab can provide extremely valuable technical results to inform such actions.
In this paper, we have provided conclusions drawn from our experience and expertise gained from designing AERPAW, a one-of-a-kind public advanced wireless testbed that provides programmable radio and vehicle control in a realistic outdoor area of considerable span, and also reflected on its fit as a possible Open RAN / UAV testbed in future. We hope these observations may be helpful to the community of designers of other such facilities.
## Acknowledgement
The authors would like to thank the PAWR Project Office (PPO) and AERPAW project partners including project personnel from Mississippi State University, Wireless Research Center, RENCI, University of South Carolina, and Purdue University, for their contributions to developing the AERPAW infrastructure and for their feedback on this manuscript.
|
2302.08364 | On the Mechanical, Electronic, and Optical Properties of 8-16-4
Graphyne: A 2D Carbon Allotrope with Dirac Cones | Due to the success achieved by graphene, several 2D carbon-based allotropes
were theoretically predicted and experimentally synthesized. We used density
functional theory and reactive molecular dynamics simulations to investigate
the mechanical, structural, electronic, and optical properties of 8-16-4
Graphyne. The results showed that this material exhibits good dynamical and
thermal stabilities. Its formation energy and elastic moduli are -8.57 eV/atom
and 262.37 GPa, respectively. This graphyne analogue is a semi-metal and
presents two Dirac cones in its band structure. Moreover, it is transparent,
and its intense optical activity is limited to the infrared region. Remarkably,
the band structure of 8-16-4 Graphyne remains practically unchanged at even
moderate strain regimes. As far as we know, this is the first 2D carbon
allotrope to exhibit this behavior. | Raphael M. Tromer, Marcelo L. Pereira Junior, Kleuton A. L. Lima, Alexandre F. Fonseca, Luciano R. da Silva, Douglas S. Galvao, Luiz A. Ribeiro Junior | 2023-02-16T15:30:28Z | http://arxiv.org/abs/2302.08364v2 | On the Mechanical, Electronic, and Optical Properties of 8-16-4 Graphyne: A 2D Carbon Allotrope with Dirac Cones
###### Abstract
Due to the success achieved by graphene, several 2D carbon-based allotropes were theoretically predicted and experimentally synthesized. We used density functional theory and reactive molecular dynamics simulations to investigate the mechanical, structural, electronic, and optical properties of 8-16-4 Graphyne. The results showed that this material exhibits good dynamical and thermal stabilities. Its formation energy and elastic moduli are -8.57 eV/atom and 262.37 GPa, respectively. This graphyne analogue is a semi-metal and presents two Dirac cones in its band structure. Moreover, it is transparent, and its intense optical activity is limited to the infrared region. Remarkably, the band structure of 8-16-4 Graphyne remains practically unchanged at even moderate strain regimes. As far as we know, this is the first 2D carbon allotrope to exhibit this behavior.
2D Carbon Allotrope, Graphynes, 8-16-4 Graphyne, Sun-Graphyne, Molecular Dynamics, Density Functional Theory
## I Introduction
Following the discovery of graphene in 2004 [1], there is a renewed interest in 2D carbon materials [2; 3; 4; 5]. Their physicochemical properties can be controlled depending on the synthesis process [6; 7]. Several 2D carbon allotropes have been proposed [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18], and a few of them have already been experimentally realized [19; 20; 21; 22]. Among the recently synthesized structures, it is worth mentioning monolayer amorphous carbon (MAC) [19], the 2D biphenylene network (BPN) [20], the monolayer fullerene network (2DC\({}_{60}\)) [22], and the multilayer \(\gamma\)-Graphyne [21].
MAC and BPN share some graphene properties, such as a zero semi-metal bandgap and the Dirac cones (corresponding to a linear dispersion). MAC has randomly distributed defects with five, six, seven, and eight rings of carbon atoms. Its lattice arrangement differs from disordered graphene. The BPN lattice contains a periodic arrangement of four, six, and eight carbon rings. On the other hand, \(\gamma\)-Graphyne and 2DC\({}_{60}\) present a small direct bandgap of 0.48 eV [21] and direct semiconducting bandgap of 1.6 eV [22], respectively.
The lattice structure of \(\gamma\)-Graphyne can be understood as graphene uniformly expanded by inserting two-carbon acetylenic units between all the aromatic rings. The graphyne structures were predicted in 1987 [23]. Graphynes are a generic name for structures where acetylene groups are inserted into single bonds. Graphyne-based molecules (fullereneynes) [24], nanotubes [25], and nanoscrolls [26] have been reported in the literature.
2DC\({}_{60}\) crystals are formed by C\({}_{60}\) polymers covalently bonded in a planar configuration. This clustering mechanism yielded two stable crystals of polymeric C\({}_{60}\) in closely packed quasi-hexagonal and quasi-tetragonal phases [22]. \(\gamma\)-Graphyne and 2DC\({}_{60}\) overcome the problem of a null bandgap shown by other 2D carbon-based materials, which limits their applications in digital electronics.
Other carbon-based materials (and materials composed of different atomic species) with Dirac cones were proposed [9; 27; 28; 29; 30]. S-graphene [31], and 6,6,12-graphyne [32] are examples of 2D carbon materials that can present an additional Dirac point in their band structure profiles. An external strain applied to these materials can tune the Dirac point. In moderate strain regimes, the two Dirac cones merge into only one cone. Under high strain rates, it was observed a bandgap transition from semimetallic to semiconductor, and the Dirac cones disappeared with the opening of the bandgap.
It might be of interest for some applications in flexible electronics that the optoelectronic properties of the material do not undergo any substantial change when subjected to external stress. To our knowledge, no discussions on a 2D carbon-based material with this property have been presented in the literature. Therefore, this work aims to fill this gap.
Herein, we study the mechanical, structural, electronic, and optical properties of 8-16-4 Graphyne [33], a 2D carbon-based material that presents two Dirac cones in its band structure. For simplicity, from now on we name it Sun-Graphyne (S-GY) (see Figure 1), considering that its lattice structure resembles other graphyne varieties and contains two rings with eight atoms [34; 35; 23]. Moreover, this name is also motivated by the atomic arrangement of its unit cell, which resembles typical drawings of the sun. We investigated the S-GY electronic, mechanical, and optical properties using density functional theory (DFT) and _ab initio_ and classical molecular dynamics (MD) methods. Our analyses revealed that S-GY is thermally stable up
to high temperatures.
The S-GY structure belongs to the class of theoretically designed graphyne structures. Its design starts from octagraphene [36], with acetylene groups inserted into the bonds of the square octagraphene rings. It differs from the so-called T-graphyne (also called octa-graphyne) [37; 38], which contains only one eight-atom ring. S-GY differs from other Dirac materials because its electronic band structure does not significantly change under a moderate strain regime.
## II Methodology
### DFT Calculations
DFT simulations were used to study the S-GY electronic and optical properties. The simulations were carried out using the SIESTA code [39], within the framework of the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [40; 41].
Norm-conserving Troullier-Martins pseudopotentials were used to describe the core electrons [42]. The calculations considered van der Waals (vdw-DFT) corrections [43; 44; 45]. Double-zeta plus polarization (DZP) was used as the basis set. A vacuum region of 20 A was employed to prevent spurious interactions among the periodic images. We assumed a cut-off value of 300 eV for the kinetic energy. The k-grid was \(10\times 10\times 1\) for geometry optimizations and \(30\times 30\times 1\) for electronic and optical calculations, respectively.
We used the _ab initio_ MD (AIMD) approach in simulations considering finite temperatures, with an NVT ensemble and an integration time step of 1.0 fs, for a total simulation time of 2 ps. A Nosé thermostat keeps the temperature constant once equilibrium is reached [46].
To perform the optical calculations, we considered a standard external electric field of 1.0 V/A along the x, y, and z directions [47]. From the Kramers-Kronig relation and Fermi's golden rule, we can obtain the real (\(\epsilon_{1}\)) and imaginary (\(\epsilon_{2}\)) parts of the dielectric constant, respectively:
\[\epsilon_{1}(\omega)=1+\frac{1}{\pi}P\int_{0}^{\infty}d\omega^{\prime}\frac{ \omega^{\prime}\epsilon_{2}(\omega^{\prime})}{\omega^{\prime 2}-\omega^{2}}, \tag{1}\]
where \(P\) is the Cauchy principal value, and
\[\epsilon_{2}(\omega)=\frac{4\pi^{2}}{V_{\Omega}\omega^{2}}\sum_{i\in\mathrm{VB },\,j\in\mathrm{CB}}\sum_{k}W_{k}\left|\rho_{ij}\right|^{2}\delta(\varepsilon _{kj}-\varepsilon_{ki}-\hbar\omega), \tag{2}\]
where \(W_{k}\) is the k-point weight in the reciprocal space, \(\rho_{ij}\) the dipole transition matrix element, \(\omega\) the photon frequency, and \(V_{\Omega}\) the unit cell volume. VB and CB are the valence and conduction bands, respectively [16].
Once the real and imaginary parts of the dielectric constant are obtained, the other relevant optical coefficients, such as the absorption coefficient \(\alpha\), the refractive index (\(\eta\)), and reflectivity (\(R\)), can be derived as:
\[\alpha(\omega)=\sqrt{2}\omega\bigg{[}(\epsilon_{1}^{2}(\omega)+\epsilon_{2}^ {2}(\omega))^{1/2}-\epsilon_{1}(\omega)\bigg{]}^{1/2}, \tag{3}\]
\[\eta(\omega)=\frac{1}{\sqrt{2}}\bigg{[}(\epsilon_{1}^{2}(\omega)+\epsilon_{2}^{2}(\omega))^{1/2}+\epsilon_{1}(\omega)\bigg{]}^{1/2}, \tag{4}\]
and
\[R(\omega)=\bigg{[}\frac{(\epsilon_{1}(\omega)+i\epsilon_{2}(\omega))^{1/2}-1 }{(\epsilon_{1}(\omega)+i\epsilon_{2}(\omega))^{1/2}+1}\bigg{]}^{2}. \tag{5}\]
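Once \(\epsilon_{1}(\omega)\) and \(\epsilon_{2}(\omega)\) are available on an energy grid, Eqs. (3)-(5) reduce to simple array operations. The short sketch below evaluates them with NumPy; unit conversions for \(\alpha\) (e.g., to cm\(^{-1}\)) are omitted, and the input arrays are placeholders for the computed dielectric functions.

```python
import numpy as np

def optical_coefficients(omega, eps1, eps2):
    """Evaluate Eqs. (3)-(5) from the real and imaginary dielectric functions."""
    mod_eps = np.sqrt(eps1**2 + eps2**2)
    alpha = np.sqrt(2.0) * omega * np.sqrt(mod_eps - eps1)          # Eq. (3)
    eta = np.sqrt(0.5 * (mod_eps + eps1))                           # Eq. (4)
    sqrt_eps = np.sqrt(eps1 + 1j * eps2)
    reflectivity = np.abs((sqrt_eps - 1.0) / (sqrt_eps + 1.0))**2   # Eq. (5)
    return alpha, eta, reflectivity
```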
### Classical MD Simulations
We also performed classical reactive MD simulations to investigate the S-GY fracture patterns, dynamics, and thermal stability. The fully atomistic MD simulations were performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code [48; 49]. A reactive force field is required since fracture involves bond break and formation. For this reason, we used the Adaptive Intermolecular Reactive Empirical Bond Order potential (AIREBO) [50].
Newton's equations of motion were integrated using the Velocity-Verlet algorithm. The integration time step was 0.1 fs, and an NPT ensemble was considered using the Nosé-Hoover thermostat [51]. The simulations were performed from room temperature up to 7000 K at null pressure. The S-GY lattice was initially equilibrated for 200 ps to eliminate residual stress.
The systems considered here have an area of approximately 104 nm\({}^{2}\), consisting of \(14\times 14\times 1\) replications of the unit cell indicated in Figure 1 (which contains 16 carbon atoms), totaling 3136 atoms, with periodic boundary conditions. Along the \(z\)-direction, a 100 A lattice constant (vacuum buffer zone) was used to avoid the interaction between the S-GY and its images.
For the strain simulations, only the \(x\)-direction was considered due to the symmetry of the S-GY unit cell (see Figure 1). The strain simulations were carried out by applying external stress by increasing the simulation box size with a constant uniaxial strain rate of \(10^{-6}\) fs\({}^{-1}\) at room temperature. We can obtain the stress-strain curves and elastic properties from this simulation protocol. From representative MD snapshots, we can analyze the fracture patterns and dynamics.
The thermal stability of the S-GY was investigated using a heating ramp protocol. The temperature was increased from room temperature up to 7000K using a constant rate of 2 K/ps and an NVT ensemble during 1 ns. From the obtained results, we estimated the S-GY melting point. The MD snapshots and trajectories were obtained using the visualization and analysis software VMD [52].
## III Results
### Stability and Structural Properties
The schematic representation of the optimized S-GY lattice is shown in Figure 1. Its symmetric unit cell is highlighted in the square. The unit cell vectors are \(|\vec{v_{1}}|=|\vec{v_{2}}|=7.36\) A. S-GY has an all-carbon structure periodically arranged with two different eight-atom rings. The bond length values are \(\overline{\text{C}_{1}\text{C}_{2}}=1.24\) A and \(\overline{\text{C}_{2}\text{C}_{3}}=1.42\) A for one ring, and \(\overline{\text{C}_{3}\text{C}_{4}}=1.42\) A and \(\overline{\text{C}_{4}\text{C}_{5}}=1.48\) A for the other ring, as shown in Figure 1. These values are typically found in similar 2D carbon allotropes [53; 54]. It is worth mentioning that S-GY formation energy is -8.57 eV/atom, similar to graphene (-8.8 eV/atom) [55; 56; 47].
To test the S-GY structural stability, we performed AIMD simulations considering a total time of 2 ps at a temperature of 1000 K. In Figure 2, we show the representative AIMD snapshot corresponding to the final moment of the simulation. This figure shows that the S-GY structure remains practically unchanged at high-temperature regimes, with only small perturbations (some of the rings are slightly tilted) due to thermal fluctuations. The calculated formation energy value and the AIMD results indicate that S-GY is structurally stable. The recent advances in the graphyne syntheses [57; 35; 58] suggest that S-GY is possibly synthesizable.
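For reference, the formation energy per atom quoted above follows the usual bookkeeping of comparing the relaxed lattice with isolated carbon atoms; a minimal sketch of that convention is given below (the input energies are placeholders to be read from the DFT outputs).

```python
def formation_energy_per_atom(e_total_ev, n_atoms, e_isolated_atom_ev):
    """E_f = (E_total - N * E_atom) / N, in eV/atom (one common convention)."""
    return (e_total_ev - n_atoms * e_isolated_atom_ev) / n_atoms
```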
### Electronic Properties
In Figure 3, we present the S-GY electronic band structure for the unstrained case (Figures 3(a-b)) and under biaxial strain of 5% (Figures 3(c-d)) and 10% (Figures 3(e-f)). Their corresponding densities of states are also presented. As a general trend, we can see two Dirac points at the middle of the \(X\to M\) and \(M\to Y\) integration paths, with a narrow gap of about 5 meV. This feature indicates that the S-GY electrons would behave as massless Dirac fermions, similar to the graphene case. We can also see that S-GY is symmetric from \(M\rightarrow\Gamma\) or \(M\leftarrow\Gamma\), presenting isotropic transport channels.
We can see that the S-GY electronic band structure presented in Figures 3(a-b) remains practically unchanged even for moderate strain regimes. There is no symmetry inversion as in other 2D carbon-based materials with Dirac's cones. Figures 3(c-d) and 3(e-f) present the S-GY band structure for 5% and 10% for the biaxial strain, respectively. Both band configurations resemble the unstrained case. In other graphyne materials [32], the band structure can be tuned under the applied stress. In these cases, the two Dirac cones merge into only one cone for a biaxial strain of 6% [32]. Moreover, a bandgap was observed for a strain value of 8% [32], contrasting with the present case, where a Dirac cone remains stable even for larger strain values.
To further investigate the band configuration of S-GY, we also analyzed the effect of the bilayer interactions on the electronic properties, as shown in Figure 4. As can be
Figure 1: Schematic representation of the S-GY structure. The symmetric unit cell is highlighted by the square.The unit cell vectors are \(|\vec{v}_{1}|=|\vec{v}_{2}|=7.36\) Å.
seen from this figure, the leading electronic features are similar to the monolayer case but with two extra Dirac cones. The new cones denote the contribution of an additional layer to the electronic properties.
### Optical Properties
In Figure 5, we present the optical coefficients as a function of photon energy from 0 to 20 eV with an external electric field polarized along the x and y directions. The absorption (Figure 5(a)) starts close to 0 eV (\(\pi\) to \(\pi^{*}\) transitions, inferred from the density of states). This trend is expected since S-GY is a semi-metallic material with an electronic bandgap of 5 meV, as mentioned above. We observe several peaks from the infrared to the ultraviolet regions.
Figure 4: Electronic band structure configuration for the S-GY bilayer.
Figure 3: S-GY electronic band structure for the unstrained (a-b), and with (c-d) 5% and (e-f) 10% of applied biaxial-strain.
Figure 2: Representative AIMD snapshots corresponding to the initial and final moments of the simulation. Here we considered a total time of 2 ps and a temperature of 1000 K.
The maximum absorption intensity is about \(3\times 10^{5}\)cm\({}^{-1}\) for photon energies up to 12 eV and increases to \(5\times 10^{5}\)cm\({}^{-1}\) for higher photon energies. There is one peak within the visible region at 1.9 eV. We can also observe that S-GY has isotropic optical properties along the x and y directions, as expected from its topology.
The refractive index is shown in Figure 5(b). Except for its value at null photon energy, the maximum intensity corresponds to the peak at 1 eV. \(\eta\) slightly decreases up to 2 eV, remaining practically constant for all the remaining spectrum. A similar trend is observed for the S-GY reflectivity, as shown in Figure 5(c). Note that the maximum intensity for \(R\) occurs between 1 and 2 eV. The \(R\) activity is limited to the infrared region and decreases to values near zero for photon energies higher than 3 eV. These results suggest that light incident on S-GY is barely reflected, i.e., it behaves as a transparent material.
We also performed the same optical analysis for the bilayer case. As expected, we do not observe substantial differences in the optical activity between the monolayer and bilayer cases, as illustrated in Figure 6. The bilayer case has higher values of light absorption, about \(8\times 10^{5}\)cm\({}^{-1}\) (see Figure 6(a)). \(R\) and \(\eta\) behave similarly between the two systems (see Figures 6(b) and 6(c), respectively), except at null photon energy.
We analyzed the spatial patterns of the frontier crystalline orbitals. The results for the lowest unoccupied crystalline orbital (LUCO) and the highest occupied crystalline orbital (HOCO) are presented in Figure 7(a) and 7(b), respectively. As a general trend, both crystalline orbitals are spread over the lattice, consistent with electronic delocalization and the small bandgap value.
### Mechanical Properties
Figure 8 presents the stress-strain curve for the uniaxial (x-direction) tensile loading. As the S-GY lattice is symmetric along the x and y directions, its mechanical properties are isotropic.
S-GY exhibits a quasi-linear elastic region when subjected to strain values up to 40%. It undergoes an abrupt transition to a fractured configuration (null stress) after a critical strain (\(\epsilon_{C}\)) of 40.86%, as depicted in Figure 8. The ultimate stress value (\(\sigma_{C}\)) is 94.40 GPa, considerably smaller than the corresponding graphene one (about 228.72 GPa [59]). The ultimate stress is the tensile stress corresponding to the critical strain.
We considered strains of up to 2% for calculating Young's modulus (\(Y_{M}\)), which is 262.37 GPa (or 87.6 N/m if we
Figure 5: (a) Absorption coefficient, (b) refractive index, and (c) reflectivity as a function of photon energy for the S-GY monolayer. \(E||X\) and \(E||Y\) denote the polarization direction for the externally applied electric field.
Figure 6: (a) Absorption coefficient, (b) refractive index, and (c) reflectivity as a function of photon energy for the S-GY bilayer. \(E||X\) and \(E||Y\) denote the polarization direction for the externally applied electric field.
Figure 7: Spatial distribution for the (a) LUCO and (b) HOCO.
consider the structure's thickness as 3.34 Å). This value is significantly smaller than that of graphene and other similar 2D carbon-based structures [59; 60]. In particular, comparing the S-GY Young's modulus with that of other known graphyne structures, it is about half, roughly the same as, and twice that of \(\gamma\)-graphyne, \(\beta\)-graphyne, and \(\alpha\)-graphyne, respectively [61]. The high S-GY porosity can explain these differences.
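For reference, the elastic constants quoted above can be obtained from the stress-strain data of the tensile simulation with a few lines of post-processing. The sketch below is only illustrative; the file name and two-column layout are assumptions, not part of the original workflow.

```python
import numpy as np

# Assumed two-column file: strain (dimensionless) and stress (GPa) from the
# uniaxial tensile MD run along the x-direction.
data = np.loadtxt("stress_strain_x.dat")
strain, stress = data[:, 0], data[:, 1]

# Young's modulus from a linear fit restricted to the small-strain regime
# (strains up to 2%, as in the text).
mask = strain <= 0.02
young_gpa = np.polyfit(strain[mask], stress[mask], 1)[0]

# Equivalent 2D stiffness (N/m) using the assumed layer thickness of 3.34 Angstrom.
young_2d = young_gpa * 1e9 * 3.34e-10        # GPa -> Pa, times thickness in m

# Ultimate stress and the critical strain at which it occurs.
i_max = int(np.argmax(stress))
sigma_c, eps_c = stress[i_max], strain[i_max]
print(young_gpa, young_2d, sigma_c, eps_c)
```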
The S-GY fracture patterns and dynamics were also investigated here. In Figure 9(a-f), we show some representative MD snapshots highlighting critical moments of the stress dynamics. The colour scheme denotes the values for the von Mises stress per atom along the structure. These values provide helpful information on the fracture dynamics [59].
Figures 9(a), 9(b), and 9(c), at 0.0%, 15.0%, and 35.0% of strain, respectively, show that S-GY preserves its structural integrity up to 35% of strain. The first bond break occurs at 40.90% of strain, as shown in Figure 9(d). After this critical value, the lattice undergoes an abrupt brittle-like fracture with fast and linear crack propagation along the direction perpendicular to the stretch, at 40.91% of strain (see Figure 9(e)). This process separates the S-GY lattice into two parts connected by linear atomic chains (LACs) at 40.93% of strain, as illustrated in Figure 9(f). The fracture starts from the acetylene bonds. The whole process can be better understood from video01 in the Supplementary Material.
### Melting Point
The melting point analysis was performed using the heating ramp protocol. In Figure 10, we present the total energy (black) and heat capacity (\(C_{V}\), in blue) values as a function of temperature. The total energy increases quasi-linearly for temperatures between 300K-2100K, quasi-parabolically between 2100K-5500K, and quasi-linearly again within the interval 5500K-7000K.
The most pronounced peak in the \(C_{V}\) curve denotes a melting point of about 2800 K (see Figure 10). In the first stage of the melting process (300K-2100K), the S-GY lattice retains its integrity. At 2800K, the thermal vibrations lead to morphology changes, and the melting process occurs during the second heating stage (2100K-5500K). The S-GY melting point is lower than those of monolayer graphene (4095K) [62], monolayer amorphous carbon (3626K) [59], and the biphenylene network (4024K) [60]. The last stage of the heating process (between 5500K-7000K), marked by a significant change in the slope of the total energy curve, is associated with the complete destruction of the structure and its conversion to a gas phase.
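The melting point reported above follows from locating the main peak of the heat capacity, which can be obtained numerically as the temperature derivative of the total energy along the heating ramp. The sketch below illustrates this post-processing step under the assumption that E(T) is available as a two-column file; it is not the authors' code.

```python
import numpy as np

# Assumed two-column file from the heating ramp: temperature (K), total energy (eV).
data = np.loadtxt("heating_ramp.dat")
T, E = data[:, 0], data[:, 1]

# Heat capacity C_V = dE/dT, lightly smoothed with a moving average to
# suppress MD noise before locating its most pronounced peak.
cv = np.gradient(E, T)
window = 11
cv_smooth = np.convolve(cv, np.ones(window) / window, mode="same")

melting_T = T[int(np.argmax(cv_smooth))]
print(f"Estimated melting point: {melting_T:.0f} K")
```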
Finally, in Figure 11, we show representative MD snapshots for the heating ramp simulation of S-GY. The temperatures vary from 2100K up to 7000K. In Figure 11(a), we can see that the thermal vibrations lead to changes in the lattice morphology with several C-C bond breakings and reconstructions. However, the overall structural configuration is still similar to the S-GY topology. The complete amorphization of the lattice occurs at 2800K, as shown in Figure 11(b).
For temperatures between 3500K-5000K, we observe the formation of graphene-like domains, i.e., lattice fragments composed of six-membered rings, as illustrated in Figures 11(c-e). The complete atomization of the lattice occurs for temperatures higher than 5000K (see Figure 11(f)). The whole process can be better understood from video02 of the supplementary material.
## IV Conclusions
We used DFT and reactive fully atomistic MD simulations to propose a new 2D carbon allotrope named Sun-Graphyne. This material has an all-carbon structure periodically arranged by two eight-atom carbon rings.
We investigated the S-GY thermal and structural stability. AIMD simulations confirmed its structural and thermal stabilities. In these simulations, the S-GY retains its structural morphology up to 1000K. Its DFT formation energy is -8.57 eV/atom, similar to the graphene one (-8.8 eV/atom). These results suggest that S-GY is structurally stable.
Electronic structure calculations revealed that S-GY is a semi-metal material with a narrow gap of about 5 meV. Interestingly, its electronic band structure presents two Dirac cones and isotropic transport channels. The Dirac cones indicate that the electrons behave as massless Dirac fermions, similar to graphene.
The S-GY electronic band structure remains unchanged even for moderate strain regimes, which, as far as we know, is unique for 2D carbon allotropes. Also,
Figure 8: Stress-strain curve for S-GY as a function of the uniaxial applied strain along the x-direction.
there is no symmetry inversion under strain, as occurs in similar 2D carbon-based materials with Dirac cones.
S-GY has isotropic optical properties along the plane directions. Its reflectivity is limited to the infrared region and decreases to values near zero for photon energies higher than 3 eV. The incident light on its surface is almost entirely absorbed, i.e., this material is transparent.
Regarding the mechanical properties, S-GY possesses a quasi-linear elastic region when subjected to strain values up to 40%. It abruptly transitions to a fractured state (null stress) after a critical strain of 40.86%. Its ultimate stress value and Young's modulus are 94.40 GPa and 262.37 GPa, respectively. These values are considerably smaller than those of graphene [59] and comparable to those of other known graphyne structures [61]. Moreover, they are much lower than those of similar 2D carbon-based structures, which can be attributed to the S-GY porosity.
The S-GY melting point (2800K) is lower than those of monolayer graphene (4095K) [62], monolayer amorphous carbon (3626K) [59], and the biphenylene network (4024K) [60]. For temperatures between 3500K-5000K, the S-GY melting process tends to form graphene-like domains.
Considering recent advances in graphyne synthesis, S-GY exhibits some unique properties and is possibly synthesizable. We hope the present study can stimulate further studies for this remarkable new structure.
## Acknowledgements
This work was financed by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES) - Finance Code 001 and grant 88887.691997/2022-00, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), FAP-DF, and FAPESP. We thank the Center for Computing in Engineering and Sciences at Unicamp for financial support through the FAPESP/CEPID Grants #2013/08293-7 and #2018/11352-7. L.A.R.J. acknowledges the financial support from FAP-DF grants 00193-00000857/2021-14, 00193-00000853/2021-28, 00193-00000811/2021-97, and 00193.00001808/2022-71, and CNPq grants 302922/2021-0 and 350176/2022-1. L.A.R.J. gratefully acknowledges the support from ABIN grant 08/2019 and Fundacao de Apoio a Pesquisa (FUNDAPE), Edital 02/2022 - Formulario de Inscricao N.4. L.R.S. acknowledges the National Institute of Science and Technology of Complex Systems (Brazil). L.A.R.J. acknowledges Nucleo de Computacao de Alto Desempenho (NACAD) for providing the computational
Figure 10: Total energy (black) and heat capacity (\(C_{V}\), blue) values as a function of temperature for the S-GY monolayer.
Figure 9: Representative MD snapshots for the S-GY subjected to a strain applied along the x-direction.
facilities. This work used resources of the Centro Nacional de Processamento de Alto Desempenho em Sao Paulo (CENAPAD-SP). The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer, contributing to the research results reported within this paper. URL: [http://sdumont.lncc.br](http://sdumont.lncc.br).
|
2308.12367 | SafeAR: Safe Algorithmic Recourse by Risk-Aware Policies | With the growing use of machine learning (ML) models in critical domains such
as finance and healthcare, the need to offer recourse for those adversely
affected by the decisions of ML models has become more important; individuals
ought to be provided with recommendations on actions to take for improving
their situation and thus receiving a favorable decision. Prior work on
sequential algorithmic recourse -- which recommends a series of changes --
focuses on action feasibility and uses the proximity of feature changes to
determine action costs. However, the uncertainties of feature changes and the
risk of higher than average costs in recourse have not been considered. It is
undesirable if a recourse could (with some probability) result in a worse
situation from which recovery requires an extremely high cost. It is essential
to incorporate risks when computing and evaluating recourse. We call the
recourse computed with such risk considerations as Safe Algorithmic Recourse
(SafeAR). The objective is to empower people to choose a recourse based on
their risk tolerance. In this work, we discuss and show how existing recourse
desiderata can fail to capture the risk of higher costs. We present a method to
compute recourse policies that consider variability in cost and connect
algorithmic recourse literature with risk-sensitive reinforcement learning. We
also adopt measures "Value at Risk" and "Conditional Value at Risk" from the
financial literature to summarize risk concisely. We apply our method to two
real-world datasets and compare policies with different risk-aversion levels
using risk measures and recourse desiderata (sparsity and proximity). | Haochen Wu, Shubham Sharma, Sunandita Patra, Sriram Gopalakrishnan | 2023-08-23T18:12:11Z | http://arxiv.org/abs/2308.12367v3 | # SafeAR: Towards Safer Algorithmic Recourse by Risk-Aware Policies
###### Abstract
With the growing use of machine learning (ML) models in critical domains such as finance and healthcare, the need to offer recourse for those adversely affected by the decisions of ML models has become more important; individuals ought to be provided with recommendations on actions to take for improving their situation and thus receive a favorable decision. Prior work on sequential algorithmic recourse--which recommends a series of changes--focuses on action feasibility and uses the proximity of feature changes to determine action costs. However, the uncertainties of feature changes and the risk of higher than average costs in recourse have not been considered. It is undesirable if a recourse could (with some probability) result in a worse situation from which recovery requires an extremely high cost. It is essential to incorporate risks when computing and evaluating recourse. We call the recourse computed with such risk considerations as Safer Algorithmic Recourse (SafeAR). The objective is to empower people to choose a recourse based on their risk tolerance. In this work, we discuss and show how existing recourse desiderata can fail to capture the risk of higher costs. We present a method to compute recourse policies that consider variability in cost and connect algorithmic recourse literature with risk-sensitive reinforcement learning. We also adopt measures "Value at Risk" and "Conditional Value at Risk" from the financial literature to summarize risk concisely. We apply our method to two real-world datasets and compare policies with different levels of risk-aversion using risk measures and recourse desiderata (sparsity and proximity).
## 1 Introduction
Machine learning (ML) models are increasingly being used to make decisions in a wide array of scenarios including healthcare [1], insurance premiums [12], and loan approvals [13]. Given their impact on society, the importance of algorithmic recourse has increased [23]. Algorithmic recourse refers to a computed recommendation provided to an end user which suggests specific changes they can make to convert an unfavorable outcome (e.g., a loan rejection) into a favorable one. For a recourse to be helpful, the suggested change ought to be actionable; for example, one can change their savings balance but not their age. Existing recourse work has considered the cost of taking the recommended actions [24, 25, 26]. However, such work does not consider the risk of higher costs. In this work, _risk_ means the potential for higher costs during the recourse due to the possibility of reaching adverse states; this can happen due to uncertainties in action effects (actions are not deterministic). The cost could be in terms of time required, effort, financial resources, etc. Without incorporating risks--that is, by ignoring uncertainties or only minimizing the expected costs--the recipient of an algorithmic recourse may be caught unaware and unprepared for situations with high costs. By offering recourse policies with risk measures, we can help people be aware of how much risk is involved in each policy and choose a safer one. The _recourse policy_ here refers to the recommended actions for all possible states a person might encounter, as opposed to a single deterministic sequence of actions.
To further understand the need for risk considerations, let us look at the existing algorithmic recourse approaches that use _counterfactual explanation_ (CE) methods to give recourse recommendations. CE methods find "the most similar instances to the feature vector describing the individual, that also get the desired prediction from the model" [11]. The assumption is that minimizing feature-space differences translates to a recourse that requires less cost to reach the desired outcome. Rather than providing a single vector of feature changes, recourse can also provide a series of CEs or a sequence of actions [26, 12] that incrementally change the user's features and bring them closer to the features with the desired outcome. Some key desiderata to evaluate CEs are [2]: (1) _validity_: whether it gives the desired outcome, (2) _proximity_: how large the changes are, as measured by a distance function, (3) _sparsity_: how many features are changed, and (4) _realism_: how realistic recourse recommendations are for an individual, including the feasibility of actions. However, using CEs to find recourse policies does not necessarily result in a sufficiently _safe_ recourse policy, because they might ignore the risk of taking actions, which may (probabilistically) leave a person in a worse situation. Such a recourse policy may even be dangerous to suggest. For instance, asking a person to change jobs may result in them losing their current job and being jobless (as
illustrated in Recourse Policy B in Figure 1). Finding alternatives with lower risks but a slightly higher expected cost may be preferred by an individual. In the context of CE methods, this means that sometimes a more "distant" state (set of feature values) may be a better recourse target if the actions required to reach it carry less risk of higher costs.
To explicitly incorporate risk into algorithmic recourse, our work introduces the problem of computing _safer algorithmic recourse_ (**SafeAR**). This has hitherto not been discussed in the literature on algorithmic recourse. The objectives of SafeAR are to suggest different recourse policies with different risk profiles and to empower the affected individual with risk-averse alternatives to decide for themselves.1 Reinforcement learning (RL) methods can be used to compute such recourse policies. Typically, a policy in RL finds the best action given a (feature) state, which maximizes the expected reward (or minimizes the cost) and can incorporate uncertainty in cost and action effects. To account for risk, we incorporate the variance in costs during policy computation, and connect risk-sensitive reinforcement learning ideas [16, 17, 18] with algorithmic recourse. Our contributions are:
Footnote 1: SafeAR does not advocate for the policy with the lowest risk as it may have a higher average cost. The emphasis is to provide multiple recourse policies for individuals to choose from, some of which can be safer and more suitable based on their risk tolerance.
* Develop the concept of SafeAR, highlighting the value of considering risks in algorithmic recourse, which existing recourse measures do not cover.
* Formulate algorithmic recourse problems as Finite Horizon Markov Decision Processes (MDPs) and demonstrate a method (Greedy Risk-Sensitive Value Iteration, G-RSVI) to compute risk-aware policies for finite horizon MDPs
* Introduce succinct measures of risk into the evaluation of algorithmic recourse, borrowing from the financial literature; these measures are Value at Risk (VaR) [19] and Conditional Value at Risk (CVaR) [10].
* Evaluate the policies with different risk-profiles computed by G-RSVI methodology on two real datasets (UCI Adult Income, German Credit) and assess the policies in terms of risk measures, as well as sparsity and proximity to show that the latter do not implicitly factor in risks.
* Conduct an initial investigation into the disparity between gender groups in terms of risks in the aforementioned datasets.
## 2 Motivating Example
To better illustrate the concept of SafeAR with risk-aware policies, consider the following motivating example on loan approvals (Figure 1). A company uses a trained black-box ML model to determine loan approvals. The model uses a set of features of the loan applicant (housing, job, savings, age, and education) and initially rejects the applicant. In this recourse scenario, the action costs are in terms of discrete time units, each action taken has a probability of success, and failure could transition into a less favorable state. Let us now look at three recourse policies that could be given to the applicant, illustrated in Figure 1:
* _Policy A: Nearest CE, Expected Cost 3.3_. This could be found by a recourse algorithm that optimizes for feature sparsity. It would require the applicant to _Own-a-House_
Figure 1: Recourse policies for credit loan approvals. Policy A has only one feature change (low sparsity) but a high failure rate; Policy B has the lowest expected cost but might result in a situation that costs more to recover from; Policy C has a slightly higher expected cost than Policy B but lower variance in cost (risk), which can be considered a safer policy.
Figure 2: Recourse policy visualization, highlighting variance in cost for three recourse policies from Figure 1; thickness of each outcome (line) is proportional to the probability.
(one feature change). This policy ignores the uncertainty in the applicant's ability to purchase a house within 1 month (time cost), and there is a \(70\%\) chance that the applicant would remain in the same state. So the expected time cost would be much more than 1 month (with a \(30\%\) success probability per month, the expected number of months is \(1/0.3\approx 3.3\)).
* _Policy B: Risk-Neutral Policy, Expected Cost 1.5_. This policy only optimizes the expected cost. It requires the applicant to _Find-a-Better-Job_, and doing so helps increase the savings and reaches the desired outcome with \(90\%\) probability. However, there is a small chance (\(10\%\)) that this action would result in losing their current job and ending up unemployed, from which the cost to recover would be higher. However, the expected total cost when considering probabilities is still lower than Policy A. If optimizing for expected cost alone, this policy would be returned and has the potential to lead the applicant to a worse situation in which they would incur a high cost to recover from. This is the type of risk in a recourse policy that a user might want to know about and manage.
* _Policy C: Risk-Averse Policy, Expected Cost 2.2_. It provides a safer policy to the applicant, where failures do not lead to a worse situation. The actions for this recourse are _Improve-Education-in-Part-Time_ and then _Increase-Savings_. The risk of higher costs in this policy is lower than in Policy B, but it has a higher expected cost. Policy C might not be found by methods that minimize proximity, as improving educational background could be considered a larger change than getting a higher-paying job.
Figure 2 illustrates the probabilities of possible outcome trajectories and their associated costs for this example. This is a visualization paradigm in which we capture the probability of an outcome trajectory by line thickness and the costs along the x-axis. With the risk-averse Policy C, the applicant is able to receive the desired outcome in 3 time-steps (cost) with \(98\%\) probability, and the risk of it taking more than 3 steps is much lower than with Policy A or B, even if the expected cost is higher than that of Policy B. Computing such diverse policies in terms of risk and surfacing the risk information to empower the affected individual is the motivation behind SafeAR.
## 3 Related Work
Existing algorithmic recourse methods [11] can be grouped into three categories: one set of methods involves finding the nearest CEs as the smallest changes to the individual's feature vector. Solutions focus on _proximity_[12], _sparsity_, and _diversity_[13, 14, 15] using multi-objective optimization [1] and decision trees [14]. Also, generative algorithms [15, 16] are used to ensure _plausibility_, by generating CEs within data distributions. These do not give a sequence of actions or a policy to follow, and do not mention risk.
In the second category, recourse is achieved by recommending a sequence of actions [14, 15] or by providing a path over the feature-space along dense regions of the data manifold [17] considering _feasibility_ and _actionability_. Methods also incorporate _causality_ through structural causal models (SCMs) [13] to explicitly model inter-variable causal relationships [12] and provide an ordered sequence of CEs [14, 15]. Lastly, robust recourse methods [13, 15] address the issues for data changes and model parameter shifts. None of these methods consider the risk of higher costs due to the probability of adverse outcomes.
For computing risk-aware recourse policies, we turn to the reinforcement learning (RL) literature. RL methods can provide recourse policies that consider uncertainties in transitions when taking recourse actions. There is existing work that models the recourse problem as MDPs. "ReLAX" [10] generates recourse plans by deep reinforcement learning but under deterministic feature transitions, ignoring uncertainties and thus risk. FASTAR [14] presents a framework that translates an algorithmic recourse problem into a discounted MDP and demonstrates comparable recourse performance as CE methods. Although FASTAR models uncertainties, it only optimizes for the expected cost and does not incorporate any risk measures. Our first method for SafeAR (G-RSVI) computes recourse policies by considering both the expected cost and the risk of higher costs.
One way of measuring risks in cost in RL is through the variance in the total cost (over all steps) [15, 16]. There are also other RL methods that can factor in risks in policies [13, 14, 15]. To communicate the idea of SafeAR in this work, we use a modification of value iteration (G-RSVI) to incorporate the cost-variance trade-off into the computation to get risk-averse policies. MDPs can naturally incorporate action costs, probabilistic action dynamics, action feasibility, and causal constraints. These can all be personalized to the recipient, as properties like action costs can be unique to each person. To the best of our knowledge, SafeAR is the first attempt at connecting the literature on _risk-sensitive_ RL to algorithmic recourse.
## 4 Safe Algorithmic Recourse
### Algorithmic Recourse
Let \(f:\mathcal{X}\rightarrow\mathcal{Y}\) be a decision function operationalized by an ML algorithm or model, where \(x\in\mathcal{X}=\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{D}\) is the set of instances described by \(D\) features of an individual, and \(\mathcal{Y}=\{y^{-},y^{+}\}\) are the unfavorable and favorable decision outcomes, respectively. An individual with features \(x_{o}\) initially gets an unfavorable outcome \(f(x_{o})=y^{-}\), and the general objective of algorithmic recourse is to find actions resulting in a path \(x_{o},\ldots,x^{*}\) that leads to a final feature instance \(x^{*}\) so that \(f(x^{*})=y^{+}\). Our work is agnostic to the type of ML model \(f\) and only requires the model outputs to be categorized into unfavorable outcomes \(y^{-}\) and favorable outcomes \(y^{+}\). For simplicity of discourse, we use a binary classifier for \(f\).
### Risk-Aware Recourse Policies Using Finite Horizon Markov Decision Processes
To compute SafeAR recommendations, we frame the problem as solving a finite horizon Markov Decision Process (MDP), defined as a tuple \(\langle S,A,T,R,H\rangle\). \(H\) is the maximum number of steps in the finite horizon MDP and its policies, and \(h\in[1:H]\) denotes the step number over the horizon.
**States (\(S\))** \(S\) is the set of all possible states for individuals in the recourse. Each of the states maps to one instance (\(x\in\mathcal{X}\)) in the combined feature space (input space) of the decision model \(f\). For a valid state space, there must exist a mapping \(g:S\to\mathcal{X}\), where \(\forall s\in S,\exists x\in\mathcal{X}\) such that \(g(s)=x\). In this work, we keep the mapping \(g\) as one-to-one, meaning the state space is equivalent to the feature space. However, the state space \(S\) can be _richer_ than the feature space \(\mathcal{X}\) because the states and actions for recourse can involve more or different features than the ones used in the decision model \(f\). For example, "resting heart rate" can be a feature in a health insurance premiums calculator \(f\), but "average calories burned" is not. However, the latter may be part of the recourse state, as it is directly affected by actions (e.g., _Exercise_), and in turn can have a causal effect on "resting heart rate". Using only the same features as in \(f\) may not be suitable for computing recourse policies, as they may not cover the states and actions that a person actually has to change during the recourse. This gives us a reason to expect a separate action, transition, and cost model for recourse, rather than assuming it can be extracted from the data used in \(f\). Another strong reason to expect a separate action model is recourse personalization, as advocated for in Venkatasubramanian and Alfano (2020).
**Actions (\(A\)) and Transitions (\(T\))** For the state space \(S\), we have a set of feasible actions \(a\in A\). The effect of an action can change multiple features. The features can be categorized into three types Karimi et al. (2022): 1) immutable features (e.g., birthplace), 2) mutable and actionable features (e.g., occupation, bank balance) that define the action space of the recourse, 3) mutable but non-actionable features (e.g., credit score) that cannot be directly modified by an individual. Mutable features can be modified as the consequence of changing other features. The state transition model would need to capture causal relationships between features and ensure the realism of recourse. The state transition model \(T:=p(s^{\prime}|s,a)\) is defined as the transition probability between two states (\(\{s,s^{\prime}\}\in S\times S\)) given the action \(a\).
**Rewards (\(R\))**\(R:=r(s,a,s^{\prime};f)\) is the reward or cost incurred by reaching state \(s^{\prime}\) by performing an action \(a\) at a state \(s\). "Reward" and \(r(.)\) are the typical terms and notations used in RL literature, but rewards can be positive or negative (cost). We will henceforth use "cost" in this work since we are focusing on the recourse cost to the recipient, i.e. \(r(s,a,s^{\prime};f)\) tells us the cost incurred to the recipient when the transition \((s,a,s^{\prime})\) occurs during the recourse. Additionally, when the ML model \(f\) gives the favorable outcome in a state (\(f(s)=y^{+}\)), then no more actions are needed in the recourse. To capture this, we add a zero-cost action in all favorable (goal) states, transitioning to the same state.
As for the real-world semantics of the cost, it can be a combined measure of multiple factors such as elapsed time, material expenses, opportunity cost, etc. The cost may be averaged across a group or tailored for each person, which requires domain knowledge--as do the feasible actions and the transitions. CE methods such as DiCE Mothilal et al. (2020) and FACE Poyiadzi et al. (2020) also require domain knowledge to design distance (cost) functions, where the function \(r(.)\) can be defined in terms of how much an action changes the state, using the _sparsity_ of feature changes and the _proximity_ of the recipient's state changes under pre-defined distance functions.
**Recourse Policies** A recourse policy is the same as an MDP policy, expressing how to act in each state at each step of the horizon to get to a favorable state. This is formalized as \(\pi=(\pi_{1}\dots\pi_{H})\), where \(\pi_{i}:S\to A\) maps each state to an action at step \(i\).
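To make the formulation above concrete, the following minimal sketch encodes a tiny recourse MDP in the spirit of the loan example as plain dictionaries: states are feature tuples, each state has a set of feasible actions, and the transition and reward models are keyed by (state, action). All names and numbers here are illustrative assumptions, not values from the paper.

```python
# Toy recourse MDP; rewards are negative costs (one time unit per attempt).
states = [("basic_job", "low_savings"), ("better_job", "low_savings"),
          ("better_job", "high_savings"), ("unemployed", "low_savings")]
goal = {("better_job", "high_savings")}        # states with f(x) = y+

# Feasible actions per state; goal states get a zero-cost self-loop.
actions = {
    ("basic_job", "low_savings"):   ["find_better_job"],
    ("better_job", "low_savings"):  ["increase_savings"],
    ("unemployed", "low_savings"):  ["find_job"],
    ("better_job", "high_savings"): ["stay"],
}

# Transition model p(s' | s, a) as nested dictionaries.
p = {
    (("basic_job", "low_savings"), "find_better_job"): {
        ("better_job", "low_savings"): 0.9, ("unemployed", "low_savings"): 0.1},
    (("better_job", "low_savings"), "increase_savings"): {
        ("better_job", "high_savings"): 0.8, ("better_job", "low_savings"): 0.2},
    (("unemployed", "low_savings"), "find_job"): {
        ("basic_job", "low_savings"): 0.7, ("unemployed", "low_savings"): 0.3},
    (("better_job", "high_savings"), "stay"): {
        ("better_job", "high_savings"): 1.0},
}

# Per-transition reward r(s, a, s'); the goal self-loop is free.
def r(s, a, s2):
    return 0.0 if s in goal else -1.0
```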
## 5 SafeAR Methodology
In this section, we present a method for computing risk-averse recourse policies and measures to evaluate the risk for SafeAR.
### Greedy Risk-Sensitive Value Iteration
In this section we present a greedy algorithm to compute risk-averse policies by incorporating cost-variance into the policy computation. We first define \(\hat{R}^{\pi}_{h}(s)=\sum_{i=h}^{H}r(s_{i},a_{i},s_{i+1})\), conditioned on \(s_{h}=s\) and on following the policy \(\pi\), as the total cost accrued over the horizon \(H\) from a rollout obtained by following \(\pi\) starting at state \(s\) and step \(h\). In risk-neutral settings, the recourse policy \(\pi\) maximizes the expected total cost \(\mathbb{E}[\hat{R}^{\pi}_{h}(s)]\) or the mean value \(\mu[\hat{R}^{\pi}_{h}(s)]\). G-RSVI also considers the variance in cost to manage risk and seeks to find a policy \(\pi\) to maximize the following value function:
\[V^{\pi}_{1}(s)=\mu(\hat{R}^{\pi}_{1}(s))-\beta\cdot\sigma(\hat{R}^{\pi}_{1}(s)), \tag{1}\]
for each state \(s\) starting at the first step \(h=1\). We denote \(V_{h}^{\pi}(s)\) as the risk-sensitive value of state \(s\) in step \(h\) by following policy \(\pi\). \(\beta\geq 0\) is the tuning parameter that represents each individual's risk profile, and a higher value means more risk averse. When \(\beta=0\), the problem reduces to finding the policy with the least expected cost only, which is the standard optimization objective in MDPs. Here, \(\sigma\) returns the standard deviation of the total cost, and \(\sigma^{2}\) returns the variance. In G-RSVI, we optimize Equation 1 by greedily maximizing \(V_{h}^{\pi}(s)\) at each step starting from the end step \(H\) and moving backwards to the first step. At each step \(h\), the action is selected to maximize the risk-sensitive value using:
\[V_{h}=\max_{a}\mu[r(\cdot)+V_{h+1}]-\beta\sigma[r(\cdot)+V_{h+1}], \tag{2}\]
where \(V_{h+1}\) is the value computed in the previous step. The risk-sensitive action value or Q-value \(Q_{h}(s,a)\) at step \(h\) is defined as:
\[Q_{h}(s,a)=\mathbb{E}_{s^{\prime}}\big[r(s,a,s^{\prime})+V_{h+1}(s^{\prime})\big]-\beta\,\sigma\big[r(s,a,s^{\prime})+V_{h+1}(s^{\prime})\big]. \tag{3}\]
If only optimizing the expected reward, this procedure would find the optimal policy because the optimal sub-structure assumption for dynamic programming holds. However, G-RSVI does not guarantee to find the optimal policy for Equation 1. It does, however, provide one straightforward way to incorporate risks into recourse policy computation, and it completes computation with a single sweep over the state and horizon space. There are a variety of heuristic methods one can use with different trade-offs to compute risk-aware policies. In this work, to focus on the exposition of the concept of SafeAR, we limit our approach to discrete state, discrete actions and finite horizon MDPs.
Our G-RSVI algorithm is shown in Algorithm 1. We compute the policy by sweeping backwards from the last horizon step (Line 2). For all state-action pairs at each step, the action values \(Q_{h}(s,a)\) is computed by Equation 3 (Lines 5-9). The best action for each state in each step is then chosen by the one with maximal \(Q_{h}(s,a)\). It also gives us the state value \(V_{h}(s)\) and the policy for each step (lines 11, 12). Other ways of scoring values and selecting actions can be used other than Equation 3 in our algorithm. For example, one can optimize for CVaR, although that requires specifying a confidence level. For this initial work on SafeAR with risk-aware policies, we limit the scope to using Equation 3. This also helps us compare against FASTAR method Verma et al. (2022) as the baseline, where the FASTAR equivalent policy is obtained by setting \(\beta=0\) since FASTAR optimizes for expected value only. We leave the analysis of different risk-sensitive algorithms for recourse to future work.
### Risk Measures for Recourse Policies
To evaluate the risk associated to a recourse policy, we propose the following measures.
Success Rate (\(\rho_{H}\)): It estimates the probability of success within the finite horizon \(H\) by following the recourse policy. For example, \(\rho_{5}=0.9\) means a favorable outcome state will be reached within \(5\) steps 90% of the time. This is not equivalent to _validity_, which only determines whether an instance of feature combinations with a favorable decision exists. \(\rho_{H}\) is affected by the uncertainty of action outcomes in the recourse policy.
Mean-Variance Cost (\(\mu_{cost},\sigma_{cost}^{2}\)): It computes the expected value and variance of the total cost of following recourse policies. Since the distribution of costs is not necessarily Gaussian, these statistics can be misleading or hard to interpret. Hence, we propose additional measures.
Value at Risk (VaR\({}_{\alpha}\)): VaR (Holton 2013) provides a succinct probabilistic guarantee on the recourse policy cost. We evaluate the VaR of the recourse cost to answer the question "What is the highest cost at a given level of cumulative probability (confidence level)?". For example, VaR\({}_{95}=5.6\) means that with \(95\%\) probability, the recourse cost is at most \(5.6\). Formally, assuming the total cost of recourse \(x_{c}\) is the value of a random variable \(X_{c}\) with a cumulative probability distribution \(F_{X}(x_{c})\), under confidence level \(\alpha\in[0,1]\) VaR\({}_{\alpha}\) is computed as:
\[\text{VaR}_{\alpha}(X_{c})=\min\{x_{c}|F_{X}(x_{c})\geq\alpha\}. \tag{4}\]
Conditional Value at Risk (CVaR\({}_{\alpha}\)): CVaR (Rockafellar and Uryasev 2000) is a complementary measure to VaR and tells us the expected worst-case cost when the cost exceeds the threshold given by the VaR\({}_{\alpha}\) value. For example, CVaR\({}_{95}=8.4\) means that when the cost exceeds the 95th-percentile cost, the average cost for those cases is \(8.4\). It is computed as:
\[\text{CVaR}_{\alpha}=\mathbb{E}[x_{c}|x_{c}>\text{VaR}_{\alpha}]. \tag{5}\]
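As an illustration of how these two measures can be reported in practice, the sketch below estimates them empirically from sampled total costs of a policy; the gamma-distributed costs are synthetic stand-in data, not results from the paper.

```python
import numpy as np

def var_cvar(costs, alpha=0.95):
    """Empirical VaR_alpha and CVaR_alpha of sampled recourse costs (Eqs. 4-5)."""
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)            # smallest cost c with F(c) >= alpha
    tail = costs[costs > var]
    cvar = tail.mean() if tail.size else var   # average cost beyond the VaR level
    return var, cvar

# Stand-in usage: costs collected from Monte Carlo rollouts of a policy.
rng = np.random.default_rng(0)
sample_costs = rng.gamma(shape=2.0, scale=1.5, size=1000)
print(var_cvar(sample_costs, alpha=0.95))
```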
## 6 Experimental Results
Motivated by the datasets used in the algorithmic recourse literature, we evaluate our method on the following two datasets: Adult Income Dataset (AID) (32561 data points) (Becker and Kohavi 1996) and German Credit Dataset (GCD) (1000 data points) (Hofmann 1994) and show how risk measures vary with different recourse policies. In AID, the recourse is to help individuals earn an income greater than 50,000. In GCD, the recourse is to help get a loan approval by reaching a good credit standing. Here we consider the version of GCD (Kaggle 2016) with 9 features. To process the datasets for G-RSVI, we convert continuous feature values into discrete values (details included in Appendix A.1). We then train random forest classifiers for both datasets. Dataset features, feature state dimensions, and classifier accuracies are reported in Table 2.
**Transitions and Rewards.** We use qualitative assumptions (domain knowledge) on relative differences in action costs and success likelihood to define the action costs \(r(\cdot)\) and transition model \(p(\cdot)\). Similar to FASTAR (Verma et al. 2022), we assume _improve-education_ or _improve-skill_ actions would lead to an age increase as causal constraints, and we treat "Age" as a mutable but non-actionable feature. These two actions require more time and effort, and therefore the action cost would be larger than
for other actions such as _increase-work-hours_. The transition probabilities are heuristically set by domain knowledge. For example, the probability of earning a Ph.D. degree is lower than that of earning a Bachelor's. We refer the reader to Appendix A.2 for an exhaustive list of the model transition probabilities and costs. Results from a different model using the same qualitative assumptions are also provided in Appendix A.5 to show that the G-RSVI method and results are not specific to a single model.
**Baselines.** To our knowledge, our work is the first to address risks in algorithmic recourse. Among the existing recourse approaches, only FASTAR [20] formulates recourse problems as MDPs _and_ allows for stochastic transitions. FASTAR sets rewards in terms of distance measures between states. No matter what reward function is used--either distance-based or a user-defined cost--and how transition probabilities are defined--either extracted from a dataset or tuned using domain knowledge--FASTAR only seeks to find the recourse policy that maximizes the expected total rewards (risk-neutral). This is what a standard MDP algorithm (value or policy iteration) would find. In our experiments, the policy that maximizes the expected total reward corresponds to the risk-neutral policy (\(\beta=0\)), and this is the baseline against which risk-averse policies are compared. We select \(\beta=0.25,0.50,0.75\) for generating risk-averse recourse policies, and higher \(\beta\) indicates higher risk-aversion.
**Performance Evaluation.** Table 1 reports the risk measures for each experimental setting. The horizon is set to 12. We also present sparsity (\(L_{0}\) distance) and proximity (\(L_{0}\) distance for nominal features + \(L_{1}\) distance for ordinal and numerical features) between initial and final states. All measures are averaged over the entire dataset; we additionally report the measures for two example instances (samples from \(\mathcal{X}\)). Recourse policies are computed using the same cost and transition functions for all instances in the dataset. In the results, we see that for both datasets, more risk-averse policies (higher \(\beta\) values) can provide recourse with less variance in cost \(\sigma^{2}_{cost}\) but often require a higher mean cost \(\mu_{cost}\). For the same \(\alpha\) confidence level, risk-averse policies give lower costs in VaR and CVaR than risk-neutral policies. Also, for the example instance in GCD, the variance in cost \(\sigma^{2}=0.65\) at risk-aversion level \(\beta=0.75\) is significantly lower than the \(\sigma^{2}=3.49\) at \(\beta=0\). For the example instance in AID, increasing risk-aversion to \(\beta=0.75\) would not find a different policy than \(\beta=0.5\), which can happen if the same relative ordering of state values \(V_{h}(s)\) is found at each step. In the results, we observe that low sparsity and proximity do not correspond to risk-averse policies, meaning optimizing for them would not necessarily factor in risks.
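A sketch of the evaluation loop described above and in the caption of Table 1 (roll a policy out a number of times per instance and keep cost statistics only for valid, i.e. successful, recourse trials) is given below; it assumes the dictionary-based MDP and policy format used in the earlier sketches and is illustrative rather than the authors' evaluation code.

```python
import numpy as np

def evaluate_policy(policy, p, r, goal, s0, H, n_trials=100, seed=0):
    """Monte Carlo evaluation of a recourse policy (sketch): roll the policy
    out n_trials times from state s0; report success rate and the mean and
    variance of the cost over the valid (successful) trials."""
    rng = np.random.default_rng(seed)
    costs, successes = [], 0
    for _ in range(n_trials):
        s, total = s0, 0.0
        for h in range(1, H + 1):
            if s in goal:
                break
            a = policy[(h, s)]
            nxt = list(p[(s, a)].keys())
            probs = [p[(s, a)][s2] for s2 in nxt]
            s2 = nxt[rng.choice(len(nxt), p=probs)]
            total += -r(s, a, s2)      # rewards are negative costs
            s = s2
        if s in goal:
            successes += 1
            costs.append(total)
    if not costs:
        return successes / n_trials, float("nan"), float("nan")
    costs = np.array(costs)
    return successes / n_trials, costs.mean(), costs.var()
```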
**Visualizing Risks in Recourse Policies.** In Figure 3, we use our policy-risk visualization for a set of policies from the GCD dataset. For each policy, we show the most probable outcomes (rollouts); the length of each trace corresponds to the total cost, and the thickness of each trace corresponds to the probability of the outcome. This approach visualizes the variability in cost, which can help a person get an intuition of their risk in addition to the recommended actions.
**Exploring Risks across Gender.** Inspired by prior work that investigated disparity in recourse between different groups [23, 24, 25, 26], we now look at the disparity that may exist in risk measures across
\begin{table}
\begin{tabular}{|c|l|c|c|c c|c c|c|c|} \hline
**Dataset** & **Policy** & \(\boldsymbol{\rho_{H=12}}\) & \(\boldsymbol{(\mu_{cost},\sigma^{2}_{cost})}\) & **VaR80** & **CVaR80** & **VaR95** & **CVaR95** & **Spars.** & **Proxi.** \\ \hline \multirow{3}{*}{**Adult Income** (\(n=25923\)) & \(\beta=0.25\) & 0.994 & (**3.49**, 1.23) & 3.81 & 6.31 & 4.76 & 7.53 & **2.09** & **2.87** \\ & \(\beta=0.5\) & 0.994 & (3.51, 0.89) & **3.64** & **6.10** & 4.46 & 7.43 & 2.16 & 3.06 \\ & \(\beta=0.5\) & 0.993 & (3.59, **0.77**) & 3.66 & 6.11 & **4.44** & **7.40** & 2.21 & 3.18 \\ \cline{2-10} & \(\beta=0\) & 1.000 & (**4.63**, 1.86) & 5.80 & 8.54 & 6.80 & 9.80 & 3.92 & **3.92** \\ & \(\beta=0.5\) & 1.000 & (4.79, **0.13**) & **4.80** & **6.80** & **4.80** & **6.80** & **3.00** & 4.86 \\ & \(\beta=0.75\) & 1.000 & (4.79, **0.13**) & **4.80** & **6.80** & **4.80** & **6.80** & **3.00** & 4.86 \\ \hline \hline \multirow{3}{*}{**German Credit** (\(n=281\))} & \(\beta=0\) & 1.000 & (**1.65**, 0.48) & 1.96 & 3.66 & 2.63 & 4.56 & **1.26** & **1.33** \\ & \(\beta=0.25\) & 1.000 & (1.67, 0.35) & **1.87** & **3.51** & 2.51 & 4.50 & 1.34 & 1.43 \\ & \(\beta=0.5\) & 1.000 & (1.70, **0.30**) & 1.90 & 3.56 & **2.48** & **4.40** & 1.40 & 1.50 \\ \cline{2-10} & \(\beta=0\) & 1.000 & (**2.48**, 3.49) & **4.00** & 6.13 & 7.00 & 8.33 & **1.00** & **1.00** \\ \cline{2-10} & \(\beta=0.5\) & 1.000 & (2.81, 1.19) & **4.00** & **5.44** & **5.00** & **6.00** & **1.00** & 2.00 \\ \cline{2-10} & \(\beta=0.75\) & 1.000 & (3.87, **0.65**) & 4.40 & 5.50 & 5.40 & 6.67 & 2.00 & 3.00 \\ \hline \end{tabular}
\end{table}
Table 1: Evaluating recourse policies with different risk-aversion levels \(\beta\) and horizon \(H=12\) for AID, GCD, and two selected example instances. \(n\) denotes the number of instances used for evaluation. For each policy, we first run 100 trials for each instance in the dataset and measure the metrics among the valid recourse trials. Then, the average across all instances for each metric is computed. The best metric values among the policies are highlighted in bold.
\begin{table}
\begin{tabular}{c c c c} \hline
**Dataset** & **Immutable** & **\#States** & **ML Model** \\
**(\#Features)** & **Features** & **(Accuracy)** \\ \hline AdultIncome (8) & Gender, Race, Marital Status & 57600 & Rand.Forest (0.81) \\ GermanCredit (9) & Sex, Purpose, Credit Amount & 147456 & Rand.Forest (0.76) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dataset Overview
two gender groups (male and female) provided in AID and GCD at the same risk-aversion level. Table 3 reports the same risk measures (averaged) for female and male groups in both datasets. The p-values for statistical significance of the difference between the groups are provided in Appendix A.4. We define disparity in VaR between the two gender groups by following the same recourse policy computed with a risk-aversion level \(\beta\) as: \(\Delta\text{VaR}_{95}^{\beta}=|\text{VaR}_{95}^{\text{Female}}-\text{VaR}_{95} ^{\text{Male}}|\). The disparity for other measures is similarly computed.
All measures favor the male group (highlighted in green shades) for both datasets, meaning that for the policy with the same risk-aversion level given to females and males, we expect females would get a higher variance in cost (\(\sigma_{cost}^{2}\)), higher cost at the VaR confidence levels of \(\alpha=80\) and \(\alpha=95\), and higher costs in the expected worst-case scenarios (CVaR) for those confidence levels. In AID, we also noticed that when increasing the risk-aversion, the disparity of risk measures between the two groups becomes _larger_. We observe that \(\Delta\text{VaR}_{95}^{\beta}\) increases from 1.4 to 1.59 as \(\beta\) increases, and similar trends are observed for the differences in \(\sigma_{cost}^{2}\) and CVaR in AID. This trend indicates that the more we want to achieve risk-aversion, the greater the disparity in risk exposure between the two gender groups in AID. However, in GCD, the differences in \(\sigma_{cost}^{2}\) and \(\Delta\text{VaR}_{95}^{\beta}\) do not consistently increase with increased risk-aversion. The disparity between males and females across all measures of risk still exists in GCD. We recall that the same action costs and transitions are used for both males and females. The only difference is the decisions made by the model \(f\) for different groups, which affects the number of steps to reach the favorable state (\(f(x)=y^{+}\)). This is something that recourse providers may want to keep in mind, and it motivates further discussion on risk disparity in algorithmic recourse.
## 7 Discussion and Conclusions
The motivation behind our Safer Algorithmic Recourse (SafeAR) is to offer recourse policies with different risk profiles. This enables affected individuals to be aware of the risks and helps them make an informed decision based on their risk tolerance. We connect ideas from risk-sensitive reinforcement learning with the algorithmic recourse literature and propose an algorithm G-RSVI that can provide risk-averse recourse policies for individuals with different risk profiles. In our experiments with the AID and GCD datasets, we showed that the recourse policies generated by G-RSVI were better in terms of the risk measures as compared to the existing risk-neutral approaches. The policy risk was evaluated through cost-variance, VaR, CVaR, and success rate. In addition, we observed that policies with better sparsity and proximity scores need not correspond to risk-averse policies. Lastly, in our experiments, we observed discrepancies between gender groups in risk measures for the same risk-aversion setting, which motivates further studies on recourse fairness in terms of risk exposure.
Figure 3: Policy visualization for an Example Instance in German Credit
\begin{table}
\begin{tabular}{|c|c|c|c|c c|c c|c|c|} \hline
**Dataset** & **Policy** & \(\boldsymbol{\rho_{R=12}}\) & \(\boldsymbol{(\mu_{cost},\sigma_{cost}^{2})}\) & **VaR\({}_{80}\)** & **CVaR\({}_{80}\)** & **VaR\({}_{95}\)** & **CVaR\({}_{95}\)** & **Spars.** & **Proxi.** \\ \hline \multirow{4}{*}{**Adult Income** (Female, \(n=9824\)) & \(\beta=0.25\) & 0.987 & **(4.56, 1.41)** & 4.76 & 7.07 & 5.70 & 8.00 & 2.58 & **3.77** \\ & \(\beta=0.25\) & 0.987 & (4.57, 1.18) & **4.64** & **6.95** & **5.48** & **7.84** & **2.55** & 3.87 \\ & \(\beta=0.5\) & 0.985 & (4.61, **1.11**) & 4.66 & 6.99 & 5.51 & 7.89 & **2.55** & 3.93 \\ \cline{2-10} & \(\beta=0\) & 0.997 & (**2.84**, 1.12) & 3.27 & 5.79 & 4.30 & 7.26 & **1.79** & **2.32** \\ (Male, \(n=16099\)) & \(\beta=0.25\) & 0.997 & (2.87, 0.72) & **3.08** & 5.51 & 3.95 & 7.15 & 1.93 & 2.56 \\ & \(\beta=0.5\) & 0.998 & (2.98, **0.57**) & 3.10 & **5.48** & **3.92** & **7.08** & 2.01 & 2.73 \\ \hline \hline \multirow{4}{*}{**German Credit** (Female, \(n=103\)) & \(\beta=0.25\) & 1.000 & **(1.72**, 0.56)** & 2.08 & 3.70 & 2.84 & 4.66 & **1.25** & **1.36** \\ & \(\beta=0.25\) & 1.000 & (1.75, 0.38) & **2.00** & **3.61** & 2.67 & 4.63 & 1.33 & 1.47 \\ & \(\beta=0.5\) & 1.000 & (1.79, **0.33**) & 2.02 & 3.64 & **2.59** & **4.51** & 1.41 & 1.57 \\ \cline{2-10} & \(\beta=0\) & 1.000 & **(1.61**, **0.43**) & 1.89 & 3.64 & 2.61 & 4.62 & **1.27** & **1.32** \\ \cline{2-10} & \(\beta=0.25\) & 1.000 & (1.62, 0.37) & **1.81** & **3.50** & 2.41 & 4.42 & 1.35 & 1.41 \\ \cline{2-10} & \(\beta=0.5\) & 1.000 & **(1.65, **0.29**)** & 1.84 & **3.50** & **2.33** & **4.29** & 1.40 & 1.46 \\ \hline \end{tabular}
\end{table}
Table 3: Evaluating recourse policies across gender for Adult Income and German Credit datasets; the same evaluation procedures are followed as Table 1. Among risk measures, the cells shaded in green indicate that the corresponding gender group is exposed to less risk under the policy with the same risk-aversion level. |
2301.06619 | Distributionally Robust Learning with Weakly Convex Losses: Convergence
Rates and Finite-Sample Guarantees | We consider a distributionally robust stochastic optimization problem and
formulate it as a stochastic two-level composition optimization problem with
the use of the mean--semideviation risk measure. In this setting, we consider a
single time-scale algorithm, involving two versions of the inner function value
tracking: linearized tracking of a continuously differentiable loss function,
and SPIDER tracking of a weakly convex loss function. We adopt the norm of the
gradient of the Moreau envelope as our measure of stationarity and show that
the sample complexity of $\mathcal{O}(\varepsilon^{-3})$ is possible in both
cases, with only the constant larger in the second case. Finally, we
demonstrate the performance of our algorithm with a robust learning example and
a weakly convex, non-smooth regression example. | Landi Zhu, Mert Gürbüzbalaban, Andrzej Ruszczyński | 2023-01-16T21:56:38Z | http://arxiv.org/abs/2301.06619v3 | # Distributionally Robust Learning with Weakly Convex Losses: Convergence Rates and Finite-Sample Guarantees
###### Abstract
We consider a distributionally robust stochastic optimization problem and formulate it as a stochastic two-level composition optimization problem with the use of the mean-semideviation risk measure. In this setting, we consider a single time-scale algorithm, involving two versions of the inner function value tracking: linearized tracking of a continuously differentiable loss function, and SPIDER tracking of a weakly convex loss function. We adopt the norm of the gradient of the Moreau envelope as our measure of stationarity and show that the sample complexity of \(\mathcal{O}(\varepsilon^{-3})\) is possible in both cases, with only the constant larger in the second case. Finally, we demonstrate the performance of our algorithm with a robust learning example and a weakly convex, non-smooth regression example.
## 1 Introduction
We consider distributionally robust learning problems of the form
\[\min_{x\in X}\max_{Q\in\mathcal{M}(\mathbb{P})}\mathbb{E}_{D\sim\mathbb{Q}} \left[\ell(x,D)\right], \tag{1}\]
where \(\ell:\mathbb{R}^{n}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the loss function of the predictor \(x\) on the random data \(D\) drawn from a perturbed distribution with probability law \(\mathbb{Q}\), \(\mathcal{M}(\mathbb{P})\) is a closed convex set of probability measures (the _ambiguity set_) that models perturbations to the reference law \(\mathbb{P}\), and \(X\subset\mathbb{R}^{n}\) is the feasible set. Such formulations allow training predictive models from data that are robust to perturbations in the input data distribution \(\mathbb{P}\), by considering the worst case of the input distribution varying in the set \(\mathcal{M}(\mathbb{P})\). Such a worst-case approach has long been studied in stochastic optimization; recently, it has also become relevant to machine learning applications. Such applications include but are not limited to convex and non-convex formulations of logistic regression, deep learning, and more generally supervised learning of predictive models in a data-robust fashion with risk minimization Gurbuzbalaban et al. (2022); Zhang et al. (2021, 2022); Laguel et al. (2022); Mehrotra and Zhang (2014); Kuhn et al. (2019). The challenges in these applications are that the model dimension \(n\) and the number of data points may be large, and the loss functions may be both non-smooth and non-convex.
The ambiguity set \(\mathcal{M}(\mathbb{P})\) models the uncertainty about the baseline data distribution \(\mathbb{P}\), and its choice and the structure of the loss function affect the computational tractability of the resulting formulations and the design of optimization algorithms for solving (1). Various possible choices of \(\mathcal{M}(\mathbb{P})\) include the Wasserstein balls around \(\mathbb{P}\) Shafieezadeh Abadeh et al. (2015); Esfahani and Kuhn (2018); Mehrotra and Zhang (2014); Kuhn et al. (2019); Gao et al. (2017); Sinha et al. (2018),
the \(f\)-divergence-based uncertainty sets Bagnell (2005); Duchi and Namkoong (2021); Namkoong and Duchi (2016); Zhang et al. (2021), and approaches based on risk measures such as the conditional value at risk Takeda and Kanamori (2009); Laguel et al. (2022) and the mean-semideviation risk Gurbuzbalaban et al. (2022). In this context, the quality of a first-order optimization algorithm can be assessed in terms of its convergence rate guarantees and sample complexity, i.e., the number of data points to be sampled for finding an approximate first-order stationary solution.
We start with introducing the mean-semideviation based modeling of the uncertainty set \(\mathcal{M}(\mathbb{P})\) before summarizing our contributions. For each model parameter \(x\in X\), we consider the random loss \(Z_{x}=\ell(x,D)\), defined on a sample space \(\Omega\) equipped with a \(\sigma\)-algebra \(\mathcal{F}\). We assume that the expectation \(\mathbb{E}(Z_{x})=\int_{\Omega}Z_{x}(\omega)\ \mathbb{P}(d\omega)\) is finite; i.e \(Z_{x}\in\mathcal{L}_{1}(\Omega,\mathcal{F},\mathbb{P})\). We can evaluate its quality by the mean-semideviation risk measure defined as
\[\rho[Z_{x}]=\mathbb{E}[Z_{x}]+\varkappa\mathbb{E}\Big{[}\max\big{(}0,Z_{x}- \mathbb{E}[Z_{x}]\big{)}\Big{]},\quad\varkappa\in[0,1], \tag{2}\]
which augments the expected value of the random loss with a penalty on losses that exceed the expected value. This risk measure is a coherent risk measure enjoying several desirable properties and is used in many contexts in statistics and stochastic optimization Rockafellar et al. (2006); Gurbuzbalaban et al. (2022). It is well known that the mean-semideviation risk \(\rho[Z_{x}]\) of the random variable \(Z_{x}\) admits the following dual representation Ruszczynski and Shapiro (2006):
\[\rho[Z_{x}]=\max_{\mu\in\mathcal{A}}\int_{\Omega}Z_{x}(\omega)\,\mu(\omega)\ \mathbb{P}(d\omega)=\max_{\mathbb{Q}\,:\,\frac{d\mathbb{Q}}{d\mathbb{P}}\in\mathcal{A}}\int_{\Omega}Z_{x}(\omega)\ \mathbb{Q}(d\omega)=\max_{\mathbb{Q}\,:\,\frac{d\mathbb{Q}}{d\mathbb{P}}\in\mathcal{A}}\mathbb{E}_{\mathbb{Q}}[Z_{x}], \tag{3}\]
where
\[\mathcal{A}=\big{\{}\mu=\mathbb{1}+\xi-\mathbb{E}[\xi]:\ \xi\in\mathcal{L}_{ \infty}(\Omega,\mathcal{F},\mathbb{P}),\ \|\xi\|_{\infty}\leq\varkappa,\ \xi\geq 0\big{\}}\]
is a convex and closed set. Thus, from (2) and (3), the min-max form (1) with the ambiguity set
\[\mathcal{M}(\mathbb{P})=\Big{\{}\mathbb{Q}:\frac{d\mathbb{Q}}{d\mathbb{P}}\in \mathcal{A}\Big{\}} \tag{4}\]
is equivalent to
\[\min_{x\in X}F(x)=\min_{x\in X}\rho[\ell(x,D)]=\min_{x\in X}\ f(x,h(x)) \tag{5}\]
with the functions
\[f(x,u) =\mathbb{E}\Big{[}u+\varkappa\max\big{(}0,\ell(x,D)-u\big{)}\Big{]}, \tag{6}\] \[h(x) =\mathbb{E}[\ell(x,D)].\]
In this way, using the mean-semideviation risk measure, one can convert the min-max problem into a two-level stochastic optimization problem which achieves an _implicit_ robust formulation, with the level of robustness controlled by the parameter \(\varkappa\): for \(\varkappa=0\) the uncertainty set (4) contains only the original probability measure \(\mathbb{P}\), while for \(\varkappa>0\) the measures \(\mathbb{Q}\in\mathcal{M}(\mathbb{P})\) are distortions of \(\mathbb{P}\). The range of the relative distortions allowed, \(\frac{d\mathbb{Q}}{d\mathbb{P}}-1\), is controlled by \(\varkappa\). The challenge is that this formulation is non-smooth, and typically non-convex when the loss is non-convex. Furthermore,
the use of the expected value inside the nonlinear function \(f(x,\cdot)\) results in a bias of stochastic subgradient estimates.
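For intuition only, the sketch below evaluates the empirical counterpart of (2) on sampled losses; the lognormal samples are a stand-in, not data from the paper. The two-level structure is visible directly: the inner mean \(h(x)\) appears inside the outer expectation \(f\), which is exactly why naive plug-in subgradient estimates are biased.

```python
import numpy as np

def mean_semideviation(losses, kappa):
    """Empirical version of the risk measure in (2):
    rho[Z] = E[Z] + kappa * E[max(0, Z - E[Z])], with 0 <= kappa <= 1."""
    losses = np.asarray(losses, dtype=float)
    m = losses.mean()                                   # inner expectation h(x)
    return m + kappa * np.maximum(0.0, losses - m).mean()

# Toy usage: kappa controls how strongly upper-tail losses are penalized.
rng = np.random.default_rng(1)
z = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)     # stand-in losses
print(mean_semideviation(z, kappa=0.0), mean_semideviation(z, kappa=0.5))
```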
**Contributions.** In this paper, we develop an optimization algorithm called _stochastic compositional subgradient_ (SCS) for solving (1) based on the reformulation (5), and establish finite-time convergence analysis and sample complexity results. Our algorithm is a variation and a simplified version of the single-time scale method proposed in Gurbuzbalaban et al. (2022), where the sub-gradients are no longer averaged and subgradient tracking is no longer needed. In sections 2 and 3, we assume a continuously differentiable loss function, and build on a projected subgradient descent framework, use linearized tracking, and employ the gradient of the Moreau envelope as our convergence metric. We prove that the SCS method has a sample complexity of order \(\mathcal{O}(\epsilon^{-3})\) in this case. In section 4 we assume a \(\delta\)-weakly convex loss function (see definition (7)) and use the SPIDER estimator from Fang et al. (2018) to estimate the expectation of the losses. It is worth stressing that both \(f(\cdot,\cdot)\) and \(h(\cdot)\) in (5) are nonsmooth and \(h(\cdot)\) is nonconvex in this case. We prove that the SCS method has the same sample complexity of order \(\mathcal{O}(\epsilon^{-3})\), with a larger constant, though.
**Related Work.** The convexity and non-smoothness structure of the (two-level) stochastic composite optimization problem (5) is determined by the choice of the loss. When the loss is convex and possibly non-smooth, the composite objective \(F(x)\) will also be convex and non-smooth in \(x\). In this case, multi-level convex stochastic optimization algorithms such as Wang et al. (2017a) will be applicable, implying a sample complexity of \(\mathcal{O}(\epsilon^{-4})\) when the loss is convex and \(\mathcal{O}(\epsilon^{-1.5})\) when the loss is strongly convex. When the loss is non-convex, irrespective of whether it is smooth or not, the composite objective \(F(x)\) will be non-convex and non-smooth in \(x\). The convergence rate for this general setting is not available, but if we only consider a smooth problem, there exist some complexity results. In Wang et al. (2017a), the authors analyze stochastic gradient algorithms with different assumptions on the objective, and prove sample complexities \(\mathcal{O}(\epsilon^{-3.5})\) and \(\mathcal{O}(\epsilon^{-1.25})\) for smooth convex problems and smooth strongly convex problems, respectively. These rates can be further improved with proper regularization Wang et al. (2017b). In Ghadimi et al. (2020), the authors propose a single time-scale Nested Averaged Stochastic Approximation (NASA) method for smooth nonconvex composition optimization problems and prove a sample complexity of \(\mathcal{O}(\epsilon^{-2})\). For higher-level (more than two) problems, Ruszczynski (2021) establishes asymptotic convergence of a stochastic subgradient method by analyzing a system of differential inclusions, along with a sample complexity of \(\mathcal{O}(\epsilon^{-2})\) when smoothness is assumed. Another level-independent rate of \(\mathcal{O}(\epsilon^{-2})\) is obtained for smooth multi-level problems in Balasubramanian et al. (2022) without the boundedness assumption.
There are also approaches dealing with (1) not based on composite stochastic optimization. In particular, Ho-Nguyen and Wright (2022) considered linear classification problems subject to Wasserstein ambiguity sets for the "zero-one loss", which is non-convex and non-smooth. The authors showed that this problem is equivalent to minimizing a regularized ramp loss objective and proposed a class of smooth approximations to the ramp loss, where smooth problems can be solved (approximately) with standard continuous optimization algorithms. There are also other approaches which can provide complexity results when the loss is either smooth or convex.
**Smooth losses.** The authors in Sinha et al. (2018) formulate \(\mathcal{M}(\mathbb{P})\) as a \(\rho\)-neighborhood of the probability law \(\mathbb{P}\) under the Wasserstein metric. They show that for a smooth loss and small enough robustness level \(\rho\), the stochastic gradient descent (SGD) method can achieve the same rate of
convergence as that in the standard smooth non-convex optimization. In Jin et al. (2021), the authors consider smooth and Lipschitz non-convex losses and use a soft penalty term based on \(f\)-divergence to model the distribution shifts. They analyzed the mini-batch normalized SGD with momentum and proved an \(\mathcal{O}(\epsilon^{-4})\) sample complexity (for the norm of the gradient of the loss to be at most \(\epsilon\)) which also matches the rates that can be obtained in standard smooth non-convex optimization. For a smoothed version of the CVaR, the authors obtain similar convergence guarantees for smooth non-convex losses that are Lipschitz. In Soma and Yoshida (2020), the authors proposed a conditional value-at-risk (CVaR) formulation. They show that for convex, Lipschitz and smooth losses their SGD-based algorithm has a complexity of \(\mathcal{O}(1/\epsilon^{2})\), whereas for non-convex, smooth and Lipschitz losses, the authors obtain a complexity of \(\mathcal{O}(1/\epsilon^{6})\). In Curi et al. (2020), the authors proposed an adaptive sampling algorithm for stochastically optimizing the CVaR of the empirical distribution of the loss, and reformulated this optimization problem as a two-player game based on the dual representation of CVaR. For convex problems, they obtain a regret bound of \(\mathcal{O}(T)\) over \(T\) iterations, and for non-convex problems, they obtain a regret bound of \(\mathcal{O}(T)\) assuming access to an inexact empirical risk minimization (ERM) oracle. However, implementing this oracle for non-convex problems requires solving a weighted empirical loss minimization in every iteration and this is NP-hard in general (Curi et al., 2020, Sec. 4.2).
When the loss is smooth and Lipschitz continuous on the primal space \(X\), sample-based approximations of (1) where the expectation is approximated by a finite average taken over the data points results in smooth non-convex/merely concave min-max optimization problems where the dual space \(\mathcal{M}(P)\) is finite-dimensional when determined by \(f\)-divergences Zhang et al. (2022). In this case, primal-dual algorithms are applicable and the SAPD+ algorithm from Zhang et al. (2022) provides an \(\mathcal{O}(1/\epsilon^{6})\) complexity. Our formulation does not require sample-based approximations and can handle the general case when \(\mathcal{M}(\mathbb{P})\) may be infinite-dimensional and the loss may be non-smooth.
**Convex losses.** If formulated as finite-dimensional convex programs Shafieezadeh Abadeh et al. (2015); Esfahani and Kuhn (2018); Mehrotra and Zhang (2014); Kuhn et al. (2019), the distributionally robust problem (1) can be solved in polynomial time. When \(\mathcal{M}(\mathbb{P})\) is defined via the \(f\)-divergences and the loss is convex and smooth, a sample-based approximation of (1) can be solved with a bandit mirror descent algorithm Namkoong and Duchi (2016) with the number of iterations comparable to that of the SGD. For convex losses in the same formulation, conic interior point solvers or gradient descent with backtracking Armijo line-searches Duchi and Namkoong (2021) can also be used but this can be computationally expensive for some applications when the dimension or the number of samples is large. When the uncertainty set \(\mathcal{M}(\mathbb{P})\) is based on the empirical distribution of the data and is defined via the \(\chi^{2}\)-divergence or CVaR, and the loss is convex and Lipschitz, Levy et al. (2020) proposed algorithms that achieve an optimal \(\mathcal{O}(\epsilon^{-2})\) rate which is independent of the training dataset size and the number of parameters.
When \(\ell(\cdot,D)\) is non-convex and non-differentiable, distributionally robust stochastic optimization problems lead to non-convex non-smooth min-max optimization problems. To our knowledge, in this general case, none of the existing algorithms admits provable convergence guarantees to a stationary point of (1) or iteration complexity bounds. Our results apply to this setting and provide iteration complexity estimates for weakly convex losses that may be non-smooth.
**Notation and Preliminaries.** A function \(q:\mathbb{R}^{n}\to\mathbb{R}\) is called \(\delta\)-weakly convex, if the regularized function \(x\mapsto q(x)+\frac{\delta}{2}\|x\|^{2}\) is convex Nurminskii (1973). This is a broad class of functions that can be non-smooth and non-convex, including all convex functions and smooth functions with
a globally Lipschitz continuous gradient. A \(\delta\)-weakly convex function \(q(x)\) also has the following property: at every point \(x\in\mathbb{R}^{n}\) there exists a vector \(g\in\mathbb{R}^{n}\) such that
\[q(y)\geq q(x)+\langle g,y-x\rangle-\frac{\delta}{2}\|y-x\|^{2},\qquad\forall y \in\mathbb{R}^{n}, \tag{7}\]
(see e.g. Davis et al. (2018b)). The set \(\partial q(x)\) of vectors \(g\) satisfying the above relation is the _subdifferential_ of \(q(\cdot)\) at \(x\); it is nonempty, convex, and closed. In fact, it coincides with the Clarke subdifferential for this class of functions Rockafellar and Wets (2009). We say that a continuously differentiable function \(q:\mathbb{R}^{n}\to\mathbb{R}\) is \(L\)-smooth on a convex set \(X\), if \(\nabla q(x)\) is Lipschitz continuous on \(X\), i.e. \(\|\nabla q(x)-\nabla q(y)\|\leq L\|x-y\|\) for all \(x,y\in X\).
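As a simple one-dimensional illustration, the function \(q(x)=\max(0,1-x^{2})\) is neither convex nor smooth (it has kinks at \(x=\pm 1\)), yet it is \(2\)-weakly convex, since
\[q(x)+\frac{2}{2}x^{2}=\max(x^{2},1)\]
is a pointwise maximum of convex functions and hence convex.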
## 2 A Stochastic Compositional Subgradient (SCS) Method
We first consider the case when the loss function is continuously differentiable, the non-smooth case will be addressed later in Section 4.
**Assumption 1**: _The set \(X\subset\mathbb{R}^{n}\) is convex and compact._
**Assumption 2**: _For all \(x\) in a neighborhood of the set \(X\):_
* _The function_ \(\ell(x,\cdot)\) _is integrable;_
* _The function_ \(\ell(\cdot,D)\) _is continuously differentiable, and integrable constants_ \(\tilde{\Delta}_{h}(D)\) _and_ \(\tilde{\delta}(D)\) _exist such that_ \[\|\nabla\ell(x,D)\|\leq\tilde{\Delta}_{h}(D),\quad\forall\,D\in\mathbb{R}^{d},\] _and_ \[\|\nabla\ell(x,D)-\nabla\ell(y,D)\|\leq\tilde{\delta}(D)\|x-y\|,\quad\forall \,x,y\in X,\quad\forall\,D\in\mathbb{R}^{d}.\]
**Remark 1**: _Since the loss function \(\ell(x,D)\) is \(\tilde{\delta}(D)\)-smooth and the feasible set \(X\) is compact, \(\ell(x,D)\) is also \(\tilde{\delta}(D)\)-weakly convex (see the definition of weak convexity in the last paragraph of Section 1). Assumptions 1 and 2 guarantee that the expected value function \(h(\cdot)\) is well defined and is \(\delta\)-smooth and \(\delta\)-weakly convex on \(X\), with \(\delta=\mathbb{E}\left[\tilde{\delta}(D)\right]\)._
Assumptions 1 and 2 are satisfied for many problems in statistical learning including non-convex constrained formulations of various classification and regression tasks such as deep learning, least squares and logistic regression Gurbuzbalaban et al. (2022); Negiar et al. (2020).
If a subgradient of \(f(\cdot)\) and the gradient of \(h(\cdot)\) were known, a subgradient of the composite function \(F(x)=f(x,h(x))\) could be calculated by an application of the chain rule Rockafellar and Wets (2009), i.e. if \(\begin{bmatrix}g_{fx}\\ g_{fu}\end{bmatrix}\in\partial f(x,u)\) and \(g_{h}=\nabla h(x)\), then we would have
\[g_{fx}+g_{fu}g_{h}\in\partial F(x). \tag{8}\]
Unfortunately, in our setting, we neither have access to the subgradients in (8) nor to the value of \(h(x)\); we can only obtain their stochastic estimates. To address this, our proposed method, stochastic compositional subgradient (SCS), generates approximate solutions \(\left\{x^{k}\right\}_{k=1,2,\ldots}\) in \(\mathbb{R}^{n}\) based on a projected stochastic subgradient update rule that estimates the subgradient of the composite function
\(F(x)\) (by relying on the stochastic subgradients of \(f\) and \(h\)) and projects the iterates back onto the constraint set \(X\), which ensures that the iterates stay bounded. Our method, described in Algorithm 1, also generates random inner function estimates \(\left\{u^{k}\right\}_{k=1,2,\ldots}\) in \(\mathbb{R}\), where we assume access to unbiased stochastic estimates of the subgradients of \(f\) and \(h\) and of the values of \(h\), all with bounded variance. More precisely, denoting by \(\mathcal{F}_{k}\) the \(\sigma\)-algebra generated by \(\{x^{0},u^{0},x^{1},u^{1},\ldots,x^{k},u^{k}\}\), where \(x^{0}\in X\) and \(u^{0}\in\mathbb{R}\) are the initializations, we make the following assumption.
**Assumption 3**: _For all \(k\), we have access to random vectors \(\tilde{g}_{f}^{k}\), \(\tilde{g}_{h}^{k}\), \(\tilde{J}^{k}\), and random variables \(\tilde{h}^{k}\) satisfying the conditions:_
* \(\tilde{g}_{f}^{k}=g_{f}^{k}+e_{f}^{k},\quad g_{f}^{k}\in\partial f(x^{k},u^{k} ),\quad\mathbb{E}\left[e_{f}^{k}\middle|\mathcal{F}_{k}\right]=0,\quad\mathbb{ E}\left[\left|e_{f}^{k}\middle|^{2}\middle|\mathcal{F}_{k}\right]\leq\sigma^{2}\)_;_
* \(\tilde{g}_{h}^{k}=g_{h}^{k}+e_{h}^{k},\quad g_{h}^{k}=\nabla h(x^{k}),\quad \mathbb{E}\left[e_{h}^{k}\middle|\mathcal{F}_{k}\right]=0,\quad\mathbb{E} \left[\left|e_{h}^{k}\middle|^{2}\middle|\mathcal{F}_{k}\right]\leq\sigma^{2}\)_;_
* \(\tilde{h}^{k}=h(x^{k})+e_{\ell}^{k},\quad\mathbb{E}\left[e_{\ell}^{k}\middle| \mathcal{F}_{k}\right]=0,\quad\mathbb{E}\left[\left|e_{\ell}^{k}\middle|^{2} \middle|\mathcal{F}_{k}\right]\leq\sigma^{2}\)_;_
* \(\tilde{J}^{k}=g_{h}^{k}+E^{k},\quad\mathbb{E}\left\{E^{k}\middle|\mathcal{F}_{ k}\right\}=0,\quad\mathbb{E}\left\{\left|E^{k}\middle|^{2}\middle| \mathcal{F}_{k}\right\}\leq\sigma^{2}\)_;_
_where \(\sigma\) is a constant, and the errors \(e_{f}^{k}\), \(e_{h}^{k}\), \(e_{\ell}^{k}\), and \(E^{k}\) are conditionally independent, given \(\mathcal{F}_{k}\)._
**Remark 2**: _Under Assumptions 1 and 2, all the values and subgradients of \(f\) and \(h\): \(g_{f}^{k}\), \(g_{h}^{k}\) and \(h(x^{k})\), are bounded. We denote for all \(x\in X\), \(u\in\mathbb{R}\),_
\[\left\|\partial_{x}f(x,u)\right\|\leq\Delta_{fx},\qquad\left\|\nabla h(x) \right\|\leq\Delta_{h},\]
_with \(\Delta_{h}=\mathbb{E}\left[\tilde{\Delta}_{h}(D)\right]\). Under Assumption 3, the following stochastic estimate of an element of \(\partial F(x^{k})\) resulting from replacing the true (sub)gradients in (8) with their random estimates has a bounded expected square norm:_
\[\mathbb{E}\left[\left\|\tilde{g}_{fx}^{k}+\tilde{g}_{fu}^{k}\tilde{g}_{h}^{k }\right\|^{2}\middle|\,\mathcal{F}_{k}\right]\leq M^{2}\quad\text{with}\quad M ^{2}=(\Delta_{fx}+\Delta_{h})^{2}+2\sigma^{2}+2\sigma\Delta_{h}, \tag{9}\]
_where we used conditional independence of the random errors, Cauchy-Schwarz inequality and the fact that \(\tilde{g}_{fu}^{k}\in[0,1]\) implied by (12)._
A common setting in statistical learning and stochastic optimization is to estimate the subgradients based on randomly sampled subsets of data points with replacement Bottou (2010). In this setting, when the domain \(X\) is unbounded, it is possible that the variance of such a stochastic subgradient estimator is unbounded Gurbuzbalaban et al. (2021); Jain et al. (2018); Gurbuzbalaban et al. (2022). However, in our setting the feasible set \(X\) is compact, and therefore Assumption 3 is naturally satisfied (see Section 5.3).
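To make the update concrete, the following NumPy sketch implements one SCS iteration as specified in Algorithm 1; the callables `loss` and `loss_grad`, the box-shaped feasible set, and the parameter values are illustrative assumptions rather than part of the method's statement.

```python
import numpy as np

def project_box(x, radius=10.0):
    # Euclidean projection onto the box X = {x : ||x||_inf <= radius} (illustrative choice of X)
    return np.clip(x, -radius, radius)

def scs_step(x, u, D1, D2, D3, loss, loss_grad, kappa=0.5, tau=0.01):
    # One iteration of Algorithm 1 with three independent samples D1, D2, D3
    G = loss_grad(x, D1)                                  # gradient of ell(., D1) at x
    if loss(x, D1) >= u:                                  # active branch of max(0, ell - u)
        g_fx, g_fu = kappa * G, 1.0 - kappa
    else:
        g_fx, g_fu = np.zeros_like(x), 1.0
    g_h = loss_grad(x, D2)                                # independent stochastic gradient of h
    J = loss_grad(x, D3)                                  # gradient used for linearized tracking
    h_hat = (loss(x, D1) + loss(x, D2) + loss(x, D3)) / 3.0
    x_new = project_box(x - tau * (g_fx + g_fu * g_h))    # projected subgradient step
    u_new = u + tau * (h_hat - u) + J @ (x_new - x)       # tracking update (16)
    return x_new, u_new
```

In the experiments of Section 5, `loss` and `loss_grad` would correspond to the cross-entropy or regularized regression losses and their (sub)gradients.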
## 3 Convergence rate for continuously differentiable losses
Because of the formulation of the mean-semideviation risk measure involving a non-smooth \(\max(\cdot)\) term, the outer function (6) may be nonsmooth even when the loss function \(\ell(x,D)\) is continuously differentiable. Therefore, a challenge is that the problem (5) is non-smooth and non-convex. However, we will argue that it is weakly convex. For weakly convex objectives, a popular metric for determining first-order stationarity is the norm of the gradient of the Moreau envelope Moreau (1965) which we will introduce next. We first consider an alternative formulation of the main problem (5):
\[\min_{x\in\mathbb{R}^{n}}\ \varphi(x):=F(x)+r(x),\]
```
1: initial point \(x^{0}\in X\), \(u^{0}\in\mathbb{R}\), a constant stepsize \(\tau\in\left(0,1\right]\).
2:for\(k=0,1,...,N-1\)do
3: Sample \(D_{1}^{k+1}\), \(D_{2}^{k+1}\) and \(D_{3}^{k+1}\) conditionally independently on \(\mathcal{F}_{k}\) and obtain the estimates \[G^{k} \in\partial_{x}\ell(x^{k},D_{1}^{k+1}),\] (10) \[\tilde{g}_{fx}^{k} =\begin{cases}0&\text{if }\ell(x^{k},D_{1}^{k+1})<u^{k},\\ \varkappa G^{k}&\text{if }\ell(x^{k},D_{1}^{k+1})\geq u^{k},\end{cases}\] (11) \[\tilde{g}_{fu}^{k} =\begin{cases}1&\text{if }\ell(x^{k},D_{1}^{k+1})<u^{k},\\ 1-\varkappa&\text{if }\ell(x^{k},D_{1}^{k+1})\geq u^{k},\end{cases}\] (12) \[\tilde{g}_{h}^{k} \in\partial_{x}\ell(x^{k},D_{2}^{k+1}),\] (13) \[\tilde{J}^{k} \in\partial_{x}\ell(x^{k},D_{3}^{k+1}),\] (14) \[\tilde{h}^{k} =\frac{1}{3}(\ell(x^{k},D_{1}^{k+1})+\ell(x^{k},D_{2}^{k+1})+ \ell(x^{k},D_{3}^{k+1})).\] (15)
4: Update the solution estimate \[x^{k+1}=\Pi_{X}\left(x^{k}-\tau\big{(}\tilde{g}_{fx}^{k}+\tilde{g}_{fu}^{k} \tilde{g}_{h}^{k}\big{)}^{T}\right),\]
5: Update the inner function estimate \[u^{k+1}=u^{k}+\tau\big{(}\tilde{h}^{k}-u^{k}\big{)}+\tilde{J}^{k}\big{(}x^{k+ 1}-x^{k}\big{)}.\] (16)
6:endfor
7:\(x^{R}\) with \(R\) uniformly sampled from \(\{0,1,\ldots,N-1\}\).
```
**Algorithm 1** SCS method
where \(F(x)=f(x,h(x))\) and \(r(x)\) is the indicator function of the convex and compact feasible set \(X\subset\mathbb{R}^{n}\), i.e. \(r(x)=0\) if \(x\in X\) and \(r(x)=+\infty\) otherwise. The Moreau envelope and the proximal map are defined as
\[\varphi_{\lambda}(x) :=\min_{y}\{\varphi(y)+\frac{1}{2\lambda}\|y-x\|^{2}\},\] \[\operatorname{prox}_{\lambda,\varphi}(x) :=\operatorname*{argmin}_{y}\{\varphi(y)+\frac{1}{2\lambda}\|y- x\|^{2}\},\]
respectively. Since the inner function \(h(x)\) is \(\delta\)-weakly convex and the outer function \(f(x,u)\) is weakly convex with respect to \(x\) and convex and nondecreasing with respect to \(u\) (see Remark 1), the composite function \(F(x)\) is also \(\rho\)-weakly convex with \(\rho=(1+2\varkappa)\delta\). In this case \(\varphi_{\lambda}(x)\) is continuously differentiable for \(\lambda\in(0,\rho^{-1})\) Moreau (1965) with the gradient
\[\nabla\varphi_{\lambda}(x)=\lambda^{-1}(x-\operatorname{prox}_{\lambda\varphi }(x)). \tag{17}\]
It can also be shown that the quantity \(\|\nabla\varphi_{\lambda}(x)\|\) is a measure of stationarity, i.e. when \(\|\nabla\varphi_{\lambda}(x)\|\) is small, \(x\) will be close to some _nearly stationary point_\(\hat{x}\), which in turn, has the subdifferential close to 0 Davis and Drusvyatskiy (2019), i.e. \(\hat{x}\) satisfies the following relations:
\[\left\{\begin{aligned} \|\hat{x}-x\|&=\lambda\| \nabla\varphi_{\lambda}(x)\|,\\ \varphi(\hat{x})&\leq\varphi(x),\\ \operatorname{dist}(0;\partial\varphi(\hat{x}))& \leq\|\nabla\varphi_{\lambda}(x)\|.\end{aligned}\right.\]
Here, \(\operatorname{dist}(0;\partial\varphi(\hat{x}))\) denotes the distance of the origin to the set \(\partial\varphi(\hat{x})\). Therefore, the convergence guarantees for the gradient of the Moreau envelope in this paper can be converted to guarantees in terms of the subdifferential.
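As a small numerical illustration of the stationarity measure (17), the gradient of the Moreau envelope can be approximated by brute force for the toy \(2\)-weakly convex function \(\varphi(y)=\max(0,1-y^{2})\) on \(X=[-10,10]\); the grid search and the choice \(\lambda=1/4<\rho^{-1}\) are assumptions of this sketch and not part of the method.

```python
import numpy as np

def phi(y):
    # toy 2-weakly convex, non-smooth, non-convex function
    return np.maximum(0.0, 1.0 - y**2)

def moreau_grad(x, lam=0.25):
    grid = np.linspace(-10.0, 10.0, 200001)               # feasible set X = [-10, 10]
    prox = grid[np.argmin(phi(grid) + (grid - x)**2 / (2.0 * lam))]
    return (x - prox) / lam                                # formula (17)

print(moreau_grad(2.0))   # ~0: x = 2.0 is already stationary (phi is flat there)
print(moreau_grad(0.3))   # ~-1.2: nonzero, so x = 0.3 is far from stationarity
```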
Now we can proceed to prove the convergence rate of the SCS method for a continuously differentiable loss function. First, we quantify how well the inner function estimates \(\{u^{k}\}\) track the sequence \(\{h(x^{k})\}\).
**Lemma 3**: _If Assumptions 1, 2, and 3 hold, the sequence \(\{u^{k}\}\) generated by Algorithm 1 satisfies:_
\[\mathbb{E}\big{[}|u^{k}-h(x^{k})|\big{]}\leq\sigma(1+M)\tau^{1/2}+\delta M\tau+(1-\tau)^{k}|u^{0}-h(x^{0})|,\qquad k=0,1,...,N-1. \tag{18}\]
**Proof** Under Assumptions 1 and 2, the inner function \(h(x)\) is \(\delta\)-smooth. Therefore,
\[h(x^{k+1})=h(x^{k})+\big{[}g_{h}^{k}\big{]}^{T}(x^{k+1}-x^{k})+A_{k},\quad\|A _{k}\|\leq\delta\|x^{k+1}-x^{k}\|^{2}.\]
From the update rule (16) for \(\{u^{k}\}\), we have
\[u^{k+1}=u^{k}+\tau\big{(}h(x^{k})-u^{k})+\tau e_{\ell}^{k}+\big{[}g_{h}^{k} \big{]}^{T}\big{(}x^{k+1}-x^{k}\big{)}+E^{k}(x^{k+1}-x^{k}).\]
Thus,
\[u^{k+1}-h(x^{k+1})=(1-\tau)\big{[}u^{k}-h(x^{k})\big{]}+\tau e_{\ell}^{k}+E^{ k}(x^{k+1}-x^{k})-A_{k}.\]
By using this equality recursively, we obtain
\[u^{k+1}-h(x^{k+1})=\\ \sum_{j=0}^{k}(1-\tau)^{k-j}\big{(}\tau e_{\ell}^{j}+E^{j}(x^{j+1}- x^{j})-A_{j}\big{)}+(1-\tau)^{k+1}(u^{0}-h(x^{0})). \tag{19}\]
The norms of the martingale terms can be easily bounded:
\[\mathbb{E}\bigg{[}\Big{(}\sum_{j=0}^{k}(1-\tau)^{k-j}\tau e_{\ell}^{j}\Big{)}^{ 2}\bigg{]}\leq\sum_{j=0}^{k}(1-\tau)^{2(k-j)}\sigma^{2}\tau^{2}\leq\frac{\sigma ^{2}\tau^{2}}{1-(1-\tau)^{2}}\leq\sigma^{2}\tau,\]
and thus
\[\mathbb{E}\bigg{[}\Big{|}\sum_{j=0}^{k}(1-\tau)^{k-j}\tau e_{\ell}^{j}\Big{|} \bigg{]}\leq\sigma\tau^{1/2}. \tag{20}\]
Observe that by the conditional independence of \(E^{k}\) and \(x^{k+1}-x^{k}\), we have \(\mathbb{E}\big{[}E^{k}(x^{k+1}-x^{k})\big{|}\mathcal{F}_{k}\big{]}=0\). Furthermore, by the Cauchy-Schwarz inequality and the non-expansiveness of the projection operator
\[\mathbb{E}\Big{[}\big{|}E^{k}(x^{k+1}-x^{k})\big{|}^{2}\,\Big{|}\,\mathcal{F} _{k}\Big{]}\leq\sigma^{2}M^{2}\tau^{2},\]
where \(M\) is the constant from (9). Thus, similar to (20), we obtain
\[\mathbb{E}\bigg{[}\Big{|}\sum_{j=0}^{k}(1-\tau)^{k-j}E^{j}(x^{j+1}-x^{j})\Big{|}\bigg{]}\leq\sigma M\tau^{1/2}. \tag{21}\]
The third sum can be bounded directly:
\[\mathbb{E}\bigg{[}\Big{|}\sum_{j=0}^{k}(1-\tau)^{k-j}A_{j}\Big{|}\bigg{]}\leq \delta M\tau^{2}\mathbb{E}\bigg{[}\Big{|}\sum_{j=0}^{k}(1-\tau)^{k-j}\Big{|} \bigg{]}\leq\delta M\tau. \tag{22}\]
Plugging the estimates (20)-(22) into (19) we conclude that
\[\mathbb{E}\big{|}u^{k+1}-h(x^{k+1})\big{|}\leq\sigma(1+M)\tau^{1/2}+\delta M \tau+(1-\tau)^{k+1}|u^{0}-h(x^{0})|,\]
as required.
We now consider the Moreau envelope \(\varphi_{1/\tilde{\rho}}(x)\) with \(\tilde{\rho}=\rho+(1+\varkappa)\delta\), and obtain a bound for the expected squared norm of its gradient at an iterate \(x^{R}\) that is randomly chosen among the first \(N\) iterates. Complexity results for unconstrained stochastic weakly convex minimization exist Davis et al. (2018a) but such results are not directly applicable to our setting, because we have a two-level non-smooth weakly convex problem in \(x\). Our proof leverages the monotonicity of \(f(x,u)\) with respect to \(u\) to handle its non-smoothness while exploiting the weak convexity of the loss with respect to \(x\).
**Theorem 4**: _Suppose Assumptions 1 to 3 hold. For any given iteration budget \(N\), consider the trajectory \(\{x^{k}\}_{k=0}^{N-1}\) of Algorithm 1. We have_
\[\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{R})\|^{2}]\leq 2\frac{\varphi_{1/\tilde{\rho}}(x^{0})-\min_{x\in X}F(x)+2\tilde{\rho}|u^{0}-h(x^{0})|+NC_{3}\tau^{3/2}}{N\tau},\]
_where \(\tilde{\rho}=\rho+(1+\varkappa)\delta\), \(C_{3}=2\tilde{\rho}\sigma(1+M)\), the expectation is taken with respect to the trajectory generated by Algorithm 1 and the random variable \(R\) that is uniformly sampled from \(\{0,1,...,N-1\}\) independently of the trajectory._
**Proof** Defining \(\hat{x}^{k}:=\operatorname{prox}_{\varphi/\tilde{\rho}}(x^{k})\), we have
\[f(\hat{x}^{k},h(\hat{x}^{k}))-f(x^{k},u^{k}) =h(\hat{x}^{k})-u^{k}\] \[+\varkappa\mathbb{E}\big{[}\max(0,\ell(\hat{x}^{k},D_{1}^{k+1})-h (\hat{x}^{k}))-\max(0,\ell(x^{k},D_{1}^{k+1})-u^{k})\big{|}\mathcal{F}_{k}\big{]}. \tag{23}\]
On the other hand, according to Algorithm 1, if we denote
\[I^{k}=\begin{cases}1&\text{if}\quad\ell(x^{k},D_{1}^{k+1})\geq u^{k},\\ 0&\text{if}\quad\ell(x^{k},D_{1}^{k+1})<u^{k},\end{cases}\]
we can write
\[\tilde{g}^{k}_{fx}=\varkappa I^{k}G^{k}\quad\text{and}\quad\tilde{g}^{k}_{fu} =1-\varkappa I^{k},\]
where we used the definitions (10), (11), (12). We can also estimate the difference of the "\(\max\)" terms in (23) as
\[\max (0,\ell(\hat{x}^{k},D_{1}^{k+1})-h(\hat{x}^{k}))-\max(0,\ell(x^{k},D_{1}^{k+1})-u^{k})\] \[\geq\begin{cases}\ell(\hat{x}^{k},D_{1}^{k+1})-h(\hat{x}^{k})-( \ell(x^{k},D_{1}^{k+1})-u^{k}),&\text{if}\ \ell(x^{k},D_{1}^{k+1})\geq u^{k},\\ 0,&\text{if}\ \ell(x^{k},D_{1}^{k+1})<u^{k},\end{cases}\] \[=I^{k}\big{(}\ell(\hat{x}^{k},D_{1}^{k+1})-\ell(x^{k},D_{1}^{k+1} )-(h(\hat{x}^{k})-u^{k})\big{)}\] \[\geq I^{k}\big{(}\langle G^{k},\hat{x}^{k}-x^{k}\rangle-\frac{ \tilde{\delta}(D_{1}^{k+1})}{2}\|\hat{x}^{k}-x^{k}\|^{2}-(h(\hat{x}^{k})-u^{k} )\big{)}, \tag{24}\]
where we used the definition (10) of \(G^{k}\) and the inequality (7) with \(q(x)=\ell(x,D_{1}^{k+1})\) due to the weak convexity of \(\ell\). Denoting
\[A^{k}=\varkappa\mathbb{E}\big{[}I^{k}G^{k}\big{|}\mathcal{F}_{k}\big{]},\]
and using (23) and (24), we obtain the following lower bound
\[f(\hat{x}^{k},h(\hat{x}^{k}))-f(x^{k},u^{k}) \geq\langle\hat{x}^{k}-x^{k},A^{k}\rangle-\frac{\varkappa\delta} {2}\|\hat{x}^{k}-x^{k}\|^{2}\] \[\quad+(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k} \big{]})(h(\hat{x}^{k})-u^{k}). \tag{25}\]
Denoting \(B^{k}=(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k}\big{]})g^{k}_{h}\), we can estimate the last term as
\[(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k}\big{]})(h (\hat{x}^{k})-u^{k})\] \[\quad\geq(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k} \big{]})(h(x^{k})-u^{k}+\langle\hat{x}^{k}-x^{k},g^{k}_{h}\rangle-\frac{\delta }{2}\|\hat{x}^{k}-x^{k}\|^{2}) \tag{26}\] \[\quad\geq\langle\hat{x}^{k}-x^{k},B^{k}\rangle-\frac{\delta}{2} \|\hat{x}^{k}-x^{k}\|^{2}+(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_ {k}\big{]})(h(x^{k})-u^{k}).\]
Combining (25) and (26), we obtain
\[f(\hat{x}^{k},h(\hat{x}^{k}))-f(x^{k},u^{k})\geq\langle\hat{x}^{k}-x^{k},A^{k}+B^{k}\rangle-\frac{(1+\varkappa)\delta}{2}\|\hat{x}^{k}-x^{k}\|^{2}+(1-\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k}\big{]})(h(x^{k})-u^{k}). \tag{27}\]
Now we consider the change in the Moreau envelope:
\[\mathbb{E}\big{[}\varphi_{1/\tilde{\rho}}(x^{k+1})\,\big{|}\, \mathcal{F}_{k}\big{]}\leq\mathbb{E}\big{[}F(\hat{x}^{k})+\frac{\tilde{\rho}}{ 2}\|\hat{x}^{k}-x^{k+1}\|^{2}\,\big{|}\,\mathcal{F}_{k}\big{]}\] \[\qquad\leq F(\hat{x}^{k})+\frac{\tilde{\rho}}{2}\mathbb{E}\left[ \|x^{k}-\hat{x}^{k}-\tau(\hat{g}^{k}_{fx}+\hat{g}^{k}_{fu}\hat{g}^{k}_{h})^{T }\|^{2}\,\big{|}\,\mathcal{F}_{k}\right]\] \[\qquad\leq F(\hat{x}^{k})+\frac{\tilde{\rho}}{2}\|x^{k}-\hat{x}^{ k}\|^{2}+\tilde{\rho}\,\tau\mathbb{E}\big{[}\langle\hat{x}^{k}-x^{k},\hat{g}^{k}_{ fx}+\hat{g}^{k}_{fu}\hat{g}^{k}_{h}\rangle\,\big{|}\,\mathcal{F}_{k}\big{]}+\frac{ \tilde{\rho}M^{2}}{2}\tau^{2}.\]
Noticing that \(\mathbb{E}[\tilde{g}^{k}_{fx}+\tilde{g}^{k}_{fu}\tilde{g}^{k}_{h}|\mathcal{F} _{k}]=A^{k}+B^{k}\) and plugging in the lower bound (27), we obtain:
\[\mathbb{E}[\varphi_{1/\tilde{\rho}}(x^{k+1})|\mathcal{F}_{k}] \leq\varphi_{1/\tilde{\rho}}(x^{k})+\frac{\tilde{\rho}M^{2}}{2} \tau^{2}+\tilde{\rho}\,\tau(f(\hat{x}^{k},h(\hat{x}^{k}))-f(x^{k},u^{k})\] \[\qquad+\frac{(1+\varkappa)\delta}{2}\|\hat{x}^{k}-x^{k}\|^{2}+(1 -\varkappa\mathbb{E}\big{[}I^{k}\big{|}\mathcal{F}_{k}\big{]})(u^{k}-h(x^{k})))\] \[\leq\varphi_{1/\tilde{\rho}}(x^{k})+\frac{\tilde{\rho}M^{2}}{2} \tau^{2}+\tilde{\rho}\,\tau(F(\hat{x}^{k})-F(x^{k})\] \[\qquad+\frac{(1+\varkappa)\delta}{2}\|\hat{x}^{k}-x^{k}\|^{2})+ \tilde{\rho}\,\tau\|u^{k}-h(x^{k})\|\] \[\qquad+\tilde{\rho}\,\tau(F(x^{k})-f(x^{k},u^{k})).\]
Since the function \(x\mapsto F(x)+\frac{\tilde{\rho}}{2}\|x^{k}-x\|^{2}\) is strongly convex with parameter \(\tilde{\rho}-\rho>0\), we have
\[F(x^{k})-F(\hat{x}^{k})=\Big{(}F(x^{k})+\frac{\tilde{\rho}}{2}\|x^{k}-x^{k}\|^{2}\Big{)}-\Big{(}F(\hat{x}^{k})+\frac{\tilde{\rho}}{2}\|x^{k}-\hat{x}^{k}\|^{2}\Big{)}+\frac{\tilde{\rho}}{2}\|x^{k}-\hat{x}^{k}\|^{2}\geq\Big{(}\tilde{\rho}-\frac{\rho}{2}\Big{)}\|x^{k}-\hat{x}^{k}\|^{2}.\]
Recalling that \(\tilde{\rho}=\rho+(1+\varkappa)\delta\), and using (17), we obtain
\[\mathbb{E}[\varphi_{1/\tilde{\rho}}(x^{k+1})|\mathcal{F}_{k}] \leq\varphi_{1/\tilde{\rho}}(x^{k})+\frac{\tilde{\rho}M^{2}}{2} \tau^{2}+\tilde{\rho}\,\tau(-(\tilde{\rho}-\frac{\rho}{2})\|x^{k}-\hat{x}^{k} \|^{2}\] \[\qquad+\frac{(1+\varkappa)\delta}{2}\|\hat{x}^{k}-x^{k}\|^{2})+ \tilde{\rho}\,\tau\|u^{k}-h(x^{k})\|\] \[\qquad+\tilde{\rho}\,\tau(F(x^{k})-f(x^{k},u^{k}))\] \[=\varphi_{1/\tilde{\rho}}(x^{k})+\frac{\tilde{\rho}M^{2}}{2} \tau^{2}-\frac{1}{2}\tau\|\nabla\varphi_{1/\tilde{\rho}}(x^{k})\|^{2}\] \[\qquad+\tilde{\rho}\,\tau\|u^{k}-h(x^{k})\|+\tilde{\rho}\,\tau(F(x ^{k})-f(x^{k},u^{k})). \tag{28}\]
Using the fact that \(f(x,u)\) is \(1\)-Lipschitz with respect to \(u\) together with the tracking error bound (18), we have
\[\mathbb{E}\left[F(x^{k})-f(x^{k},u^{k})\right]\leq\mathbb{E}[|h(x^{k})-u^{k}|] \leq C_{2}\tau^{1/2}+(1-\tau)^{k}|u^{0}-h(x^{0})|,\]
where \(C_{2}=\sigma+\sigma M\). In (28), taking the expectation of both sides, we get
\[\mathbb{E}[\varphi_{1/\tilde{\rho}}(x^{k+1})]\leq\mathbb{E}\left[\varphi_{1/\tilde{\rho}}(x^{k})\right]-\frac{1}{2}\tau\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{k})\|^{2}]+\frac{\tilde{\rho}M^{2}}{2}\tau^{2}+2\tilde{\rho}C_{2}\tau^{3/2}+2\tilde{\rho}\tau(1-\tau)^{k}|u^{0}-h(x^{0})|.\]
The summation over \(k\) from \(0\) to \(N-1\) yields
\[\mathbb{E}\left[\varphi_{1/\tilde{\rho}}(x^{N})\right]\leq\varphi_{1/\tilde{\rho}}(x^{0})-\frac{1}{2}\tau\sum_{k=0}^{N-1}\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{k})\|^{2}]+NC_{3}\tau^{3/2}+2\tilde{\rho}|u^{0}-h(x^{0})|,\]
where \(C_{3}=2\tilde{\rho}C_{2}\). Lower bounding the left-hand side by \(\min_{x\in X}F\) and rearranging, we obtain the bound
\[\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{R})\|^{2}]=\frac{1}{N}\sum_{k=0}^{N-1}\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{k})\|^{2}]\leq 2\frac{\varphi_{1/\tilde{\rho}}(x^{0})-\min_{x\in X}F(x)+2\tilde{\rho}|u^{0}-h(x^{0})|+NC_{3}\tau^{3/2}}{N\tau},\]
which is what we set out to prove.
**Remark 5**: _If we choose \(\tau=cN^{-2/3}\) for some constant \(c>0\), a consequence of Theorem 4 is that \(\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{R})\|^{2}]=\mathcal{O}(\frac{1}{N^{1/3}})\), and therefore in order to get an \(\epsilon\)-optimal point, i.e. for \(\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{R})\|^{2}]\leq\epsilon\), we will need \(\mathcal{O}(\epsilon^{-3})\) iterations. Since we use three data samples at each step in Algorithm 1, we obtain the total sample complexity of \(S_{\epsilon}=24(\varphi_{1/\tilde{\rho}}(x^{0})-\min_{x\in X}F(x)+2\tilde{\rho}|u^{0}-h(x^{0})|+C_{3})^{3}\epsilon^{-3}=\mathcal{O}(\epsilon^{-3})\)._
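To see where this rate comes from, substitute \(\tau=cN^{-2/3}\) into the bound of Theorem 4: then \(N\tau=cN^{1/3}\) and \(NC_{3}\tau^{3/2}=C_{3}c^{3/2}\), so that
\[\mathbb{E}[\|\nabla\varphi_{1/\tilde{\rho}}(x^{R})\|^{2}]\leq\frac{2\big{(}\varphi_{1/\tilde{\rho}}(x^{0})-\min_{x\in X}F(x)+2\tilde{\rho}|u^{0}-h(x^{0})|+C_{3}c^{3/2}\big{)}}{cN^{1/3}}=\mathcal{O}(N^{-1/3}).\]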
## 4 Convergence rate for non-smooth weakly convex loss functions
The SCS method of Algorithm 1 tracks the values of the inner function \(h(x)\) with a biased estimator, which induces errors that are harder to bound when the loss function is non-smooth. This is an intrinsic difficulty in non-smooth and non-convex composite stochastic optimization, and it precludes a direct derivation of the convergence rate. Therefore, instead of updating the inner function estimate \(\{u^{k}\}\) by a linear tracking filter, we construct the SPIDER estimator Fang et al. (2018):
\[\begin{split} u^{0}&=\ell_{\mathcal{B}^{0}}(x^{0}), \\ u^{k}&=u^{k-1}+\ell_{\mathcal{B}^{k}}(x^{k})-\ell_{ \mathcal{B}^{k}}(x^{k-1}),\end{split} \tag{29}\]
where \(\mathcal{B}^{k}\) is a randomly picked mini-batch of data at the \(k\)-th iteration, and \(\ell_{\mathcal{B}^{k}}(x)\equiv\frac{1}{|\mathcal{B}^{k}|}\sum_{D\in\mathcal{B}^{k}}\ell(x,D)\). This estimator operates in epochs and admits three parameters: the epoch length
\(T\), the standard batch size \(b\), and the larger batch size \(B>b\). It restarts at the beginning of each epoch, uses the large batch at the first step, and the standard batch at the following steps. Furthermore, the SPIDER estimator is an unbiased estimator:
\[\mathbb{E}\left[u^{k}-h(x^{k})\right]=0,\]
where the expectation is taken with respect to all random observations up to iteration \(k\). The SCS method with SPIDER is described in Algorithm 2.
```
0: initial point \(x^{0}\in X\), a constant stepsize \(\tau\in\left(0,1\right]\); SPIDER epoch length \(T\), large batch size \(B\) and small batch size \(b\).
1:for\(k=0,1,...,N-1\)do
2:if\(k\bmod T==0\)then
3: Randomly sample a data batch \(\mathcal{B}^{k}\) with \(|\mathcal{B}^{k}|==B\)
4: Restart the inner function estimate: \[u^{k}=\ell_{\mathcal{B}^{k}}(x^{k});\]
5:else
6: Randomly sample a data batch \(\mathcal{B}^{k}\) with \(|\mathcal{B}^{k}|==b\)
7: Update the inner function estimate: \[u^{k}=u^{k-1}+\ell_{\mathcal{B}^{k}}(x^{k})-\ell_{\mathcal{B}^{k}}(x^{k-1}).\]
8:endif
9: Randomly sample \(D_{1}^{k+1}\) and \(D_{2}^{k+1}\) and obtain the estimates \[G^{k}\in\partial_{x}\ell(x^{k},D_{1}^{k+1}),\] \[\bar{g}_{fx}^{k}=\begin{cases}0&\text{if }\ell(x^{k},D_{1}^{k+1})<u^ {k},\\ \varkappa G^{k}&\text{if }\ell(x^{k},D_{1}^{k+1})\geq u^{k},\end{cases}\] \[\bar{g}_{fu}^{k}=\begin{cases}1&\text{if }\ell(x^{k},D_{1}^{k+1})<u^ {k},\\ 1-\varkappa&\text{if }\ell(x^{k},D_{1}^{k+1})\geq u^{k},\end{cases}\] \[\bar{g}_{h}^{k}\in\partial_{x}\ell(x^{k},D_{2}^{k+1}).\]
10: Update the solution estimate \[x^{k+1}=\Pi_{X}\Big{(}x^{k}-\tau\big{(}\bar{g}_{fx}^{k}+\bar{g}_{fu}^{k}\bar{g }_{h}^{k}\big{)}^{T}\Big{)},\] (30)
11:endfor
12:\(x^{R}\) with \(R\) uniformly sampled from \(\{0,1,\dots,N-1\}\).
```
**Algorithm 2** SCS method with SPIDER
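The following NumPy sketch illustrates the SPIDER inner-value recursion (29) as used in Algorithm 2; the callable `loss`, the array `data` of samples, and sampling with replacement are assumptions of this illustration.

```python
import numpy as np

def batch_loss(x, batch, loss):
    # mini-batch average of the loss ell(x, D)
    return np.mean([loss(x, D) for D in batch])

def spider_update(k, u_prev, x, x_prev, data, loss, T, B, b, rng):
    # `data` is assumed to be a NumPy array of samples indexable by integer arrays
    if k % T == 0:                                        # restart with the large batch
        big = data[rng.choice(len(data), size=B, replace=True)]
        return batch_loss(x, big, loss)
    small = data[rng.choice(len(data), size=b, replace=True)]
    # recursive correction using the same mini-batch at x^k and x^{k-1}
    return u_prev + batch_loss(x, small, loss) - batch_loss(x_prev, small, loss)
```

In Algorithm 2 this estimate replaces the linearized tracking step of Algorithm 1, while the projected \(x\)-update (30) is unchanged.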
For handling non-smooth losses, instead of Assumption 2, we assume that the loss is only weakly convex:
**Assumption 4**: _For all \(x\) in a neighborhood of the set \(X\):_
1. _The function_ \(\ell(x,\cdot)\) _is integrable;_
2. _The function_ \(\ell(\cdot,D)\) _is weakly convex with an integrable constant_ \(\tilde{\delta}(D)\).
**Remark 6**: _Under Assumption 4, the inner function \(h(x)\) is \(\delta\)-weakly convex, where \(\delta:=\mathbb{E}[\tilde{\delta}(D)]\). Since the outer function \(f(x,u)\) is weakly convex with respect to \(x\) and nondecreasing and convex with respect to \(u\), the composite function \(F(x)\) is also weakly convex._
By virtue of Assumptions 1 and 4, the loss function \(\ell(\cdot,D)\) (as a difference of a convex function and a quadratic function) is Lipschitz continuous on the feasible set \(X\) for any arbitrary \(D\), with some Lipschitz constant \(\tilde{L}(D)\). We make an additional assumption about this constant.
**Assumption 5**: _The Lipschitz constant \(\tilde{L}(D)\) of the loss function \(\ell(x,D)\) with respect to \(x\) is square-integrable:_
\[L^{2}\equiv\mathbb{E}[\tilde{L}^{2}(D)]<+\infty.\]
**Remark 7**: _Assumption 5 implies the Mean-Squared Lipschitz (MSL) property Nguyen et al. (2022); Pham et al. (2020) required by the SPIDER estimator:_
\[\mathbb{E}[|\ell(x,D)-\ell(y,D)|^{2}]\leq L^{2}\|x-y\|^{2}.\]
_The composite subgradient bound (9) automatically follows in this case._
Now the loss function \(\ell(x,D)\) is only \(\tilde{\delta}(D)\)-weakly convex (instead of smooth) with respect to \(x\) for any \(D\); nevertheless, the rate analysis of Section 3 remains valid, except for the tracking error bound (18). Therefore, we need to estimate the tracking errors of the SPIDER estimator; such estimates are already available in the literature, as stated in the next result.
**Lemma 8**: _(Fang et al., 2018, Lemma 1) Suppose the loss function \(\ell(x,D)\) is Mean-Squared Lipschitz with a constant \(L\). Then the MSE of the estimator in (29) can be bounded as_
\[\mathbb{E}[|u^{k}-h(x^{k})|^{2}]\leq\mathbb{E}[|u^{0}-h(x^{0})|^{2}]+\sum_{r=1}^{k}\frac{L^{2}}{|\mathcal{B}^{r}|}\mathbb{E}[\|x^{r}-x^{r-1}\|^{2}],\qquad k=1,...,T-1.\]
_If we can control the step lengths \(\|x^{k}-x^{k-1}\|\leq s\) for \(k=1,...,T-1\), then_
\[\mathbb{E}[|u^{0}-h(x^{0})|^{2}]\leq\cdots\leq\mathbb{E}[|u^{T-1}-h(x^{T-1})| ^{2}]\leq\frac{\sigma^{2}}{B}+\frac{TL^{2}s^{2}}{b}.\]
Building on this lemma, we next obtain tracking error bounds for the sequence \(\{u^{k}\}\) for a particular choice of the SPIDER parameters \(T\), \(b\), and \(B\).
**Lemma 9**: _Suppose Assumption 1 and Assumptions 3 to 5 hold. Then for \(B=\frac{2\sigma^{2}}{\tau^{2}}\), \(b=2LM\sigma/\tau\), \(T=\frac{\sigma}{LM\tau}\), the sequence \(\{u^{k}\}\) generated by Algorithm 2 satisfies:_
\[\mathbb{E}\left[|u^{k}-h(x^{k})|\right]\leq\tau,\qquad k=0,1,...,N-1. \tag{31}\]
**Proof** According to the update function (30) and the composite subgradient bound (9), the step lengths are bounded:
\[\|x^{k}-x^{k-1}\|\leq M\tau,\qquad k=0,1,...,N-1.\]
Under Assumption 5, \(\ell(x,D)\) is Mean-Squared Lipschitz (MSL) with a constant \(L\). Also, by virtue of Lemma 8, the tracking errors of \(\{u^{k}\}\) satisfy:
\[\mathbb{E}[|u^{k}-h(x^{k})|^{2}]\leq\frac{\sigma^{2}}{B}+\frac{TL^{2}M^{2} \tau^{2}}{b},\qquad k=0,1,...,N-1.\]
Therefore, if we choose
\[B=\frac{2\sigma^{2}}{\tau^{2}},\qquad b=2LM\sigma/\tau,\qquad T=\frac{\sigma} {LM\tau},\]
we can get an upper bound for the tracking errors:
\[\mathbb{E}[|u^{k}-h(x^{k})|^{2}]\leq\tau^{2}.\]
Using Jensen's inequality, we immediately get (31).
The remaining rate analysis follows the same way as in the continuously differentiable case and we obtain the following result.
**Theorem 10**: _If Assumption 1 and Assumptions 3 to 5 hold, then for every \(N\geq 1\), the random point \(x^{R}\) generated by Algorithm 2 satisfies:_
\[\mathbb{E}[\|\nabla\varphi_{1/\rho}(x^{R})\|^{2}]\leq 2\frac{\varphi_{1/ \rho}(x^{0})-\min_{x\in X}F(x)+NC_{4}\tau^{2}}{N\tau},\]
_where \(C_{4}=2\rho(\frac{1}{4}M^{2}+1)\), the expectation is taken with respect to the trajectory generated by Algorithm 2, and the random variable \(R\) is uniformly sampled from \(\{0,1,\ldots,N-1\}\) and independent of the trajectory._
**Proof** The proof follows in the same way as Theorem 4 with the exception that the tracking bound (18) is to be replaced with the bound (31).
**Remark 11**: _If we choose \(\tau=N^{-1/2}\), in order to get an \(\epsilon\)-optimal point, we will need \(\mathcal{O}(\epsilon^{-2})\) iterations. With the choices of hyperparameters in Lemma 9, the average batch size per iteration will be_
\[b+B/T=4LM\sigma/\tau,\]
_so the total sample complexity we obtain is_
\[S_{\epsilon}=32(\varphi_{1/\rho}(x^{0})-\min_{x\in X}F(x)+C_{4})^{3}LM\sigma \epsilon^{-3}=\mathcal{O}(\epsilon^{-3}).\]
_We conclude that the sample size of \(O(\epsilon^{-3})\) is sufficient for both smooth and non-smooth cases, with only the constant larger in the non-smooth case (see Remark 5)._
## 5 Numerical Experiments
In this section, we report results of numerical experiments that illustrate the convergence and robustness of our SCS method on an adversarial learning task in deep learning and on some logistic regression problems with non-smooth non-convex regularizers. Our numerical results were obtained using Python (Version 3.7) on an Alienware Aurora R8 desktop with a 3.60 GHz CPU (i7-2677M) and 16GB memory.
### Deep Learning
We consider a convolutional neural network applied to the MNIST data set LeCun et al. (2010). The network consists of three convolutional layers followed by a dense layer. All the hidden layers have ELU activations, and the output layer has the softmax activation. The convolutional layers have 16, 32, and 32 kernels of size 8, 6, and 5, respectively. We construct the network in this particular way to generate comparable results to Sinha et al. (2018).
The MNIST dataset consists of handwritten digit images with integer labels from 0 to 9, and the aim is to classify the images. We use the cross-entropy loss during training. The resulting loss function \(\ell(x,D)\) is a composition of the CNN and the cross-entropy loss, and is continuously differentiable in our setting, as the ELU activation functions are continuously differentiable. We also set a bound on the weights of all hidden units, which is equivalent to choosing the feasibility set \(X=\{x\colon\|x\|_{\infty}\leq 10\}\). Such box constraints are employed frequently in practice for regularization purposes Srivastava et al. (2014).
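The following PyTorch sketch illustrates a network of this shape together with the weight projection onto \(X\); the absence of padding, the unit strides, and the resulting flattened dimension for \(28\times 28\) MNIST inputs are assumptions of this illustration, not a reproduction of the exact model used in the experiments.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    # three conv layers with 16/32/32 kernels of size 8/6/5, ELU activations, dense output
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=6), nn.ELU(),
            nn.Conv2d(32, 32, kernel_size=5), nn.ELU(),
        )
        self.head = nn.Linear(32 * 12 * 12, num_classes)  # 28 -> 21 -> 16 -> 12 without padding

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)             # logits; cross-entropy applies the softmax internally

def project_weights(model, radius=10.0):
    # projection onto X = {x : ||x||_inf <= radius} applied after each update
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-radius, radius)
```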
We train the CNN with different optimizers, namely SGD, SCS (with different values of \(\varkappa\) in Algorithm 1), and the state-of-the-art Wasserstein Robust Method (WRM) Sinha et al. (2018). To investigate the robustness of the trained networks, we consider two types of adversarial perturbations (attacks) on the test dataset: the PGM attacks Sinha et al. (2018); Madry et al. (2017) and the semi-deviation attacks, which we describe next.
**PGM attack.** Given the model parameter \(x\) and a data point \(D=(a,b)\) with input \(a\) and output \(b\), the main idea is to create an adversarial input by applying multi-step projected gradient ascent to the loss function in a ball around the data point. Specifically, for every data point \((a_{i},b_{i})\), we iterate
\[\nabla a_{i}^{t}(x):=\operatorname*{argmax}_{\|\eta\|_{2}\leq\epsilon_{adv}}\{\nabla_{a}\ell(x;a_{i}^{t},b_{i})^{T}\eta\},\qquad a_{i}^{t+1}:=\Pi_{B(a_{i}^{t})}\{a_{i}^{t}+\tau_{adv}\nabla a_{i}^{t}(x)\},\]
for \(t=1,...,T_{adv}\), where \(B(a_{i}^{t}):=\{a:\|a-a_{i}^{t}\|_{2}\leq\epsilon_{adv}\}\) is a ball around \(a_{i}^{t}\), \(\epsilon_{adv}\) controls the adversarial perturbation level and \(T_{adv}\) is the number of iterations. We refer the reader to Madry et al. (2017) for further details.
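For illustration, a minimal PyTorch sketch of such a PGM-type attack on a single input example follows; here the projection is taken onto the \(\ell_{2}\) ball around the clean input, as in Madry et al. (2017), and the model, loss function, and hyperparameter values are placeholders rather than the exact setup of the experiments.

```python
import torch

def pgm_attack(model, loss_fn, a, b, eps_adv=0.6, tau_adv=0.1, T_adv=15):
    a0 = a.detach()
    a_t = a0.clone()
    for _ in range(T_adv):
        a_t.requires_grad_(True)
        loss = loss_fn(model(a_t), b)
        grad, = torch.autograd.grad(loss, a_t)
        with torch.no_grad():
            # ascent direction: maximizer of <grad, eta> over the L2 ball of radius eps_adv
            step = eps_adv * grad / (grad.norm() + 1e-12)
            a_t = a_t + tau_adv * step
            # project back onto the L2 ball of radius eps_adv around the clean input
            delta = a_t - a0
            scale = torch.clamp(eps_adv / (delta.norm() + 1e-12), max=1.0)
            a_t = a0 + scale * delta
    return a_t.detach()
```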
**Semi-deviation attack.** While PGM attacks create a powerful worst-case adversary, this can be overly conservative if the perturbations to the dataset have a random-like nature rather than a worst-case nature. We introduce semi-deviation attacks, which create a less conservative adversary by replacing a "good" instance with an "average" instance. More specifically, for every data point
\(D_{i}\), we replace the corresponding loss \(\ell(x,D_{i})\) with \(\ell_{sd}(x,D_{i})\):
\[\bar{\ell}(x):=\frac{1}{|\mathcal{B}_{test}|}\sum_{D_{i}\in\mathcal{B}_{test}}\ell(x,D_{i}),\] \[\ell_{sd}(x,D_{i}):=\bar{\ell}(x)+\varkappa_{adv}\max(0,\ell(x,D_{i})-\bar{\ell}(x)),\]
where \(\mathcal{B}_{test}\) denotes the test data set and \(\varkappa_{adv}\) controls the adversarial perturbation level.
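Since the semi-deviation attack acts directly on per-example losses, it can be sketched in a few lines of NumPy; the function below assumes the per-example test losses have already been computed.

```python
import numpy as np

def semi_deviation_losses(losses, kappa_adv=1.0):
    # replace each loss by the batch mean plus kappa_adv times its upper semideviation
    losses = np.asarray(losses, dtype=float)
    mean_loss = losses.mean()
    return mean_loss + kappa_adv * np.maximum(0.0, losses - mean_loss)
```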
We control the number of gradient evaluations in training to be the same across the different methods.1 The training data is the original (uncontaminated) MNIST data, whereas the models are tested with contaminated data subject to PGM and semi-deviation attacks: for each data point in the test set, we apply both attacks at different perturbation levels and plot the average test losses of the different models in Plots (a) and (c) of Figure 1. When \(\varepsilon_{adv}=0.6\) for PGM attacks, or \(\varkappa_{adv}=1\) for semi-deviation attacks, we also calculate the (natural) logarithm of the losses in the test set and show the distributions of the logarithm of the losses for the different models in Plots (b) and (d) of Figure 1. We see that SCS with a proper \(\varkappa\) value generates a better solution than SGD under both types of attacks. It is not as good as WRM under the PGM attacks, which was expected, since WRM is trained against very similar attacks.2 For the same reason, SCS outperforms WRM under semi-deviation attacks.
Footnote 1: We train the model with SCS for 20 epochs and with WRM for 4 epochs, since at each step, SCS evaluates 3 gradients and WRM evaluates 16 gradients.
Footnote 2: At the \(k\)-th step, WRM perturbs the data point \(D^{k}=(a^{k},b^{k})\) with gradient ascent applied to the cost \(a\mapsto\ell(x;a,b^{k})-\gamma_{adv}c(a,a^{k})\), where \(\gamma_{adv}\) is a parameter that controls the robustness and \(c(a,a^{k}):=\|a-a^{k}\|^{2}\) is a regularizer. In the experiment, we iterate 15 times during the gradient ascent at each step.
### Nonconvex penalties
We consider a regression task on the Blog Feedback data set Buza (2014), containing 281 variables extracted from blog posts. The task is to predict the number of comments based on the other 280 features. The instances in the years 2010 and 2011 are included in the training set (52396 in total), the instances on \(02/01/2012\) are included in the validation set (114 in total), and the instances between \(02/02/2012\) and \(03/31/2012\) are included in the test set (7511 in total). The test set is divided into 60 subsets, each containing the instances generated on one day of February or March. We use linear regression with the mean absolute difference (MAD) loss as our model, plus a regularization term. The loss function has the form \(\ell(x,D)=|a^{T}x-b|+r(x)\), where \(D=(a,b)\) is the input data and \(r(x)\) is the regularization term. For different choices of the regularization term, the loss function can be convex or nonconvex. Here we experiment with the Lasso penalty Frank and Friedman (1993) and two non-convex penalties: the SCAD penalty Fan and Li (2001) and the MCP penalty Zhang (2010). The corresponding regularizers are:
* Lasso: \[r(x)=\lambda|x|,\] (32)
* SCAD: \[r(x)=\begin{cases}\lambda|x|&\text{if }|x|\leq\lambda,\\ \frac{\gamma\lambda|x|-0.5(x^{2}+\lambda^{2})}{\gamma-1}&\text{if }\lambda<|x| \leq\lambda\gamma,\\ \frac{\lambda^{2}(\gamma+1)}{2}&\text{if }|x|>\lambda\gamma,\end{cases}\] (33)
* MCP: \[r(x)=\begin{cases}\lambda\left|x\right|-\frac{x^{2}}{2\gamma}&\text{if }\left|x \right|\leq\lambda\gamma,\\ \frac{\lambda^{2}\gamma}{2}&\text{if }\left|x\right|>\lambda\gamma,\end{cases}\] (34) where \(\lambda>0\), and \(\gamma>0\) are parameters. With SCAD or MCP penalties, the loss function becomes non-smooth and weakly convex. We take the constraint set to be \(X=\{x:\|x\|_{\infty}\leq 10\}\).
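For reference, the three penalties (32)-(34) can be sketched in NumPy as follows; applying them coordinatewise and summing over coordinates to form \(r(x)\) is an assumption of this sketch.

```python
import numpy as np

def lasso(x, lam):
    return lam * np.abs(x)

def scad(x, lam, gamma):
    a = np.abs(x)
    small = lam * a
    mid = (gamma * lam * a - 0.5 * (a**2 + lam**2)) / (gamma - 1.0)
    large = lam**2 * (gamma + 1.0) / 2.0
    return np.where(a <= lam, small, np.where(a <= lam * gamma, mid, large))

def mcp(x, lam, gamma):
    a = np.abs(x)
    return np.where(a <= lam * gamma, lam * a - a**2 / (2.0 * gamma), lam**2 * gamma / 2.0)

def reg_term(x, penalty, **kw):
    # sum of the coordinatewise penalty values, r(x)
    return np.sum(penalty(x, **kw))
```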
In Figure 2, we compare, for these three penalties, the decay of the training loss (the objective function) over the iterations and the distributions of the logarithm of the test losses. The method exhibits similar convergence speed and test performance in the convex (Lasso) and nonconvex (SCAD and MCP) cases.
### Remarks on the assumptions
For our deep learning experiment, the feasibility set \(X=\{x:\|x\|_{\infty}\leq 10\}\) is clearly convex and compact, and Assumption 1 holds. The input data are normalized and bounded, and the iterates stay in the compact set \(X\), where the gradient of the continuously differentiable loss \(\ell(x,D)\) is continuous
Figure 1: Test losses under PGM attacks and semi-deviation attacks. The training data is the original (uncontaminated) MNIST data, whereas the models are tested with the contaminated data.
and bounded, so the loss \(\ell(x,D)\) is integrable with respect to \(D\) for every fixed \(x\in X\). Therefore, Assumption 2 holds. Furthermore, we observe from (10) to (15) that the sequences \(\tilde{J}^{k}\), \(\tilde{g}^{k}\), and \(\tilde{h}^{k}\) stay uniformly bounded. Therefore, their variance (conditioned on the natural filtration \(\mathcal{F}_{k}\)) is bounded, and the stochastic estimate of an element from the subdifferential \(\partial F(x^{k})\) is also bounded. If we take the expectation of these estimates, as the subdifferentials are bounded sets, we can interchange the subdifferential and the expectation operators (Mikhalevich et al., 1987, Thm. 23.1). Since \(D_{1}^{k}\), \(D_{2}^{k}\) and \(D_{3}^{k}\) are i.i.d. samples from the empirical data distribution, we can deduce that \(\tilde{J}^{k}\), \(\tilde{g}^{k}\) and \(\tilde{h}^{k}\) are unbiased estimates. From these observations, we conclude that Assumption 3 is also satisfied.
For the regularized logistic regression example, similar to Mei et al. (2018), the constraint set \(X\) is an \(\ell_{\infty}\) ball so that it is convex and compact and Assumption 1 holds. All the penalty functions we considered in (32)-(34) are weakly convex, so that the loss \(\ell(x,D)\) is also weakly convex with
Figure 2: For Lasso, SCAD and MCP penalties, the top image in each plot shows the training loss along iterations, and the bottom image in each plot shows the distribution of the logarithm of the loss on the test data.
respect to \(x\). By similar arguments to those for the deep learning setting, the loss \(\ell(x,D)\) is integrable with respect to \(D\) for every fixed \(x\in X\), and Assumptions 3 and 4 hold. The input data are normalized and bounded, so the stochastic Lipschitz constant \(\tilde{L}(D)\) is also bounded, and Assumption 5 holds. Therefore, our assumptions are satisfied for the numerical experiments conducted in this work.
## 6 Conclusion
In this paper, we considered a distributionally robust stochastic optimization problem where the ambiguity set is defined with the use of the mean-semideviation risk measure. We reformulated this problem as a stochastic two-level non-smooth optimization problem and proposed a single time-scale method called Stochastic Compositional Subgradient (SCS). Our method can support two different ways of inner value tracking: (i) linearized tracking of a continuously differentiable loss function, (ii) tracking of a weakly convex loss function through the SPIDER estimator. We show that the sample complexity of \(\mathcal{O}(\epsilon^{-3})\) is possible in both cases, with only the constant larger in the second case. To our knowledge, this is the first sample complexity result for distributionally robust learning with non-convex non-smooth losses. Finally, we illustrated the performance of our algorithm on a robust deep-learning problem and a logistic regression problem with weakly convex, non-smooth regularizers.
## 7 Acknowledgements
This research is supported in part by the grants Office of Naval Research Award Number N00014-21-1-2244, National Science Foundation (NSF) CCF-1814888, NSF DMS-2053485, NSF DMS-1723085, the National Science Foundation Award DMS-1907522 and by the Office of Naval Research Award N00014-21-1-2161.